resolved merge conflicts

pull/19/head
Scott Anderson 2019-01-22 23:45:26 -07:00
commit 240e82847f
224 changed files with 11594 additions and 181 deletions

View File

@ -240,3 +240,35 @@ to seeing the full content block.
Truncated markdown content here.
{{% /truncate %}}
```
### Generate a list of children articles
Section landing pages often contain just a list of articles with links and descriptions for each.
This can be cumbersome to maintain as content is added.
To automate the listing of articles in a section, use the `{{< children >}}` shortcode.
```md
{{< children >}}
```
### Reference content
The InfluxDB documentation is "task-based," meaning content primarily focuses on
what a user is **doing**, not what they are **using**.
However, there is a need to document tools and other things that don't necessarily
fit in the task-based style.
This is referred to as "reference content."
Reference content is styled just as the rest of the InfluxDB documentation.
The only difference is the `menu` reference in the page's frontmatter.
When defining the menu for reference content, use the following pattern:
```yaml
# Pattern
menu:
v<major-version-number>_<minor-version-number>_ref:
# ...
# Example
menu:
v2_0_ref:
# ...
```

View File

@ -1,6 +1,10 @@
///////////////////////////// Make headers linkable /////////////////////////////
$("h2,h3,h4,h5,h6").each(function() {
$(".article--content h2, \
.article--content h3, \
.article--content h4, \
.article--content h5, \
.article--content h6" ).each(function() {
var link = "<a href=\"#" + $(this).attr("id") + "\"></a>"
$(this).wrapInner( link );
})
@ -10,7 +14,8 @@ $("h2,h3,h4,h5,h6").each(function() {
var elementWhiteList = [
".tabs p a",
".code-tabs p a",
".truncate-toggle"
".truncate-toggle",
".children-links a"
]
$('.article a[href^="#"]:not(' + elementWhiteList + ')').click(function (e) {
@ -79,3 +84,10 @@ $(".truncate-toggle").click(function(e) {
e.preventDefault()
$(this).closest('.truncate').toggleClass('closed');
})
//////////////////// Replace Missing Images with Placeholder ///////////////////
$(".article--content img").on("error", function() {
$(this).attr("src", "/img/coming-soon.svg");
$(this).attr("style", "max-width:500px;");
});

View File

@ -139,14 +139,17 @@
code,pre {
background: $article-code-bg;
font-family: 'Inconsolata', monospace;
color: $article-code;
}
p code {
padding: .15rem .45rem .25rem;
border-radius: $border-radius;
color: $article-code;
white-space: nowrap;
font-style: normal;
p,li,table,h2,h3,h4,h5,h6 {
code {
padding: .15rem .45rem .25rem;
border-radius: $border-radius;
color: $article-code;
white-space: nowrap;
font-style: normal;
}
}
a {
@ -237,6 +240,26 @@
}
}
///////////////////////// Landing Page Article Links /////////////////////////
.children-links {
h2,h3,h4 {
margin-top: -.5rem;
a a:after {
content: "\e919";
font-family: "icomoon";
color: rgba($article-heading, .35);
vertical-align: bottom;
transition: color .2s;
margin-left: .4rem;
}
a:hover {
&:after { color: $article-link; }
}
}
}
////////////////// Blockquotes, Notes, Warnings, & Messages //////////////////
blockquote,
@ -544,7 +567,31 @@
}
}
///////////////////////////////// Scroll Bars //////////////////////////////////
/////////////////////////////////// Buttons //////////////////////////////////
a.btn {
display: inline-block;
margin: .5rem 0 1rem;
padding: .5rem 1rem;
background: $article-btn;
color: $article-btn-text;
border-radius: $border-radius;
font-size: .95rem;
&:hover {
background: $article-btn-hover;
color: $article-btn-text-hover;
}
&.download:before {
content: "\e91c";
font-family: "icomoon";
margin-right: .5rem;
}
}
//////////////////////////////// Scroll Bars /////////////////////////////////
pre { @include scrollbar($article-code-bg, $article-code-scrollbar); }
table { @include scrollbar($article-table-row-alt, $article-code-scrollbar);}
@ -557,6 +604,42 @@
pre { @include scrollbar($article-warn-code-bg, $article-warn-code-scrollbar); }
table { @include scrollbar($article-warn-table-row-alt, $article-warn-code-scrollbar); }
}
////////////////////////// Guides Pagination Buttons /////////////////////////
.page-nav-btns {
display: flex;
justify-content: space-between;
margin: 3rem 0 1rem;
.btn {
display: flex;
max-width: 49%;
text-align: center;
align-items: center;
&.prev{
margin: 0 auto 0 0;
padding: .75rem 1.25rem .75rem .75rem;
&:before {
content: "\e90a";
font-family: "icomoon";
margin-right: .5rem;
vertical-align: middle;
}
}
&.next {
margin: 0 0 0 auto;
padding: .75rem .75rem .75rem 1.25rem;
&:after {
content: "\e90c";
font-family: "icomoon";
margin-left: .5rem;
vertical-align: middle;
}
}
}
}
}

View File

@ -3,6 +3,7 @@
width: 75%;
position: relative;
border-radius: $border-radius 0 0 $border-radius;
overflow: hidden;
.copyright {
padding: .5rem 1rem .5rem .5rem;

View File

@ -0,0 +1,101 @@
.cards {
display: flex;
justify-content: space-between;
flex-direction: column;
a {
text-decoration: none;
color: inherit;
}
.group {
display: flex;
background: linear-gradient(127deg, $landing-sm-gradient-left, $landing-sm-gradient-right);
flex-wrap: wrap;
}
.card {
text-align: center;
&.full {
width: 100%;
padding: 5rem 2rem;
}
&.quarter {
flex-grow: 2;
margin: 1px;
padding: 2rem;
background: rgba($landing-sm-gradient-overlay, .5);
transition: background-color .4s;
&:hover {
background: rgba($landing-sm-gradient-overlay, 0);
}
}
h1,h2,h3,h4 {
font-family: $klavika;
font-weight: 300;
font-style: italic;
text-align: center;
color: $g20-white;
}
h1 {
margin: 0 0 1.25rem;
font-size: 2.25rem;
}
h3 { font-size: 1.35rem;}
&#get-started {
background: linear-gradient(127deg, $landing-lg-gradient-left, $landing-lg-gradient-right );
text-align: center;
.btn {
display: inline-block;
padding: .85rem 1.5rem;
color: $g20-white;
font-weight: bold;
background: rgba($g20-white, .1);
border: 2px solid rgba($g20-white, .5);
border-radius: $border-radius;
transition: background-color .2s, color .2s;
&:hover {
background: $g20-white;
color: $b-pool;
}
}
}
}
}
////////////////////////////////////////////////////////////////////////////////
///////////////////////////////// MEDIA QUERIES ////////////////////////////////
////////////////////////////////////////////////////////////////////////////////
@include media(large) {
.cards {
.card {
&.full { padding: 3.5rem;}
&.quarter { width: 48%; }
}
}
}
@include media(small) {
.cards {
.group { flex-direction: column; }
.card{
&.full { padding: 2.5rem;}
&.quarter {
width: 100%;
max-width: 100%;
padding: 1.25rem;
}
h1 { font-size: 2rem; }
}
}
}

View File

@ -207,6 +207,17 @@
&:after { transform: rotate(180deg); }
}
}
// Reference title styles
h4 {
margin: 2rem 0 0 -1rem;
color: rgba($article-heading, .8);
font-style: italic;
font-weight: 700;
text-transform: uppercase;
font-size: .85rem;
letter-spacing: .08rem;
}
}
}

View File

@ -130,6 +130,10 @@
&#theme-switch-dark { display: $theme-switch-dark; }
&#theme-switch-light { display: $theme-switch-light; }
}
#search-btn {
opacity: 0;
}
}
////////////////////////////////////////////////////////////////////////////////

View File

@ -17,5 +17,6 @@
"layouts/layout-content-wrapper",
"layouts/layout-article",
"layouts/syntax-highlighting",
"layouts/layout-error-page",
"layouts/algolia-search-overrides";
"layouts/algolia-search-overrides",
"layouts/layout-landing",
"layouts/layout-error-page";

View File

@ -116,6 +116,12 @@ $article-tab-code-text-hover: $g20-white !default;
$article-tab-code-bg-hover: $b-ocean !default;
$article-tab-code-active-text: $g20-white !default;
// Article page buttons
$article-btn: $b-ocean !default;
$article-btn-text: $g20-white !default;
$article-btn-hover: $b-pool !default;
$article-btn-text-hover: $g20-white !default;
// Left Navigation
$nav-category: $b-ocean !default;
$nav-category-hover: $g20-white !default;
@ -131,3 +137,10 @@ $error-page-btn: $b-ocean !default;
$error-page-btn-text: $g20-white !default;
$error-page-btn-hover: $g20-white !default;
$error-page-btn-hover-text: $b-ocean !default;
// Landing Page colors
$landing-lg-gradient-left: $b-ocean !default;
$landing-lg-gradient-right: $b-pool !default;
$landing-sm-gradient-left: $p-planet !default;
$landing-sm-gradient-right: $p-star !default;
$landing-sm-gradient-overlay: $p-shadow !default;

View File

@ -115,6 +115,12 @@ $article-tab-code-text-hover: $g20-white;
$article-tab-code-bg-hover: $p-comet;
$article-tab-code-active-text: $p-star;
// Article page buttons
$article-btn: $b-pool;
$article-btn-text: $g20-white;
$article-btn-hover: $b-ocean;
$article-btn-text-hover: $g20-white;
// Left Navigation
$nav-category: $b-ocean;
$nav-category-hover: $gr-viridian;
@ -130,3 +136,10 @@ $error-page-btn: $b-ocean;
$error-page-btn-text: $g20-white;
$error-page-btn-hover: $b-pool;
$error-page-btn-hover-text: $g20-white;
// Landing Page colors
$landing-lg-gradient-left: $b-ocean;
$landing-lg-gradient-right: $b-pool;
$landing-sm-gradient-left: $p-star;
$landing-sm-gradient-right: $p-comet;
$landing-sm-gradient-overlay: $p-planet;

View File

@ -1,10 +1,10 @@
@font-face {
font-family: 'icomoon';
src: url('fonts/icomoon.eot?o2njz5');
src: url('fonts/icomoon.eot?o2njz5#iefix') format('embedded-opentype'),
url('fonts/icomoon.ttf?o2njz5') format('truetype'),
url('fonts/icomoon.woff?o2njz5') format('woff'),
url('fonts/icomoon.svg?o2njz5#icomoon') format('svg');
src: url('fonts/icomoon.eot?rws1o3');
src: url('fonts/icomoon.eot?rws1o3#iefix') format('embedded-opentype'),
url('fonts/icomoon.ttf?rws1o3') format('truetype'),
url('fonts/icomoon.woff?rws1o3') format('woff'),
url('fonts/icomoon.svg?rws1o3#icomoon') format('svg');
font-weight: normal;
font-style: normal;
}
@ -78,12 +78,21 @@
.icon-chevron-up:before {
content: "\e91a";
}
.icon-download:before {
content: "\e91c";
}
.icon-heart1:before {
content: "\e913";
}
.icon-menu:before {
content: "\e91b";
}
.icon-minus:before {
content: "\e91d";
}
.icon-plus:before {
content: "\e91e";
}
.icon-settings:before {
content: "\e914";
}
@ -114,12 +123,6 @@
.icon-map2:before {
content: "\e94c";
}
.icon-download:before {
content: "\e960";
}
.icon-upload:before {
content: "\e961";
}
.icon-cog:before {
content: "\e994";
}

View File

@ -1,40 +0,0 @@
---
title: Exploring metrics
description:
menu:
v2_0:
name: Exploring metrics
weight: 1
---
Explore and visualize your data in the **Data Explorer**. The user interface allows you to move seamlessly between using the builder or templates and manually editing the query; when possible, the interface automatically populates the builder with the information from your raw query. Choose between [visualization types](/chronograf/latest/guides/visualization-types/) for your query.
To open the **Data Explorer**, click the **Explore** icon in the navigation bar:
<img src="/img/chronograf/v1.7/data-explorer-icon.png" style="width:100%; max-width:400px; margin:2em 0; display: block;">
## Explore data with Flux
Flux is InfluxData's new functional data scripting language designed for querying, analyzing, and acting on time series data. To learn more about Flux, see [Getting started with Flux](/flux/v0.7/introduction/getting-started).
1. Click the **Data Explorer** icon in the sidebar.
2. Use the builder to select from your existing data and have the query automatically formatted for you.
Alternatively, click **Edit Query As Flux** to manually edit the query. To switch back to the query builder, click **Visual Query Builder**.
3. Use the **Functions** pane to review the available Flux functions. Click on a function from the list to add it to your query.
4. Click **Submit** to run your query. You can then preview your graph in the above pane.
## Visualize your query
**To visualize your query**:
* Select a visualization type from the dropdown menu in the upper-left.
<<SCREENSHOT>>
* Select the **Visualization** tab at the bottom of the **Data Explorer**. For details about all of the available visualization options, see [Visualization types](/chronograf/latest/guides/visualization-types/).
## Save your query as a dashboard cell or task
**To save your query**:
1. Click **Save as** in the upper right.
2. Click **Dashboard Cell** to add your query to a dashboard.
3. Click **Task** to save your query as a task.

View File

@ -5,6 +5,7 @@ menu:
v2_0:
name: Getting started
weight: 1
parent: Placeholder parent
---
## Buckets

View File

@ -5,6 +5,7 @@ menu:
v2_0:
name: Managing organizations
weight: 1
parent: Placeholder parent
---
Everything is scoped by or contained within an organization: dashboards, tasks, buckets, users, !!collectors and scrapers!!.

View File

@ -1,37 +0,0 @@
---
title: Using tasks
description: This is just an example post to show the format of new 2.0 posts
menu:
v2_0:
name: Using tasks
weight: 1
---
A task is a scheduled Flux query. Main use case is replacement for continuous queries, add info about CQs.
**To filter the list of tasks**:
1. Enable the **Show Inactive** option to include inactive tasks on the list.
2. Enter text in the **Filter tasks by name** field to search for tasks by name.
3. Select an organization from the **All Organizations** dropdown to filter the list by organization.
4. Click on the heading of any column to sort by that field.
**To import a task**:
1. Click the Tasks (calendar) icon in the left navigation menu.
2. Click **Import** in the upper right.
3. Drag and drop or select a file to upload.
4. !!!
**To create a task**:
1. Click **+ Create Task**.
2. In the left sidebar panel, enter the following details:
* **Name**: The name of your task.
* **Owner**: Select an organization from the drop-down menu.
* **Schedule Task**: Select **Interval** for !!!! or **Cron** to !!!. Also enter value below (interval window or Cron thing).
* **Offset**: Enter an offset time. If you schedule it to run at the hour but you have an offset of ten minutes, then it runs at an hour and ten minutes.
3. In the right panel, enter your task script.
4. Click **Save**.
**Disable tasks**

View File

@ -1,14 +1,26 @@
---
title: InfluxDB v2.0
seotitle: This is the SEO title for InfluxDB v2.0
description: placeholder
description: >
InfluxDB is an open source time series database designed to handle high write and query loads.
Learn how to use and leverage InfluxDB in use cases such as monitoring metrics, IoT data, and events.
layout: version-landing
menu:
versions:
name: v2.0
v2_0:
name: Introduction
weight: 1
---
_This is placeholder content for the v2.0 landing page._
#### Welcome
Welcome to the InfluxDB v2.0 documentation!
InfluxDB is an open source time series database designed to handle high write and query loads.
This documentation is meant to help you learn how to use and leverage InfluxDB to meet your needs.
Common use cases include infrastructure monitoring, IoT data collection, event handling, and more.
If your use case involves time series data, InfluxDB is purpose-built to handle it.
{{% note %}}
This is an alpha release of InfluxDB v2.0.
Feedback and bug reports are welcome and encouraged both for InfluxDB and this documentation.
Issue tracking is managed through GitHub.
[Submit an InfluxDB issue](https://github.com/influxdata/influxdb/issues/new)
[Submit a documentation issue](https://github.com/influxdata/docs-v2/issues/new)
{{% /note %}}

View File

@ -7,6 +7,7 @@ menu:
weight: 1
#enterprise_all: true
enterprise_some: true
draft: true
---
This is a paragraph. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nunc rutrum, metus id scelerisque euismod, erat ante suscipit nibh, ac congue enim risus id est. Etiam tristique nisi et tristique auctor. Morbi eu bibendum erat. Sed ullamcorper, dui id lobortis efficitur, mauris odio pharetra neque, vel tempor odio dolor blandit justo.

View File

@ -0,0 +1,12 @@
---
title: Get started with InfluxDB
description: Simple steps for downloading, installing, and setting up InfluxDB.
menu:
v2_0:
name: Get started
weight: 1
---
The following guides walk you through downloading, installing, and setting up InfluxDB.
{{< children >}}

View File

@ -0,0 +1,28 @@
---
title: Install InfluxDB v2.0
seotitle: Download and install InfluxDB v2.0
description: >
Visit the InfluxData downloads page to download InfluxDB v2.0.
Add the influx and influxd binaries to your system $PATH.
menu:
v2_0:
name: Install InfluxDB
parent: Get started
weight: 1
---
### Download InfluxDB v2.0
Visit the [InfluxData Downloads page](https://portal.influxdata.com/downloads/) and
download the InfluxDB v2.0 package appropriate for your operating system.
<a class="btn download" href="https://portal.influxdata.com/downloads/" target="\_blank">Download InfluxDB</a>
### Place the executables in your $PATH
Place the `influx` and `influxd` executables in your system `$PATH`.
### Networking ports
By default, InfluxDB uses TCP port `9999` for client-server communication over InfluxDB's HTTP API.
<div class="page-nav-btns">
<a class="btn next" href="/v2.0/get-started/setup/">Setup InfluxDB</a>
</div>

View File

@ -0,0 +1,76 @@
---
title: Set up InfluxDB
seotitle: Run the initial InfluxDB setup process
description: The initial setup process for InfluxDB walks through creating a default organization, user, and bucket.
menu:
v2_0:
name: Set up InfluxDB
parent: Get started
weight: 2
---
The initial setup process for InfluxDB walks through creating a default organization,
user, and bucket.
The setup process is available in both the InfluxDB user interface (UI) and in
the `influx` command line interface (CLI).
## Start the influxd daemon
To set up InfluxDB via the UI or the CLI, first start the `influxd` daemon by running:
```bash
influxd
```
_See the [`influxd` documentation](/v2.0/reference/cli/influxd) for information about
available flags and options._
{{% note %}}
#### InfluxDB "phone-home"
By default, InfluxDB sends telemetry data back to InfluxData.
The [InfluxData telemetry](https://www.influxdata.com/telemetry) page provides
information about what data is collected and how it is used.
To opt out of sending telemetry data back to InfluxData, include the
`--reporting-disabled` flag when starting `influxd`.
```bash
influxd --reporting-disabled
```
{{% /note %}}
## Set up InfluxDB through the UI
1. With `influxd` running, visit [localhost:9999](http://localhost:9999).
2. Click **Get Started**.
### Set up your initial user
1. Enter a **Username** for your initial user.
2. Enter a **Password** and **Confirm Password** for your user.
3. Enter your initial **Organization Name**.
4. Enter your initial **Bucket Name**.
5. Click **Continue**.
InfluxDB is now initialized with a primary user, organization, and bucket.
You are ready to [collect data](#).
## Set up InfluxDB through the influx CLI
Begin the InfluxDB setup process via the `influx` CLI by running:
```bash
influx setup
```
1. Enter a **primary username**.
2. Enter a **password** for your user.
3. **Confirm your password** by entering it again.
4. Enter a name for your **primary organization**.
5. Enter a name for your **primary bucket**.
6. Enter a **retention period** (in hours) for your primary bucket.
Enter nothing for an infinite retention period.
7. Confirm the details for your primary user, organization, and bucket.
InfluxDB is now initialized with a primary user, organization, and bucket.
You are ready to [collect data](#).

View File

@ -0,0 +1,29 @@
---
title: Process Data with InfluxDB tasks
seotitle: Process Data with InfluxDB tasks
description: >
InfluxDB's task engine runs scheduled Flux tasks that process and analyze data.
This collection of articles provides information about creating and managing InfluxDB tasks.
menu:
v2_0:
name: Process data
weight: 5
---
InfluxDB's _**task engine**_ is designed for processing and analyzing data.
A task is a scheduled Flux query that takes a stream of input data, modifies or
analyzes it in some way, then performs an action.
Examples include data downsampling, anomaly detection _(Coming)_, alerting _(Coming)_, etc.
{{% note %}}
Tasks are a replacement for InfluxDB v1.x's continuous queries.
{{% /note %}}
The following articles explain how to configure and build tasks using the InfluxDB user interface (UI)
and via raw Flux scripts with the `influx` command line interface (CLI).
They also provide examples of commonly used tasks.
[Write a task](/v2.0/process-data/write-a-task)
[Manage tasks](/v2.0/process-data/manage-tasks)
[Common tasks](/v2.0/process-data/common-tasks)
[Task options](/v2.0/process-data/task-options)

View File

@ -0,0 +1,22 @@
---
title: Common data processing tasks
seotitle: Common data processing tasks performed with InfluxDB
description: >
InfluxDB Tasks process data on specified schedules.
This collection of articles walks through common use cases for InfluxDB tasks.
menu:
v2_0:
name: Common tasks
parent: Process data
weight: 4
---
The following articles walk through common task use cases.
[Downsample Data with InfluxDB](/v2.0/process-data/common-tasks/downsample-data)
{{% note %}}
This list will continue to grow.
If you have suggestions, please [create an issue](https://github.com/influxdata/docs-v2/issues/new)
on the InfluxData documentation repository on GitHub.
{{% /note %}}

View File

@ -0,0 +1,85 @@
---
title: Downsample data with InfluxDB
seotitle: Downsample data in an InfluxDB task
description: >
How to create a task that downsamples data much like continuous queries
in previous versions of InfluxDB.
menu:
v2_0:
name: Downsample data
parent: Common tasks
weight: 4
---
One of the most common use cases for InfluxDB tasks is downsampling data to reduce
the overall disk usage as data collects over time.
In previous versions of InfluxDB, continuous queries filled this role.
This article walks through creating a continuous-query-like task that downsamples
data by aggregating data within windows of time, then storing the aggregate value in a new bucket.
### Requirements
To perform a downsampling task, you need the following:
##### A "source" bucket
The bucket from which data is queried.
##### A "destination" bucket
A separate bucket where aggregated, downsampled data is stored.
##### Some type of aggregation
To downsample data, it must be aggregated in some way.
The specific aggregation method depends on your use case,
but examples include mean, median, top, bottom, etc.
View [Flux's aggregate functions](/v2.0/reference/flux/functions/transformations/aggregates/)
for more information and ideas.
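For example, swapping the aggregation method is a one-argument change to `aggregateWindow()`. A sketch, with `median` as one illustrative choice (the bucket and measurement names are also illustrative):

```js
from(bucket: "system-data")
    |> range(start: -2w)
    |> filter(fn: (r) => r._measurement == "mem")
    // Any aggregate function can be passed as fn: mean, median, etc.
    |> aggregateWindow(every: 1h, fn: median)
```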
## Create a destination bucket
By design, tasks cannot write to the same bucket from which they are reading.
You need another bucket where the task can store the aggregated, downsampled data.
_For information about creating buckets, see [Create a bucket](#)._
## Example downsampling task script
The example task script below is a very basic form of data downsampling that does the following:
1. Defines a task named "cq-mem-data-1w" that runs once a week.
2. Defines a `data` variable that represents all data from the last 2 weeks in the
`mem` measurement of the `system-data` bucket.
3. Uses the [`aggregateWindow()` function](/v2.0/reference/flux/functions/transformations/aggregates/aggregatewindow/)
to window the data into 1 hour intervals and calculate the average of each interval.
4. Stores the aggregated data in the `system-data-downsampled` bucket under the
`my-org` organization.
```js
// Task Options
option task = {
name: "cq-mem-data-1w",
every: 1w,
}
// Defines a data source
data = from(bucket: "system-data")
|> range(start: -task.every * 2)
|> filter(fn: (r) => r._measurement == "mem")
data
// Windows and aggregates the data into 1h averages
|> aggregateWindow(fn: mean, every: 1h)
// Stores the aggregated data in a new bucket
|> to(bucket: "system-data-downsampled", org: "my-org")
```
Again, this is a very basic example, but it should provide you with a foundation
to build more complex downsampling tasks.
## Add your task
Once your task is ready, see [Create a task](/v2.0/process-data/manage-tasks/create-task) for information about adding it to InfluxDB.
## Things to consider
- If there is a chance that data may arrive late, specify an `offset` in your
task options long enough to account for late data.
- If running a task against a bucket with a finite retention policy, do not schedule
tasks to run too close to the end of the retention policy.
Always provide a "cushion" for downsampling tasks to complete before the data
is dropped by the retention policy.
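The late-data consideration above can be sketched by adding an `offset` to the earlier task options (the `10m` value is illustrative):

```js
option task = {
    name: "cq-mem-data-1w",
    every: 1w,
    // Delay execution to allow late data to arrive,
    // without changing the queried time range.
    offset: 10m,
}
```

With this offset, each weekly run starts 10 minutes after its scheduled time but still queries the same two-week range.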

View File

@ -0,0 +1,21 @@
---
title: Manage tasks in InfluxDB
seotitle: Manage data processing tasks in InfluxDB
description: >
InfluxDB provides options for managing the creation, reading, updating, and deletion
of tasks using both the 'influx' CLI and the InfluxDB UI.
menu:
v2_0:
name: Manage tasks
parent: Process data
weight: 2
---
InfluxDB provides two options for managing the creation, reading, updating, and deletion (CRUD) of tasks:
through the InfluxDB user interface (UI) or using the `influx` command line interface (CLI).
Both tools can perform all task CRUD operations.
[Create a task](/v2.0/process-data/manage-tasks/create-task)
[View tasks](/v2.0/process-data/manage-tasks/view-tasks)
[Update a task](/v2.0/process-data/manage-tasks/update-task)
[Delete a task](/v2.0/process-data/manage-tasks/delete-task)

View File

@ -0,0 +1,85 @@
---
title: Create a task
seotitle: Create a task for processing data in InfluxDB
description: >
How to create a task that processes data in InfluxDB using the InfluxDB user
interface or the 'influx' command line interface.
menu:
v2_0:
name: Create a task
parent: Manage tasks
weight: 1
---
InfluxDB provides multiple ways to create tasks both in the InfluxDB user interface (UI)
and the `influx` command line interface (CLI).
_This article assumes you have already [written a task](/v2.0/process-data/write-a-task)._
## Create a task in the InfluxDB UI
The InfluxDB UI provides multiple ways to create a task:
- [Create a task from the Data Explorer](#create-a-task-from-the-data-explorer)
- [Create a task in the Task UI](#create-a-task-in-the-task-ui)
- [Import a task](#import-a-task)
### Create a task from the Data Explorer
1. Click on the **Data Explorer** icon in the left navigation menu.
{{< img-hd src="/img/data-explorer-icon.png" alt="Data Explorer Icon" />}}
2. Build a query and click **Save As** in the upper right.
3. Select the **Task** option.
4. Specify the task options. See [Task options](/v2.0/process-data/task-options)
for detailed information about each option.
5. Click **Save as Task**.
{{< img-hd src="/img/data-explorer-save-as-task.png" alt="Add a task from the Data Explorer"/>}}
### Create a task in the Task UI
1. Click on the **Tasks** icon in the left navigation menu.
{{< img-hd src="/img/tasks-icon.png" alt="Tasks Icon" />}}
2. Click **+ Create Task** in the upper right.
3. In the left panel, specify the task options.
See [Task options](/v2.0/process-data/task-options) for detailed information about each option.
4. In the right panel, enter your task script.
5. Click **Save** in the upper right.
{{< img-hd src="/img/tasks-create-edit.png" alt="Create a task" />}}
### Import a task
1. Click on the **Tasks** icon in the left navigation menu.
2. Click **Import** in the upper right.
3. Drag and drop or select a file to upload.
4. Click **Upload Task**.
{{< img-hd src="/img/tasks-import-task.png" alt="Import a task" />}}
## Create a task using the influx CLI
Use the `influx task create` command to create a new task.
It accepts either a file path or raw Flux.
###### Create a task using a file
```sh
# Pattern
influx task create --org <org-name> @</path/to/task-script>
# Example
influx task create --org my-org @/tasks/cq-mean-1h.flux
```
###### Create a task using raw Flux
```sh
influx task create --org my-org - # <return> to open stdin pipe
option task = {
name: "task-name",
every: 6h
}
# ... Task script ...
# <ctrl-d> to close the pipe and submit the command
```

View File

@ -0,0 +1,37 @@
---
title: Delete a task
seotitle: Delete a task for processing data in InfluxDB
description: >
How to delete a task in InfluxDB using the InfluxDB user interface or using
the 'influx' command line interface.
menu:
v2_0:
name: Delete a task
parent: Manage tasks
weight: 4
---
## Delete a task in the InfluxDB UI
1. Click the **Tasks** icon in the left navigation menu.
{{< img-hd src="/img/tasks-icon.png" alt="Tasks Icon" />}}
2. In the list of tasks, hover over the task you would like to delete.
3. Click **Delete** on the far right.
4. Click **Confirm**.
{{< img-hd src="/img/tasks-delete-task.png" alt="Delete a task" />}}
## Delete a task with the influx CLI
Use the `influx task delete` command to delete a task.
_This command requires a task ID, which is available in the output of `influx task find`._
```sh
# Pattern
influx task delete -i <task-id>
# Example
influx task delete -i 0343698431c35000
```

View File

@ -0,0 +1,63 @@
---
title: Update a task
seotitle: Update a task for processing data in InfluxDB
description: >
How to update a task that processes data in InfluxDB using the InfluxDB user
interface or the 'influx' command line interface.
menu:
v2_0:
name: Update a task
parent: Manage tasks
weight: 3
---
## Update a task in the InfluxDB UI
To view your tasks, click the **Tasks** icon in the left navigation menu.
{{< img-hd src="/img/tasks-icon.png" alt="Tasks Icon" />}}
#### Update a task's Flux script
1. In the list of tasks, click the **Name** of the task you would like to update.
2. In the left panel, modify the task options.
3. In the right panel, modify the task script.
4. Click **Save** in the upper right.
{{< img-hd src="/img/tasks-create-edit.png" alt="Update a task" />}}
#### Update the status of a task
In the list of tasks, click the toggle in the **Active** column of the task you
would like to activate or inactivate.
## Update a task with the influx CLI
Use the `influx task update` command to update or change the status of an existing task.
_This command requires a task ID, which is available in the output of `influx task find`._
#### Update a task's Flux script
Pass the file path of your updated Flux script to the `influx task update` command
with the ID of the task you would like to update.
Modified [task options](/v2.0/process-data/task-options) defined in the Flux
script are also updated.
```sh
# Pattern
influx task update -i <task-id> @/path/to/updated-task-script
# Example
influx task update -i 0343698431c35000 @/tasks/cq-mean-1h.flux
```
#### Update the status of a task
Pass the ID of the task you would like to update to the `influx task update`
command with the `--status` flag.
_Possible arguments of the `--status` flag are `active` or `inactive`._
```sh
# Pattern
influx task update -i <task-id> --status < active | inactive >
# Example
influx task update -i 0343698431c35000 --status inactive
```

View File

@ -0,0 +1,39 @@
---
title: View tasks in InfluxDB
seotitle: View created tasks that process data in InfluxDB
description: >
How to view all created data processing tasks using the InfluxDB user interface
or the 'influx' command line interface.
menu:
v2_0:
name: View tasks
parent: Manage tasks
weight: 2
---
## View tasks in the InfluxDB UI
Click the **Tasks** icon in the left navigation to view the list of tasks.
{{< img-hd src="/img/tasks-icon.png" alt="Tasks Icon" />}}
### Filter the list of tasks
1. Enable the **Show Inactive** option to include inactive tasks in the list.
2. Enter text in the **Filter tasks by name** field to search for tasks by name.
3. Select an organization from the **All Organizations** dropdown to filter the list by organization.
4. Click on the heading of any column to sort by that field.
{{< img-hd src="/img/tasks-list.png" alt="View and filter tasks" />}}
## View tasks with the influx CLI
Use the `influx task find` command to return a list of created tasks.
```sh
influx task find
```
#### Filter tasks using the CLI
Other filtering options, such as filtering by organization or user
or limiting the number of tasks returned, are also available.
See the [`influx task find` documentation](/v2.0/reference/cli/influx/task/find)
for information about other available flags.


@ -0,0 +1,109 @@
---
title: Task configuration options
seotitle: InfluxDB task configuration options
description: >
Task options define specific information about a task such as its name,
the schedule on which it runs, execution delays, and others.
menu:
v2_0:
name: Task options
parent: Process data
weight: 5
---
Task options define specific information about the task and are specified in your
Flux script or in the InfluxDB user interface (UI).
The following task options are available:
- [name](#name)
- [every](#every)
- [cron](#cron)
- [offset](#offset)
- [concurrency](#concurrency)
- [retry](#retry)
{{% note %}}
`every` and `cron` are mutually exclusive, but at least one is required.
{{% /note %}}
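A complete `option task` block that combines several of these options might look like the following (a sketch; the name and values are illustrative):

```js
option task = {
  name: "downsample-5m",
  // Run every hour, 10 minutes after the hour
  every: 1h,
  offset: 10m,
  // Allow only one execution at a time; retry twice on failure
  concurrency: 1,
  retry: 2,
}
```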
## name
The name of the task. _**Required**_.
_**Data type:** String_
```js
option task = {
  name: "taskName",
  // ...
}
```
## every
The interval at which the task runs.
_**Data type:** Duration_
_**Note:** In the InfluxDB UI, the **Interval** field sets this option_.
```js
option task = {
  // ...
  every: 1h,
}
```
## cron
The [cron expression](https://en.wikipedia.org/wiki/Cron#Overview) that
defines the schedule on which the task runs.
Cron scheduling is based on system time.
_**Data type:** String_
```js
option task = {
  // ...
  cron: "0 * * * *",
}
```
## offset
Delays the execution of the task but preserves the original time range.
For example, if a task is to run on the hour, a `10m` offset will delay it to 10
minutes after the hour, but all time ranges defined in the task are relative to
the specified execution time.
A common use case is offsetting execution to account for data that may arrive late.
_**Data type:** Duration_
```js
option task = {
  // ...
  offset: 10m,
}
```
## concurrency
The number of task executions that can run concurrently.
If the concurrency limit is reached, all subsequent executions are queued until
other running task executions complete.
_**Data type:** Integer_
```js
option task = {
  // ...
  concurrency: 2,
}
```
## retry
The number of times to retry the task before it is considered failed.
_**Data type:** Integer_
```js
option task = {
  // ...
  retry: 2,
}
```


@ -0,0 +1,147 @@
---
title: Write an InfluxDB task
seotitle: Write an InfluxDB task that processes data
description: >
How to write an InfluxDB task that processes data in some way, then performs an action
such as storing the modified data in a new bucket or sending an alert.
menu:
v2_0:
name: Write a task
parent: Process data
weight: 1
---
InfluxDB tasks are scheduled Flux scripts that take a stream of input data, modify or analyze
it in some way, then store the modified data in a new bucket or perform other actions.
This article walks through writing a basic InfluxDB task that downsamples
data and stores it in a new bucket.
## Components of a task
Every InfluxDB task needs the following four components.
Their form and order can vary, but they are all essential parts of a task.
- [Task options](#define-task-options)
- [A data source](#define-a-data-source)
- [Data processing or transformation](#process-or-transform-your-data)
- [A destination](#define-a-destination)
_[Skip to the full example task script](#full-example-task-script)_
## Define task options
Task options define specific information about the task.
The example below illustrates how task options are defined in your Flux script:
```js
option task = {
name: "cqinterval15m",
every: 1h,
offset: 0m,
concurrency: 1,
retry: 5
}
```
_See [Task configuration options](/v2.0/process-data/task-options) for detailed information
about each option._
{{% note %}}
If creating a task in the InfluxDB user interface (UI), task options are defined
in form fields when creating the task.
{{% /note %}}
## Define a data source
Define a data source using Flux's [`from()` function](/v2.0/reference/flux/functions/inputs/from/)
or any other [Flux input functions](/v2.0/reference/flux/functions/inputs/).
For convenience, consider creating a variable that includes the sourced data with
the required time range and any relevant filters.
```js
data = from(bucket: "telegraf/default")
|> range(start: -task.every)
|> filter(fn: (r) =>
r._measurement == "mem" and
r.host == "myHost"
)
```
{{% note %}}
#### Using task options in your Flux script
Task options are passed as part of a `task` object and can be referenced in your Flux script.
In the example above, the time range is defined as `-task.every`.
`task.every` is dot notation that references the `every` property of the `task` object.
`every` is defined as `1h`, therefore `-task.every` equates to `-1h`.
Using task options to define values in your Flux script can make reusing your task easier.
{{% /note %}}
## Process or transform your data
The purpose of tasks is to process or transform data in some way.
What exactly happens and what form the output data takes is up to you and your
specific use case.
The example below illustrates a task that downsamples data by calculating the average of set intervals.
It uses the `data` variable defined [above](#define-a-data-source) as the data source.
It then windows the data into 5 minute intervals and calculates the average of each
window using the [`aggregateWindow()` function](/v2.0/reference/flux/functions/transformations/aggregates/aggregatewindow/).
```js
data
|> aggregateWindow(
every: 5m,
fn: mean
)
```
_See [Common tasks](/v2.0/process-data/common-tasks) for examples of tasks commonly used with InfluxDB._
## Define a destination
In the vast majority of task use cases, once data is transformed, it needs to be sent and stored somewhere.
This could be a separate bucket with a different retention policy, another measurement, or even an alert endpoint _(Coming)_.
The example below uses Flux's [`to()` function](/v2.0/reference/flux/functions/outputs/to)
to send the transformed data to another bucket:
```js
// ...
|> to(bucket: "telegraf_downsampled", org: "my-org")
```
{{% note %}}
You cannot write to the same bucket you are reading from.
{{% /note %}}
## Full example task script
Below is the full example task script that combines all of the components described above:
```js
// Task options
option task = {
name: "cqinterval15m",
every: 1h,
offset: 0m,
concurrency: 1,
retry: 5
}
// Data source
data = from(bucket: "telegraf/default")
|> range(start: -task.every)
|> filter(fn: (r) =>
r._measurement == "mem" and
r.host == "myHost"
)
data
// Data transformation
|> aggregateWindow(
every: 5m,
fn: mean
)
// Data destination
  |> to(bucket: "telegraf_downsampled", org: "my-org")
```


@ -0,0 +1,16 @@
---
title: Query data in InfluxDB
seotitle: Query data stored in InfluxDB
description: >
Learn to query data stored in InfluxDB using Flux and tools such as the InfluxDB
user interface and the 'influx' command line interface.
menu:
v2_0:
name: Query data
weight: 2
---
Learn to query data stored in InfluxDB using Flux and tools such as the InfluxDB
user interface and the 'influx' command line interface.
{{< children >}}


@ -0,0 +1,90 @@
---
title: Execute queries
seotitle: Different ways to query InfluxDB
description: There are multiple ways to query data from InfluxDB including the the InfluxDB UI, CLI, and API.
menu:
v2_0:
name: Execute queries
parent: Query data
weight: 2
---
There are multiple ways to execute queries with InfluxDB.
This guide covers the different options:
1. [Data Explorer](#data-explorer)
2. [Influx REPL](#influx-repl)
3. [Influx query command](#influx-query-command)
4. [InfluxDB API](#influxdb-api)
## Data Explorer
Queries can be built, executed, and visualized in the InfluxDB UI's Data Explorer.
![Data Explorer with Flux](/img/flux-data-explorer.png)
## Influx REPL
The [`influx repl` command](/v2.0/reference/cli/influx/repl) starts an interactive
read-eval-print-loop (REPL) where you can write and execute Flux queries.
```bash
influx repl --org org-name
```
_**Note:** `ctrl-d` will close the REPL._
## Influx query command
You can pass queries to the [`influx query` command](/v2.0/reference/cli/influx/query)
as either a file or raw Flux via stdin.
###### Run a query from a file
```bash
influx query @/path/to/query.flux
```
###### Pass raw Flux via stdin pipe
```bash
influx query - # Press Return to open the pipe
data = from(bucket: "example-bucket") |> range(start: -10m) # ...
# ctrl-d to close the pipe and submit the query
```
## InfluxDB API
Query InfluxDB through the `/api/v2/query` endpoint.
Queried data is returned in annotated CSV format.
In your request, set the following:
- `Authorization` header to `Token ` + your authentication token.
- `accept` header to `application/csv`
- `content-type` header to `application/vnd.flux`
This allows you to POST the Flux query in plain text and receive the annotated CSV response.
Below is an example `curl` command that queries InfluxDB:
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[Multi-line](#)
[Single-line](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```bash
curl http://localhost:9999/api/v2/query -XPOST -sS \
-H 'Authorization: Token YOURAUTHTOKEN' \
-H 'accept:application/csv' \
-H 'content-type:application/vnd.flux' \
-d 'from(bucket:"test")
    |> range(start:-1000h)
    |> group(columns:["_measurement"], mode:"by")
    |> sum()'
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```bash
curl http://localhost:9999/api/v2/query -XPOST -sS -H 'Authorization: Token TOKENSTRINGHERE' -H 'accept:application/csv' -H 'content-type:application/vnd.flux' -d 'from(bucket:"test") |> range(start:-1000h) |> group(columns:["_measurement"], mode:"by") |> sum()'
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}


@ -0,0 +1,78 @@
---
title: Get started with Flux
description: >
Get started with Flux, InfluxData's functional data scripting language.
This step-by-step guide walks through the basics of writing a Flux query.
menu:
v2_0:
name: Get started with Flux
parent: Query data
weight: 1
---
Flux is InfluxData's functional data scripting language designed for querying,
analyzing, and acting on data.
This multi-part getting started guide walks through important concepts related to Flux,
how to query time series data from InfluxDB using Flux, and introduces Flux syntax and functions.
## Flux design principles
Flux is designed to be usable, readable, flexible, composable, testable, contributable, and shareable.
Its syntax is largely inspired by [2018's most popular scripting language](https://insights.stackoverflow.com/survey/2018#technology),
Javascript, and takes a functional approach to data exploration and processing.
The following example illustrates querying data stored from the last hour,
filtering by the `cpu` measurement and the `cpu=cpu-total` tag, windowing the data in 1 minute intervals,
and calculating the average of each window:
```js
from(bucket:"example-bucket")
|> range(start:-1h)
|> filter(fn:(r) =>
r._measurement == "cpu" and
r.cpu == "cpu-total"
)
|> aggregateWindow(every: 1m, fn: mean)
```
## Key concepts
Flux introduces important new concepts you should understand as you get started.
### Pipe-forward operator
Flux uses pipe-forward operators (`|>`) extensively to chain operations together.
After each function or operation, Flux returns a table or collection of tables containing data.
The pipe-forward operator pipes those tables into the next function or operation where
they are further processed or manipulated.
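For example, the output of `from()` can be piped into `range()`, and that output into `filter()` (a minimal sketch, assuming a bucket named `example-bucket`):

```js
from(bucket: "example-bucket")                     // source tables
  |> range(start: -5m)                             // tables piped into range()
  |> filter(fn: (r) => r._measurement == "cpu")    // then piped into filter()
```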
### Tables
Flux structures all data in tables.
When data is streamed from data sources, Flux formats it as annotated
comma-separated values (CSV), representing tables.
Functions then manipulate or process them and output new tables.
#### Group keys
Every table has a **group key** which describes the contents of the table.
It's a list of columns for which every row in the table will have the same value.
Columns with unique values in each row are **not** part of the group key.
As functions process and transform data, each may modify the group key of its output tables.
Understanding how tables and group keys are modified by functions is key to properly
shaping your data for the desired output.
###### Example group key
```js
[_start, _stop, _field, _measurement, host]
```
Note that `_time` and `_value` are excluded from the example group key because they
are unique to each row.
## Tools for working with Flux
The [Execute queries](/v2.0/query-data/execute-queries) guide walks through
the different tools available for querying InfluxDB with Flux.
<div class="page-nav-btns">
<a class="btn prev" href="/v2.0/query-data/">Introduction to Flux</a>
<a class="btn next" href="/v2.0/query-data/get-started/query-influxdb/">Query InfluxDB with Flux</a>
</div>


@ -0,0 +1,130 @@
---
title: Query InfluxDB with Flux
description: Learn the basics of using Flux to query data from InfluxDB.
menu:
v2_0:
name: Query InfluxDB
parent: Get started with Flux
weight: 1
---
This guide walks through the basics of using Flux to query data from InfluxDB.
Every Flux query needs the following:
1. [A data source](#1-define-your-data-source)
2. [A time range](#2-specify-a-time-range)
3. [Data filters](#3-filter-your-data)
## 1. Define your data source
Flux's [`from()`](/v2.0/reference/flux/functions/inputs/from) function defines an InfluxDB data source.
It requires a [`bucket`](/v2.0/reference/flux/functions/inputs/from#bucket) parameter.
The following examples use `example-bucket` as the bucket name.
```js
from(bucket:"example-bucket")
```
## 2. Specify a time range
Flux requires a time range when querying time series data.
"Unbounded" queries are very resource-intensive and as a protective measure,
Flux will not query the database without a specified range.
Use the pipe-forward operator (`|>`) to pipe data from your data source into the [`range()`](/v2.0/reference/flux/functions/transformations/range)
function, which specifies a time range for your query.
It accepts two properties: `start` and `stop`.
Ranges can be **relative** using negative [durations](/v2.0/reference/flux/language/lexical-elements#duration-literals)
or **absolute** using [timestamps](/v2.0/reference/flux/language/lexical-elements#date-and-time-literals).
###### Example relative time ranges
```js
// Relative time range with start only. Stop defaults to now.
from(bucket:"example-bucket")
|> range(start: -1h)
// Relative time range with start and stop
from(bucket:"example-bucket")
|> range(start: -1h, stop: -10m)
```
{{% note %}}
Relative ranges are relative to "now."
{{% /note %}}
###### Example absolute time range
```js
from(bucket:"example-bucket")
|> range(start: 2018-11-05T23:30:00Z, stop: 2018-11-06T00:00:00Z)
```
#### Use the following:
For this guide, use the relative time range, `-15m`, to limit query results to data from the last 15 minutes:
```js
from(bucket:"example-bucket")
|> range(start: -15m)
```
## 3. Filter your data
Pass your ranged data into the `filter()` function to narrow results based on data attributes or columns.
The `filter()` function has one parameter, `fn`, which expects an anonymous function
with logic that filters data based on columns or attributes.
Flux's anonymous function syntax is similar to Javascript's.
Records or rows are passed into the `filter()` function as an object (`r`).
The anonymous function takes the object and evaluates it to see if it matches the defined filters.
Use the `and` relational operator to chain multiple filters.
```js
// Pattern
(r) => (r.objectProperty comparisonOperator comparisonExpression)
// Example with single filter
(r) => (r._measurement == "cpu")
// Example with multiple filters
(r) => (r._measurement == "cpu") and (r._field != "usage_system" )
```
#### Use the following:
For this example, filter by the `cpu` measurement, the `usage_system` field, and the `cpu-total` tag value:
```js
from(bucket:"example-bucket")
|> range(start: -15m)
|> filter(fn: (r) =>
r._measurement == "cpu" and
r._field == "usage_system" and
r.cpu == "cpu-total"
)
```
## 4. Yield your queried data
Use Flux's `yield()` function to output the filtered tables as the result of the query.
```js
from(bucket:"example-bucket")
|> range(start: -15m)
|> filter(fn: (r) =>
r._measurement == "cpu" and
r._field == "usage_system" and
r.cpu == "cpu-total"
)
|> yield()
```
{{% note %}}
Flux automatically assumes a `yield()` function at
the end of each script to output and visualize the data.
`yield()` is only necessary when including multiple queries in the same Flux query.
Each set of returned data needs to be named using the `yield()` function.
{{% /note %}}
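When a single Flux script does contain multiple queries, name each result using the `name` parameter of `yield()`. A sketch using the same example bucket:

```js
// First query, yielded as "mean"
from(bucket: "example-bucket")
  |> range(start: -15m)
  |> filter(fn: (r) => r._measurement == "cpu")
  |> mean()
  |> yield(name: "mean")

// Second query in the same script, yielded as "max"
from(bucket: "example-bucket")
  |> range(start: -15m)
  |> filter(fn: (r) => r._measurement == "cpu")
  |> max()
  |> yield(name: "max")
```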
## Congratulations!
You have now queried data from InfluxDB using Flux.
This is a barebones query that can be transformed in other ways.
<div class="page-nav-btns">
<a class="btn prev" href="/v2.0/query-data/get-started/">Get started with Flux</a>
<a class="btn next" href="/v2.0/query-data/get-started/transform-data/">Transform your data</a>
</div>


@ -0,0 +1,217 @@
---
title: Flux syntax basics
description: An introduction to the basic elements of the Flux syntax with real-world application examples.
menu:
v2_0:
name: Syntax basics
parent: Get started with Flux
weight: 3
---
Flux, at its core, is a scripting language designed specifically for working with data.
This guide walks through a handful of simple expressions and how they are handled in Flux.
## Use the influx CLI's REPL
Use the `influx repl` command to open the interactive read-eval-print-loop (REPL).
Run the commands provided in this guide in the REPL.
##### Start in the influx CLI in Flux mode
```bash
influx repl --org org-name
```
## Basic Flux syntax
The code blocks below provide commands that illustrate the basic syntax of Flux.
Run these commands in the REPL.
### Simple expressions
Flux is a scripting language that supports basic expressions.
For example, simple addition:
```js
> 1 + 1
2
```
### Variables
Assign an expression to a variable using the assignment operator, `=`.
```js
> s = "this is a string"
> i = 1 // an integer
> f = 2.0 // a floating point number
```
Type the name of a variable to print its value:
```js
> s
this is a string
> i
1
> f
2
```
### Objects
Flux also supports objects. Each value in an object can be a different data type.
```js
> o = {name:"Jim", age: 42}
```
Use dot notation to access the properties of an object:
```js
> o.name
Jim
> o.age
42
```
### Lists
Flux supports lists. List values must be the same type.
```js
> n = 4
> l = [1,2,3,n]
> l
[1, 2, 3, 4]
```
### Functions
Flux uses functions for most of its heavy lifting.
Below is a simple function that squares a number, `n`.
```js
> square = (n) => n * n
> square(n:3)
9
```
{{% note %}}
Flux does not support positional arguments or parameters.
Parameters must always be named when calling a function.
{{% /note %}}
### Pipe-forward operator
Flux uses the pipe-forward operator (`|>`) extensively to chain operations together.
After each function or operation, Flux returns a table or collection of tables containing data.
The pipe-forward operator pipes those tables into the next function where they are further processed or manipulated.
```js
data |> someFunction() |> anotherFunction()
```
## Real-world application of basic syntax
This likely seems familiar if you've already been through the other
[getting started guides](/v2.0/query-data/get-started).
Flux's syntax is inspired by Javascript and other functional scripting languages.
As you begin to apply these basic principles in real-world use cases such as creating data stream variables,
custom functions, etc., the power of Flux and its ability to query and process data will become apparent.
The examples below provide both multi-line and single-line versions of each input command.
Carriage returns in Flux aren't necessary, but do help with readability.
Both single- and multi-line commands can be copied and pasted into the `influx` CLI running in Flux mode.
### Define data stream variables
A common use case for variable assignments in Flux is creating variables for one
or more input data streams.
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[Multi-line](#)
[Single-line](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```js
timeRange = -1h
cpuUsageUser =
from(bucket:"example-bucket")
|> range(start: timeRange)
|> filter(fn: (r) =>
r._measurement == "cpu" and
r._field == "usage_user" and
r.cpu == "cpu-total"
)
memUsagePercent =
from(bucket:"example-bucket")
|> range(start: timeRange)
|> filter(fn: (r) =>
r._measurement == "mem" and
r._field == "used_percent"
)
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```js
timeRange = -1h
cpuUsageUser = from(bucket:"example-bucket") |> range(start: timeRange) |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_user" and r.cpu == "cpu-total")
memUsagePercent = from(bucket:"example-bucket") |> range(start: timeRange) |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent")
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper>}}
These variables can be used in other functions, such as `join()`, while keeping the syntax minimal and flexible.
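For example, the two streams defined above could be combined with `join()` (a sketch; the exact columns to join on depend on the shape of your data):

```js
// Join CPU and memory streams on shared columns
join(
  tables: {cpu: cpuUsageUser, mem: memUsagePercent},
  on: ["_time", "_start", "_stop", "host"]
)
```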
### Define custom functions
Create a function that returns the `n` rows in the input stream with the highest `_value`s.
To do this, pass the input stream (`tables`) and the number of results to return (`n`) into a custom function.
Then use Flux's `sort()` and `limit()` functions to find the top `n` results in the data set.
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[Multi-line](#)
[Single-line](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```js
topN = (tables=<-, n) =>
tables
|> sort(desc: true)
|> limit(n: n)
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```js
topN = (tables=<-, n) => tables |> sort(desc: true) |> limit(n: n)
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
_More information about creating custom functions is available in the [Custom functions](/v2.0/query-data/guides/custom-functions) documentation._
Using the `cpuUsageUser` data stream variable defined above, find the top five data
points with the custom `topN` function and yield the results.
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[Multi-line](#)
[Single-line](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```js
cpuUsageUser
|> topN(n:5)
|> yield()
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```js
cpuUsageUser |> topN(n:5) |> yield()
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper>}}
This query will return the five data points with the highest user CPU usage over the last hour.
<div class="page-nav-btns">
<a class="btn prev" href="/v2.0/query-data/get-started/transform-data/">Transform your data</a>
</div>


@ -0,0 +1,176 @@
---
title: Transform data with Flux
description: Learn the basics of using Flux to transform data queried from InfluxDB.
menu:
v2_0:
name: Transform data
parent: Get started with Flux
weight: 2
---
When [querying data from InfluxDB](/v2.0/query-data/get-started/query-influxdb),
you often need to transform that data in some way.
Common examples are aggregating data into averages, downsampling data, etc.
This guide demonstrates using [Flux functions](/v2.0/reference/flux/functions) to transform your data.
It walks through creating a Flux script that partitions data into windows of time,
averages the `_value`s in each window, and outputs the averages as a new table.
It's important to understand how the "shape" of your data changes through each of these operations.
## Query data
Use the query built in the previous [Query data from InfluxDB](/v2.0/query-data/get-started/query-influxdb)
guide, but update the range to pull data from the last hour:
```js
from(bucket:"example-bucket")
|> range(start: -1h)
|> filter(fn: (r) =>
r._measurement == "cpu" and
r._field == "usage_system" and
r.cpu == "cpu-total"
)
```
## Flux functions
Flux provides a number of functions that perform specific operations, transformations, and tasks.
You can also [create custom functions](/v2.0/query-data/guides/custom-functions) in your Flux queries.
_Functions are covered in detail in the [Flux functions](/v2.0/reference/flux/functions) documentation._
A common type of function used when transforming data queried from InfluxDB is an aggregate function.
Aggregate functions take a set of `_value`s in a table, aggregate them, and transform
them into a new value.
This example uses the [`mean()` function](/v2.0/reference/flux/functions/transformations/aggregates/mean)
to average values within each time window.
{{% note %}}
The following example walks through the steps required to window and aggregate data,
but there is an [`aggregateWindow()` helper function](#helper-functions) that does it for you.
It's just good to understand the steps in the process.
{{% /note %}}
## Window your data
Flux's [`window()` function](/v2.0/reference/flux/functions/transformations/window) partitions records based on a time value.
Use the `every` parameter to define a duration of each window.
For this example, window data in five minute intervals (`5m`).
```js
from(bucket:"example-bucket")
|> range(start: -1h)
|> filter(fn: (r) =>
r._measurement == "cpu" and
r._field == "usage_system" and
r.cpu == "cpu-total"
)
|> window(every: 5m)
```
As data is gathered into windows of time, each window is output as its own table.
When visualized, each table is assigned a unique color.
![Windowed data tables](/img/flux-windowed-data.png)
## Aggregate windowed data
Flux aggregate functions take the `_value`s in each table and aggregate them in some way.
Use the [`mean()` function](/v2.0/reference/flux/functions/transformations/aggregates/mean) to average the `_value`s of each table.
```js
from(bucket:"example-bucket")
|> range(start: -1h)
|> filter(fn: (r) =>
r._measurement == "cpu" and
r._field == "usage_system" and
r.cpu == "cpu-total"
)
|> window(every: 5m)
|> mean()
```
As rows in each window are aggregated, their output table contains only a single row with the aggregate value.
Windowed tables are all still separate and, when visualized, will appear as single, unconnected points.
![Windowed aggregate data](/img/flux-windowed-aggregates.png)
## Add times to your aggregates
As values are aggregated, the resulting tables do not have a `_time` column because
the records used for the aggregation all have different timestamps.
Aggregate functions don't infer what time should be used for the aggregate value.
Therefore the `_time` column is dropped.
A `_time` column is required in the [next operation](#unwindow-aggregate-tables).
To add one, use the [`duplicate()` function](/v2.0/reference/flux/functions/transformations/duplicate)
to duplicate the `_stop` column as the `_time` column for each windowed table.
```js
from(bucket:"example-bucket")
|> range(start: -1h)
|> filter(fn: (r) =>
r._measurement == "cpu" and
r._field == "usage_system" and
r.cpu == "cpu-total"
)
|> window(every: 5m)
|> mean()
|> duplicate(column: "_stop", as: "_time")
```
## Unwindow aggregate tables
Use the `window()` function with the `every: inf` parameter to gather all points
into a single, infinite window.
```js
from(bucket:"example-bucket")
|> range(start: -1h)
|> filter(fn: (r) =>
r._measurement == "cpu" and
r._field == "usage_system" and
r.cpu == "cpu-total"
)
|> window(every: 5m)
|> mean()
|> duplicate(column: "_stop", as: "_time")
|> window(every: inf)
```
Once ungrouped and combined into a single table, the aggregate data points will appear connected in your visualization.
![Unwindowed aggregate data](/img/flux-windowed-aggregates-ungrouped.png)
## Helper functions
This may seem like a lot of coding just to build a query that aggregates data; however, going through the
process helps you understand how data changes "shape" as it is passed through each function.
Flux provides (and allows you to create) "helper" functions that abstract many of these steps.
The same operation performed in this guide can be accomplished using the
[`aggregateWindow()` function](/v2.0/reference/flux/functions/transformations/aggregates/aggregatewindow).
```js
from(bucket:"example-bucket")
|> range(start: -1h)
|> filter(fn: (r) =>
r._measurement == "cpu" and
r._field == "usage_system" and
r.cpu == "cpu-total"
)
|> aggregateWindow(every: 5m, fn: mean)
```
## Congratulations!
You have now constructed a Flux query that uses Flux functions to transform your data.
There are many more ways to manipulate your data using both Flux's primitive functions
and your own custom functions, but this is a good introduction to the basic syntax and query structure.
---
_For a deeper dive into windowing and aggregating data with example data output for each transformation,
view the [Window and aggregate data](/v2.0/query-data/guides/window-aggregate) guide._
---
<div class="page-nav-btns">
<a class="btn prev" href="/v2.0/query-data/get-started/query-influxdb/">Query InfluxDB</a>
<a class="btn next" href="/v2.0/query-data/get-started/syntax-basics/">Syntax basics</a>
</div>


@ -0,0 +1,13 @@
---
title: Flux how-to guides
description: Helpful guides that walk through both common and complex tasks and use cases for Flux.
menu:
v2_0:
name: How-to guides
parent: Query data
weight: 3
---
The following guides walk through common query use cases.
{{< children >}}


@ -0,0 +1,137 @@
---
title: Create custom Flux functions
seotitle: Create custom Flux functions
description: Create your own custom Flux functions to transform and manipulate data.
menu:
v2_0:
name: Create custom functions
parent: How-to guides
weight: 8
---
Flux's functional syntax allows for custom functions.
This guide walks through the basics of creating your own function.
## Function definition structure
The basic structure for defining functions in Flux is as follows:
```js
// Basic function definition structure
functionName = (functionParameters) => functionOperations
```
##### functionName
The name used to call the function in your Flux script.
##### functionParameters
A comma-separated list of parameters passed into the function and used in its operations.
[Parameter defaults](#define-parameter-defaults) can be defined for each.
##### functionOperations
Operations and functions that manipulate the input into the desired output.
#### Basic function examples
###### Example square function
```js
// Function definition
square = (n) => n * n
// Function usage
> square(n:3)
9
```
###### Example multiply function
```js
// Function definition
multiply = (x, y) => x * y
// Function usage
> multiply(x:2, y:15)
30
```
## Functions that manipulate piped-forward data
Most Flux functions manipulate data piped-forward into the function.
In order for a custom function to process piped-forward data, one of the function
parameters must capture the input tables using the `<-` pipe-receive expression.
In the example below, the `tables` parameter is assigned to the `<-` expression,
which represents all data piped-forward into the function.
`tables` is then piped-forward into other operations in the function definition.
```js
functionName = (tables=<-) => tables |> functionOperations
```
#### Pipe-forwardable function example
###### Multiply row values by x
The example below defines a `multByX` function that multiplies the `_value` column
of each row in the input table by the `x` parameter.
It uses the [`map()` function](/v2.0/reference/flux/functions/transformations/map)
to modify each `_value`.
```js
// Function definition
multByX = (tables=<-, x) =>
tables
|> map(fn: (r) => r._value * x)
// Function usage
from(bucket: "telegraf/autogen")
|> range(start: -1m)
|> filter(fn: (r) =>
r._measurement == "mem" and
r._field == "used_percent"
)
|> multByX(x:2.0)
```
## Define parameter defaults
Use the `=` assignment operator to assign a default value to function parameters
in your function definition:
```js
functionName = (param1=defaultValue1, param2=defaultValue2) => functionOperation
```
Defaults are overridden by explicitly defining the parameter in the function call.
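For example, adding a default to the `multByX` function defined above makes its `x` parameter optional (a sketch; `data` stands in for any piped-forward stream of tables):

```js
// Function definition with a default for x
multByX = (tables=<-, x=1.0) =>
  tables
    |> map(fn: (r) => r._value * x)

// Uses the default, x = 1.0
data |> multByX()

// Overrides the default
data |> multByX(x: 2.0)
```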
#### Example functions with defaults
###### Get the winner or the "winner"
The example below defines a `getWinner` function that returns the record with the highest
or lowest `_value` (winner versus "winner") depending on the `noSarcasm` parameter which defaults to `true`.
It uses the [`sort()` function](/v2.0/reference/flux/functions/transformations/sort)
to sort records in either descending or ascending order.
It then uses the [`limit()` function](/v2.0/reference/flux/functions/transformations/limit)
to return the first record from the sorted table.
```js
// Function definition
getWinner = (tables=<-, noSarcasm=true) =>
tables
|> sort(desc: noSarcasm)
|> limit(n:1)
// Function usage
// Get the winner
from(bucket: "telegraf/autogen")
|> range(start: -1m)
|> filter(fn: (r) =>
r._measurement == "mem" and
r._field == "used_percent"
)
|> getWinner()
// Get the "winner"
from(bucket: "telegraf/autogen")
|> range(start: -1m)
|> filter(fn: (r) =>
r._measurement == "mem" and
r._field == "used_percent"
)
|> getWinner(noSarcasm: false)
```
---
title: Group data with Flux
seotitle: How to group data with Flux
description: >
This guide walks through grouping data with Flux by providing examples and
illustrating how data is shaped throughout the process.
menu:
v2_0:
name: Group data
parent: How-to guides
weight: 3
---
With Flux, you can group data by any column in your queried data set.
"Grouping" partitions data into tables in which each row shares a common value for specified columns.
This guide walks through grouping data in Flux and provides examples of how data is shaped in the process.
## Group keys
Every table has a **group key**: a list of columns for which every row in the table has the same value.
###### Example group key
```js
[_start, _stop, _field, _measurement, host]
```
Grouping data in Flux is essentially defining the group key of output tables.
Understanding how modifying group keys shapes output data is key to successfully
grouping and transforming data into your desired output.
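To build intuition for what a group key does, the partitioning can be sketched in plain Python (a conceptual illustration using dictionaries as rows, not the Flux implementation):

```python
from collections import defaultdict

def group_by_key(rows, group_key):
    """Partition rows into tables; rows in a table share group-key column values."""
    tables = defaultdict(list)
    for row in rows:
        key = tuple(row[col] for col in group_key)
        tables[key].append(row)
    return dict(tables)

# Hypothetical rows with columns mirroring the example group key above
rows = [
    {"_measurement": "cpu", "cpu": "cpu0", "_value": 7.2},
    {"_measurement": "cpu", "cpu": "cpu1", "_value": 0.7},
    {"_measurement": "cpu", "cpu": "cpu0", "_value": 7.4},
]

tables = group_by_key(rows, ["cpu"])
# Two output tables: ("cpu0",) with two rows, ("cpu1",) with one row
```

Every row in a given output table shares the same values for the group-key columns, which is exactly the invariant Flux maintains for its tables.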
## group() Function
Flux's [`group()` function](/v2.0/reference/flux/functions/transformations/group) defines the
group key for output tables, i.e. it groups records based on values in specified columns.
###### group() example
```js
dataStream
|> group(columns: ["cpu", "host"])
```
###### Resulting group key
```js
[cpu, host]
```
The `group()` function has the following parameters:
### columns
The list of columns to include or exclude (depending on the [mode](#mode)) in the grouping operation.
### mode
The method used to define the group and resulting group key.
Possible values include `by` and `except`.
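The difference between the two modes can be sketched with a minimal Python illustration (the semantics as described above, not the Flux implementation):

```python
def resolve_group_key(all_columns, columns, mode="by"):
    """Compute the resulting group key under by/except grouping semantics."""
    if mode == "by":
        # Keep only the listed columns in the group key
        return [c for c in all_columns if c in columns]
    if mode == "except":
        # Keep every column EXCEPT the listed ones
        return [c for c in all_columns if c not in columns]
    raise ValueError("mode must be 'by' or 'except'")

all_columns = ["_start", "_stop", "_field", "_measurement", "cpu", "_value"]

resolve_group_key(all_columns, ["cpu"])
# ["cpu"]
resolve_group_key(all_columns, ["_value", "_start", "_stop"], mode="except")
# ["_field", "_measurement", "cpu"]
```

`except` is convenient when you want to group by "everything but" a few columns, such as `_time` and `_value`.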
## Example grouping operations
To illustrate how grouping works, define a `dataSet` variable that queries System
CPU usage from the `telegraf/autogen` bucket.
Filter the `cpu` tag so it only returns results for each numbered CPU core.
### Data set
CPU used by system operations for all numbered CPU cores.
It uses a regular expression to filter only numbered cores.
```js
dataSet = from(bucket: "telegraf/autogen")
|> range(start: -2m)
|> filter(fn: (r) =>
r._field == "usage_system" and
    r.cpu =~ /cpu[0-9]+/
)
|> drop(columns: ["host"])
```
{{% note %}}
This example drops the `host` column from the returned data since the CPU data
is only tracked for a single host and it simplifies the output tables.
Don't drop the `host` column if monitoring multiple hosts.
{{% /note %}}
{{% truncate %}}
```
Table: keys: [_start, _stop, _field, _measurement, cpu]
_start:time _stop:time _field:string _measurement:string cpu:string _time:time _value:float
------------------------------ ------------------------------ ---------------------- ---------------------- ---------------------- ------------------------------ ----------------------------
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu0 2018-11-05T21:34:00.000000000Z 7.892107892107892
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu0 2018-11-05T21:34:10.000000000Z 7.2
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu0 2018-11-05T21:34:20.000000000Z 7.4
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu0 2018-11-05T21:34:30.000000000Z 5.5
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu0 2018-11-05T21:34:40.000000000Z 7.4
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu0 2018-11-05T21:34:50.000000000Z 7.5
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu0 2018-11-05T21:35:00.000000000Z 10.3
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu0 2018-11-05T21:35:10.000000000Z 9.2
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu0 2018-11-05T21:35:20.000000000Z 8.4
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu0 2018-11-05T21:35:30.000000000Z 8.5
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu0 2018-11-05T21:35:40.000000000Z 8.6
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu0 2018-11-05T21:35:50.000000000Z 10.2
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu0 2018-11-05T21:36:00.000000000Z 10.6
Table: keys: [_start, _stop, _field, _measurement, cpu]
_start:time _stop:time _field:string _measurement:string cpu:string _time:time _value:float
------------------------------ ------------------------------ ---------------------- ---------------------- ---------------------- ------------------------------ ----------------------------
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu1 2018-11-05T21:34:00.000000000Z 0.7992007992007992
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu1 2018-11-05T21:34:10.000000000Z 0.7
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu1 2018-11-05T21:34:20.000000000Z 0.7
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu1 2018-11-05T21:34:30.000000000Z 0.4
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu1 2018-11-05T21:34:40.000000000Z 0.7
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu1 2018-11-05T21:34:50.000000000Z 0.7
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu1 2018-11-05T21:35:00.000000000Z 1.4
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu1 2018-11-05T21:35:10.000000000Z 1.2
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu1 2018-11-05T21:35:20.000000000Z 0.8
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu1 2018-11-05T21:35:30.000000000Z 0.8991008991008991
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu1 2018-11-05T21:35:40.000000000Z 0.8008008008008008
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu1 2018-11-05T21:35:50.000000000Z 0.999000999000999
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu1 2018-11-05T21:36:00.000000000Z 1.1022044088176353
Table: keys: [_start, _stop, _field, _measurement, cpu]
_start:time _stop:time _field:string _measurement:string cpu:string _time:time _value:float
------------------------------ ------------------------------ ---------------------- ---------------------- ---------------------- ------------------------------ ----------------------------
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu2 2018-11-05T21:34:00.000000000Z 4.1
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu2 2018-11-05T21:34:10.000000000Z 3.6
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu2 2018-11-05T21:34:20.000000000Z 3.5
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu2 2018-11-05T21:34:30.000000000Z 2.6
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu2 2018-11-05T21:34:40.000000000Z 4.5
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu2 2018-11-05T21:34:50.000000000Z 4.895104895104895
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu2 2018-11-05T21:35:00.000000000Z 6.906906906906907
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu2 2018-11-05T21:35:10.000000000Z 5.7
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu2 2018-11-05T21:35:20.000000000Z 5.1
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu2 2018-11-05T21:35:30.000000000Z 4.7
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu2 2018-11-05T21:35:40.000000000Z 5.1
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu2 2018-11-05T21:35:50.000000000Z 5.9
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu2 2018-11-05T21:36:00.000000000Z 6.4935064935064934
Table: keys: [_start, _stop, _field, _measurement, cpu]
_start:time _stop:time _field:string _measurement:string cpu:string _time:time _value:float
------------------------------ ------------------------------ ---------------------- ---------------------- ---------------------- ------------------------------ ----------------------------
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu3 2018-11-05T21:34:00.000000000Z 0.5005005005005005
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu3 2018-11-05T21:34:10.000000000Z 0.5
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu3 2018-11-05T21:34:20.000000000Z 0.5
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu3 2018-11-05T21:34:30.000000000Z 0.3
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu3 2018-11-05T21:34:40.000000000Z 0.6
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu3 2018-11-05T21:34:50.000000000Z 0.6
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu3 2018-11-05T21:35:00.000000000Z 1.3986013986013985
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu3 2018-11-05T21:35:10.000000000Z 0.9
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu3 2018-11-05T21:35:20.000000000Z 0.5005005005005005
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu3 2018-11-05T21:35:30.000000000Z 0.7
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu3 2018-11-05T21:35:40.000000000Z 0.6
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu3 2018-11-05T21:35:50.000000000Z 0.8
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu3 2018-11-05T21:36:00.000000000Z 0.9
```
{{% /truncate %}}
**Note that the group key is output with each table: `Table: keys: <group-key>`.**
![Group example data set](/img/grouping-data-set.png)
### Group by CPU
Group the `dataSet` stream by the `cpu` column.
```js
dataSet
|> group(columns: ["cpu"])
```
This won't actually change the structure of the data since it already has `cpu`
in the group key and is therefore grouped by `cpu`.
However, notice that it does change the group key:
{{% truncate %}}
###### Group by CPU output tables
```
Table: keys: [cpu]
cpu:string _stop:time _time:time _value:float _field:string _measurement:string _start:time
---------------------- ------------------------------ ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
cpu0 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:00.000000000Z 7.892107892107892 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu0 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:10.000000000Z 7.2 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu0 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:20.000000000Z 7.4 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu0 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:30.000000000Z 5.5 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu0 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:40.000000000Z 7.4 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu0 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:50.000000000Z 7.5 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu0 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:00.000000000Z 10.3 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu0 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:10.000000000Z 9.2 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu0 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:20.000000000Z 8.4 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu0 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:30.000000000Z 8.5 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu0 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:40.000000000Z 8.6 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu0 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:50.000000000Z 10.2 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu0 2018-11-05T21:36:00.000000000Z 2018-11-05T21:36:00.000000000Z 10.6 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [cpu]
cpu:string _stop:time _time:time _value:float _field:string _measurement:string _start:time
---------------------- ------------------------------ ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
cpu1 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:00.000000000Z 0.7992007992007992 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu1 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:10.000000000Z 0.7 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu1 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:20.000000000Z 0.7 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu1 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:30.000000000Z 0.4 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu1 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:40.000000000Z 0.7 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu1 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:50.000000000Z 0.7 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu1 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:00.000000000Z 1.4 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu1 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:10.000000000Z 1.2 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu1 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:20.000000000Z 0.8 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu1 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:30.000000000Z 0.8991008991008991 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu1 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:40.000000000Z 0.8008008008008008 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu1 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:50.000000000Z 0.999000999000999 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu1 2018-11-05T21:36:00.000000000Z 2018-11-05T21:36:00.000000000Z 1.1022044088176353 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [cpu]
cpu:string _stop:time _time:time _value:float _field:string _measurement:string _start:time
---------------------- ------------------------------ ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
cpu2 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:00.000000000Z 4.1 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu2 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:10.000000000Z 3.6 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu2 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:20.000000000Z 3.5 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu2 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:30.000000000Z 2.6 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu2 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:40.000000000Z 4.5 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu2 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:50.000000000Z 4.895104895104895 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu2 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:00.000000000Z 6.906906906906907 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu2 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:10.000000000Z 5.7 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu2 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:20.000000000Z 5.1 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu2 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:30.000000000Z 4.7 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu2 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:40.000000000Z 5.1 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu2 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:50.000000000Z 5.9 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu2 2018-11-05T21:36:00.000000000Z 2018-11-05T21:36:00.000000000Z 6.4935064935064934 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [cpu]
cpu:string _stop:time _time:time _value:float _field:string _measurement:string _start:time
---------------------- ------------------------------ ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
cpu3 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:00.000000000Z 0.5005005005005005 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu3 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:10.000000000Z 0.5 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu3 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:20.000000000Z 0.5 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu3 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:30.000000000Z 0.3 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu3 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:40.000000000Z 0.6 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu3 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:50.000000000Z 0.6 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu3 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:00.000000000Z 1.3986013986013985 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu3 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:10.000000000Z 0.9 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu3 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:20.000000000Z 0.5005005005005005 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu3 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:30.000000000Z 0.7 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu3 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:40.000000000Z 0.6 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu3 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:50.000000000Z 0.8 usage_system cpu 2018-11-05T21:34:00.000000000Z
cpu3 2018-11-05T21:36:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.9 usage_system cpu 2018-11-05T21:34:00.000000000Z
```
{{% /truncate %}}
The visualization remains the same.
![Group by CPU](/img/grouping-data-set.png)
### Group by time
Grouping data by the `_time` column is a good illustration of how grouping changes the structure of your data.
```js
dataSet
|> group(columns: ["_time"])
```
When grouping by `_time`, all records that share a common `_time` value are grouped into individual tables.
So each output table represents a single point in time.
{{% truncate %}}
###### Group by time output tables
```
Table: keys: [_time]
_time:time _start:time _stop:time _value:float _field:string _measurement:string cpu:string
------------------------------ ------------------------------ ------------------------------ ---------------------------- ---------------------- ---------------------- ----------------------
2018-11-05T21:34:00.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 7.892107892107892 usage_system cpu cpu0
2018-11-05T21:34:00.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.7992007992007992 usage_system cpu cpu1
2018-11-05T21:34:00.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 4.1 usage_system cpu cpu2
2018-11-05T21:34:00.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.5005005005005005 usage_system cpu cpu3
Table: keys: [_time]
_time:time _start:time _stop:time _value:float _field:string _measurement:string cpu:string
------------------------------ ------------------------------ ------------------------------ ---------------------------- ---------------------- ---------------------- ----------------------
2018-11-05T21:34:10.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 7.2 usage_system cpu cpu0
2018-11-05T21:34:10.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.7 usage_system cpu cpu1
2018-11-05T21:34:10.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 3.6 usage_system cpu cpu2
2018-11-05T21:34:10.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.5 usage_system cpu cpu3
Table: keys: [_time]
_time:time _start:time _stop:time _value:float _field:string _measurement:string cpu:string
------------------------------ ------------------------------ ------------------------------ ---------------------------- ---------------------- ---------------------- ----------------------
2018-11-05T21:34:20.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 7.4 usage_system cpu cpu0
2018-11-05T21:34:20.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.7 usage_system cpu cpu1
2018-11-05T21:34:20.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 3.5 usage_system cpu cpu2
2018-11-05T21:34:20.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.5 usage_system cpu cpu3
Table: keys: [_time]
_time:time _start:time _stop:time _value:float _field:string _measurement:string cpu:string
------------------------------ ------------------------------ ------------------------------ ---------------------------- ---------------------- ---------------------- ----------------------
2018-11-05T21:34:30.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 5.5 usage_system cpu cpu0
2018-11-05T21:34:30.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.4 usage_system cpu cpu1
2018-11-05T21:34:30.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 2.6 usage_system cpu cpu2
2018-11-05T21:34:30.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.3 usage_system cpu cpu3
Table: keys: [_time]
_time:time _start:time _stop:time _value:float _field:string _measurement:string cpu:string
------------------------------ ------------------------------ ------------------------------ ---------------------------- ---------------------- ---------------------- ----------------------
2018-11-05T21:34:40.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 7.4 usage_system cpu cpu0
2018-11-05T21:34:40.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.7 usage_system cpu cpu1
2018-11-05T21:34:40.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 4.5 usage_system cpu cpu2
2018-11-05T21:34:40.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.6 usage_system cpu cpu3
Table: keys: [_time]
_time:time _start:time _stop:time _value:float _field:string _measurement:string cpu:string
------------------------------ ------------------------------ ------------------------------ ---------------------------- ---------------------- ---------------------- ----------------------
2018-11-05T21:34:50.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 7.5 usage_system cpu cpu0
2018-11-05T21:34:50.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.7 usage_system cpu cpu1
2018-11-05T21:34:50.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 4.895104895104895 usage_system cpu cpu2
2018-11-05T21:34:50.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.6 usage_system cpu cpu3
Table: keys: [_time]
_time:time _start:time _stop:time _value:float _field:string _measurement:string cpu:string
------------------------------ ------------------------------ ------------------------------ ---------------------------- ---------------------- ---------------------- ----------------------
2018-11-05T21:35:00.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 10.3 usage_system cpu cpu0
2018-11-05T21:35:00.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 1.4 usage_system cpu cpu1
2018-11-05T21:35:00.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 6.906906906906907 usage_system cpu cpu2
2018-11-05T21:35:00.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 1.3986013986013985 usage_system cpu cpu3
Table: keys: [_time]
_time:time _start:time _stop:time _value:float _field:string _measurement:string cpu:string
------------------------------ ------------------------------ ------------------------------ ---------------------------- ---------------------- ---------------------- ----------------------
2018-11-05T21:35:10.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 9.2 usage_system cpu cpu0
2018-11-05T21:35:10.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 1.2 usage_system cpu cpu1
2018-11-05T21:35:10.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 5.7 usage_system cpu cpu2
2018-11-05T21:35:10.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.9 usage_system cpu cpu3
Table: keys: [_time]
_time:time _start:time _stop:time _value:float _field:string _measurement:string cpu:string
------------------------------ ------------------------------ ------------------------------ ---------------------------- ---------------------- ---------------------- ----------------------
2018-11-05T21:35:20.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 8.4 usage_system cpu cpu0
2018-11-05T21:35:20.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.8 usage_system cpu cpu1
2018-11-05T21:35:20.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 5.1 usage_system cpu cpu2
2018-11-05T21:35:20.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.5005005005005005 usage_system cpu cpu3
Table: keys: [_time]
_time:time _start:time _stop:time _value:float _field:string _measurement:string cpu:string
------------------------------ ------------------------------ ------------------------------ ---------------------------- ---------------------- ---------------------- ----------------------
2018-11-05T21:35:30.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 8.5 usage_system cpu cpu0
2018-11-05T21:35:30.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.8991008991008991 usage_system cpu cpu1
2018-11-05T21:35:30.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 4.7 usage_system cpu cpu2
2018-11-05T21:35:30.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.7 usage_system cpu cpu3
Table: keys: [_time]
_time:time _start:time _stop:time _value:float _field:string _measurement:string cpu:string
------------------------------ ------------------------------ ------------------------------ ---------------------------- ---------------------- ---------------------- ----------------------
2018-11-05T21:35:40.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 8.6 usage_system cpu cpu0
2018-11-05T21:35:40.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.8008008008008008 usage_system cpu cpu1
2018-11-05T21:35:40.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 5.1 usage_system cpu cpu2
2018-11-05T21:35:40.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.6 usage_system cpu cpu3
Table: keys: [_time]
_time:time _start:time _stop:time _value:float _field:string _measurement:string cpu:string
------------------------------ ------------------------------ ------------------------------ ---------------------------- ---------------------- ---------------------- ----------------------
2018-11-05T21:35:50.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 10.2 usage_system cpu cpu0
2018-11-05T21:35:50.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.999000999000999 usage_system cpu cpu1
2018-11-05T21:35:50.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 5.9 usage_system cpu cpu2
2018-11-05T21:35:50.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.8 usage_system cpu cpu3
Table: keys: [_time]
_time:time _start:time _stop:time _value:float _field:string _measurement:string cpu:string
------------------------------ ------------------------------ ------------------------------ ---------------------------- ---------------------- ---------------------- ----------------------
2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 10.6 usage_system cpu cpu0
2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 1.1022044088176353 usage_system cpu cpu1
2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 6.4935064935064934 usage_system cpu cpu2
2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.9 usage_system cpu cpu3
```
{{% /truncate %}}
Because each timestamp is structured as a separate table, when visualized, records appear as individual, unconnected points.
Even though there are multiple records per timestamp, only the last record of each group's table is visualized.
![Group by time](/img/grouping-by-time.png)
{{% note %}}
With some further processing, you could calculate the average CPU usage across all CPUs at each
point in time and group the results into a single table, but that's beyond the scope of this example.
If you're interested in running and visualizing this yourself, the query would look like this:
```js
dataSet
|> group(columns: ["_time"])
|> mean()
|> group(columns: ["_value", "_time"], mode: "except")
```
{{% /note %}}
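The per-timestamp averaging in that note can be sketched conceptually in Python (hypothetical simplified rows, not the Flux implementation):

```python
from collections import defaultdict

def mean_per_time(rows):
    """Group rows by _time, then average _value within each timestamp."""
    grouped = defaultdict(list)
    for row in rows:
        grouped[row["_time"]].append(row["_value"])
    return {t: sum(values) / len(values) for t, values in grouped.items()}

# Hypothetical per-CPU samples at two timestamps
rows = [
    {"_time": "21:34:00", "cpu": "cpu0", "_value": 8.0},
    {"_time": "21:34:00", "cpu": "cpu1", "_value": 2.0},
    {"_time": "21:34:10", "cpu": "cpu0", "_value": 10.0},
    {"_time": "21:34:10", "cpu": "cpu1", "_value": 4.0},
]

averages = mean_per_time(rows)
# One averaged value per timestamp across all CPUs
```

Each timestamp collapses to a single averaged value, which is why the final regroup in the Flux query can merge the results into one continuous series.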
## Group by CPU and time
Group by the `cpu` and `_time` columns.
```js
dataSet
|> group(columns: ["cpu", "_time"])
```
This outputs a table for every unique `cpu` and `_time` combination:
{{% truncate %}}
###### Group by CPU and time output tables
```
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:34:00.000000000Z cpu0 2018-11-05T21:36:00.000000000Z 7.892107892107892 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:34:00.000000000Z cpu1 2018-11-05T21:36:00.000000000Z 0.7992007992007992 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:34:00.000000000Z cpu2 2018-11-05T21:36:00.000000000Z 4.1 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:34:00.000000000Z cpu3 2018-11-05T21:36:00.000000000Z 0.5005005005005005 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:34:10.000000000Z cpu0 2018-11-05T21:36:00.000000000Z 7.2 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:34:10.000000000Z cpu1 2018-11-05T21:36:00.000000000Z 0.7 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:34:10.000000000Z cpu2 2018-11-05T21:36:00.000000000Z 3.6 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:34:10.000000000Z cpu3 2018-11-05T21:36:00.000000000Z 0.5 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:34:20.000000000Z cpu0 2018-11-05T21:36:00.000000000Z 7.4 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:34:20.000000000Z cpu1 2018-11-05T21:36:00.000000000Z 0.7 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:34:20.000000000Z cpu2 2018-11-05T21:36:00.000000000Z 3.5 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:34:20.000000000Z cpu3 2018-11-05T21:36:00.000000000Z 0.5 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:34:30.000000000Z cpu0 2018-11-05T21:36:00.000000000Z 5.5 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:34:30.000000000Z cpu1 2018-11-05T21:36:00.000000000Z 0.4 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:34:30.000000000Z cpu2 2018-11-05T21:36:00.000000000Z 2.6 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:34:30.000000000Z cpu3 2018-11-05T21:36:00.000000000Z 0.3 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:34:40.000000000Z cpu0 2018-11-05T21:36:00.000000000Z 7.4 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:34:40.000000000Z cpu1 2018-11-05T21:36:00.000000000Z 0.7 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:34:40.000000000Z cpu2 2018-11-05T21:36:00.000000000Z 4.5 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:34:40.000000000Z cpu3 2018-11-05T21:36:00.000000000Z 0.6 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:34:50.000000000Z cpu0 2018-11-05T21:36:00.000000000Z 7.5 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:34:50.000000000Z cpu1 2018-11-05T21:36:00.000000000Z 0.7 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:34:50.000000000Z cpu2 2018-11-05T21:36:00.000000000Z 4.895104895104895 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:34:50.000000000Z cpu3 2018-11-05T21:36:00.000000000Z 0.6 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:35:00.000000000Z cpu0 2018-11-05T21:36:00.000000000Z 10.3 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:35:00.000000000Z cpu1 2018-11-05T21:36:00.000000000Z 1.4 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:35:00.000000000Z cpu2 2018-11-05T21:36:00.000000000Z 6.906906906906907 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:35:00.000000000Z cpu3 2018-11-05T21:36:00.000000000Z 1.3986013986013985 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:35:10.000000000Z cpu0 2018-11-05T21:36:00.000000000Z 9.2 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:35:10.000000000Z cpu1 2018-11-05T21:36:00.000000000Z 1.2 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:35:10.000000000Z cpu2 2018-11-05T21:36:00.000000000Z 5.7 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:35:10.000000000Z cpu3 2018-11-05T21:36:00.000000000Z 0.9 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:35:20.000000000Z cpu0 2018-11-05T21:36:00.000000000Z 8.4 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:35:20.000000000Z cpu1 2018-11-05T21:36:00.000000000Z 0.8 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:35:20.000000000Z cpu2 2018-11-05T21:36:00.000000000Z 5.1 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:35:20.000000000Z cpu3 2018-11-05T21:36:00.000000000Z 0.5005005005005005 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:35:30.000000000Z cpu0 2018-11-05T21:36:00.000000000Z 8.5 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:35:30.000000000Z cpu1 2018-11-05T21:36:00.000000000Z 0.8991008991008991 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:35:30.000000000Z cpu2 2018-11-05T21:36:00.000000000Z 4.7 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:35:30.000000000Z cpu3 2018-11-05T21:36:00.000000000Z 0.7 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:35:40.000000000Z cpu0 2018-11-05T21:36:00.000000000Z 8.6 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:35:40.000000000Z cpu1 2018-11-05T21:36:00.000000000Z 0.8008008008008008 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:35:40.000000000Z cpu2 2018-11-05T21:36:00.000000000Z 5.1 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:35:40.000000000Z cpu3 2018-11-05T21:36:00.000000000Z 0.6 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:35:50.000000000Z cpu0 2018-11-05T21:36:00.000000000Z 10.2 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:35:50.000000000Z cpu1 2018-11-05T21:36:00.000000000Z 0.999000999000999 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:35:50.000000000Z cpu2 2018-11-05T21:36:00.000000000Z 5.9 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:35:50.000000000Z cpu3 2018-11-05T21:36:00.000000000Z 0.8 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:36:00.000000000Z cpu0 2018-11-05T21:36:00.000000000Z 10.6 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:36:00.000000000Z cpu1 2018-11-05T21:36:00.000000000Z 1.1022044088176353 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:36:00.000000000Z cpu2 2018-11-05T21:36:00.000000000Z 6.4935064935064934 usage_system cpu 2018-11-05T21:34:00.000000000Z
Table: keys: [_time, cpu]
_time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time
------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------
2018-11-05T21:36:00.000000000Z cpu3 2018-11-05T21:36:00.000000000Z 0.9 usage_system cpu 2018-11-05T21:34:00.000000000Z
```
{{% /truncate %}}
When visualized, tables appear as individual, unconnected points.
![Group by CPU and time](/img/grouping-by-cpu-time.png)
Grouping by `cpu` and `_time` is a good illustration of how grouping works.
## In conclusion
Grouping is a powerful way to shape your data into your desired output format.
It modifies the group keys of output tables, grouping records into tables that
all share common values within specified columns.
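As a quick recap, the two grouping patterns used throughout this guide look like the following (column names here are illustrative):

```js
// Group by the listed columns
dataSet
  |> group(columns: ["cpu"])

// Group by every column except the listed columns
dataSet
  |> group(columns: ["_time"], mode: "except")
```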
---
title: Create histograms with Flux
seotitle: How to create histograms with Flux
description: This guide walks through using the histogram() function to create cumulative histograms with Flux.
menu:
v2_0:
name: Create histograms
parent: How-to guides
weight: 7
---
Histograms provide valuable insight into the distribution of your data.
This guide walks through using Flux's `histogram()` function to transform your data into a **cumulative histogram**.
## histogram() function
The [`histogram()` function](/v2.0/reference/flux/functions/transformations/histogram) approximates the
cumulative distribution of a dataset by counting data frequencies for a list of "bins."
A **bin** is simply a range in which a data point falls.
All data points less than or equal to a bin's upper bound are counted in that bin.
In the histogram output, an `le` column is added that represents the upper bound of each bin.
Bin counts are cumulative.
```js
from(bucket:"telegraf/autogen")
|> range(start: -5m)
|> filter(fn: (r) =>
r._measurement == "mem" and
r._field == "used_percent"
)
|> histogram(bins: [0.0, 10.0, 20.0, 30.0])
```
{{% note %}}
Values output by the `histogram()` function represent data points aggregated over time.
Because values do not represent single points in time, there is no `_time` column in the output table.
{{% /note %}}
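To make the cumulative counting concrete, consider a hypothetical set of three values, `5.0`, `12.0`, and `25.0`, passed through the bins used above:

```js
// histogram(bins: [0.0, 10.0, 20.0, 30.0]) applied to the values 5.0, 12.0, 25.0
// outputs one row per bin with cumulative counts:
//   le: 0.0   _value: 0   (no values <= 0.0)
//   le: 10.0  _value: 1   (5.0)
//   le: 20.0  _value: 2   (5.0, 12.0)
//   le: 30.0  _value: 3   (5.0, 12.0, 25.0)
```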
## Bin helper functions
Flux provides two helper functions for generating histogram bins.
Each generates an array of floats designed to be used in the `histogram()` function's `bins` parameter.
### linearBins()
The [`linearBins()` function](/v2.0/reference/flux/functions/misc/linearbins) generates a list of linearly separated floats.
```js
linearBins(start: 0.0, width: 10.0, count: 10)
// Generated list: [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, +Inf]
```
### logarithmicBins()
The [`logarithmicBins()` function](/v2.0/reference/flux/functions/misc/logarithmicbins) generates a list of exponentially separated floats.
```js
logarithmicBins(start: 1.0, factor: 2.0, count: 10, infinity: true)
// Generated list: [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, +Inf]
```
## Examples
### Generating a histogram with linear bins
```js
from(bucket:"telegraf/autogen")
|> range(start: -5m)
|> filter(fn: (r) =>
r._measurement == "mem" and
r._field == "used_percent"
)
|> histogram(
bins: linearBins(
start:65.5,
width: 0.5,
count: 20,
infinity:false
)
)
```
###### Output table
```
Table: keys: [_start, _stop, _field, _measurement, host]
_start:time _stop:time _field:string _measurement:string host:string le:float _value:float
------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------ ---------------------------- ----------------------------
2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 65.5 5
2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 66 6
2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 66.5 8
2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 67 9
2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 67.5 9
2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 68 10
2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 68.5 12
2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 69 12
2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 69.5 15
2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 70 23
2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 70.5 30
2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 71 30
2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 71.5 30
2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 72 30
2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 72.5 30
2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 73 30
2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 73.5 30
2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 74 30
2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 74.5 30
2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 75 30
```
### Generating a histogram with logarithmic bins
```js
from(bucket:"telegraf/autogen")
|> range(start: -5m)
|> filter(fn: (r) =>
r._measurement == "mem" and
r._field == "used_percent"
)
|> histogram(
bins: logarithmicBins(
start:0.5,
factor: 2.0,
count: 10,
infinity:false
)
)
```
###### Output table
```
Table: keys: [_start, _stop, _field, _measurement, host]
_start:time _stop:time _field:string _measurement:string host:string le:float _value:float
------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------ ---------------------------- ----------------------------
2018-11-07T22:23:36.860664000Z 2018-11-07T22:28:36.860664000Z used_percent mem Scotts-MacBook-Pro.local 0.5 0
2018-11-07T22:23:36.860664000Z 2018-11-07T22:28:36.860664000Z used_percent mem Scotts-MacBook-Pro.local 1 0
2018-11-07T22:23:36.860664000Z 2018-11-07T22:28:36.860664000Z used_percent mem Scotts-MacBook-Pro.local 2 0
2018-11-07T22:23:36.860664000Z 2018-11-07T22:28:36.860664000Z used_percent mem Scotts-MacBook-Pro.local 4 0
2018-11-07T22:23:36.860664000Z 2018-11-07T22:28:36.860664000Z used_percent mem Scotts-MacBook-Pro.local 8 0
2018-11-07T22:23:36.860664000Z 2018-11-07T22:28:36.860664000Z used_percent mem Scotts-MacBook-Pro.local 16 0
2018-11-07T22:23:36.860664000Z 2018-11-07T22:28:36.860664000Z used_percent mem Scotts-MacBook-Pro.local 32 0
2018-11-07T22:23:36.860664000Z 2018-11-07T22:28:36.860664000Z used_percent mem Scotts-MacBook-Pro.local 64 2
2018-11-07T22:23:36.860664000Z 2018-11-07T22:28:36.860664000Z used_percent mem Scotts-MacBook-Pro.local 128 30
2018-11-07T22:23:36.860664000Z 2018-11-07T22:28:36.860664000Z used_percent mem Scotts-MacBook-Pro.local 256 30
```
---
title: Join data with Flux
seotitle: How to join data with Flux
description: This guide walks through joining data with Flux and outlines how it shapes your data in the process.
menu:
v2_0:
name: Join data
parent: How-to guides
weight: 5
---
The [`join()` function](/v2.0/reference/flux/functions/transformations/join) merges two or more
input streams, whose values are equal on a set of common columns, into a single output stream.
Flux allows you to join on any columns common between two data streams and opens the door
for operations such as cross-measurement joins and math across measurements.
To illustrate a join operation, this guide uses data captured by Telegraf and stored in
InfluxDB: memory usage and processes.
We'll join two data streams, one representing memory usage and the other representing the
total number of running processes, then calculate the average memory usage per running process.
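The general shape of the call we'll build toward is sketched below; the stream variable names and join columns are illustrative placeholders until the streams are actually defined:

```js
join(
  tables: {mem: memUsed, proc: procTotal},   // input streams, keyed by a name of your choosing
  on: ["_time", "_stop", "_start", "host"]   // columns whose values must match across streams
)
```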
## Define stream variables
In order to perform a join, you must have two streams of data.
Assign a variable to each data stream.
### Memory used variable
Define a `memUsed` variable that filters on the `mem` measurement and the `used` field.
This returns the amount of memory (in bytes) used.
###### memUsed stream definition
```js
memUsed = from(bucket: "telegraf/autogen")
|> range(start: -5m)
|> filter(fn: (r) =>
r._measurement == "mem" and
r._field == "used"
)
```
{{% truncate %}}
###### memUsed data output
```
Table: keys: [_start, _stop, _field, _measurement, host]
_start:time _stop:time _field:string _measurement:string host:string _time:time _value:int
------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------ ------------------------------ --------------------------
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z used mem host1.local 2018-11-06T05:50:00.000000000Z 10956333056
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z used mem host1.local 2018-11-06T05:50:10.000000000Z 11014008832
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z used mem host1.local 2018-11-06T05:50:20.000000000Z 11373428736
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z used mem host1.local 2018-11-06T05:50:30.000000000Z 11001421824
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z used mem host1.local 2018-11-06T05:50:40.000000000Z 10985852928
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z used mem host1.local 2018-11-06T05:50:50.000000000Z 10992279552
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z used mem host1.local 2018-11-06T05:51:00.000000000Z 11053568000
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z used mem host1.local 2018-11-06T05:51:10.000000000Z 11092242432
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z used mem host1.local 2018-11-06T05:51:20.000000000Z 11612774400
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z used mem host1.local 2018-11-06T05:51:30.000000000Z 11131961344
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z used mem host1.local 2018-11-06T05:51:40.000000000Z 11124805632
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z used mem host1.local 2018-11-06T05:51:50.000000000Z 11332464640
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z used mem host1.local 2018-11-06T05:52:00.000000000Z 11176923136
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z used mem host1.local 2018-11-06T05:52:10.000000000Z 11181068288
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z used mem host1.local 2018-11-06T05:52:20.000000000Z 11182579712
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z used mem host1.local 2018-11-06T05:52:30.000000000Z 11238862848
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z used mem host1.local 2018-11-06T05:52:40.000000000Z 11275296768
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z used mem host1.local 2018-11-06T05:52:50.000000000Z 11225411584
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z used mem host1.local 2018-11-06T05:53:00.000000000Z 11252690944
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z used mem host1.local 2018-11-06T05:53:10.000000000Z 11227029504
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z used mem host1.local 2018-11-06T05:53:20.000000000Z 11201646592
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z used mem host1.local 2018-11-06T05:53:30.000000000Z 11227897856
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z used mem host1.local 2018-11-06T05:53:40.000000000Z 11330428928
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z used mem host1.local 2018-11-06T05:53:50.000000000Z 11347976192
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z used mem host1.local 2018-11-06T05:54:00.000000000Z 11368271872
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z used mem host1.local 2018-11-06T05:54:10.000000000Z 11269623808
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z used mem host1.local 2018-11-06T05:54:20.000000000Z 11295637504
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z used mem host1.local 2018-11-06T05:54:30.000000000Z 11354423296
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z used mem host1.local 2018-11-06T05:54:40.000000000Z 11379687424
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z used mem host1.local 2018-11-06T05:54:50.000000000Z 11248926720
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z used mem host1.local 2018-11-06T05:55:00.000000000Z 11292524544
```
{{% /truncate %}}
### Total processes variable
Define a `procTotal` variable that filters on the `processes` measurement and the `total` field.
This returns the number of running processes.
###### procTotal stream definition
```js
procTotal = from(bucket: "telegraf/autogen")
|> range(start: -5m)
|> filter(fn: (r) =>
r._measurement == "processes" and
r._field == "total"
)
```
{{% truncate %}}
###### procTotal data output
```
Table: keys: [_start, _stop, _field, _measurement, host]
_start:time _stop:time _field:string _measurement:string host:string _time:time _value:int
------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------ ------------------------------ --------------------------
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z total processes host1.local 2018-11-06T05:50:00.000000000Z 470
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z total processes host1.local 2018-11-06T05:50:10.000000000Z 470
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z total processes host1.local 2018-11-06T05:50:20.000000000Z 471
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z total processes host1.local 2018-11-06T05:50:30.000000000Z 470
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z total processes host1.local 2018-11-06T05:50:40.000000000Z 469
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z total processes host1.local 2018-11-06T05:50:50.000000000Z 471
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z total processes host1.local 2018-11-06T05:51:00.000000000Z 470
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z total processes host1.local 2018-11-06T05:51:10.000000000Z 470
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z total processes host1.local 2018-11-06T05:51:20.000000000Z 470
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z total processes host1.local 2018-11-06T05:51:30.000000000Z 470
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z total processes host1.local 2018-11-06T05:51:40.000000000Z 469
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z total processes host1.local 2018-11-06T05:51:50.000000000Z 471
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z total processes host1.local 2018-11-06T05:52:00.000000000Z 471
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z total processes host1.local 2018-11-06T05:52:10.000000000Z 470
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z total processes host1.local 2018-11-06T05:52:20.000000000Z 470
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z total processes host1.local 2018-11-06T05:52:30.000000000Z 471
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z total processes host1.local 2018-11-06T05:52:40.000000000Z 472
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z total processes host1.local 2018-11-06T05:52:50.000000000Z 471
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z total processes host1.local 2018-11-06T05:53:00.000000000Z 470
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z total processes host1.local 2018-11-06T05:53:10.000000000Z 470
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z total processes host1.local 2018-11-06T05:53:20.000000000Z 470
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z total processes host1.local 2018-11-06T05:53:30.000000000Z 471
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z total processes host1.local 2018-11-06T05:53:40.000000000Z 471
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z total processes host1.local 2018-11-06T05:53:50.000000000Z 471
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z total processes host1.local 2018-11-06T05:54:00.000000000Z 471
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z total processes host1.local 2018-11-06T05:54:10.000000000Z 470
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z total processes host1.local 2018-11-06T05:54:20.000000000Z 471
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z total processes host1.local 2018-11-06T05:54:30.000000000Z 473
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z total processes host1.local 2018-11-06T05:54:40.000000000Z 471
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z total processes host1.local 2018-11-06T05:54:50.000000000Z 471
2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z total processes host1.local 2018-11-06T05:55:00.000000000Z 471
```
{{% /truncate %}}
## Join the two data streams
With the two data streams defined, use the `join()` function to join them together.
`join()` requires two parameters:
##### `tables`
A map of the tables to join, with keys used to alias each table in the output.
In the example below, `mem` is the alias for `memUsed` and `proc` is the alias for `procTotal`.
##### `on`
An array of strings defining the columns on which the tables will be joined.
_**Both tables must have all columns specified in this list.**_
```js
join(
tables: {mem:memUsed, proc:procTotal},
on: ["_time", "_stop", "_start", "host"]
)
```
{{% truncate %}}
###### Joined output table
```
Table: keys: [_field_mem, _field_proc, _measurement_mem, _measurement_proc, _start, _stop, host]
_field_mem:string _field_proc:string _measurement_mem:string _measurement_proc:string _start:time _stop:time host:string _time:time _value_mem:int _value_proc:int
---------------------- ---------------------- ----------------------- ------------------------ ------------------------------ ------------------------------ ------------------------ ------------------------------ -------------------------- --------------------------
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:50:00.000000000Z 10956333056 470
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:50:10.000000000Z 11014008832 470
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:50:20.000000000Z 11373428736 471
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:50:30.000000000Z 11001421824 470
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:50:40.000000000Z 10985852928 469
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:50:50.000000000Z 10992279552 471
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:51:00.000000000Z 11053568000 470
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:51:10.000000000Z 11092242432 470
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:51:20.000000000Z 11612774400 470
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:51:30.000000000Z 11131961344 470
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:51:40.000000000Z 11124805632 469
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:51:50.000000000Z 11332464640 471
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:52:00.000000000Z 11176923136 471
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:52:10.000000000Z 11181068288 470
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:52:20.000000000Z 11182579712 470
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:52:30.000000000Z 11238862848 471
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:52:40.000000000Z 11275296768 472
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:52:50.000000000Z 11225411584 471
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:53:00.000000000Z 11252690944 470
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:53:10.000000000Z 11227029504 470
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:53:20.000000000Z 11201646592 470
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:53:30.000000000Z 11227897856 471
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:53:40.000000000Z 11330428928 471
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:53:50.000000000Z 11347976192 471
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:54:00.000000000Z 11368271872 471
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:54:10.000000000Z 11269623808 470
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:54:20.000000000Z 11295637504 471
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:54:30.000000000Z 11354423296 473
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:54:40.000000000Z 11379687424 471
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:54:50.000000000Z 11248926720 471
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:55:00.000000000Z 11292524544 471
```
{{% /truncate %}}
Notice the output table includes the following columns:
- `_field_mem`
- `_field_proc`
- `_measurement_mem`
- `_measurement_proc`
- `_value_mem`
- `_value_proc`
These are the columns whose values were unique to each input table; `join()` disambiguates them by appending the table aliases.
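The suffixing behavior can be sketched in Python (an illustrative analog, not Flux itself): columns shared by both inputs but not listed in `on` are disambiguated by appending each table's alias.

```python
def join_tables(tables, on):
    """Join two aliased record streams on shared key columns,
    suffixing every other column with its table alias, mimicking
    how Flux join() disambiguates overlapping column names."""
    (a_name, a_rows), (b_name, b_rows) = tables.items()
    b_index = {tuple(r[k] for k in on): r for r in b_rows}
    joined = []
    for r in a_rows:
        match = b_index.get(tuple(r[k] for k in on))
        if match is None:
            continue  # inner join: keep only rows present in both streams
        row = {k: r[k] for k in on}
        for k, v in r.items():
            if k not in on:
                row[k + "_" + a_name] = v
        for k, v in match.items():
            if k not in on:
                row[k + "_" + b_name] = v
        joined.append(row)
    return joined

# Simplified single-row versions of the two streams above.
mem = [{"_time": 1, "host": "host1", "_field": "used", "_value": 10956333056}]
proc = [{"_time": 1, "host": "host1", "_field": "total", "_value": 470}]
result = join_tables({"mem": mem, "proc": proc}, on=["_time", "host"])
```

Running this produces one joined row with `_field_mem`, `_value_mem`, `_field_proc`, and `_value_proc` columns, matching the shape of the joined output table.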
## Calculate and create a new table
With the two data streams joined into a single table, use the
[`map()` function](/v2.0/reference/flux/functions/transformations/map)
to build a new table: map the existing `_time` column to a new `_time` column,
and divide `_value_mem` by `_value_proc` to populate a new `_value` column.
```js
join(tables: {mem:memUsed, proc:procTotal}, on: ["_time", "_stop", "_start", "host"])
|> map(fn: (r) => ({
_time: r._time,
_value: r._value_mem / r._value_proc
}))
```
{{% truncate %}}
###### Mapped table
```
Table: keys: [_field_mem, _field_proc, _measurement_mem, _measurement_proc, _start, _stop, host]
_field_mem:string _field_proc:string _measurement_mem:string _measurement_proc:string _start:time _stop:time host:string _time:time _value:int
---------------------- ---------------------- ----------------------- ------------------------ ------------------------------ ------------------------------ ------------------------ ------------------------------ --------------------------
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:50:00.000000000Z 23311346
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:50:10.000000000Z 23434061
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:50:20.000000000Z 24147407
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:50:30.000000000Z 23407280
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:50:40.000000000Z 23423993
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:50:50.000000000Z 23338173
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:51:00.000000000Z 23518229
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:51:10.000000000Z 23600515
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:51:20.000000000Z 24708030
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:51:30.000000000Z 23685024
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:51:40.000000000Z 23720267
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:51:50.000000000Z 24060434
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:52:00.000000000Z 23730197
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:52:10.000000000Z 23789506
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:52:20.000000000Z 23792722
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:52:30.000000000Z 23861704
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:52:40.000000000Z 23888340
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:52:50.000000000Z 23833145
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:53:00.000000000Z 23941895
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:53:10.000000000Z 23887296
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:53:20.000000000Z 23833290
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:53:30.000000000Z 23838424
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:53:40.000000000Z 24056112
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:53:50.000000000Z 24093367
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:54:00.000000000Z 24136458
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:54:10.000000000Z 23977922
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:54:20.000000000Z 23982245
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:54:30.000000000Z 24005123
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:54:40.000000000Z 24160695
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:54:50.000000000Z 23883071
used total mem processes 2018-11-06T05:50:00.000000000Z 2018-11-06T05:55:00.000000000Z Scotts-MacBook-Pro.local 2018-11-06T05:55:00.000000000Z 23975635
```
{{% /truncate %}}
This table represents the average amount of memory in bytes per running process.
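Because both `_value_mem` and `_value_proc` are integers, the division is integer division, which is why the mapped values above have no fractional part. A quick Python check of the first row (Python's floor division stands in for Flux's integer division, which agrees with it for positive operands):

```python
# First row of the joined table: memory used (bytes) and process count.
mem_used = 10956333056
proc_total = 470

# Integer division truncates the fractional part.
bytes_per_process = mem_used // proc_total
print(bytes_per_process)  # 23311346, the first _value in the mapped table
```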
## Real world example
The following function calculates the batch sizes written to an InfluxDB cluster by joining
fields from the `influxdb_httpd` and `influxdb_write` measurements to compare `pointReq` and `writeReq`.
The results are grouped by cluster ID so you can make comparisons across clusters.
```js
batchSize = (cluster_id, start=-1m, interval=10s) => {
httpd = from(bucket:"telegraf")
|> range(start:start)
|> filter(fn:(r) =>
r._measurement == "influxdb_httpd" and
r._field == "writeReq" and
r.cluster_id == cluster_id
)
|> aggregateWindow(every: interval, fn: mean)
|> derivative(nonNegative:true,unit:60s)
write = from(bucket:"telegraf")
|> range(start:start)
|> filter(fn:(r) =>
r._measurement == "influxdb_write" and
r._field == "pointReq" and
r.cluster_id == cluster_id
)
|> aggregateWindow(every: interval, fn: max)
|> derivative(nonNegative:true,unit:60s)
return join(
tables:{httpd:httpd, write:write},
on:["_time","_stop","_start","host"]
)
|> map(fn:(r) => ({
_time: r._time,
_value: r._value_httpd / r._value_write,
}))
    |> group(columns: ["cluster_id"])
}
batchSize(cluster_id: "enter cluster id here")
```
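The core of `batchSize()` is a ratio of two non-negative rates: points written per minute divided by write requests per minute. A minimal Python sketch of that calculation, using hypothetical sample counters rather than real cluster data:

```python
def non_negative_derivative(values, unit_per_interval=6):
    """Per-unit rate of change between successive counter readings,
    clamping negative deltas (counter resets) to None, similar in
    spirit to Flux derivative(nonNegative: true)."""
    rates = []
    for prev, curr in zip(values, values[1:]):
        delta = curr - prev
        rates.append(delta * unit_per_interval if delta >= 0 else None)
    return rates

# Hypothetical counters sampled every 10s (unit 60s, so multiply by 6).
point_req = [1000, 1600, 2350]   # total points written
write_req = [10, 14, 19]         # total write requests

point_rate = non_negative_derivative(point_req)
write_rate = non_negative_derivative(write_req)

# Average batch size per interval: points written per write request.
batch_size = [p / w for p, w in zip(point_rate, write_rate)]
print(batch_size)
```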
---
title: Use regular expressions in Flux
seotitle: How to use regular expressions in Flux
description: This guide walks through using regular expressions in evaluation logic in Flux functions.
menu:
v2_0:
name: Use regular expressions
parent: How-to guides
weight: 9
---
Regular expressions (regexes) are powerful tools for matching patterns in large collections of data.
In Flux, regular expressions are primarily used in the evaluation logic of predicate functions
for tasks such as filtering rows, dropping or keeping columns, and detecting state.
This guide shows how to use regular expressions in your Flux scripts.
## Go regular expression syntax
Flux uses Go's [regexp package](https://golang.org/pkg/regexp/) for regular expression search.
The links [below](#helpful-links) provide information about Go's regular expression syntax.
## Regular expression operators
Flux provides two comparison operators for use with regular expressions.
#### `=~`
When the expression on the left **MATCHES** the regular expression on the right, this evaluates to `true`.
#### `!~`
When the expression on the left **DOES NOT MATCH** the regular expression on the right, this evaluates to `true`.
## Regular expressions in Flux
When using regex matching in your Flux scripts, enclose your regular expressions with `/`.
The following is the basic regex comparison syntax:
###### Basic regex comparison syntax
```js
expression =~ /regex/
expression !~ /regex/
```
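In Python terms (an analog only, since Flux delegates to Go's RE2 engine), `=~` behaves like an unanchored search and `!~` is its negation:

```python
import re

def matches(expression, regex):
    """Unanchored search, like the Flux =~ operator."""
    return re.search(regex, expression) is not None

# =~ evaluates to true when the regex matches anywhere in the string.
assert matches("server01.example.com", r"example")
# !~ evaluates to true when the regex does not match.
assert not matches("server01.example.com", r"influx")
```

Note that Python's `re` is a backtracking engine while RE2 is not, but the basic syntax used in these examples behaves the same in both.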
## Examples
### Use a regex to filter by tag value
The following example filters records by the `cpu` tag.
It keeps only records whose `cpu` tag matches `cpu0`, `cpu1`, or `cpu2`.
```js
from(bucket: "telegraf/autogen")
|> range(start: -15m)
|> filter(fn: (r) =>
r._measurement == "cpu" and
r._field == "usage_user" and
r.cpu =~ /cpu[0-2]/
)
```
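Note that matching is unanchored: `/cpu[0-2]/` also matches any value that merely contains `cpu0`, `cpu1`, or `cpu2` (for example, `cpu10`). To match those three values exactly, anchor the pattern with `^` and `$`. A Python illustration of the difference (Python's `re.search` is unanchored, like Flux's `=~`):

```python
import re

unanchored = r"cpu[0-2]"
anchored = r"^cpu[0-2]$"

# Unanchored: "cpu10" contains the substring "cpu1", so it matches.
assert re.search(unanchored, "cpu10") is not None

# Anchored: only the exact values cpu0, cpu1, and cpu2 match.
assert re.search(anchored, "cpu10") is None
assert all(re.search(anchored, c) for c in ["cpu0", "cpu1", "cpu2"])
```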
### Use a regex to filter by field key
The following example keeps only records whose field key contains `_percent`.
```js
from(bucket: "telegraf/autogen")
|> range(start: -15m)
|> filter(fn: (r) =>
r._measurement == "mem" and
r._field =~ /_percent/
)
```
### Drop columns matching a regex
The following example drops columns whose names do not begin with `_`.
```js
from(bucket: "telegraf/autogen")
|> range(start: -15m)
|> filter(fn: (r) => r._measurement == "mem")
|> drop(fn: (column) => column !~ /_.*/)
```
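The unanchoring caveat applies here too: because RE2 search is unanchored, `/_.*/` matches any column name that contains an underscore anywhere, not only names that begin with one. To strictly keep only `_`-prefixed columns, anchor the pattern with `^`. A Python sketch of the drop predicate (illustrative, not Flux):

```python
import re

columns = ["_time", "_start", "_stop", "_field", "_measurement", "_value", "host"]

# Keep columns matching the regex; drop the rest (the inverse of
# Flux's drop(fn: (column) => column !~ /regex/)).
kept_unanchored = [c for c in columns if re.search(r"_.*", c)]
kept_anchored = [c for c in columns if re.search(r"^_", c)]

# On these column names, both patterns drop only "host"...
assert kept_unanchored == kept_anchored
# ...but they differ on a name with an interior underscore.
assert re.search(r"_.*", "usage_user") is not None
assert re.search(r"^_", "usage_user") is None
```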
## Helpful links
##### Syntax documentation
[regexp Syntax GoDoc](https://godoc.org/regexp/syntax)
[RE2 Syntax Overview](https://github.com/google/re2/wiki/Syntax)
##### Go regex testers
[Regex Tester - Golang](https://regex-golang.appspot.com/assets/html/index.html)
[Regex101](https://regex101.com/)
---
title: Sort and limit data with Flux
seotitle: How to sort and limit data with Flux
description: >
This guide walks through sorting and limiting data with Flux and outlines how
it shapes your data in the process.
menu:
v2_0:
name: Sort and limit data
parent: How-to guides
weight: 6
---
The [`sort()` function](/v2.0/reference/flux/functions/transformations/sort)
orders the records within each table.
The following example orders system uptime first by region, then host, then value.
```js
from(bucket:"telegraf/autogen")
|> range(start:-12h)
|> filter(fn: (r) =>
r._measurement == "system" and
r._field == "uptime"
)
|> sort(columns:["region", "host", "_value"])
```
The [`limit()` function](/v2.0/reference/flux/functions/transformations/limit)
limits the number of records in output tables to a fixed number, `n`.
The following example shows up to 10 records from the past hour.
```js
from(bucket:"telegraf/autogen")
|> range(start:-1h)
|> limit(n:10)
```
You can use `sort()` and `limit()` together to show the top N records.
The example below returns the 10 top system uptime values sorted first by
region, then host, then value.
```js
from(bucket:"telegraf/autogen")
|> range(start:-12h)
|> filter(fn: (r) =>
r._measurement == "system" and
r._field == "uptime"
)
|> sort(columns:["region", "host", "_value"])
|> limit(n:10)
```
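The sort-then-limit pattern above is the classic way to get the top N records. A minimal Python analog, using hypothetical sample values rather than real uptime data:

```python
# Sample (region, host, uptime) records -- hypothetical data.
records = [
    ("east", "host1", 86400),
    ("east", "host2", 3600),
    ("west", "host1", 172800),
    ("west", "host2", 7200),
]

# Sort by region, then host, then value, as in the Flux example...
ordered = sorted(records, key=lambda r: (r[0], r[1], r[2]))
# ...then keep the first n records, as limit(n: 2) would.
top = ordered[:2]
print(top)
```

As in Flux, the sort runs first, so `limit()` keeps the first records of the ordered output.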
You have now created a Flux query that sorts and limits data.
Flux also provides the [`top()`](/v2.0/reference/flux/functions/transformations/selectors/top)
and [`bottom()`](/v2.0/reference/flux/functions/transformations/selectors/bottom)
functions, which sort and limit in a single operation.
---
title: Window and aggregate data with Flux
seotitle: How to window and aggregate data with Flux
description: >
This guide walks through windowing and aggregating data with Flux and outlines
how it shapes your data in the process.
menu:
v2_0:
name: Window and aggregate data
parent: How-to guides
weight: 2
---
A common operation performed with time series data is grouping data into windows of time,
or "windowing" data, then aggregating windowed values into a new value.
This guide walks through windowing and aggregating data with Flux and demonstrates
how data is shaped in the process.
{{% note %}}
The following example is an in-depth walk-through of the steps required to window and aggregate data.
The [`aggregateWindow()` function](#summing-up) performs these operations for you, but understanding
how data is shaped in the process helps to successfully create your desired output.
{{% /note %}}
## Data set
For the purposes of this guide, define a variable that represents your base data set.
The following example queries the memory usage of the host machine.
```js
dataSet = from(bucket: "telegraf/autogen")
|> range(start: -5m)
|> filter(fn: (r) =>
r._measurement == "mem" and
r._field == "used_percent"
)
|> drop(columns: ["host"])
```
{{% note %}}
This example drops the `host` column from the returned data because the memory data
is only tracked for a single host, and doing so simplifies the output tables.
Dropping the `host` column is optional and is not recommended if you monitor memory
on multiple hosts.
{{% /note %}}
`dataSet` can now be used to represent your base data, which will look similar to the following:
{{% truncate %}}
```
Table: keys: [_start, _stop, _field, _measurement]
_start:time _stop:time _field:string _measurement:string _time:time _value:float
------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------------ ----------------------------
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:50:00.000000000Z 71.11611366271973
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:50:10.000000000Z 67.39630699157715
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:50:20.000000000Z 64.16666507720947
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:50:30.000000000Z 64.19951915740967
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:50:40.000000000Z 64.2122745513916
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:50:50.000000000Z 64.22209739685059
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:51:00.000000000Z 64.6336555480957
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:51:10.000000000Z 64.16516304016113
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:51:20.000000000Z 64.18349742889404
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:51:30.000000000Z 64.20474052429199
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:51:40.000000000Z 68.65062713623047
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:51:50.000000000Z 67.20139980316162
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:52:00.000000000Z 70.9143877029419
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:52:10.000000000Z 64.14549350738525
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:52:20.000000000Z 64.15379047393799
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:52:30.000000000Z 64.1592264175415
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:52:40.000000000Z 64.18190002441406
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:52:50.000000000Z 64.28837776184082
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:53:00.000000000Z 64.29731845855713
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:53:10.000000000Z 64.36963081359863
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:53:20.000000000Z 64.37397003173828
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:53:30.000000000Z 64.44413661956787
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:53:40.000000000Z 64.42906856536865
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:53:50.000000000Z 64.44573402404785
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:54:00.000000000Z 64.48912620544434
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:54:10.000000000Z 64.49522972106934
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:54:20.000000000Z 64.48652744293213
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:54:30.000000000Z 64.49949741363525
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:54:40.000000000Z 64.4949197769165
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:54:50.000000000Z 64.49787616729736
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:55:00.000000000Z 64.49816226959229
```
{{% /truncate %}}
## Windowing data
Use the [`window()` function](/v2.0/reference/flux/functions/transformations/window)
to group your data based on time bounds.
The most common parameter passed to `window()` is `every`, which
defines the duration of each window.
Other parameters are available, but for this example, window the base data
set into one-minute windows.
```js
dataSet
|> window(every: 1m)
```
Each window of time is output in its own table containing all records that fall within the window.
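Conceptually, `window(every: 1m)` sets each record's `_start` and `_stop` to the bounds of the one-minute interval its `_time` falls in, then regroups records by those bounds. A Python sketch of that bucketing (timestamps as epoch seconds, illustrative only):

```python
from collections import defaultdict

def window(records, every):
    """Group (time, value) records into fixed-duration windows by
    flooring each timestamp to its window start, similar in spirit
    to Flux window(every:)."""
    tables = defaultdict(list)
    for t, value in records:
        start = t - (t % every)
        tables[(start, start + every)].append((t, value))
    return dict(tables)

# Records every 10 seconds, windowed into 60-second tables.
records = [(0, 71.1), (10, 67.4), (50, 64.2), (60, 64.6), (70, 64.2)]
tables = window(records, every=60)
print(sorted(tables))  # [(0, 60), (60, 120)]
```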
{{% truncate %}}
###### window() output tables
```
Table: keys: [_start, _stop, _field, _measurement]
_start:time _stop:time _field:string _measurement:string _time:time _value:float
------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------------ ----------------------------
2018-11-03T17:50:00.000000000Z 2018-11-03T17:51:00.000000000Z used_percent mem 2018-11-03T17:50:00.000000000Z 71.11611366271973
2018-11-03T17:50:00.000000000Z 2018-11-03T17:51:00.000000000Z used_percent mem 2018-11-03T17:50:10.000000000Z 67.39630699157715
2018-11-03T17:50:00.000000000Z 2018-11-03T17:51:00.000000000Z used_percent mem 2018-11-03T17:50:20.000000000Z 64.16666507720947
2018-11-03T17:50:00.000000000Z 2018-11-03T17:51:00.000000000Z used_percent mem 2018-11-03T17:50:30.000000000Z 64.19951915740967
2018-11-03T17:50:00.000000000Z 2018-11-03T17:51:00.000000000Z used_percent mem 2018-11-03T17:50:40.000000000Z 64.2122745513916
2018-11-03T17:50:00.000000000Z 2018-11-03T17:51:00.000000000Z used_percent mem 2018-11-03T17:50:50.000000000Z 64.22209739685059
Table: keys: [_start, _stop, _field, _measurement]
_start:time _stop:time _field:string _measurement:string _time:time _value:float
------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------------ ----------------------------
2018-11-03T17:51:00.000000000Z 2018-11-03T17:52:00.000000000Z used_percent mem 2018-11-03T17:51:00.000000000Z 64.6336555480957
2018-11-03T17:51:00.000000000Z 2018-11-03T17:52:00.000000000Z used_percent mem 2018-11-03T17:51:10.000000000Z 64.16516304016113
2018-11-03T17:51:00.000000000Z 2018-11-03T17:52:00.000000000Z used_percent mem 2018-11-03T17:51:20.000000000Z 64.18349742889404
2018-11-03T17:51:00.000000000Z 2018-11-03T17:52:00.000000000Z used_percent mem 2018-11-03T17:51:30.000000000Z 64.20474052429199
2018-11-03T17:51:00.000000000Z 2018-11-03T17:52:00.000000000Z used_percent mem 2018-11-03T17:51:40.000000000Z 68.65062713623047
2018-11-03T17:51:00.000000000Z 2018-11-03T17:52:00.000000000Z used_percent mem 2018-11-03T17:51:50.000000000Z 67.20139980316162
Table: keys: [_start, _stop, _field, _measurement]
_start:time _stop:time _field:string _measurement:string _time:time _value:float
------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------------ ----------------------------
2018-11-03T17:52:00.000000000Z 2018-11-03T17:53:00.000000000Z used_percent mem 2018-11-03T17:52:00.000000000Z 70.9143877029419
2018-11-03T17:52:00.000000000Z 2018-11-03T17:53:00.000000000Z used_percent mem 2018-11-03T17:52:10.000000000Z 64.14549350738525
2018-11-03T17:52:00.000000000Z 2018-11-03T17:53:00.000000000Z used_percent mem 2018-11-03T17:52:20.000000000Z 64.15379047393799
2018-11-03T17:52:00.000000000Z 2018-11-03T17:53:00.000000000Z used_percent mem 2018-11-03T17:52:30.000000000Z 64.1592264175415
2018-11-03T17:52:00.000000000Z 2018-11-03T17:53:00.000000000Z used_percent mem 2018-11-03T17:52:40.000000000Z 64.18190002441406
2018-11-03T17:52:00.000000000Z 2018-11-03T17:53:00.000000000Z used_percent mem 2018-11-03T17:52:50.000000000Z 64.28837776184082
Table: keys: [_start, _stop, _field, _measurement]
_start:time _stop:time _field:string _measurement:string _time:time _value:float
------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------------ ----------------------------
2018-11-03T17:53:00.000000000Z 2018-11-03T17:54:00.000000000Z used_percent mem 2018-11-03T17:53:00.000000000Z 64.29731845855713
2018-11-03T17:53:00.000000000Z 2018-11-03T17:54:00.000000000Z used_percent mem 2018-11-03T17:53:10.000000000Z 64.36963081359863
2018-11-03T17:53:00.000000000Z 2018-11-03T17:54:00.000000000Z used_percent mem 2018-11-03T17:53:20.000000000Z 64.37397003173828
2018-11-03T17:53:00.000000000Z 2018-11-03T17:54:00.000000000Z used_percent mem 2018-11-03T17:53:30.000000000Z 64.44413661956787
2018-11-03T17:53:00.000000000Z 2018-11-03T17:54:00.000000000Z used_percent mem 2018-11-03T17:53:40.000000000Z 64.42906856536865
2018-11-03T17:53:00.000000000Z 2018-11-03T17:54:00.000000000Z used_percent mem 2018-11-03T17:53:50.000000000Z 64.44573402404785
Table: keys: [_start, _stop, _field, _measurement]
_start:time _stop:time _field:string _measurement:string _time:time _value:float
------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------------ ----------------------------
2018-11-03T17:54:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:54:00.000000000Z 64.48912620544434
2018-11-03T17:54:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:54:10.000000000Z 64.49522972106934
2018-11-03T17:54:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:54:20.000000000Z 64.48652744293213
2018-11-03T17:54:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:54:30.000000000Z 64.49949741363525
2018-11-03T17:54:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:54:40.000000000Z 64.4949197769165
2018-11-03T17:54:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:54:50.000000000Z 64.49787616729736
Table: keys: [_start, _stop, _field, _measurement]
_start:time _stop:time _field:string _measurement:string _time:time _value:float
------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------------ ----------------------------
2018-11-03T17:55:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:55:00.000000000Z 64.49816226959229
```
{{% /truncate %}}
When visualized in the InfluxDB UI, each window table is displayed in a different color.
![Windowed data](/img/simple-windowed-data.png)
## Aggregate data
[Aggregate functions](/v2.0/reference/flux/functions/transformations/aggregates) take the values
of all rows in a table and use them to perform an aggregate operation.
The result is output as a new value in a single-row table.
Since windowed data is split into separate tables, aggregate operations run against
each table separately and output new tables containing only the aggregated value.
For this example, use the [`mean()` function](/v2.0/reference/flux/functions/transformations/aggregates/mean)
to output the average of each window:
```js
dataSet
|> window(every: 1m)
|> mean()
```
{{% truncate %}}
###### mean() output tables
```
Table: keys: [_start, _stop, _field, _measurement]
_start:time _stop:time _field:string _measurement:string _value:float
------------------------------ ------------------------------ ---------------------- ---------------------- ----------------------------
2018-11-03T17:50:00.000000000Z 2018-11-03T17:51:00.000000000Z used_percent mem 65.88549613952637
Table: keys: [_start, _stop, _field, _measurement]
_start:time _stop:time _field:string _measurement:string _value:float
------------------------------ ------------------------------ ---------------------- ---------------------- ----------------------------
2018-11-03T17:51:00.000000000Z 2018-11-03T17:52:00.000000000Z used_percent mem 65.50651391347249
Table: keys: [_start, _stop, _field, _measurement]
_start:time _stop:time _field:string _measurement:string _value:float
------------------------------ ------------------------------ ---------------------- ---------------------- ----------------------------
2018-11-03T17:52:00.000000000Z 2018-11-03T17:53:00.000000000Z used_percent mem 65.30719598134358
Table: keys: [_start, _stop, _field, _measurement]
_start:time _stop:time _field:string _measurement:string _value:float
------------------------------ ------------------------------ ---------------------- ---------------------- ----------------------------
2018-11-03T17:53:00.000000000Z 2018-11-03T17:54:00.000000000Z used_percent mem 64.39330975214641
Table: keys: [_start, _stop, _field, _measurement]
_start:time _stop:time _field:string _measurement:string _value:float
------------------------------ ------------------------------ ---------------------- ---------------------- ----------------------------
2018-11-03T17:54:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 64.49386278788249
Table: keys: [_start, _stop, _field, _measurement]
_start:time _stop:time _field:string _measurement:string _value:float
------------------------------ ------------------------------ ---------------------- ---------------------- ----------------------------
2018-11-03T17:55:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 64.49816226959229
```
{{% /truncate %}}
Because each data point is contained in its own table, when visualized,
they appear as single, unconnected points.
![Aggregated windowed data](/img/simple-windowed-aggregate-data.png)
### Recreate the time column
**Notice the `_time` column is not in the [aggregated output tables](#mean-output-tables).**
Because records in each table are aggregated together, their timestamps no longer
apply and the column is removed from the group key and table.
Also notice the `_start` and `_stop` columns still exist.
These represent the lower and upper bounds of the time window.
Many Flux functions rely on the `_time` column.
To further process your data after an aggregate function, you need to re-add `_time`.
Use the [`duplicate()` function](/v2.0/reference/flux/functions/transformations/duplicate) to
duplicate either the `_start` or `_stop` column as a new `_time` column.
```js
dataSet
|> window(every: 1m)
|> mean()
|> duplicate(column: "_stop", as: "_time")
```
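The aggregate-then-duplicate step can be sketched the same way: collapse each window table to its mean, then copy the window's `_stop` boundary into a new `_time` column. (The window boundaries and values below are illustrative.)

```python
from datetime import datetime, timedelta

def mean_then_duplicate(tables, every):
    """Collapse each window table to one row (its mean), then copy the
    window's _stop bound into a new _time column, like duplicate()."""
    rows = []
    for start, records in sorted(tables.items()):
        stop = start + every                                # upper bound (_stop)
        avg = sum(v for _, v in records) / len(records)     # mean()
        rows.append((stop, avg))                            # _time duplicated from _stop
    return rows

tables = {
    datetime(2018, 11, 3, 17, 50): [
        (datetime(2018, 11, 3, 17, 50, 0), 66.0),
        (datetime(2018, 11, 3, 17, 50, 30), 64.0),
    ],
    datetime(2018, 11, 3, 17, 51): [
        (datetime(2018, 11, 3, 17, 51, 10), 65.5),
    ],
}
rows = mean_then_duplicate(tables, every=timedelta(minutes=1))
# One (time, mean) row per window.
```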
{{% truncate %}}
###### duplicate() output tables
```
Table: keys: [_start, _stop, _field, _measurement]
_start:time _stop:time _field:string _measurement:string _time:time _value:float
------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------------ ----------------------------
2018-11-03T17:50:00.000000000Z 2018-11-03T17:51:00.000000000Z used_percent mem 2018-11-03T17:51:00.000000000Z 65.88549613952637
Table: keys: [_start, _stop, _field, _measurement]
_start:time _stop:time _field:string _measurement:string _time:time _value:float
------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------------ ----------------------------
2018-11-03T17:51:00.000000000Z 2018-11-03T17:52:00.000000000Z used_percent mem 2018-11-03T17:52:00.000000000Z 65.50651391347249
Table: keys: [_start, _stop, _field, _measurement]
_start:time _stop:time _field:string _measurement:string _time:time _value:float
------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------------ ----------------------------
2018-11-03T17:52:00.000000000Z 2018-11-03T17:53:00.000000000Z used_percent mem 2018-11-03T17:53:00.000000000Z 65.30719598134358
Table: keys: [_start, _stop, _field, _measurement]
_start:time _stop:time _field:string _measurement:string _time:time _value:float
------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------------ ----------------------------
2018-11-03T17:53:00.000000000Z 2018-11-03T17:54:00.000000000Z used_percent mem 2018-11-03T17:54:00.000000000Z 64.39330975214641
Table: keys: [_start, _stop, _field, _measurement]
_start:time _stop:time _field:string _measurement:string _time:time _value:float
------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------------ ----------------------------
2018-11-03T17:54:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:55:00.000000000Z 64.49386278788249
Table: keys: [_start, _stop, _field, _measurement]
_start:time _stop:time _field:string _measurement:string _time:time _value:float
------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------------ ----------------------------
2018-11-03T17:55:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:55:00.000000000Z 64.49816226959229
```
{{% /truncate %}}
## "Unwindow" aggregate tables
Keeping aggregate values in separate tables generally isn't how you want your data structured.
Use the `window()` function to "unwindow" your data into a single infinite (`inf`) window.
```js
dataSet
|> window(every: 1m)
|> mean()
|> duplicate(column: "_stop", as: "_time")
|> window(every: inf)
```
{{% note %}}
Windowing requires a `_time` column which is why it's necessary to
[recreate the `_time` column](#recreate-the-time-column) after an aggregation.
{{% /note %}}
###### Unwindowed output table
```
Table: keys: [_start, _stop, _field, _measurement]
_start:time _stop:time _field:string _measurement:string _time:time _value:float
------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------------ ----------------------------
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:51:00.000000000Z 65.88549613952637
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:52:00.000000000Z 65.50651391347249
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:53:00.000000000Z 65.30719598134358
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:54:00.000000000Z 64.39330975214641
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:55:00.000000000Z 64.49386278788249
2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:55:00.000000000Z 64.49816226959229
```
With the aggregate values in a single table, data points in the visualization are connected.
![Unwindowed aggregate data](/img/simple-unwindowed-data.png)
## Summing up
You have now created a Flux query that windows and aggregates data.
The data transformation process outlined in this guide should be used for all aggregation operations.
Flux also provides the [`aggregateWindow()` function](/v2.0/reference/flux/functions/transformations/aggregates/aggregatewindow)
which performs all of these separate operations for you.
The following Flux query returns the same results:
###### aggregateWindow function
```js
dataSet
|> aggregateWindow(every: 1m, fn: mean)
```
@ -0,0 +1,17 @@
---
title: Command line tools
seotitle: Command line tools for managing InfluxDB
description: >
InfluxDB provides command line tools designed to aid in managing and working
with InfluxDB from the command line.
menu:
v2_0_ref:
name: Command line tools
weight: 1
---
InfluxDB provides command line tools designed to aid in managing and working
with InfluxDB from the command line.
The following command line interfaces (CLIs) are available:
{{% children %}}
@ -0,0 +1,43 @@
---
title: influx - InfluxDB command line interface
seotitle: influx - InfluxDB command line interface
description: >
The influx CLI includes commands to manage many aspects of InfluxDB,
including buckets, organizations, users, tasks, etc.
menu:
v2_0_ref:
name: influx
parent: Command line tools
weight: 1
---
The `influx` command line interface (CLI) includes commands to manage many aspects of InfluxDB,
including buckets, organizations, users, tasks, etc.
## Usage
```
influx [flags]
influx [command]
```
## Commands
| Command | Description |
|:------- |:----------- |
| [auth](/v2.0/reference/cli/influx/auth) | Authorization management commands |
| [bucket](/v2.0/reference/cli/influx/bucket) | Bucket management commands |
| [help](/v2.0/reference/cli/influx/help) | Help about any command |
| [org](/v2.0/reference/cli/influx/org) | Organization management commands |
| [query](/v2.0/reference/cli/influx/query) | Execute a Flux query |
| [repl](/v2.0/reference/cli/influx/repl) | Interactive REPL (read-eval-print-loop) |
| [setup](/v2.0/reference/cli/influx/setup) | Create default username, password, org, bucket, etc. |
| [task](/v2.0/reference/cli/influx/task) | Task management commands |
| [user](/v2.0/reference/cli/influx/user) | User management commands |
| [write](/v2.0/reference/cli/influx/write) | Write points to InfluxDB |
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------:|
| `-h`, `--help` | Help for the influx command | |
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |
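For example, to run the initial setup process against a specific InfluxDB instance (the host URL is the documented default and serves here as a placeholder):

```
influx --host http://localhost:9999 setup
```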
@ -0,0 +1,41 @@
---
title: influx auth - Authorization management commands
description: The 'influx auth' command and its subcommands manage authorizations in InfluxDB.
menu:
v2_0_ref:
name: influx auth
parent: influx
weight: 1
---
The `influx auth` command and its subcommands manage authorizations in InfluxDB.
## Usage
```
influx auth [flags]
influx auth [command]
```
#### Aliases
`auth`, `authorization`
## Subcommands
| Subcommand | Description |
|:---------- |:----------- |
| [active](/v2.0/reference/cli/influx/auth/active) | Activate an authorization |
| [create](/v2.0/reference/cli/influx/auth/create) | Create authorization |
| [delete](/v2.0/reference/cli/influx/auth/delete) | Delete authorization |
| [find](/v2.0/reference/cli/influx/auth/find) | Find authorization |
| [inactive](/v2.0/reference/cli/influx/auth/inactive) | Inactivate an authorization |
## Flags
| Flag | Description |
|:---- |:----------- |
| `-h`, `--help` | Help for the `auth` command |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |
@ -0,0 +1,29 @@
---
title: influx auth active
description: The 'influx auth active' command activates an authorization.
menu:
v2_0_ref:
name: influx auth active
parent: influx auth
weight: 1
---
The `influx auth active` command activates an authorization in InfluxDB.
## Usage
```
influx auth active [flags]
```
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------:|
| `-h`, `--help` | Help for the `active` command | |
| `-i`, `--id` | The authorization ID **(Required)** | string |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |
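For example, to activate an authorization by ID (the ID below is a placeholder):

```
influx auth active -i 03a2e4cadd31d000
```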
@ -0,0 +1,44 @@
---
title: influx auth create
description: The 'influx auth create' command creates an authorization in InfluxDB.
menu:
v2_0_ref:
name: influx auth create
parent: influx auth
weight: 1
---
The `influx auth create` command creates an authorization in InfluxDB.
## Usage
```
influx auth create [flags]
```
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------: |
| `-h`, `--help` | Help for the `create` command | |
| `-o`, `--org` | The organization name **(Required)** | string |
| `--read-bucket` | The bucket ID | stringArray |
| `--read-buckets` | Grants the permission to perform read actions against organization buckets | |
| `--read-dashboards` | Grants the permission to read dashboards | |
| `--read-orgs` | Grants the permission to read organizations | |
| `--read-tasks` | Grants the permission to read tasks | |
| `--read-telegrafs` | Grants the permission to read telegraf configs | |
| `--read-user` | Grants the permission to perform read actions against organization users | |
| `-u`, `--user` | The user name | string |
| `--write-bucket` | The bucket ID | stringArray |
| `--write-buckets` | Grants the permission to perform mutative actions against organization buckets | |
| `--write-dashboards` | Grants the permission to create dashboards | |
| `--write-orgs` | Grants the permission to create organizations | |
| `--write-tasks` | Grants the permission to create tasks | |
| `--write-telegrafs` | Grants the permission to create telegraf configs | |
| `--write-user` | Grants the permission to perform mutative actions against organization users | |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |
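For example, to create an authorization that can read and write an organization's buckets (the organization name is a placeholder):

```
influx auth create -o my-org --read-buckets --write-buckets
```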
@ -0,0 +1,29 @@
---
title: influx auth delete
description: The 'influx auth delete' command deletes an authorization in InfluxDB.
menu:
v2_0_ref:
name: influx auth delete
parent: influx auth
weight: 1
---
The `influx auth delete` command deletes an authorization in InfluxDB.
## Usage
```
influx auth delete [flags]
```
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------: |
| `-h`, `--help` | Help for the `delete` command | |
| `-i`, `--id` | The authorization ID **(Required)** | string |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |
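For example, to delete an authorization by ID (the ID below is a placeholder):

```
influx auth delete -i 03a2e4cadd31d000
```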
@ -0,0 +1,31 @@
---
title: influx auth find
description: The 'influx auth find' command lists and searches authorizations in InfluxDB.
menu:
v2_0_ref:
name: influx auth find
parent: influx auth
weight: 1
---
The `influx auth find` command lists and searches authorizations in InfluxDB.
## Usage
```
influx auth find [flags]
```
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------: |
| `-h`, `--help` | Help for the `find` command | |
| `-i`, `--id` | The authorization ID | string |
| `-u`, `--user` | The user | string |
| `--user-id` | The user ID | string |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |
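For example, to list authorizations belonging to a specific user (the username is a placeholder):

```
influx auth find -u johndoe
```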
@ -0,0 +1,29 @@
---
title: influx auth inactive
description: The 'influx auth inactive' command inactivates an authorization in InfluxDB.
menu:
v2_0_ref:
name: influx auth inactive
parent: influx auth
weight: 1
---
The `influx auth inactive` command inactivates an authorization in InfluxDB.
## Usage
```
influx auth inactive [flags]
```
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------: |
| `-h`, `--help` | Help for the `inactive` command | |
| `-i`, `--id` | The authorization ID **(Required)** | string |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |
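For example, to inactivate an authorization by ID (the ID below is a placeholder):

```
influx auth inactive -i 03a2e4cadd31d000
```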
@ -0,0 +1,37 @@
---
title: influx bucket - Bucket management commands
description: The 'influx bucket' command and its subcommands manage buckets in InfluxDB.
menu:
v2_0_ref:
name: influx bucket
parent: influx
weight: 1
---
The `influx bucket` command and its subcommands manage buckets in InfluxDB.
## Usage
```
influx bucket [flags]
influx bucket [command]
```
## Subcommands
| Subcommand | Description |
|:---------- |:----------- |
| [create](/v2.0/reference/cli/influx/bucket/create) | Create bucket |
| [delete](/v2.0/reference/cli/influx/bucket/delete) | Delete bucket |
| [find](/v2.0/reference/cli/influx/bucket/find) | Find buckets |
| [update](/v2.0/reference/cli/influx/bucket/update) | Update bucket |
## Flags
| Flag | Description |
|:---- |:----------- |
| `-h`, `--help` | Help for the `bucket` command |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |
@ -0,0 +1,32 @@
---
title: influx bucket create
description: The 'influx bucket create' command creates a new bucket in InfluxDB.
menu:
v2_0_ref:
name: influx bucket create
parent: influx bucket
weight: 1
---
The `influx bucket create` command creates a new bucket in InfluxDB.
## Usage
```
influx bucket create [flags]
```
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------: |
| `-h`, `--help` | Help for the `create` command | |
| `-n`, `--name` | Name of bucket that will be created | string |
| `-o`, `--org` | Name of the organization that owns the bucket | string |
| `--org-id` | The ID of the organization that owns the bucket | string |
| `-r`, `--retention` | Duration in nanoseconds data will live in bucket | duration |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |
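For example, to create a bucket with a 72-hour retention period (the bucket and organization names are placeholders, and the retention value assumes a Go-style duration string):

```
influx bucket create -n my-bucket -o my-org -r 72h
```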
@ -0,0 +1,29 @@
---
title: influx bucket delete
description: The 'influx bucket delete' command deletes a bucket from InfluxDB and all the data it contains.
menu:
v2_0_ref:
name: influx bucket delete
parent: influx bucket
weight: 1
---
The `influx bucket delete` command deletes a bucket from InfluxDB and all the data it contains.
## Usage
```
influx bucket delete [flags]
```
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------: |
| `-h`, `--help` | Help for the `delete` command | |
| `-i`, `--id` | The bucket ID **(Required)** | string |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |
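For example, to delete a bucket by ID (the ID below is a placeholder):

```
influx bucket delete -i 034ad714fdd6f000
```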
@ -0,0 +1,32 @@
---
title: influx bucket find
description: The 'influx bucket find' command lists and searches for buckets in InfluxDB.
menu:
v2_0_ref:
name: influx bucket find
parent: influx bucket
weight: 1
---
The `influx bucket find` command lists and searches for buckets in InfluxDB.
## Usage
```
influx bucket find [flags]
```
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------: |
| `-h`, `--help` | Help for the `find` command | |
| `-i`, `--id` | The bucket ID | string |
| `-n`, `--name` | The bucket name | string |
| `-o`, `--org` | The bucket organization name | string |
| `--org-id` | The bucket organization ID | string |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |
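For example, to list all buckets owned by an organization (the organization name is a placeholder):

```
influx bucket find -o my-org
```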
@ -0,0 +1,31 @@
---
title: influx bucket update
description: The 'influx bucket update' command updates information associated with buckets in InfluxDB.
menu:
v2_0_ref:
name: influx bucket update
parent: influx bucket
weight: 1
---
The `influx bucket update` command updates information associated with buckets in InfluxDB.
## Usage
```
influx bucket update [flags]
```
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------: |
| `-h`, `--help` | Help for the `update` command | |
| `-i`, `--id` | The bucket ID **(Required)** | string |
| `-n`, `--name` | New bucket name | string |
| `-r`, `--retention` | New duration data will live in bucket | duration |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |
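For example, to rename a bucket (the ID and new name are placeholders):

```
influx bucket update -i 034ad714fdd6f000 -n new-bucket-name
```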
@ -0,0 +1,28 @@
---
title: influx help - Help command for the influx CLI
description: The 'influx help' command provides help for any command in the `influx` command line interface.
menu:
v2_0_ref:
name: influx help
parent: influx
weight: 1
---
The `influx help` command provides help for any command in the `influx` command line interface.
## Usage
```
influx help [command] [flags]
```
## Flags
| Flag | Description |
|:---- |:----------- |
| `-h`, `--help` | Help for help |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |
@ -0,0 +1,41 @@
---
title: influx org - Organization management commands
description: The 'influx org' command and its subcommands manage organization information in InfluxDB.
menu:
v2_0_ref:
name: influx org
parent: influx
weight: 1
---
The `influx org` command and its subcommands manage organization information in InfluxDB.
## Usage
```
influx org [flags]
influx org [command]
```
#### Aliases
`org`, `organization`
## Subcommands
| Subcommand | Description |
|:---------- |:----------- |
| [create](/v2.0/reference/cli/influx/org/create) | Create organization |
| [delete](/v2.0/reference/cli/influx/org/delete) | Delete organization |
| [find](/v2.0/reference/cli/influx/org/find) | Find organizations |
| [members](/v2.0/reference/cli/influx/org/members) | Organization membership commands |
| [update](/v2.0/reference/cli/influx/org/update) | Update organization |
## Flags
| Flag | Description |
|:---- |:----------- |
| `-h`, `--help` | Help for the `org` command |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |
@ -0,0 +1,29 @@
---
title: influx org create
description: The 'influx org create' command creates a new organization in InfluxDB.
menu:
v2_0_ref:
name: influx org create
parent: influx org
weight: 1
---
The `influx org create` command creates a new organization in InfluxDB.
## Usage
```
influx org create [flags]
```
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------: |
| `-h`, `--help` | Help for `create` | |
| `-n`, `--name` | The name of organization that will be created | string |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |
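For example (the organization name is a placeholder):

```
influx org create -n my-org
```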
@ -0,0 +1,29 @@
---
title: influx org delete
description: The 'influx org delete' command deletes an organization in InfluxDB.
menu:
v2_0_ref:
name: influx org delete
parent: influx org
weight: 1
---
The `influx org delete` command deletes an organization in InfluxDB.
## Usage
```
influx org delete [flags]
```
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------: |
| `-h`, `--help` | Help for `delete` | |
| `-i`, `--id` | The organization ID **(Required)** | string |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |
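For example, to delete an organization by ID (the ID below is a placeholder):

```
influx org delete -i 034ad714fdd6f000
```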
@ -0,0 +1,30 @@
---
title: influx org find
description: The 'influx org find' command lists and searches for organizations in InfluxDB.
menu:
v2_0_ref:
name: influx org find
parent: influx org
weight: 1
---
The `influx org find` command lists and searches for organizations in InfluxDB.
## Usage
```
influx org find [flags]
```
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------: |
| `-h`, `--help` | Help for `find` | |
| `-i`, `--id` | The organization ID | string |
| `-n`, `--name` | The organization name | string |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |

View File

@ -0,0 +1,36 @@
---
title: influx org members - Organization membership management commands
description: The 'influx org members' command and its subcommands manage organization members in InfluxDB.
menu:
v2_0_ref:
name: influx org members
parent: influx org
weight: 1
---
The `influx org members` command and its subcommands manage organization members in InfluxDB.
## Usage
```
influx org members [flags]
influx org members [command]
```
## Subcommands
| Subcommand | Description |
|:---------- |:----------- |
| add | Add organization member |
| list | List organization members |
| remove | Remove organization member |
## Flags
| Flag | Description |
|:---- |:----------- |
| `-h`, `--help` | Help for `members` |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |

View File

@ -0,0 +1,31 @@
---
title: influx org members add
description: The 'influx org members add' command adds a new member to an organization in InfluxDB.
menu:
v2_0_ref:
name: influx org members add
parent: influx org members
weight: 1
---
The `influx org members add` command adds a new member to an organization in InfluxDB.
## Usage
```
influx org members add [flags]
```
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------: |
| `-h`, `--help` | Help for `add` | |
| `-i`, `--id` | The organization ID | string |
| `-o`, `--member` | The member ID | string |
| `-n`, `--name` | The organization name | string |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |
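For example, you can identify the organization by name and pass the ID of the member to add (both values below are hypothetical):

```sh
# Add a member to the organization "my-org" (member ID is hypothetical)
influx org members add -n my-org -o 042a9b7f1234d000
```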

View File

@ -0,0 +1,30 @@
---
title: influx org members list
description: The 'influx org members list' command lists members within an organization in InfluxDB.
menu:
v2_0_ref:
name: influx org members list
parent: influx org members
weight: 1
---
The `influx org members list` command lists members within an organization in InfluxDB.
## Usage
```
influx org members list [flags]
```
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------: |
| `-h`, `--help` | Help for `list` | |
| `-i`, `--id` | The organization ID | string |
| `-n`, `--name` | The organization name | string |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |

View File

@ -0,0 +1,31 @@
---
title: influx org members remove
description: The 'influx org members remove' command removes a member from an organization in InfluxDB.
menu:
v2_0_ref:
name: influx org members remove
parent: influx org members
weight: 1
---
The `influx org members remove` command removes a member from an organization in InfluxDB.
## Usage
```
influx org members remove [flags]
```
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------: |
| `-h`, `--help` | Help for `remove` | |
| `-i`, `--id` | The organization ID | string |
| `-o`, `--member` | The member ID | string |
| `-n`, `--name` | The organization name | string |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |

View File

@ -0,0 +1,30 @@
---
title: influx org update
description: The 'influx org update' command updates information related to organizations in InfluxDB.
menu:
v2_0_ref:
name: influx org update
parent: influx org
weight: 1
---
The `influx org update` command updates information related to organizations in InfluxDB.
## Usage
```
influx org update [flags]
```
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------: |
| `-h`, `--help` | Help for `update` | |
| `-i`, `--id` | The organization ID **(Required)** | string |
| `-n`, `--name` | The organization name | string |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |

View File

@ -0,0 +1,32 @@
---
title: influx query - Execute queries from the influx CLI
description: >
The 'influx query' command executes a literal Flux query provided as a string
or a literal Flux query contained in a file by specifying the file prefixed with an '@' sign.
menu:
v2_0_ref:
name: influx query
parent: influx
weight: 1
---
The `influx query` command executes a literal Flux query provided as a string
or a literal Flux query contained in a file by specifying the file prefixed with an `@` sign.
## Usage
```
influx query [query literal or @/path/to/query.flux] [flags]
```
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------:|
| `-h`, `--help` | Help for the query command | |
| `--org-id` | The organization ID | string |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |
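For example, you can pass a Flux query as a string literal or reference a file (the bucket name and file path below are hypothetical):

```sh
# Run a Flux query passed as a string literal
influx query 'from(bucket: "telegraf/autogen") |> range(start: -1h)'

# Run a Flux query stored in a file (hypothetical path)
influx query @/path/to/query.flux
```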

View File

@ -0,0 +1,33 @@
---
title: influx repl - Enter an interactive REPL
description: >
The 'influx repl' command opens an interactive read-eval-print loop (REPL)
from which you can run Flux commands.
menu:
v2_0_ref:
name: influx repl
parent: influx
weight: 1
---
The `influx repl` command opens an interactive read-eval-print loop (REPL)
from which you can run Flux commands.
## Usage
```
influx repl [flags]
```
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------:|
| `-h`, `--help` | Help for the `repl` command | |
| `-o`, `--org` | The name of the organization | string |
| `--org-id` | The ID of the organization to query | string |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |

View File

@ -0,0 +1,31 @@
---
title: influx setup - Run the initial InfluxDB setup
description: >
The 'influx setup' command walks through the initial InfluxDB setup process,
creating a default user, organization, and bucket.
menu:
v2_0_ref:
name: influx setup
parent: influx
weight: 1
---
The `influx setup` command walks through the initial InfluxDB setup process,
creating a default user, organization, and bucket.
## Usage
```
influx setup [flags]
```
## Flags
| Flag | Description |
|:---- |:----------- |
| `-h`, `--help` | Help for the `setup` command |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |

View File

@ -0,0 +1,40 @@
---
title: influx task - Task management commands
description: The 'influx task' command and its subcommands manage tasks in InfluxDB.
menu:
v2_0_ref:
name: influx task
parent: influx
weight: 1
---
The `influx task` command and its subcommands manage tasks in InfluxDB.
## Usage
```
influx task [flags]
influx task [command]
```
## Subcommands
| Subcommand | Description |
|:---------- |:----------- |
| [create](/v2.0/reference/cli/influx/task/create) | Create task |
| [delete](/v2.0/reference/cli/influx/task/delete) | Delete task |
| [find](/v2.0/reference/cli/influx/task/find) | Find tasks |
| [log](/v2.0/reference/cli/influx/task/log) | Log related commands |
| [retry](/v2.0/reference/cli/influx/task/retry) | Retry a run |
| [run](/v2.0/reference/cli/influx/task/run) | Run related commands |
| [update](/v2.0/reference/cli/influx/task/update) | Update task |
## Flags
| Flag | Description |
|:---- |:----------- |
| `-h`, `--help` | Help for the `task` command |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |

View File

@ -0,0 +1,30 @@
---
title: influx task create
description: The 'influx task create' command creates a new task in InfluxDB.
menu:
v2_0_ref:
name: influx task create
parent: influx task
weight: 1
---
The `influx task create` command creates a new task in InfluxDB.
## Usage
```
influx task create [query literal or @/path/to/query.flux] [flags]
```
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------: |
| `-h`, `--help` | Help for `create` | |
| `--org` | Organization name | string |
| `--org-id` | ID of the organization that owns the task | string |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |

View File

@ -0,0 +1,29 @@
---
title: influx task delete
description: The 'influx task delete' command deletes a task in InfluxDB.
menu:
v2_0_ref:
name: influx task delete
parent: influx task
weight: 1
---
The `influx task delete` command deletes a task in InfluxDB.
## Usage
```
influx task delete [flags]
```
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------: |
| `-h`, `--help` | Help for `delete` | |
| `-i`, `--id` | Task ID **(Required)** | string |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |

View File

@ -0,0 +1,32 @@
---
title: influx task find
description: The 'influx task find' command lists and searches for tasks in InfluxDB.
menu:
v2_0_ref:
name: influx task find
parent: influx task
weight: 1
---
The `influx task find` command lists and searches for tasks in InfluxDB.
## Usage
```
influx task find [flags]
```
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------: |
| `-h`, `--help` | Help for `find` | |
| `-i`, `--id` | Task ID | string |
| `--limit` | The number of tasks to find (default `100`) | integer |
| `--org-id` | Task organization ID | string |
| `-n`, `--user-id` | Task owner ID | string |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |

View File

@ -0,0 +1,34 @@
---
title: influx task log
description: The 'influx task log' command and its subcommand 'find' output log information related to a task.
menu:
v2_0_ref:
name: influx task log
parent: influx task
weight: 1
---
The `influx task log` command and its subcommand `find` output log information related to a task.
## Usage
```
influx task log [flags]
influx task log [command]
```
## Subcommands
| Subcommand | Description |
|:---------- |:----------- |
| find | Find logs for task |
## Flags
| Flag | Description |
|:---- |:----------- |
| `-h`, `--help` | Help for `log` |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |

View File

@ -0,0 +1,31 @@
---
title: influx task log find
description: The 'influx task log find' command outputs log information related to a task.
menu:
v2_0_ref:
name: influx task log find
parent: influx task log
weight: 1
---
The `influx task log find` command outputs log information related to a task.
## Usage
```
influx task log find [flags]
```
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------: |
| `-h`, `--help` | Help for `find` | |
| `--org-id` | Organization ID | string |
| `--run-id` | Run ID | string |
| `--task-id` | Task ID **(Required)** | string |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |
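For example, you can scope the log output to a single run of a task (the IDs below are hypothetical):

```sh
# Output logs for one run of a task (IDs are hypothetical)
influx task log find --task-id 0561d8287f4d6000 --run-id 0671d8287f4d6000
```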

View File

@ -0,0 +1,30 @@
---
title: influx task retry
description: The 'influx task retry' command retries a run of a task in InfluxDB.
menu:
v2_0_ref:
name: influx task retry
parent: influx task
weight: 1
---
The `influx task retry` command retries a run of a task in InfluxDB.
## Usage
```
influx task retry [flags]
```
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------: |
| `-h`, `--help` | Help for `retry` | |
| `-r`, `--run-id` | Run ID **(Required)** | string |
| `-i`, `--task-id` | Task ID **(Required)** | string |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |
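Both the task ID and run ID are required, so a retry might look like the following (the IDs shown are hypothetical):

```sh
# Retry a specific run of a task (IDs are hypothetical)
influx task retry -i 0561d8287f4d6000 -r 0671d8287f4d6000
```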

View File

@ -0,0 +1,36 @@
---
title: influx task run
description: >
The 'influx task run' command and its subcommand 'find' output information
related to runs of a task.
menu:
v2_0_ref:
name: influx task run
parent: influx task
weight: 1
---
The `influx task run` command and its subcommand `find` output information related to runs of a task.
## Usage
```
influx task run [flags]
influx task run [command]
```
## Subcommands
| Subcommand | Description |
|:---------- |:----------- |
| find | Find runs for a task |
## Flags
| Flag | Description |
|:---- |:----------- |
| `-h`, `--help` | Help for `run` |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |

View File

@ -0,0 +1,34 @@
---
title: influx task run find
description: The 'influx task run find' command outputs information related to runs of a task.
menu:
v2_0_ref:
name: influx task run find
parent: influx task run
weight: 1
---
The `influx task run find` command outputs information related to runs of a task.
## Usage
```
influx task run find [flags]
```
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------: |
| `--after` | After-time for filtering | string |
| `--before` | Before-time for filtering | string |
| `-h`,`--help` | Help for `find` | |
| `--limit` | Limit the number of results | integer |
| `--org-id` | Organization ID | string |
| `--run-id` | Run ID | string |
| `--task-id` | Task ID **(Required)** | string |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |

View File

@ -0,0 +1,30 @@
---
title: influx task update
description: The 'influx task update' command updates information related to tasks in InfluxDB.
menu:
v2_0_ref:
name: influx task update
parent: influx task
weight: 1
---
The `influx task update` command updates information related to tasks in InfluxDB.
## Usage
```
influx task update [flags]
```
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------: |
| `-h`, `--help` | Help for `update` | |
| `-i`, `--id` | Task ID **(Required)** | string |
| `--status` | Update task status | string |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |
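For example, the `--status` flag can be used to activate or deactivate a task (the task ID below is hypothetical):

```sh
# Deactivate a task by updating its status (task ID is hypothetical)
influx task update -i 0561d8287f4d6000 --status inactive
```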

View File

@ -0,0 +1,37 @@
---
title: influx user - User management commands
description: The 'influx user' command and its subcommands manage user information in InfluxDB.
menu:
v2_0_ref:
name: influx user
parent: influx
weight: 1
---
The `influx user` command and its subcommands manage user information in InfluxDB.
## Usage
```
influx user [flags]
influx user [command]
```
## Subcommands
| Subcommand | Description |
|:---------- |:----------- |
| [create](/v2.0/reference/cli/influx/user/create) | Create user |
| [delete](/v2.0/reference/cli/influx/user/delete) | Delete user |
| [find](/v2.0/reference/cli/influx/user/find) | Find user |
| [update](/v2.0/reference/cli/influx/user/update) | Update user |
## Flags
| Flag | Description |
|:---- |:----------- |
| `-h`, `--help` | Help for the `user` command |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |

View File

@ -0,0 +1,29 @@
---
title: influx user create
description: The 'influx user create' command creates a user in InfluxDB.
menu:
v2_0_ref:
name: influx user create
parent: influx user
weight: 1
---
The `influx user create` command creates a new user in InfluxDB.
## Usage
```
influx user create [flags]
```
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------: |
| `-h`, `--help` | Help for `create` | |
| `-n`, `--name` | The user name **(Required)** | string |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |
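Only the user name is required, so creating a user can be as simple as the following (the name below is a hypothetical example):

```sh
# Create a new user with the required name flag (example user name)
influx user create -n johndoe
```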

View File

@ -0,0 +1,29 @@
---
title: influx user delete
description: The 'influx user delete' command deletes a specified user.
menu:
v2_0_ref:
name: influx user delete
parent: influx user
weight: 1
---
The `influx user delete` command deletes a specified user in InfluxDB.
## Usage
```
influx user delete [flags]
```
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------: |
| `-h`, `--help` | Help for `delete` | |
| `-i`, `--id` | The user ID **(Required)** | string |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |

View File

@ -0,0 +1,30 @@
---
title: influx user find
description: The 'influx user find' command lists and searches for users in InfluxDB.
menu:
v2_0_ref:
name: influx user find
parent: influx user
weight: 1
---
The `influx user find` command lists and searches for users in InfluxDB.
## Usage
```
influx user find [flags]
```
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------: |
| `-h`, `--help` | Help for `find` | |
| `-i`, `--id` | The user ID | string |
| `-n`, `--name` | The user name | string |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |

View File

@ -0,0 +1,31 @@
---
title: influx user update
description: >
The 'influx user update' command updates information related to a user such as their user name.
menu:
v2_0_ref:
name: influx user update
parent: influx user
weight: 1
---
The `influx user update` command updates information related to a user in InfluxDB.
## Usage
```
influx user update [flags]
```
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------: |
| `-h`, `--help` | Help for `update` | |
| `-i`, `--id` | The user ID **(Required)** | string |
| `-n`, `--name` | The user name | string |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |

View File

@ -0,0 +1,36 @@
---
title: influx write - Write data to InfluxDB using the CLI
description: >
The 'influx write' command writes line protocol to InfluxDB, either as a single
line of line protocol or via a file containing line protocol.
menu:
v2_0_ref:
name: influx write
parent: influx
weight: 1
---
The `influx write` command writes a single line of line protocol to InfluxDB,
or writes an entire file specified with an `@` prefix.
## Usage
```
influx write [line protocol or @/path/to/points.txt] [flags]
```
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------:|
| `-b`, `--bucket` | The name of the destination bucket | string |
| `--bucket-id` | The ID of the destination bucket | string |
| `-h`, `--help` | Help for the write command | |
| `-o`, `--org` | The name of the organization that owns the bucket | string |
| `--org-id` | The ID of the organization that owns the bucket | string |
| `-p`, `--precision` | Precision of the timestamps of the lines (default `ns`) | string |
## Global flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |
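For example, you can write a single point or an entire file of line protocol (the bucket, organization, measurement, and file path below are hypothetical):

```sh
# Write a single line of line protocol with second-precision timestamps
influx write -b telegraf/autogen -o my-org -p s 'mem,host=host1 used_percent=23.4 1556896326'

# Write an entire file of line protocol (hypothetical path)
influx write -b telegraf/autogen -o my-org @/path/to/points.txt
```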

View File

@ -0,0 +1,30 @@
---
title: influxd - InfluxDB daemon
seotitle: influxd - InfluxDB daemon
description: The influxd daemon starts and runs all the processes necessary for InfluxDB to function.
menu:
v2_0_ref:
name: influxd
parent: Command line tools
weight: 2
---
The `influxd` daemon starts and runs all the processes necessary for InfluxDB to function.
## Usage
```
influxd [flags]
```
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------:|
| `--bolt-path` | Path to boltdb database (default `~/.influxdbv2/influxd.bolt`) | string |
| `--developer-mode` | Serve assets from the local filesystem in developer mode | |
| `--engine-path` | Path to persistent engine files (default `~/.influxdbv2/engine`) | string |
| `-h`, `--help` | Help for `influxd` | |
| `--http-bind-address` | Bind address for the REST HTTP API (default `:9999`) | string |
| `--log-level` | Supported log levels are `debug`, `info`, and `error` (default `info`) | string |
| `--reporting-disabled` | Disable sending telemetry data to https://telemetry.influxdata.com | |
| `--protos-path` | Path to protos on the filesystem (default `~/.influxdbv2/protos`) | string |
| `--secret-store` | Data store for secrets (bolt or vault) (default `bolt`) | string |

View File

@ -0,0 +1,13 @@
---
title: Flux query language
description: Reference articles for Flux functions and the Flux language specification.
menu:
v2_0_ref:
name: Flux query language
weight: 2
---
The following articles are meant as a reference for Flux functions and the
Flux language specification.
{{< children >}}

View File

@ -0,0 +1,15 @@
---
title: Flux functions
description: Flux functions allows you to retrieve, transform, process, and output data easily.
menu:
v2_0_ref:
name: Flux functions
parent: Flux query language
weight: 4
---
Flux's functional syntax allows you to retrieve, transform, process, and output data easily.
There is a large library of built-in functions, but you can also create your own
custom functions to perform operations that suit your needs.
{{< children >}}

View File

@ -0,0 +1,14 @@
---
title: Flux input functions
description: Flux input functions define sources of data or display information about data sources.
menu:
v2_0_ref:
parent: Flux functions
name: Inputs
weight: 1
---
Flux input functions define sources of data or display information about data sources.
The following input functions are available:
{{< function-list category="Inputs" menu="v2_0_ref" >}}

View File

@ -0,0 +1,22 @@
---
title: buckets() function
description: The buckets() function returns a list of buckets in the organization.
menu:
v2_0_ref:
name: buckets
parent: Inputs
weight: 1
---
The `buckets()` function returns a list of buckets in the organization.
_**Function type:** Input_
```js
buckets()
```
<hr style="margin-top:4rem"/>
##### Related InfluxQL functions and statements:
[SHOW DATABASES](https://docs.influxdata.com/influxdb/latest/query_language/schema_exploration/#show-databases)

View File

@ -0,0 +1,50 @@
---
title: from() function
description: The from() function retrieves data from an InfluxDB data source.
menu:
v2_0_ref:
name: from
parent: Inputs
weight: 1
---
The `from()` function retrieves data from an InfluxDB data source.
It returns a stream of tables from the specified [bucket](#parameters).
Each unique series is contained within its own table.
Each record in the table represents a single point in the series.
_**Function type:** Input_
_**Output data type:** Object_
```js
from(bucket: "telegraf/autogen")
// OR
from(bucketID: "0261d8287f4d6000")
```
## Parameters
### bucket
The name of the bucket to query.
_**Data type:** String_
### bucketID
The string-encoded ID of the bucket to query.
_**Data type:** String_
## Examples
```js
from(bucket: "telegraf/autogen")
```
```js
from(bucketID: "0261d8287f4d6000")
```
<hr style="margin-top:4rem"/>
##### Related InfluxQL functions and statements:
[FROM](https://docs.influxdata.com/influxdb/latest/query_language/data_exploration/#from-clause)

View File

@ -0,0 +1,64 @@
---
title: fromCSV() function
description: The fromCSV() function retrieves data from a CSV data source.
menu:
v2_0_ref:
name: fromCSV
parent: Inputs
weight: 1
---
The `fromCSV()` function retrieves data from a comma-separated value (CSV) data source.
It returns a stream of tables.
Each unique series is contained within its own table.
Each record in the table represents a single point in the series.
_**Function type:** Input_
_**Output data type:** Object_
```js
fromCSV(file: "/path/to/data-file.csv")
// OR
fromCSV(csv: csvData)
```
## Parameters
### file
The file path of the CSV file to query.
The path can be absolute or relative.
If relative, it is relative to the working directory of the `influxd` process.
_**Data type:** String_
### csv
Raw CSV-formatted text.
{{% note %}}
CSV data must be in the CSV format produced by the Flux HTTP response standard.
See the [Flux technical specification](https://github.com/influxdata/flux/blob/master/docs/SPEC.md#csv)
for information about this format.
{{% /note %}}
_**Data type:** String_
## Examples
### Query CSV data from a file
```js
fromCSV(file: "/path/to/data-file.csv")
```
### Query raw CSV-formatted text
```js
csvData = "
result,table,_start,_stop,_time,region,host,_value
mean,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:00Z,east,A,15.43
mean,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:20Z,east,B,59.25
mean,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:40Z,east,C,52.62
"
fromCSV(csv: csvData)
```

View File

@ -0,0 +1,15 @@
---
title: Flux miscellaneous functions
description: Flux provides miscellaneous functions that serve purposes other than retrieving, transforming, or outputting data.
menu:
v2_0_ref:
parent: Flux functions
name: Miscellaneous
weight: 5
---
Flux functions primarily retrieve, shape and transform, then output data; however,
some functions serve other purposes.
The following functions are available but don't fit within the other function categories:
{{< function-list category="Miscellaneous" menu="v2_0_ref" >}}

View File

@ -0,0 +1,154 @@
---
title: intervals() function
description: The intervals() function generates a set of time intervals over a range of time.
menu:
v2_0_ref:
name: intervals
parent: Miscellaneous
weight: 1
---
The `intervals()` function generates a set of time intervals over a range of time.
An interval is an object with `start` and `stop` properties that correspond to the inclusive start and exclusive stop times of the time interval.
The return value of `intervals()` is another function that accepts `start` and `stop` time parameters and returns an interval generator.
The generator is then used to produce the set of intervals.
The set of intervals includes all intervals that intersect with the initial range of time.
{{% note %}}
The `intervals()` function is designed to be used with the intervals parameter of the [`window()` function](/v2.0/reference/flux/functions/transformations/window).
{{% /note %}}
_**Function type:** Miscellaneous_
_**Output data type:** Object_
```js
intervals()
```
## Parameters
### every
The duration between starts of each of the intervals.
The Nth interval start time is the initial start time plus the offset plus an Nth multiple of the every parameter.
Defaults to the value of the `period` duration.
_**Data type:** Duration_
### period
The length of each interval.
Each interval's stop time is equal to the interval start time plus the period duration.
It can be negative, indicating the start and stop boundaries are reversed.
Defaults to the value of the `every` duration.
_**Data type:** Duration_
### offset
The offset duration relative to the location offset.
It can be negative, indicating that the offset goes backwards in time.
Defaults to `0h`.
_**Data type:** Duration_
### filter
A function that accepts an interval object and returns a boolean value.
Each potential interval is passed to the filter function.
When the function returns false, that interval is excluded from the set of intervals.
Defaults to include all intervals.
_**Data type:** Function_
## Examples
##### Basic intervals
```js
// 1 hour intervals
intervals(every:1h)
// 2 hour long intervals every 1 hour
intervals(every:1h, period:2h)
// 2 hour long intervals every 1 hour starting at 30m past the hour
intervals(every:1h, period:2h, offset:30m)
// 1 week intervals starting on Monday (by default weeks start on Sunday)
intervals(every:1w, offset:1d)
// the hour from 11PM - 12AM every night
intervals(every:1d, period:-1h)
// the last day of each month
intervals(every:1mo, period:-1d)
```
##### Using a predicate
```js
// 1 day intervals excluding weekends
intervals(
every:1d,
filter: (interval) => !(weekday(time: interval.start) in [Sunday, Saturday]),
)
// Work hours from 9AM - 5PM on work days.
intervals(
every:1d,
period:8h,
offset:9h,
filter:(interval) => !(weekday(time: interval.start) in [Sunday, Saturday]),
)
```
##### Using known start and stop dates
```js
// Every hour for six hours on Sep 5th.
intervals(every:1h)(start:2018-09-05T00:00:00-07:00, stop: 2018-09-05T06:00:00-07:00)
// Generates
// [2018-09-05T00:00:00-07:00, 2018-09-05T01:00:00-07:00)
// [2018-09-05T01:00:00-07:00, 2018-09-05T02:00:00-07:00)
// [2018-09-05T02:00:00-07:00, 2018-09-05T03:00:00-07:00)
// [2018-09-05T03:00:00-07:00, 2018-09-05T04:00:00-07:00)
// [2018-09-05T04:00:00-07:00, 2018-09-05T05:00:00-07:00)
// [2018-09-05T05:00:00-07:00, 2018-09-05T06:00:00-07:00)
// Every hour for six hours with 1h30m periods on Sep 5th
intervals(every:1h, period:1h30m)(start:2018-09-05T00:00:00-07:00, stop: 2018-09-05T06:00:00-07:00)
// Generates
// [2018-09-05T00:00:00-07:00, 2018-09-05T01:30:00-07:00)
// [2018-09-05T01:00:00-07:00, 2018-09-05T02:30:00-07:00)
// [2018-09-05T02:00:00-07:00, 2018-09-05T03:30:00-07:00)
// [2018-09-05T03:00:00-07:00, 2018-09-05T04:30:00-07:00)
// [2018-09-05T04:00:00-07:00, 2018-09-05T05:30:00-07:00)
// [2018-09-05T05:00:00-07:00, 2018-09-05T06:30:00-07:00)
// Every hour for six hours using the previous hour on Sep 5th
intervals(every:1h, period:-1h)(start:2018-09-05T12:00:00-07:00, stop: 2018-09-05T18:00:00-07:00)
// Generates
// [2018-09-05T11:00:00-07:00, 2018-09-05T12:00:00-07:00)
// [2018-09-05T12:00:00-07:00, 2018-09-05T13:00:00-07:00)
// [2018-09-05T13:00:00-07:00, 2018-09-05T14:00:00-07:00)
// [2018-09-05T14:00:00-07:00, 2018-09-05T15:00:00-07:00)
// [2018-09-05T15:00:00-07:00, 2018-09-05T16:00:00-07:00)
// [2018-09-05T16:00:00-07:00, 2018-09-05T17:00:00-07:00)
// [2018-09-05T17:00:00-07:00, 2018-09-05T18:00:00-07:00)
// Every month for 4 months starting on Jan 1st
intervals(every:1mo)(start:2018-01-01, stop: 2018-05-01)
// Generates
// [2018-01-01, 2018-02-01)
// [2018-02-01, 2018-03-01)
// [2018-03-01, 2018-04-01)
// [2018-04-01, 2018-05-01)
// Every month for 4 months starting on Jan 15th
intervals(every:1mo)(start:2018-01-15, stop: 2018-05-15)
// Generates
// [2018-01-15, 2018-02-15)
// [2018-02-15, 2018-03-15)
// [2018-03-15, 2018-04-15)
// [2018-04-15, 2018-05-15)
```
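The generator semantics described above can be modeled in a few lines of Python. This is a hypothetical sketch of the documented behavior, not the Flux implementation: candidate starts are the initial start plus the offset plus multiples of `every`, each paired with a stop `period` later, and intervals rejected by the filter are dropped. For simplicity, the model begins scanning at the range start, so earlier intervals that overlap the range (as in the negative-period examples above) are omitted.

```python
from datetime import datetime, timedelta

EPOCH = datetime(1970, 1, 1)

def intervals(every, period=None, offset=timedelta(0), pred=lambda iv: True):
    # Model of Flux intervals(): returns a generator function that,
    # given start and stop times, produces (start, stop) interval pairs.
    period = period if period is not None else every  # period defaults to every

    def generate(start, stop):
        result = []
        # Align the first candidate start at or before the range start:
        # the Nth start is epoch + offset + N * every.
        n = (start - offset - EPOCH) // every
        t = EPOCH + offset + n * every
        while t < stop:
            lo, hi = sorted([t, t + period])  # a negative period reverses bounds
            if hi > start and pred((lo, hi)):
                result.append((lo, hi))
            t += every
        return result

    return generate

# Every hour for three hours on Sep 5th.
gen = intervals(every=timedelta(hours=1))
for lo, hi in gen(datetime(2018, 9, 5, 0), datetime(2018, 9, 5, 3)):
    print(lo, hi)
```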

---
title: linearBins() function
description: The linearBins() function generates a list of linearly separated floats.
menu:
v2_0_ref:
name: linearBins
parent: Miscellaneous
weight: 1
---
The `linearBins()` function generates a list of linearly separated floats.
It is a helper function meant to generate bin bounds for the
[`histogram()` function](/v2.0/reference/flux/functions/transformations/histogram).
_**Function type:** Miscellaneous_
_**Output data type:** Array of floats_
```js
linearBins(start: 0.0, width: 5.0, count: 20, infinity: true)
```
## Parameters
### start
The first value in the returned list.
_**Data type:** Float_
### width
The distance between subsequent bin values.
_**Data type:** Float_
### count
The number of bins to create.
_**Data type:** Integer_
### infinity
When `true`, adds an additional bin with a value of positive infinity.
Defaults to `true`.
_**Data type:** Boolean_
## Examples
```js
linearBins(start: 0.0, width: 10.0, count: 10)
// Generated list: [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, +Inf]
```
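The generation rule is simple enough to model directly: each bin bound is `start + i * width`, with an optional `+Inf` bound appended. The following Python sketch illustrates the described behavior (it is not the Flux source):

```python
import math

def linear_bins(start, width, count, infinity=True):
    # Model of Flux linearBins(): linearly separated bin bounds.
    bins = [start + i * width for i in range(count)]
    if infinity:
        bins.append(math.inf)  # catch-all upper bin, enabled by default
    return bins

print(linear_bins(0.0, 10.0, 10))
```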

---
title: logarithmicBins() function
description: The logarithmicBins() function generates a list of exponentially separated floats.
menu:
v2_0_ref:
name: logarithmicBins
parent: Miscellaneous
weight: 1
---
The `logarithmicBins()` function generates a list of exponentially separated floats.
It is a helper function meant to generate bin bounds for the
[`histogram()` function](/v2.0/reference/flux/functions/transformations/histogram).
_**Function type:** Miscellaneous_
_**Output data type:** Array of floats_
```js
logarithmicBins(start:1.0, factor: 2.0, count: 10, infinity: true)
```
## Parameters
### start
The first value in the returned bin list.
_**Data type:** Float_
### factor
The multiplier applied to each subsequent bin.
_**Data type:** Float_
### count
The number of bins to create.
_**Data type:** Integer_
### infinity
When `true`, adds an additional bin with a value of positive infinity.
Defaults to `true`.
_**Data type:** Boolean_
## Examples
```js
logarithmicBins(start: 1.0, factor: 2.0, count: 10, infinity: true)
// Generated list: [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, +Inf]
```
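Analogously to the linear case, each bin bound here is `start * factor^i`, with an optional `+Inf` bound appended. A Python sketch of the described behavior (not the Flux source):

```python
import math

def logarithmic_bins(start, factor, count, infinity=True):
    # Model of Flux logarithmicBins(): exponentially separated bin bounds.
    bins = [start * factor ** i for i in range(count)]
    if infinity:
        bins.append(math.inf)  # catch-all upper bin, enabled by default
    return bins

print(logarithmic_bins(1.0, 2.0, 10))
```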

---
title: systemTime() function
description: The systemTime() function returns the current system time.
menu:
v2_0_ref:
name: systemTime
parent: Miscellaneous
weight: 1
---
The `systemTime()` function returns the current system time.
_**Function type:** Date/Time_
_**Output data type:** Timestamp_
```js
systemTime()
```
## Examples
```js
offsetTime = (offset) => systemTime() |> shift(shift: offset)
```

---
title: Flux output functions
description: Flux output functions yield results or send data to a specified output destination.
menu:
v2_0_ref:
parent: Flux functions
name: Outputs
weight: 2
---
Flux output functions yield results or send data to a specified output destination.
The following output functions are available:
{{< function-list category="Outputs" menu="v2_0_ref" >}}

---
title: to() function
description: The to() function writes data to an InfluxDB v2.0 bucket.
menu:
v2_0_ref:
name: to
parent: Outputs
weight: 1
---
The `to()` function writes data to an **InfluxDB v2.0** bucket.
_**Function type:** Output_
_**Output data type:** Object_
```js
to(
bucket: "my-bucket",
org: "my-org",
host: "http://example.com:8086",
token: "xxxxxx",
timeColumn: "_time",
tagColumns: ["tag1", "tag2", "tag3"],
fieldFn: (r) => ({ [r._field]: r._value })
)
// OR
to(
bucketID: "1234567890",
orgID: "0987654321",
host: "http://example.com:8086",
token: "xxxxxx",
timeColumn: "_time",
tagColumns: ["tag1", "tag2", "tag3"],
fieldFn: (r) => ({ [r._field]: r._value })
)
```
## Parameters
{{% note %}}
`bucket` OR `bucketID` is **required**.
{{% /note %}}
### bucket
The bucket to which data is written. Mutually exclusive with `bucketID`.
_**Data type:** String_
### bucketID
The ID of the bucket to which data is written. Mutually exclusive with `bucket`.
_**Data type:** String_
### org
The organization name of the specified [`bucket`](#bucket).
Only required when writing to a remote host.
Mutually exclusive with `orgID`.
_**Data type:** String_
{{% note %}}
Specify either an `org` or an `orgID`, but not both.
{{% /note %}}
### orgID
The organization ID of the specified [`bucket`](#bucket).
Only required when writing to a remote host.
Mutually exclusive with `org`.
_**Data type:** String_
### host
The remote InfluxDB host to which to write.
_If specified, a `token` is required._
_**Data type:** String_
### token
The authorization token to use when writing to a remote host.
_Required when a `host` is specified._
_**Data type:** String_
### timeColumn
The time column of the output.
Default is `"_time"`.
_**Data type:** String_
### tagColumns
The tag columns of the output.
Defaults to all columns with type `string`, excluding all value columns and the `_field` column if present.
_**Data type:** Array of strings_
### fieldFn
Function that takes a record from the input table and returns an object.
For each record from the input table, `fieldFn` returns an object that maps the output field key to the output value.
Default is `(r) => ({ [r._field]: r._value })`.
_**Data type:** Function_
_**Output data type:** Object_
## Examples
### Default to() operation
Given the following table:
| _time | _start | _stop | _measurement | _field | _value |
| ----- | ------ | ----- | ------------ | ------ | ------ |
| 0005 | 0000 | 0009 | "a" | "temp" | 100.1 |
| 0006 | 0000 | 0009 | "a" | "temp" | 99.3 |
| 0007 | 0000 | 0009 | "a" | "temp" | 99.9 |
The default `to` operation:
```js
// ...
|> to(bucket:"my-bucket", org:"my-org")
```
is equivalent to writing the above data using the following line protocol:
```
_measurement=a temp=100.1 0005
_measurement=a temp=99.3 0006
_measurement=a temp=99.9 0007
```
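The default `fieldFn` pivots each annotated row's `_field`/`_value` pair into a line-protocol field set. The conversion can be sketched in Python; this is a hypothetical illustration of the mapping shown above, not InfluxDB code, and it ignores tags, escaping, and timestamp precision:

```python
def to_line_protocol(rows, field_fn=lambda r: {r["_field"]: r["_value"]}):
    # Model the default to() behavior: emit one line-protocol line per record.
    lines = []
    for r in rows:
        fields = ",".join(f"{k}={v}" for k, v in field_fn(r).items())
        lines.append(f"_measurement={r['_measurement']} {fields} {r['_time']}")
    return lines

rows = [
    {"_time": "0005", "_measurement": "a", "_field": "temp", "_value": 100.1},
    {"_time": "0006", "_measurement": "a", "_field": "temp", "_value": 99.3},
]
print("\n".join(to_line_protocol(rows)))
```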
### Custom to() operation
The `to()` function's default operation can be overridden. For example, given the following table:
| _time | _start | _stop | tag1 | tag2 | hum | temp |
| ----- | ------ | ----- | ---- | ---- | ---- | ----- |
| 0005 | 0000 | 0009 | "a" | "b" | 55.3 | 100.1 |
| 0006 | 0000 | 0009 | "a" | "b" | 55.4 | 99.3 |
| 0007 | 0000 | 0009 | "a" | "b" | 55.5 | 99.9 |
The operation:
```js
// ...
|> to(bucket:"my-bucket", org:"my-org", tagColumns:["tag1"], fieldFn: (r) => ({hum: r.hum, temp: r.temp}))
```
is equivalent to writing the above data using the following line protocol:
```
_tag1=a hum=55.3,temp=100.1 0005
_tag1=a hum=55.4,temp=99.3 0006
_tag1=a hum=55.5,temp=99.9 0007
```
<hr style="margin-top:4rem"/>
##### Related InfluxQL functions and statements:
[SELECT INTO](https://docs.influxdata.com/influxdb/latest/query_language/data_exploration/#the-into-clause)
