resolved merge conflicts with master
commit d78b6e083b

@@ -4,8 +4,8 @@ jobs:
     docker:
       - image: circleci/node:latest
     environment:
-      HUGO_VERSION: "0.53"
-      S3DEPLOY_VERSION: "2.3.0"
+      HUGO_VERSION: "0.55.1"
+      S3DEPLOY_VERSION: "2.3.2"
     steps:
       - checkout
       - restore_cache:
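These pinned versions are typically consumed by later build steps that download the tools. A hedged sketch of such a step (the URL follows Hugo's standard release layout; the actual step in this config may differ):

```sh
# Hypothetical install step using the HUGO_VERSION pinned above
wget -qO- "https://github.com/gohugoio/hugo/releases/download/v${HUGO_VERSION}/hugo_${HUGO_VERSION}_Linux-64bit.tar.gz" | tar xz
```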
@@ -63,8 +63,8 @@ weight: # Determines sort order in both the nav tree and in article lists.
 draft: # If true, will not render page on build
 enterprise_all: # If true, specifies the doc as a whole is specific to InfluxDB Enterprise
 enterprise_some: # If true, specifies the doc includes some content specific to InfluxDB Enterprise
-cloud_all: # If true, specifies the doc as a whole is specific to InfluxCloud
-cloud_some: # If true, specifies the doc includes some content specific to InfluxCloud
+cloud_all: # If true, specifies the doc as a whole is specific to InfluxDB Cloud
+cloud_some: # If true, specifies the doc includes some content specific to InfluxDB Cloud
 v2.x/tags: # Tags specific to each version (replace ".x" with the appropriate minor version)
 ```
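For instance, a hypothetical article frontmatter combining these options (all values are illustrative only):

```yaml
---
title: Example article
description: A short description of the example article.
menu:
  v2_0:
    name: Example article
weight: 1
cloud_some: true
v2.0/tags: [example]
---
```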
@@ -144,21 +144,35 @@ Insert enterprise-specific markdown content here.

 #### Enterprise name
 The name used to refer to InfluxData's enterprise offering is subject to change.
-To facilitate easy updates in the future, use the `enterprise-name` shortcode when referencing the enterprise product.
+To facilitate easy updates in the future, use the `enterprise-name` shortcode
+when referencing the enterprise product.
 This shortcode accepts a `"short"` parameter which uses the "short-name".

 ```
 This is content that references {{< enterprise-name >}}.
 This is content that references {{< enterprise-name "short" >}}.
 ```

-The product name is stored in `data/products.yml`
+Product names are stored in `data/products.yml`.

-### InfluxCloud Content
-Some articles are unique to InfluxCloud or at least contain some information specific to InfluxCloud.
+#### Enterprise link
+References to InfluxDB Enterprise are often accompanied with a link to a page where
+visitors can get more information about the Enterprise offering.
+This link is subject to change.
+Use the `enterprise-link` shortcode when including links to more information about
+InfluxDB Enterprise.
+
+```
+Find more info [here][{{< enterprise-link >}}]
+```
+
+### InfluxDB Cloud Content
+Some articles are unique to InfluxDB Cloud or at least contain some information specific to InfluxDB Cloud.
+There are frontmatter options and a cloud shortcode that help to properly identify this content.

 #### All content is cloud-specific
 If all content in an article is cloud-specific, set the menu in the frontmatter to `v2_0_cloud`
-(change the version number for the specific version of InfluxCloud).
+(change the version number for the specific version of InfluxDB Cloud).

 ```yaml
 menu:
@@ -176,7 +190,7 @@ If only some content in the article is cloud-specific, set the `cloud_some` fron
 cloud_some: true
 ```

-This will display a message at the top of page indicating some things are unique to InfluxCloud.
+This will display a message at the top of the page indicating some things are unique to InfluxDB Cloud.
 To format cloud-specific content, wrap it in the `{{% cloud %}}` shortcode:

 ```md
@@ -185,15 +199,29 @@ Insert Cloud-specific markdown content here.
 {{% /cloud %}}
 ```

-#### InfluxCloud name
+#### InfluxDB Cloud name
 The name used to refer to InfluxData's cloud offering is subject to change.
-To facilitate easy updates in the future, use the `cloud-name` shortcode when referencing the cloud product.
+To facilitate easy updates in the future, use the `cloud-name` shortcode when
+referencing the cloud product.
 This shortcode accepts a `"short"` parameter which uses the "short-name".

 ```
 This is content that references {{< cloud-name >}}.
 This is content that references {{< cloud-name "short" >}}.
 ```

-The product name is stored in `data/products.yml`
+Product names are stored in `data/products.yml`.

+#### InfluxDB Cloud link
+References to InfluxDB Cloud are often accompanied with a link to a page where
+visitors can get more information.
+This link is subject to change.
+Use the `cloud-link` shortcode when including links to more information about
+InfluxDB Cloud.
+
+```
+Find more info [here][{{< cloud-link >}}]
+```
+
 ### Tabbed Content
 Shortcodes are available for creating "tabbed" content (content that is changed by a user's selection).
@@ -265,7 +293,7 @@ The shortcode structure is the same as above, but the shortcode names are differ
 data = from(bucket: "telegraf/autogen")
   |> range(start: -15m)
   |> filter(fn: (r) =>
-    r._measurement == "mem" AND
+    r._measurement == "mem" and
     r._field == "used_percent"
   )
 ```
@@ -351,8 +379,10 @@ Below is a list of available icons (some are aliases):
 - add-label
 - alert
 - calendar
+- chat
 - checkmark
 - clone
+- cloud
 - cog
 - config
 - copy
@@ -364,6 +394,7 @@ Below is a list of available icons (some are aliases):
 - edit
 - expand
 - export
+- feedback
 - fullscreen
 - gear
 - graph
@@ -1,10 +1,8 @@
-# InfluxDB v2.0 Documentation
-This is the repository contains the InfluxDB v2.x documentation that will be
-accessible at [docs.influxdata.com](https://docs.influxdata.com).
+# InfluxDB 2.0 Documentation
+This repository contains the InfluxDB 2.x documentation published at [docs.influxdata.com](https://docs.influxdata.com).

 ## Contributing
-We welcome and encourage community contributions to the InfluxData See our [Contribution guidelines](CONTRIBUTING.md) for information
-about contributing to the InfluxData documentation.
+We welcome and encourage community contributions. For information about contributing to the InfluxData documentation, see [Contribution guidelines](CONTRIBUTING.md).

 ## Run the docs locally
 The InfluxData documentation uses [Hugo](https://gohugo.io/), a static site
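As a hedged sketch, serving a Hugo site locally generally follows this pattern (the repository URL is assumed to be the public docs-v2 repo; the README's exact steps may differ):

```sh
# Clone the documentation repository and serve it with Hugo's dev server
git clone https://github.com/influxdata/docs-v2.git
cd docs-v2
hugo server
# By default, Hugo serves the site at http://localhost:1313
```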
@@ -3,6 +3,7 @@
 .inline {
   margin: 0 .15rem;
   &.middle:before { vertical-align: middle; }
   &.top:before { vertical-align: text-top; }
+  &.xsmall:before { font-size: .8rem; }
   &.small:before { font-size: .9rem; }
   &.large:before { font-size: 1.1rem; }
@@ -19,7 +20,29 @@
     width: 20px;
     height: 20px;
     padding-left: .28rem;
-    line-height: 1.35rem;
+    line-height: 1.25rem;
   }
+
+  &.ui-toggle {
+    display: inline-block;
+    position: relative;
+    width: 34px;
+    height: 22px;
+    background: #1C1C21;
+    border: 2px solid #383846;
+    border-radius: .7rem;
+    vertical-align: text-bottom;
+
+    .circle {
+      display: inline-block;
+      position: absolute;
+      border-radius: 50%;
+      height: 12px;
+      width: 12px;
+      background: #22ADF6;
+      top: 3px;
+      right: 3px;
+    }
+  }
 }
@@ -135,6 +135,7 @@
   &:not(:last-child) {
     > p:only-child { margin-bottom: 0; }
   }
+  ul,ol { margin: -.5rem 0 1rem; }
 }

 //////////////////////////////////// Code ////////////////////////////////////
@@ -280,6 +281,10 @@
   border-style: solid;
   border-radius: 0 $border-radius $border-radius 0;
   font-size: .95rem;
+
+  ul,ol {
+    &:last-child { margin-bottom: 1.85rem; }
+  }
 }

 blockquote {
@@ -442,6 +447,10 @@
     color: $article-cloud-link;
     &:hover { color: $article-cloud-link-hover; }
   }
+  code, pre {
+    color: $article-cloud-code;
+    background: $article-cloud-code-bg;
+  }
 }
 &-flag {
   background: $article-cloud-base;
@@ -695,6 +704,14 @@
     font-size: .8rem;
   }
 }

+//////////////////////////////// Miscellaneous ///////////////////////////////
+
+.required {
+  color: #FF8564;
+  font-weight: 700;
+  font-style: italic;
+}
+
 }
@@ -43,10 +43,10 @@ $vertical-offset: -14px;
   background-color: $body-bg;
   border-radius: 0 $border-radius $border-radius 0;
   &:before, &:after {
-    content: url('data:image/svg+xml;charset=UTF-8,
-      <svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px" viewBox="0 0 10 10" xml:space="preserve">
-      <path fill="'+$body-bg+'" d="M0,10h10V0C10,5.52,5.52,10,0,10z"/>
-      </svg>');
+    content: url("data:image/svg+xml;charset=UTF-8,
+      <svg xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink' x='0px' y='0px' viewBox='0 0 10 10' xml:space='preserve'>
+      <path fill='"+rgba($body-bg, .9999)+"' d='M0,10h10V0C10,5.52,5.52,10,0,10z'/>
+      </svg>");
     left: 0;
   }
   &:before { transform: rotateY(180deg); }
@@ -63,10 +63,10 @@ $vertical-offset: -14px;
   background-color: $article-bg;
   border-radius: $border-radius 0 0 $border-radius;
   &:before, &:after {
-    content: url('data:image/svg+xml;charset=UTF-8,
-      <svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px" viewBox="0 0 10 10" xml:space="preserve">
-      <path fill="'+$article-bg+'" d="M0,10h10V0C10,5.52,5.52,10,0,10z"/>
-      </svg>');
+    content: url("data:image/svg+xml;charset=UTF-8,
+      <svg xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink' x='0px' y='0px' viewBox='0 0 10 10' xml:space='preserve'>
+      <path fill='"+rgba($article-bg, .9999)+"' d='M0,10h10V0C10,5.52,5.52,10,0,10z'/>
+      </svg>");
     right: 0;
   }
   & > a {
@@ -109,6 +109,8 @@ $article-cloud-base: $b-pool !default;
 $article-cloud-text: $b-neutrino !default;
 $article-cloud-link: $b-snow !default;
 $article-cloud-link-hover: $g20-white !default;
+$article-cloud-code: $b-laser !default;
+$article-cloud-code-bg: $b-abyss !default;

 // Article Tabs for tabbed content
 $article-tab-text: $g12-forge !default;
@@ -108,6 +108,8 @@ $article-cloud-base: $b-laser;
 $article-cloud-text: $b-ocean;
 $article-cloud-link: $b-ocean;
 $article-cloud-link-hover: $gr-canopy;
+$article-cloud-code: $b-sapphire;
+$article-cloud-code-bg: rgba($b-pool, .25);

 // Article Tabs for tabbed content
 $article-tab-text: $g8-storm;
@@ -1,10 +1,10 @@
 @font-face {
   font-family: 'icomoon';
-  src: url('fonts/icomoon.eot?s76ef');
-  src: url('fonts/icomoon.eot?s76ef#iefix') format('embedded-opentype'),
-    url('fonts/icomoon.ttf?s76ef') format('truetype'),
-    url('fonts/icomoon.woff?s76ef') format('woff'),
-    url('fonts/icomoon.svg?s76ef#icomoon') format('svg');
+  src: url('fonts/icomoon.eot?972u0y');
+  src: url('fonts/icomoon.eot?972u0y#iefix') format('embedded-opentype'),
+    url('fonts/icomoon.ttf?972u0y') format('truetype'),
+    url('fonts/icomoon.woff?972u0y') format('woff'),
+    url('fonts/icomoon.svg?972u0y#icomoon') format('svg');
   font-weight: normal;
   font-style: normal;
 }
@@ -24,6 +24,15 @@
   -moz-osx-font-smoothing: grayscale;
 }

+.icon-ui-chat:before {
+  content: "\e93a";
+}
+.icon-ui-cloud:before {
+  content: "\e93f";
+}
+.icon-ui-nav-chat:before {
+  content: "\e941";
+}
 .icon-ui-add-cell:before {
   content: "\e91f";
 }
@@ -15,6 +15,7 @@ preserveTaxonomyNames = true
 # Markdown rendering options
 [blackfriday]
   hrefTargetBlank = true
+  smartDashes = false

 # Menu items without actual pages
 [menu]
@@ -0,0 +1,12 @@
+---
+title: About InfluxDB Cloud 2
+description: Important information about InfluxDB Cloud 2, including release notes and known issues.
+weight: 10
+menu:
+  v2_0_cloud:
+    name: About InfluxDB Cloud
+---
+
+Important information about InfluxDB Cloud 2, including known issues and release notes.
+
+{{< children >}}
@@ -0,0 +1,19 @@
+---
+title: Known issues in InfluxDB Cloud
+description: Information related to known issues in InfluxDB Cloud 2.
+weight: 102
+menu:
+  v2_0_cloud:
+    name: Known issues
+    parent: About InfluxDB Cloud
+---
+
+The following issues currently exist in {{< cloud-name >}}:
+
+- Usage statistics on the Usage page may show incorrect values (still in development).
+- Existing tasks may have durations specified in nanoseconds and need to be resubmitted.
+- IDPE-2860: Additional user shows up as owner under Cloud 2 organization.
+- IDPE-2868: Users must not be able to delete a token with an active Telegraf configuration pointed to it.
+- IDPE-2869: As a Cloud 2.0 user, I cannot use any CLI tools to interact with my Cloud 2 tenant.
+- [TELEGRAF-5600](https://github.com/influxdata/telegraf/issues/5600): Improve the error message in Telegraf when the bucket it reports to is not found.
+- [INFLUXDB-12687](https://github.com/influxdata/influxdb/issues/12687): Create org should display only for the create org permission.
@@ -0,0 +1,24 @@
+---
+title: InfluxDB Cloud release notes
+description: Important changes and notes introduced in each InfluxDB Cloud 2 update.
+weight: 101
+menu:
+  v2_0_cloud:
+    parent: About InfluxDB Cloud
+    name: Release notes
+---
+
+## 2019-04-05
+
+### Features
+
+- **InfluxDB 2.0 alpha-7** –
+  _See the [alpha-7 release notes](/v2.0/reference/release-notes/#v2-0-0-alpha-7-2019-03-28) for details._
+
+### Bug Fixes
+
+- Logout works in the InfluxDB Cloud 2 UI.
+- Single sign-on works between https://cloud2.influxdata.com and https://us-west-2-1.aws.cloud2.influxdata.com.
+- Able to copy error messages from the UI.
+- Able to change a task from `every` to `cron`.
+- Able to create a new bucket when switching between periodically and never (retention options).
@@ -1,14 +1,15 @@
 ---
-title: Get started with InfluxCloud 2.0 Beta
+title: Get started with InfluxDB Cloud 2 Beta
 description: >
-  Sign up for and get started with InfluxCloud 2.0 Beta.
+  Sign up for and get started with InfluxDB Cloud 2 Beta.
 weight: 1
 menu:
   v2_0_cloud:
-    name: Get started with InfluxCloud
+    name: Get started with InfluxDB Cloud
 ---

-{{< cloud-name >}} is a fully managed and hosted version of the InfluxDB 2.x API. To get started, complete the tasks below.
+{{< cloud-name >}} is a fully managed and hosted version of the [InfluxDB v2 API](/v2.0/reference/api/).
+To get started, complete the tasks below.

 {{% cloud-msg %}}
 The InfluxDB v2.0 alpha documentation linked to in this article also applies to {{< cloud-name "short" >}} unless otherwise specified.
@@ -17,18 +18,18 @@ The InfluxDB v2.0 alpha documentation linked to in this article also applies to
 ## Sign up

 {{% note %}}
-Early access to {{< cloud-name >}} is limited. Apply for access [here](https://www.influxdata.com/influxcloud2beta/).
+Early access to {{< cloud-name >}} is limited. Apply for access [here]({{< cloud-link >}}).
 {{% /note %}}

-Sign up for the InfluxCloud 2.0 Beta with the link provided in the invite email.
+Sign up for {{< cloud-name >}} with the link provided in the invite email.

-1. Look for an email invite from support@influxdata.com with the subject line **You've been invited to beta InfluxCloud 2.0.**
+1. Look for an email invite from support@influxdata.com with the subject line **You've been invited to beta {{< cloud-name >}}.**
 2. Click **Accept Invite** to begin the sign-up process.
 3. Provide an email address and password, then follow the prompts to sign up for the Free Tier.
 4. Select the region and click **Next** to create your default organization and bucket.

 {{% cloud-msg %}}
-InfluxCloud 2.0 Beta is restricted to the us-west-2 region.
+{{< cloud-name >}} is restricted to the us-west-2 region.
 {{% /cloud-msg %}}

 5. Once your organization and bucket are created, the usage page opens.
@@ -49,24 +50,12 @@ For details, see [Automatically configure Telegraf](https://v2.docs.influxdata.c

 ## Query and visualize data

-Once you've set up InfluxCloud to collect data with Telegraf, you can do the following:
+Once you've set up {{< cloud-name "short" >}} to collect data with Telegraf, you can do the following:

 * Query data using Flux, the UI, and the `influx` command line interface. See [Query data](https://v2.docs.influxdata.com/v2.0/query-data/).
 * Build custom dashboards to visualize your data. See [Visualize data](https://v2.docs.influxdata.com/v2.0/visualize-data/).

-## Known issues and disabled features
-
-The following issues currently exist in {{< cloud-name >}}:
-
-* IDPE-2860: Additional user shows up as owner under Cloud 2 organization.
-* IDPE 2868: User must not be able to delete token with an active Telegraf configuration pointed to it.
-* IDPE-2869: As a Cloud 2.0 user, I cannot use any CLI tools to interact with my Cloud 2 tenant.
-* IDPE-2896: Logout does not work in Cloud 2.0 UI.
-* IDPE-2897: Single sign on does not work between `https://cloud2.influxdata.com` and `https://us-west-2-1.aws.cloud2.influxdata.com`.
-* [TELEGRAF-5600](https://github.com/influxdata/telegraf/issues/5600): Improve error message in Telegraf when bucket it's reporting to is not found.
-* [INFLUXDB-12686](https://github.com/influxdata/influxdb/issues/12686): Unable to copy error message from UI.
-* [INFLUXDB-12690](https://github.com/influxdata/influxdb/issues/12690): Impossible to change a task from `every` to `cron`.
-* [INFLUXDB-12688](https://github.com/influxdata/influxdb/issues/12688): Create bucket switching between periodically and never fails to create bucket.
-* [INFLUXDB-12687](https://github.com/influxdata/influxdb/issues/12687): Create org should display only for the create org permission.
+{{% note %}}
+#### Known issues and disabled features
+_See [Known issues](/v2.0/cloud/about/known-issues/) for information regarding all known issues in InfluxDB Cloud._
+{{% /note %}}
@@ -14,19 +14,19 @@ weight: 101
 ---

 Select **Quick Start** in the last step of the InfluxDB user interface's (UI)
-[setup process](/v2.0/get-started/#setup-influxdb) to quickly start collecting data with InfluxDB.
+[setup process](/v2.0/get-started/#set-up-influxdb) to quickly start collecting data with InfluxDB.
 Quick Start creates a data scraper that collects metrics from the InfluxDB `/metrics` endpoint.
 The scraped data provides a robust dataset of internal InfluxDB metrics that you can query, visualize, and process.

 ## Use Quick Start to collect InfluxDB metrics
-After [initializing InfluxDB v2.0](/v2.0/get-started/#setup-influxdb),
+After [setting up InfluxDB v2.0](/v2.0/get-started/#set-up-influxdb),
 the "Let's start collecting data!" page displays options for collecting data.
 Click **Quick Start**.

 InfluxDB creates and configures a new [scraper](/v2.0/collect-data/scrape-data/).
 The target URL points to the `/metrics` HTTP endpoint of your local InfluxDB instance (e.g. `http://localhost:9999/metrics`),
 which outputs internal InfluxDB metrics in the [Prometheus data format](https://prometheus.io/docs/instrumenting/exposition_formats/).
-It stores the scraped metrics in the bucket created during the [initial setup process](/v2.0/get-started/#setup-influxdb).
+It stores the scraped metrics in the bucket created during the [initial setup process](/v2.0/get-started/#set-up-influxdb).

 The following message briefly appears in the UI:
@@ -11,18 +11,17 @@ weight: 301
 Create a new scraper in the InfluxDB user interface (UI).

 ## Create a scraper in the InfluxDB UI
-1. Click **Organizations** in the left navigation menu.
+1. Click the **Settings** tab in the navigation bar.

-   {{< nav-icon "orgs" >}}
+   {{< nav-icon "settings" >}}

-2. In the list of organizations, click the name of your organization.
-3. Click the **Scrapers** tab.
-4. Click **{{< icon "plus" >}} Create Scraper**.
-5. Enter a **Name** for the scraper.
-6. Select a **Bucket** to store the scraped data.
-7. Enter the **Target URL** to scrape. The default URL value is `http://localhost:9999/metrics`,
+2. Click the **Scrapers** tab.
+3. Click **{{< icon "plus" >}} Create Scraper**.
+4. Enter a **Name** for the scraper.
+5. Select a **Bucket** to store the scraped data.
+6. Enter the **Target URL** to scrape. The default URL value is `http://localhost:9999/metrics`,
    which provides InfluxDB-specific metrics in the [Prometheus data format](https://prometheus.io/docs/instrumenting/exposition_formats/).
-8. Click **Finish**.
+7. Click **Finish**.

 The new scraper will begin scraping data after approximately 10 seconds,
 then continue scraping in 10 second intervals.
@@ -11,12 +11,11 @@ weight: 303
 Delete a scraper from the InfluxDB user interface (UI).

 ## Delete a scraper from the InfluxDB UI
-1. Click **Organizations** in the left navigation menu.
+1. Click the **Settings** tab in the navigation bar.

-   {{< nav-icon "orgs" >}}
+   {{< nav-icon "settings" >}}

-2. In the list of organizations, click the name of your organization.
-3. Click the **Scrapers** tab. A listing of any existing scrapers appears with the
+2. Click the **Scrapers** tab. A listing of any existing scrapers appears with the
    **Name**, **URL**, and **BUCKET** for each scraper.
-4. Hover over the scraper you want to delete and click **Delete**.
-5. Click **Confirm**.
+3. Hover over the scraper you want to delete and click **Delete**.
+4. Click **Confirm**.
@@ -16,12 +16,11 @@ To modify either, [create a new scraper](/v2.0/collect-data/scrape-data/manage-s
 {{% /note %}}

 ## Update a scraper in the InfluxDB UI
-1. Click **Organizations** in the left navigation menu.
+1. Click the **Settings** tab in the navigation bar.

-   {{< nav-icon "orgs" >}}
+   {{< nav-icon "settings" >}}

-2. In the list of organizations, click the name of your organization.
-3. Click the **Scrapers** tab. A list of existing scrapers appears.
-4. Hover over the scraper you would like to update and click the **{{< icon "pencil" >}}**
+2. Click the **Scrapers** tab. A list of existing scrapers appears.
+3. Hover over the scraper you would like to update and click the **{{< icon "pencil" >}}**
    that appears next to the scraper name.
-5. Enter a new name for the scraper. Press Return or click out of the name field to save the change.
+4. Enter a new name for the scraper. Press Return or click out of the name field to save the change.
@@ -55,11 +55,11 @@ for using Telegraf with InfluxDB v2.0._
 ## Start Telegraf

 ### Configure your API token as an environment variable
-Requests to the InfluxDB v2.0 API must include an authentication token.
+Requests to the [InfluxDB v2 API](/v2.0/reference/api/) must include an authentication token.
 A token identifies specific permissions to the InfluxDB instance.

 Define the `INFLUX_TOKEN` environment variable using your token.
-_For information about viewing tokens, see [View tokens](/v2.0/users/tokens/view-tokens/)._
+_For information about viewing tokens, see [View tokens](/v2.0/security/tokens/view-tokens/)._

 ```sh
 export INFLUX_TOKEN=YourAuthenticationToken
 ```
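Telegraf can then read the variable from its InfluxDB v2 output plugin configuration. A minimal sketch, assuming the standard `influxdb_v2` output plugin options (`urls`, `token`, `organization`, `bucket`):

```toml
# telegraf.conf (sketch)
[[outputs.influxdb_v2]]
  urls = ["http://localhost:9999"]
  token = "$INFLUX_TOKEN"   # resolved from the environment variable set above
  organization = "example-org"
  bucket = "example-bucket"
```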
@@ -10,14 +10,13 @@ weight: 303

 To delete a Telegraf configuration:

-1. Click **Organizations** in the left navigation menu.
+1. Click the **Settings** tab in the navigation bar.

-   {{< nav-icon "orgs" >}}
+   {{< nav-icon "settings" >}}

-2. Click the **Name** of the organization that owns the configuration you want to delete.
-3. Click the **Telegraf** tab.
-4. Hover over the configuration you want to delete and click **Delete** on the far right.
-5. Click **Confirm**.
+2. Click the **Telegraf** tab.
+3. Hover over the configuration you want to delete, click the **{{< icon "trash" >}}**
+   icon, and **Delete**.

 {{< img-hd src="/img/2-0-telegraf-config-delete.png" />}}
@@ -15,14 +15,13 @@ of a Telegraf configuration created in the UI.
 You cannot modify Telegraf settings in existing Telegraf configurations through the UI.
 {{% /note %}}

-1. Click **Organizations** in the left navigation menu.
+1. Click the **Settings** tab in the navigation bar.

-   {{< nav-icon "orgs" >}}
+   {{< nav-icon "settings" >}}

-2. Click on the **Name** of the organization that owns the configuration you want to delete.
-3. Click the **Telegraf** tab.
-4. Hover over the configuration you want to edit and click **{{< icon "pencil" >}}**
+2. Click the **Telegraf** tab.
+3. Hover over the configuration you want to edit and click **{{< icon "pencil" >}}**
    to update the name or description.
-5. Press Return or click out of the editable field to save your changes.
+4. Press Return or click out of the editable field to save your changes.

 {{< img-hd src="/img/2-0-telegraf-config-update.png" />}}
@@ -12,16 +12,11 @@ weight: 301

 View Telegraf configuration information in the InfluxDB user interface (UI):

-1. Click **Organizations** in the left navigation menu.
+1. Click the **Settings** tab in the navigation bar.

-   {{< nav-icon "orgs" >}}
+   {{< nav-icon "settings" >}}

-2. Click the **Name** of the organization that owns the configuration you want to delete.
-3. Click the **Telegraf** tab.
-4. Hover over a configuration to view options.
-
-   {{< img-hd src="/img/2-0-telegraf-config-view.png" />}}
+2. Click the **Telegraf** tab.

 ### View and download the telegraf.conf
 To view the actual `telegraf.conf` associated with the configuration,
@@ -38,7 +38,7 @@ _By default, InfluxDB runs on port `9999`._

 ##### token
 Your InfluxDB v2.0 authorization token.
-For information about viewing tokens, see [View tokens](/v2.0/users/tokens/view-tokens/).
+For information about viewing tokens, see [View tokens](/v2.0/security/tokens/view-tokens/).

 {{% note %}}
 #### Avoid storing tokens in plain text
@@ -33,7 +33,7 @@ This is a paragraph. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nu
 data = from(bucket: "telegraf/autogen")
   |> range(start: -15m)
   |> filter(fn: (r) =>
-    r._measurement == "mem" AND
+    r._measurement == "mem" and
     r._field == "used_percent"
   )
 ```
@@ -442,7 +442,7 @@ This is a paragraph. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nu
 data = from(bucket: "telegraf/autogen")
   |> range(start: -15m)
   |> filter(fn: (r) =>
-    r._measurement == "mem" AND
+    r._measurement == "mem" and
     r._field == "used_percent"
   )
 ```
@@ -12,7 +12,7 @@ Get started with InfluxDB v2.0 by downloading InfluxDB, installing the necessary
 executables, and running the initial setup process.

 {{% cloud-msg %}}
-This article describes how to get started with InfluxDB OSS. To get started with {{< cloud-name "short" >}}, see [Get Started with InfluxCloud 2.0 Beta](/v2.0/cloud/get-started/).
+This article describes how to get started with InfluxDB OSS. To get started with {{< cloud-name "short" >}}, see [Get started with {{< cloud-name >}}](/v2.0/cloud/get-started/).
 {{% /cloud-msg %}}

 {{< tabs-wrapper >}}
@@ -27,19 +27,24 @@ This article describes how to get started with InfluxDB OSS. To get started with
 ### Download and install InfluxDB v2.0 alpha
 Download InfluxDB v2.0 alpha for macOS.

-<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb_2.0.0-alpha.7_darwin_amd64.tar.gz" download>InfluxDB v2.0 alpha (macOS)</a>
+<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb_2.0.0-alpha.9_darwin_amd64.tar.gz" download>InfluxDB v2.0 alpha (macOS)</a>

-### Place the executables in your $PATH
-Unpackage the downloaded archive and place the `influx` and `influxd` executables in your system `$PATH`.
+### Unpackage the InfluxDB binaries
+Unpackage the downloaded archive.

 _**Note:** The following commands are examples. Adjust the file paths to your own needs._

 ```sh
 # Unpackage contents to the current working directory
-gunzip -c ~/Downloads/influxdb_2.0.0-alpha.7_darwin_amd64.tar.gz | tar xopf -
-
-# Copy the influx and influxd binary to your $PATH
-sudo cp influxdb_2.0.0-alpha.7_darwin_amd64/{influx,influxd} /usr/local/bin/
+gunzip -c ~/Downloads/influxdb_2.0.0-alpha.8_darwin_amd64.tar.gz | tar xopf -
 ```
+
+If you choose, you can place `influx` and `influxd` in your `$PATH`.
+You can also prefix the executables with `./` to run them in place.
+
+```sh
+# (Optional) Copy the influx and influxd binaries to your $PATH
+sudo cp influxdb_2.0.0-alpha.8_darwin_amd64/{influx,influxd} /usr/local/bin/
+```

 {{% note %}}
@@ -50,7 +55,8 @@ If you rename the binaries, all references to `influx` and `influxd` in this doc
 {{% /note %}}

 ### Networking ports
-By default, InfluxDB uses TCP port `9999` for client-server communication over InfluxDB's HTTP API.
+By default, InfluxDB uses TCP port `9999` for client-server communication over
+the [InfluxDB HTTP API](/v2.0/reference/api/).

 ## Start InfluxDB
 Start InfluxDB by running the `influxd` daemon:
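Assuming the binaries are in your `$PATH`, starting the daemon is simply:

```sh
influxd
```

InfluxDB then listens on TCP port `9999` by default.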
@@ -84,8 +90,8 @@ influxd --reporting-disabled
 ### Download and install InfluxDB v2.0 alpha
 Download the InfluxDB v2.0 alpha package appropriate for your chipset.

-<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb_2.0.0-alpha.7_linux_amd64.tar.gz" download >InfluxDB v2.0 alpha (amd64)</a>
-<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb_2.0.0-alpha.7_linux_arm64.tar.gz" download >InfluxDB v2.0 alpha (arm)</a>
+<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb_2.0.0-alpha.9_linux_amd64.tar.gz" download >InfluxDB v2.0 alpha (amd64)</a>
+<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb_2.0.0-alpha.9_linux_arm64.tar.gz" download >InfluxDB v2.0 alpha (arm)</a>

 ### Place the executables in your $PATH
 Unpackage the downloaded archive and place the `influx` and `influxd` executables in your system `$PATH`.
@@ -94,10 +100,10 @@ _**Note:** The following commands are examples. Adjust the file names, paths, an

 ```sh
 # Unpackage contents to the current working directory
-tar xvzf path/to/influxdb_2.0.0-alpha.7_linux_amd64.tar.gz
+tar xvzf path/to/influxdb_2.0.0-alpha.9_linux_amd64.tar.gz

 # Copy the influx and influxd binary to your $PATH
-sudo cp influxdb_2.0.0-alpha.7_linux_amd64/{influx,influxd} /usr/local/bin/
+sudo cp influxdb_2.0.0-alpha.9_linux_amd64/{influx,influxd} /usr/local/bin/
 ```

 {{% note %}}
@@ -108,7 +114,8 @@ If you rename the binaries, all references to `influx` and `influxd` in this doc
 {{% /note %}}

 ### Networking ports
-By default, InfluxDB uses TCP port `9999` for client-server communication over InfluxDB's HTTP API.
+By default, InfluxDB uses TCP port `9999` for client-server communication over
+the [InfluxDB HTTP API](/v2.0/reference/api/).

 ## Start InfluxDB
 Start InfluxDB by running the `influxd` daemon:
@@ -141,7 +148,8 @@ influxd --reporting-disabled
 {{% tab-content %}}
 ### Download and run InfluxDB v2.0 alpha
 Use `docker run` to download and run the InfluxDB v2.0 alpha Docker image.
-Expose port `9999`, which InfluxDB uses for client-server communication over its HTTP API.
+Expose port `9999`, which InfluxDB uses for client-server communication over
+the [InfluxDB HTTP API](/v2.0/reference/api/).

 ```sh
 docker run --name influxdb -p 9999:9999 quay.io/influxdb/influxdb:2.0.0-alpha
@@ -203,6 +211,20 @@ the `influx` command line interface (CLI).
 InfluxDB is now initialized with a primary user, organization, and bucket.
 You are ready to [collect data](/v2.0/collect-data).

+{{% note %}}
+#### Using the influx CLI after setting up InfluxDB through the UI
+To use the [`influx` CLI](/v2.0/reference/cli/influx) after setting up InfluxDB through the UI,
+use one of the following methods to provide your [authentication token](/v2.0/users/tokens/) to the CLI:
+
+1. Pass your token to the `influx` CLI using the `-t` or `--token` flag.
+2. Set the `INFLUX_TOKEN` environment variable using your token.
+3. Store your token in `~/.influxdbv2/credentials`.
+   _The content of the `credentials` file should be only your token._
+
+_See [View tokens](/v2.0/security/tokens/view-tokens/) for information about
+retrieving authentication tokens._
+{{% /note %}}
+
 {{% /tab-content %}}
 <!-------------------------------- END UI Setup ------------------------------->
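As a hedged illustration of those three methods (the token value is a placeholder, and the `bucket find` subcommand is assumed by analogy with `influx org find`):

```sh
# 1. Pass the token with the --token (-t) flag
influx -t My5uP3rS3cr3tT0k3n bucket find

# 2. Export it as an environment variable
export INFLUX_TOKEN=My5uP3rS3cr3tT0k3n

# 3. Store it in the credentials file (the file should contain only the token)
echo "My5uP3rS3cr3tT0k3n" > ~/.influxdbv2/credentials
```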
@@ -14,11 +14,11 @@ to create a bucket.

 ## Create a bucket in the InfluxDB UI

-1. Click the **Organizations** tab in the navigation bar.
+1. Click the **Settings** tab in the navigation bar.

-   {{< nav-icon "orgs" >}}
+   {{< nav-icon "settings" >}}

-2. Click on the name of an organization, then select the **Buckets** tab.
+2. Select the **Buckets** tab.
 3. Click **{{< icon "plus" >}} Create Bucket** in the upper right.
 4. Enter a **Name** for the bucket.
 5. Select **How often to clear data?**:
@@ -14,11 +14,11 @@ to delete a bucket.

 ## Delete a bucket in the InfluxDB UI

-1. Click the **Organizations** tab in the navigation bar.
+1. Click the **Settings** tab in the navigation bar.

-   {{< nav-icon "orgs" >}}
+   {{< nav-icon "settings" >}}

-2. Click on the name of an organization, then select the **Buckets** tab.
+2. Select the **Buckets** tab.
 3. Hover over the bucket you would like to delete.
 4. Click **Delete** and **Confirm** to delete the bucket.
@@ -8,19 +8,41 @@ menu:
     parent: Manage buckets
 weight: 202
 ---

-Use the `influx` command line interface (CLI) or the InfluxDB user interface (UI) to update a bucket.
+Use the InfluxDB user interface (UI) or the `influx` command line interface (CLI)
+to update a bucket.
+Note that updating a bucket's name will affect any assets that reference the bucket by name, including the following:

-## Update a bucket in the InfluxDB UI
+- Queries
+- Dashboards
+- Tasks
+- Telegraf configurations
+- Templates

-1. Click the **Organizations** tab in the navigation bar.
+If you change a bucket name, be sure to update the bucket in the above places as well.

-   {{< nav-icon "orgs" >}}
-
-2. Click on the name of an organization, then select the **Buckets** tab. All of the organization's buckets appear.
-3. To update a bucket's name or retention policy, click the name of the bucket from the list.
-4. Click **Update** to save.
+## Update a bucket's name in the InfluxDB UI
+
+1. Click the **Settings** tab in the navigation bar.
+
+   {{< nav-icon "settings" >}}
+
+2. Select the **Buckets** tab.
+3. Hover over the name of the bucket you want to rename in the list.
+4. Click **Rename**.
+5. Review the information in the window that appears and click **I understand, let's rename my bucket**.
+6. Update the bucket's name and click **Change Bucket Name**.
+
+## Update a bucket's retention policy in the InfluxDB UI
+
+1. Click the **Settings** tab in the navigation bar.
+
+   {{< nav-icon "settings" >}}
+
+2. Select the **Buckets** tab.
+3. Click the name of the bucket you want to update from the list.
+4. In the window that appears, edit the bucket's retention policy.
+5. Click **Save Changes**.

 ## Update a bucket using the influx CLI
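A hedged sketch of the corresponding CLI command (the `-i` ID and `-n` name flags are assumptions modeled on the other `influx` commands in this diff):

```sh
# Rename a bucket by ID (flags assumed)
influx bucket update -i 034ad714fdd6f000 -n "new-bucket-name"
```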
@@ -11,12 +11,12 @@ weight: 202

 ## View buckets in the InfluxDB UI

-1. Click the **Organizations** tab in the navigation bar.
+1. Click the **Settings** tab in the navigation bar.

-   {{< nav-icon "orgs" >}}
+   {{< nav-icon "settings" >}}

-2. Click on the name of an organization, then select the **Buckets** tab. All of the organization's buckets appear.
-3. Click on a bucket to view details.=
+2. Select the **Buckets** tab.
+3. Click on a bucket to view details.

 ## View buckets using the influx CLI
@@ -18,12 +18,12 @@ You cannot currently create additional organizations in {{< cloud-name >}}. Only

 ## Create an organization in the InfluxDB UI

-1. Click the **Organizations** tab in the navigation bar.
+1. Click the **Influx** icon in the navigation bar.

-   {{< nav-icon "orgs" >}}
+   {{< nav-icon "admin" >}}

-2. Click **{{< icon "plus" >}} Create Organization**.
-3. Enter a **Name** for the organization and click **Create**.
+2. Select **Create Organization**.
+3. In the window that appears, enter a name for the organization and associated bucket and click **Create**.

 ## Create an organization using the influx CLI
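A hedged sketch of the corresponding CLI command (the `-n` name flag is an assumption):

```sh
# Create an organization with a name (flag assumed)
influx org create -n "example-org"
```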
@@ -9,18 +9,20 @@ menu:
 weight: 104
 ---

-Use the InfluxDB user interface (UI) or the `influx` command line interface (CLI)
-to create an organization.
+Use the `influx` command line interface (CLI)
+to delete an organization.

+<!--
 ## Delete an organization in the InfluxDB UI

-1. Click the **Organizations** tab in the navigation bar.
+1. Click the **Influx** icon in the navigation bar.

-   {{< nav-icon "orgs" >}}
+   {{< nav-icon "admin" >}}

    The list of organizations appears.

 2. Hover over an organization's name, click **Delete**, and then **Confirm**.
+-->

 ## Delete an organization using the influx CLI
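A hedged sketch of the corresponding CLI command (the `-i` ID flag is an assumption based on the other `influx org` commands):

```sh
# Delete an organization by ID (flag assumed)
influx org delete -i 03a2bbf46249a000
```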
@@ -15,9 +15,9 @@ to add a member to an organization.

 ## Add a member to an organization in the InfluxDB UI

-1. Click the **Organizations** tab in the navigation bar.
+1. Click the **Settings** tab in the navigation bar.

-   {{< nav-icon "orgs" >}}
+   {{< nav-icon "settings" >}}

 2. Click on the name of an organization, then select the **Members** tab.
@@ -15,11 +15,11 @@ to remove a member from an organization.

 ## Remove a member from an organization in the InfluxDB UI

-1. Click the **Organizations** tab in the navigation bar.
+1. Click the **Settings** tab in the navigation bar.

-   {{< nav-icon "orgs" >}}
+   {{< nav-icon "settings" >}}

-2. Click on the name of an organization, then select the **Members** tab.
+2. Select the **Members** tab.

 _Complete content coming soon_
@@ -14,9 +14,9 @@ to view members of an organization.

 ## View members of organization in the InfluxDB UI

-* Click the **Organizations** tab in the navigation bar.
+* Click the **Settings** tab in the navigation bar.

-  {{< nav-icon "orgs" >}}
+  {{< nav-icon "settings" >}}

 * Click on the name of an organization, then select the **Members** tab. The list of organization members appears.
@@ -0,0 +1,21 @@
+---
+title: Switch organizations
+seotitle: Switch organizations in InfluxDB
+description: Switch from one organization to another in the InfluxDB UI.
+menu:
+  v2_0:
+    name: Switch organizations
+    parent: Manage organizations
+weight: 105
+---
+
+Use the InfluxDB user interface (UI) to switch from one organization to another.
+The organization you're currently viewing determines which dashboards, tasks, buckets, members, and other assets you can access.
+
+## Switch organizations in the InfluxDB UI
+
+1. Click the **Influx** icon in the navigation bar.
+
+   {{< nav-icon "admin" >}}
+
+2. Select **Switch Organizations**. The list of organizations appears.
+3. Click the organization you want to switch to.
@@ -9,18 +9,29 @@ menu:
 weight: 103
 ---

-Use the InfluxDB user interface (UI) or the `influx` command line interface (CLI)
-to update an organization.
+Use the `influx` command line interface (CLI) or the InfluxDB user interface (UI) to update an organization.
+
+Note that updating an organization's name will affect any assets that reference the organization by name, including the following:
+
+- Queries
+- Dashboards
+- Tasks
+- Telegraf configurations
+- Templates
+
+If you change an organization name, be sure to update the organization in the above places as well.

 ## Update an organization in the InfluxDB UI

-1. Click the **Organizations** tab in the navigation bar.
+1. Click the **Settings** icon in the navigation bar.

-   {{< nav-icon "orgs" >}}
+   {{< nav-icon "settings" >}}

-2. Click on the organization you want to update in the list.
-3. To update the organization's name, select the **Options** tab.
-4. To manage the organization's members, buckets, dashboards, and tasks, click on the corresponding tabs.
+2. Click the **Org Profile** tab.
+3. Click **Rename**.
+4. In the window that appears, review the information and click **I understand, let's rename my organization**.
+5. Enter a new name for your organization.
+6. Click **Change organization name**.

 ## Update an organization using the influx CLI
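A hedged sketch of the corresponding CLI rename (the `-i` and `-n` flags are assumptions):

```sh
# Rename an organization by ID (flags assumed)
influx org update -i 03a2bbf46249a000 -n "new-org-name"
```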
@@ -14,11 +14,11 @@ to view organizations.

 ## View organizations in the InfluxDB UI

-1. Click the **Organizations** tab in the navigation bar.
+* Click the **Influx** icon in the navigation bar.

-   {{< nav-icon "orgs" >}}
+  {{< nav-icon "admin" >}}

-   The list of organizations appears.
+* Select **Switch Organizations**. The list of organizations appears.

 ## View organizations using the influx CLI
@@ -33,3 +33,25 @@ influx org find
 Filtering options such as filtering by name or ID are available.
 See the [`influx org find` documentation](/v2.0/reference/cli/influx/org/find)
 for information about other available flags.
+
+## View your organization ID
+Use the InfluxDB UI or `influx` CLI to view your organization ID.
+
+### Organization ID in the UI
+After logging in to the InfluxDB UI, your organization ID appears in the URL.
+
+<pre class="highlight">
+http://localhost:9999/orgs/<span class="bp" style="font-weight:bold;margin:0 .15rem">03a2bbf46249a000</span>/...
+</pre>
+
+### Organization ID in the CLI
+Use [`influx org find`](#view-organizations-using-the-influx-cli) to view your organization ID.
+
+```sh
+> influx org find
+
+ID                  Name
+03a2bbf46249a000    org-1
+03ace3a859669000    org-2
+```
@@ -11,17 +11,16 @@ weight: 5
 v2.0/tags: [tasks]
 ---

-InfluxDB's _**task engine**_ is designed for processing and analyzing data.
-A task is a scheduled Flux query that take a stream of input data, modify or
-analyze it in some way, then perform an action.
-Examples include data downsampling, anomaly detection _(Coming)_, alerting _(Coming)_, etc.
+Process and analyze your data with tasks in the InfluxDB **task engine**. Use tasks (scheduled Flux queries)
+to input a data stream and then analyze, modify, and act on the data accordingly.
+
+Discover how to create and manage tasks using the InfluxDB user interface (UI)
+and the `influx` command line interface (CLI).
+Find examples of data downsampling, anomaly detection _(Coming)_, alerting
+_(Coming)_, and other common tasks.

 {{% note %}}
-Tasks are a replacement for InfluxDB v1.x's continuous queries.
+Tasks replace InfluxDB v1.x continuous queries.
 {{% /note %}}

-The following articles explain how to configure and build tasks using the InfluxDB user interface (UI)
-and via raw Flux scripts with the `influx` command line interface (CLI).
-They also provide examples of commonly used tasks.
-
 {{< children >}}
@@ -1,13 +1,15 @@
 ---
-title: Write an InfluxDB task
-seotitle: Write an InfluxDB task that processes data
+title: Get started with InfluxDB tasks
+list_title: Get started with tasks
 description: >
-  How to write an InfluxDB task that processes data in some way, then performs an action
+  Learn the basics of writing an InfluxDB task that processes data, and then performs an action,
+  such as storing the modified data in a new bucket or sending an alert.
+aliases:
+  - /v2.0/process-data/write-a-task/
 v2.0/tags: [tasks]
 menu:
   v2_0:
-    name: Write a task
+    name: Get started with tasks
     parent: Process data
 weight: 101
 ---
@@ -62,7 +64,7 @@ the required time range and any relevant filters.
 data = from(bucket: "telegraf/default")
   |> range(start: -task.every)
   |> filter(fn: (r) =>
-    r._measurement == "mem" AND
+    r._measurement == "mem" and
     r.host == "myHost"
   )
 ```
@@ -111,7 +113,10 @@ to send the transformed data to another bucket:
 ```

 {{% note %}}
-You cannot write to the same bucket you are reading from.
+#### Important notes
+- You cannot write to the same bucket you are reading from.
+- In order to write data into InfluxDB, you must have `_time`, `_measurement`,
+  `_field`, and `_value` columns.
 {{% /note %}}

 ## Full example task script
@@ -132,7 +137,7 @@ option task = {
 data = from(bucket: "telegraf/default")
   |> range(start: -task.every)
   |> filter(fn: (r) =>
-    r._measurement == "mem" AND
+    r._measurement == "mem" and
     r.host == "myHost"
   )
@@ -1,9 +1,10 @@
 ---
 title: Manage tasks in InfluxDB
 seotitle: Manage data processing tasks in InfluxDB
+list_title: Manage tasks
 description: >
-  InfluxDB provides options for managing the creation, reading, updating, and deletion
-  of tasks using both the 'influx' CLI and the InfluxDB UI.
+  InfluxDB provides options for creating, reading, updating, and deleting tasks
+  using both the `influx` CLI and the InfluxDB UI.
 v2.0/tags: [tasks]
 menu:
   v2_0:
@@ -13,7 +13,10 @@ weight: 201
 InfluxDB provides multiple ways to create tasks both in the InfluxDB user interface (UI)
 and the `influx` command line interface (CLI).

-_This article assumes you have already [written a task](/v2.0/process-data/write-a-task)._
+_Before creating a task, review the [basic criteria for writing a task](/v2.0/process-data/get-started)._
+
+- [InfluxDB UI](#create-a-task-in-the-influxdb-ui)
+- [`influx` CLI](#create-a-task-using-the-influx-cli)

 ## Create a task in the InfluxDB UI
 The InfluxDB UI provides multiple ways to create a task:
@@ -21,6 +24,8 @@ The InfluxDB UI provides multiple ways to create a task:
 - [Create a task from the Data Explorer](#create-a-task-from-the-data-explorer)
 - [Create a task in the Task UI](#create-a-task-in-the-task-ui)
 - [Import a task](#import-a-task)
+- [Create a task from a template](#create-a-task-from-a-template)
+- [Clone a task](#clone-a-task)

 ### Create a task from the Data Explorer
 1. Click on the **Data Explorer** icon in the left navigation menu.
@@ -40,11 +45,12 @@ The InfluxDB UI provides multiple ways to create a task:

    {{< nav-icon "tasks" >}}

-2. Click **+ Create Task** in the upper right and select **New Task**.
-3. In the left panel, specify the task options.
-   See [Task options](/v2.0/process-data/task-options)for detailed information about each option.
-4. In the right panel, enter your task script.
-5. Click **Save** in the upper right.
+2. Click **{{< icon "plus" >}} Create Task** in the upper right.
+3. Select **New Task**.
+4. In the left panel, specify the task options.
+   See [Task options](/v2.0/process-data/task-options) for detailed information about each option.
+5. In the right panel, enter your task script.
+6. Click **Save** in the upper right.

 {{< img-hd src="/img/2-0-tasks-create-edit.png" title="Create a task" />}}
@@ -53,16 +59,29 @@ The InfluxDB UI provides multiple ways to create a task:

    {{< nav-icon "tasks" >}}

-2. Click **+ Create Task** in the upper right and select **Import Task**.
-3. Drag and drop or select a file to upload.
-4. Click **Import JSON as Task**.
+2. Click **+ Create Task** in the upper right.
+3. Select **Import Task**.
+4. Upload a JSON task file using one of the following options:
+   - Drag and drop a JSON task file in the specified area.
+   - Click the upload area to select the JSON task file from your file manager.
+   - Select the **JSON** option and paste in raw task JSON.
+5. Click **Import JSON as Task**.
+
+### Create a task from a template
+1. Click on the **Settings** icon in the left navigation menu.
+
+   {{< nav-icon "settings" >}}
+
+2. Select **Templates**.
+3. Hover over the template to use to create the task and click **Create**.

 ### Clone a task
 1. Click on the **Tasks** icon in the left navigation menu.

    {{< nav-icon "tasks" >}}

-2. Hover over the task you would like to clone and click the **{{< icon "duplicate" >}}** that appears.
+2. Hover over the task you would like to clone and click the **{{< icon "duplicate" >}}** icon that appears.
 3. Click **Clone**.

 ## Create a task using the influx CLI
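A hedged sketch modeled on the `influx task update` example later in this diff; creating a task from a Flux file might look like the following (the `--org` flag is an assumption):

```sh
influx task create --org my-org @/tasks/cq-mean-1h.flux
```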
@@ -12,17 +12,15 @@ weight: 205
 InfluxDB lets you export tasks from the InfluxDB user interface (UI).
 Tasks are exported as downloadable JSON files.

 To export a task:

-## Delete a task in the InfluxDB UI
+## Export a task in the InfluxDB UI
 1. Click the **Tasks** icon in the left navigation menu.

    {{< nav-icon "tasks" >}}

 2. In the list of tasks, hover over the task you would like to export and click
-   the **{{< icon "gear" >}}** that appears.
+   the **{{< icon "gear" >}}** icon that appears.
 3. Select **Export**.
-4. There are multiple options for downloading or saving the task export file:
+4. Download or save the task export file using one of the following options:
    - Click **Download JSON** to download the exported JSON file.
    - Click **Save as template** to save the export file as a task template.
    - Click **Copy to Clipboard** to copy the raw JSON content to your machine's clipboard.
@@ -16,7 +16,7 @@ To view your tasks, click the **Tasks** icon in the left navigation menu.
 {{< nav-icon "tasks" >}}

 #### Update a task's Flux script
-1. In the list of tasks, click the **Name** of the task you would like to update.
+1. In the list of tasks, click the **Name** of the task you want to update.
 2. In the left panel, modify the task options.
 3. In the right panel, modify the task script.
 4. Click **Save** in the upper right.
@@ -24,9 +24,8 @@ To view your tasks, click the **Tasks** icon in the left navigation menu.
 {{< img-hd src="/img/2-0-tasks-create-edit.png" alt="Update a task" />}}

 #### Update the status of a task
-In the list of tasks, click the toggle in the **Active** column of the task you
-would like to activate or inactivate.
-
+In the list of tasks, click the {{< icon "toggle" >}} toggle to the left of the
+task you want to activate or inactivate.

 ## Update a task with the influx CLI
 Use the `influx task update` command to update or change the status of an existing task.
@@ -35,7 +34,7 @@ _This command requires a task ID, which is available in the output of `influx ta

 #### Update a task's Flux script
 Pass the file path of your updated Flux script to the `influx task update` command
-with the ID of the task you would like to update.
+with the ID of the task you want to update.
 Modified [task options](/v2.0/process-data/task-options) defined in the Flux
 script are also updated.
@@ -48,7 +47,7 @@ influx task update -i 0343698431c35000 @/tasks/cq-mean-1h.flux
 ```

 #### Update the status of a task
-Pass the ID of the task you would like to update to the `influx task update`
+Pass the ID of the task you want to update to the `influx task update`
 command with the `--status` flag.

 _Possible arguments of the `--status` flag are `active` or `inactive`._
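For example, using the task ID shown above:

```sh
influx task update -i 0343698431c35000 --status inactive
```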
@ -17,10 +17,10 @@ Click the **Tasks** icon in the left navigation to view the lists of tasks.
|
|||
|
||||
### Filter the list of tasks
|
||||
|
||||
1. Enable the **Show Inactive** option to include inactive tasks in the list.
|
||||
2. Enter text in the **Filter tasks by name** field to search for tasks by name.
|
||||
3. Select an organization from the **All Organizations** dropdown to filter the list by organization.
|
||||
4. Click on the heading of any column to sort by that field.
|
||||
1. Click the **Show Inactive** {{< icon "toggle" >}} toggle to include or exclude
|
||||
inactive tasks in the list.
|
||||
2. Enter text in the **Filter tasks** field to search for tasks by name or label.
|
||||
3. Click on the heading of any column to sort by that field.
|
||||
|
||||
## View tasks with the influx CLI
|
||||
Use the `influx task find` command to return a list of created tasks.
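
A quick sketch of the command (the organization name is a placeholder; `--org` and `--limit` appear in the `influx task find` flags reference):

```sh
# List tasks (returns up to 100 tasks by default)
influx task find

# List tasks that belong to a specific organization
influx task find --org my-org
```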
@ -51,6 +51,8 @@ data = from(bucket: "example-bucket") |> range(start: -10m) # ...
```

## InfluxDB API
The [InfluxDB v2 API](/v2.0/reference/api) provides a programmatic
interface for all interactions with InfluxDB.
Query InfluxDB through the `/api/v2/query` endpoint.
Queried data is returned in annotated CSV format.

@ -61,7 +63,7 @@ In your request, set the following:

- `accept` header to `application/csv`.
- `content-type` header to `application/vnd.flux`.

This allows you to POST the Flux query in plain text and receive the annotated CSV response.
This lets you POST the Flux query in plain text and receive the annotated CSV response.

Below is an example `curl` command that queries InfluxDB:
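
A sketch of one possible shape for that command (the organization name and token are placeholders):

```sh
curl "http://localhost:9999/api/v2/query?org=my-org" -XPOST \
  --header 'Authorization: Token YOURAUTHTOKEN' \
  --header 'accept: application/csv' \
  --header 'content-type: application/vnd.flux' \
  --data 'from(bucket: "example-bucket") |> range(start: -10m)'
```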
@ -62,7 +62,14 @@ shaping your data for the desired output.

###### Example group key
```js
[_start, _stop, _field, _measurement, host]
Group key: [_start, _stop, _field]

_start:time                    _stop:time                     _field:string          _time:time                     _value:float
------------------------------ ------------------------------ ---------------------- ------------------------------ ----------------------------
2019-04-25T17:33:55.196959000Z 2019-04-25T17:34:55.196959000Z used_percent           2019-04-25T17:33:56.000000000Z 65.55318832397461
2019-04-25T17:33:55.196959000Z 2019-04-25T17:34:55.196959000Z used_percent           2019-04-25T17:34:06.000000000Z 65.52391052246094
2019-04-25T17:33:55.196959000Z 2019-04-25T17:34:55.196959000Z used_percent           2019-04-25T17:34:16.000000000Z 65.49603939056396
2019-04-25T17:33:55.196959000Z 2019-04-25T17:34:55.196959000Z used_percent           2019-04-25T17:34:26.000000000Z 65.51754474639893
2019-04-25T17:33:55.196959000Z 2019-04-25T17:34:55.196959000Z used_percent           2019-04-25T17:34:36.000000000Z 65.536737442016
```

Note that `_time` and `_value` are excluded from the example group key because they
@ -0,0 +1,179 @@
---
title: Query using conditional logic
seotitle: Query using conditional logic in Flux
description: >
  This guide describes how to use Flux conditional expressions, such as `if`,
  `else`, and `then`, to query and transform data.
v2.0/tags: [conditionals, flux]
menu:
  v2_0:
    name: Query using conditionals
    parent: How-to guides
weight: 209
---

Flux provides `if`, `then`, and `else` conditional expressions that allow for powerful and flexible Flux queries.

##### Conditional expression syntax
```js
// Pattern
if <condition> then <action> else <alternative-action>

// Example
if color == "green" then "008000" else "ffffff"
```

Conditional expressions are most useful in the following contexts:

- When defining variables.
- When using functions that operate on a single row at a time
  ([`filter()`](/v2.0/reference/flux/functions/built-in/transformations/filter/),
  [`map()`](/v2.0/reference/flux/functions/built-in/transformations/map/),
  [`reduce()`](/v2.0/reference/flux/functions/built-in/transformations/aggregations/reduce)).

## Examples

- [Conditionally set the value of a variable](#conditionally-set-the-value-of-a-variable)
- [Create conditional filters](#create-conditional-filters)
- [Conditionally transform column values with map()](#conditionally-transform-column-values-with-map)
- [Conditionally increment a count with reduce()](#conditionally-increment-a-count-with-reduce)

### Conditionally set the value of a variable
The following example sets the `overdue` variable based on the
`dueDate` variable's relation to `now()`.

```js
dueDate = 2019-05-01
overdue = if dueDate < now() then true else false
```

### Create conditional filters
The following example uses an example `metric` [dashboard variable](/v2.0/visualize-data/variables/)
to change how the query filters data.
`metric` has three possible values:

- Memory
- CPU
- Disk

```js
from(bucket: "example-bucket")
  |> range(start: -1h)
  |> filter(fn: (r) =>
    if v.metric == "Memory"
      then r._measurement == "mem" and r._field == "used_percent"
    else if v.metric == "CPU"
      then r._measurement == "cpu" and r._field == "usage_user"
    else if v.metric == "Disk"
      then r._measurement == "disk" and r._field == "used_percent"
    else r._measurement != ""
  )
```

### Conditionally transform column values with map()
The following example uses the [`map()` function](/v2.0/reference/flux/functions/built-in/transformations/map/)
to conditionally transform column values.
It sets the `level` column to a specific string based on the `_value` column.

{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[No Comments](#)
[Comments](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```js
from(bucket: "example-bucket")
  |> range(start: -5m)
  |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent")
  |> map(fn: (r) => ({
      _time: r._time,
      _value: r._value,
      level:
        if r._value >= 95.0000001 and r._value <= 100.0 then "critical"
        else if r._value >= 85.0000001 and r._value <= 95.0 then "warning"
        else if r._value >= 70.0000001 and r._value <= 85.0 then "high"
        else "normal"
    })
  )
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```js
from(bucket: "example-bucket")
  |> range(start: -5m)
  |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent")
  |> map(fn: (r) => ({
      // Retain the _time column in the mapped row
      _time: r._time,
      // Retain the _value column in the mapped row
      _value: r._value,
      // Set the level column value based on the _value column
      level:
        if r._value >= 95.0000001 and r._value <= 100.0 then "critical"
        else if r._value >= 85.0000001 and r._value <= 95.0 then "warning"
        else if r._value >= 70.0000001 and r._value <= 85.0 then "high"
        else "normal"
    })
  )
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}

### Conditionally increment a count with reduce()
The following example uses the [`aggregateWindow()`](/v2.0/reference/flux/functions/built-in/transformations/aggregates/aggregatewindow/)
and [`reduce()`](/v2.0/reference/flux/functions/built-in/transformations/aggregates/reduce/)
functions to count the number of records in every five-minute window that exceed a defined threshold.

{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[No Comments](#)
[Comments](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```js
threshold = 65.0

data = from(bucket: "example-bucket")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent")
  |> aggregateWindow(
      every: 5m,
      fn: (column, tables=<-) => tables |> reduce(
        identity: {above_threshold_count: 0.0},
        fn: (r, accumulator) => ({
          above_threshold_count:
            if r._value >= threshold then accumulator.above_threshold_count + 1.0
            else accumulator.above_threshold_count + 0.0
        })
      )
  )
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```js
threshold = 65.0

data = from(bucket: "example-bucket")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent")
  // Aggregate data into 5 minute windows using a custom reduce() function
  |> aggregateWindow(
      every: 5m,
      // Use a custom function in the fn parameter.
      // The aggregateWindow fn parameter requires 'column' and 'tables' parameters.
      fn: (column, tables=<-) => tables |> reduce(
        identity: {above_threshold_count: 0.0},
        fn: (r, accumulator) => ({
          // Conditionally increment above_threshold_count if
          // r._value exceeds the threshold
          above_threshold_count:
            if r._value >= threshold then accumulator.above_threshold_count + 1.0
            else accumulator.above_threshold_count + 0.0
        })
      )
  )
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}

@ -0,0 +1,256 @@
---
title: Create custom aggregate functions
description: Create your own custom aggregate functions in Flux using the `reduce()` function.
v2.0/tags: [functions, custom, flux, aggregates]
menu:
  v2_0:
    name: Custom aggregate functions
    parent: Create custom functions
weight: 301
---

To aggregate your data, use the Flux
[built-in aggregate functions](/v2.0/reference/flux/functions/built-in/transformations/aggregates/)
or create custom aggregate functions using the
[`reduce()` function](/v2.0/reference/flux/functions/built-in/transformations/aggregates/reduce/).

## Aggregate function characteristics
Aggregate functions all have the same basic characteristics:

- They operate on individual input tables and transform all records into a single record.
- The output table has the same [group key](/v2.0/query-data/get-started/#group-keys) as the input table.

## How reduce() works
The `reduce()` function operates on one row at a time using the function defined in
the [`fn` parameter](/v2.0/reference/flux/functions/built-in/transformations/aggregates/reduce/#fn).
The `fn` function maps keys to specific values using two [objects](/v2.0/query-data/get-started/syntax-basics/#objects)
specified by the following parameters:

| Parameter     | Description |
|:---------:    |:----------- |
| `r`           | An object that represents the row or record. |
| `accumulator` | An object that contains values used in each row's aggregate calculation. |

{{% note %}}
The `reduce()` function's [`identity` parameter](/v2.0/reference/flux/functions/built-in/transformations/aggregates/reduce/#identity)
defines the initial `accumulator` object.
{{% /note %}}

### Example reduce() function
The following example `reduce()` function produces a sum and product of all values
in an input table.

```js
|> reduce(fn: (r, accumulator) => ({
    sum: r._value + accumulator.sum,
    product: r._value * accumulator.product
  }),
  identity: {sum: 0.0, product: 1.0}
)
```

To illustrate how this function works, consider this simplified table:

```txt
_time                   _value
----------------------- -------
2019-04-23T16:10:49.00Z 1.6
2019-04-23T16:10:59.00Z 2.3
2019-04-23T16:11:09.00Z 0.7
2019-04-23T16:11:19.00Z 1.2
2019-04-23T16:11:29.00Z 3.8
```

###### Input objects
The `fn` function uses the data in the first row to define the `r` object.
It defines the `accumulator` object using the `identity` parameter.

```js
r = { _time: 2019-04-23T16:10:49.00Z, _value: 1.6 }
accumulator = { sum : 0.0, product : 1.0 }
```

###### Key mappings
It then uses the `r` and `accumulator` objects to populate values in the key mappings:

```js
// sum: r._value + accumulator.sum
sum: 1.6 + 0.0

// product: r._value * accumulator.product
product: 1.6 * 1.0
```

###### Output object
This produces an output object with the following key-value pairs:

```js
{ sum: 1.6, product: 1.6 }
```

The function then processes the next row using this **output object** as the `accumulator`.

{{% note %}}
Because `reduce()` uses the output object as the `accumulator` when processing the next row,
keys mapped in the `fn` function must match keys in the `identity` and `accumulator` objects.
{{% /note %}}

###### Processing the next row
```js
// Input objects for the second row
r = { _time: 2019-04-23T16:10:59.00Z, _value: 2.3 }
accumulator = { sum : 1.6, product : 1.6 }

// Key mappings for the second row
sum: 2.3 + 1.6
product: 2.3 * 1.6

// Output object of the second row
{ sum: 3.9, product: 3.68 }
```

It then uses the new output object as the `accumulator` for the next row.
This cycle continues until all rows in the table are processed.

##### Final output object and table
After all records in the table are processed, `reduce()` uses the final output object
to create a transformed table with one row and columns for each mapped key.

```js
// Final output object
{ sum: 9.6, product: 11.74656 }

// Output table
sum  product
---- ---------
9.6  11.74656
```

{{% note %}}
#### What happened to the \_time column?
The `reduce()` function only keeps columns that are:

1. Part of the input table's [group key](/v2.0/query-data/get-started/#group-keys).
2. Explicitly mapped in the `fn` function.

It drops all other columns.
Because `_time` is not part of the group key and is not mapped in the `fn` function,
it isn't included in the output table.
{{% /note %}}

## Custom aggregate function examples
To create custom aggregate functions, use principles outlined in
[Creating custom functions](/v2.0/query-data/guides/custom-functions)
and the `reduce()` function to aggregate rows in each input table.

### Create a custom average function
This example illustrates how to create a function that averages values in a table.
_This is meant for demonstration purposes only.
The built-in [`mean()` function](/v2.0/reference/flux/functions/built-in/transformations/aggregates/mean/)
does the same thing and is much more performant._

{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[Comments](#)
[No Comments](#)
{{% /code-tabs %}}

{{% code-tab-content %}}
```js
average = (tables=<-, outputField="average") =>
  tables
    |> reduce(
      // Define the initial accumulator object
      identity: {
        count: 1.0,
        sum: 0.0,
        avg: 0.0
      },
      fn: (r, accumulator) => ({
        // Increment the counter on each reduce loop
        count: accumulator.count + 1.0,
        // Add the _value to the existing sum
        sum: accumulator.sum + r._value,
        // Divide the existing sum by the existing count for a new average
        avg: accumulator.sum / accumulator.count
      })
    )
    // Drop the sum and the count columns since they are no longer needed
    |> drop(columns: ["sum", "count"])
    // Set the _field column of the output table to the value
    // provided in the outputField parameter
    |> set(key: "_field", value: outputField)
    // Rename avg column to _value
    |> rename(columns: {avg: "_value"})
```
{{% /code-tab-content %}}

{{% code-tab-content %}}
```js
average = (tables=<-, outputField="average") =>
  tables
    |> reduce(
      identity: {
        count: 1.0,
        sum: 0.0,
        avg: 0.0
      },
      fn: (r, accumulator) => ({
        count: accumulator.count + 1.0,
        sum: accumulator.sum + r._value,
        avg: accumulator.sum / accumulator.count
      })
    )
    |> drop(columns: ["sum", "count"])
    |> set(key: "_field", value: outputField)
    |> rename(columns: {avg: "_value"})
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
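
A usage sketch, assuming a hypothetical `example-bucket` that contains memory metrics:

```js
from(bucket: "example-bucket")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent")
  |> average(outputField: "mem_used_avg")
```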

### Aggregate multiple columns
Built-in aggregate functions only operate on one column.
Use `reduce()` to create a custom aggregate function that aggregates multiple columns.

The following function expects input tables to have `c1_value` and `c2_value`
columns and generates an average for each.

```js
multiAvg = (tables=<-) =>
  tables
    |> reduce(
      identity: {
        count: 1.0,
        c1_sum: 0.0,
        c1_avg: 0.0,
        c2_sum: 0.0,
        c2_avg: 0.0
      },
      fn: (r, accumulator) => ({
        count: accumulator.count + 1.0,
        c1_sum: accumulator.c1_sum + r.c1_value,
        c1_avg: accumulator.c1_sum / accumulator.count,
        c2_sum: accumulator.c2_sum + r.c2_value,
        c2_avg: accumulator.c2_sum / accumulator.count
      })
    )
```

### Aggregate gross and net profit
Use `reduce()` to create a function that aggregates gross and net profit.
This example expects `profit` and `expenses` columns in the input tables.

```js
profitSummary = (tables=<-) =>
  tables
    |> reduce(
      identity: {
        gross: 0.0,
        net: 0.0
      },
      fn: (r, accumulator) => ({
        gross: accumulator.gross + r.profit,
        net: accumulator.net + r.profit - r.expenses
      })
    )
```
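
To use it, pipe tables that contain those columns into the function. A sketch with a hypothetical `finance` bucket:

```js
from(bucket: "finance")
  |> range(start: -30d)
  |> profitSummary()
```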

@ -0,0 +1,29 @@
---
title: InfluxDB v2 API
description: >
  The InfluxDB v2 API provides a programmatic interface for interactions with InfluxDB.
  Access the InfluxDB API using the `/api/v2/` endpoint.
menu: v2_0_ref
weight: 2
v2.0/tags: [api]
---

The InfluxDB v2 API provides a programmatic interface for interactions with InfluxDB.
Access the InfluxDB API using the `/api/v2/` endpoint.

## Authentication
InfluxDB uses [authentication tokens](/v2.0/security/tokens/) to authorize API requests.
Include your authentication token as an `Authorization` header in each request.

```sh
curl --request GET \
  --url http://localhost:9999/api/v2/ \
  --header 'Authorization: Token YOURAUTHTOKEN'
```

## View InfluxDB v2 API documentation
Full InfluxDB v2 API documentation is built into the `influxd` service.
To view the API documentation, [start InfluxDB](/v2.0/get-started/#start-influxdb)
and visit the `/docs` endpoint in a browser.

<a class="btn" href="http://localhost:9999/docs" target="_blank">localhost:9999/docs</a>

@ -8,7 +8,7 @@ v2.0/tags: [cli]
menu:
  v2_0_ref:
    name: Command line tools
weight: 2
weight: 3
---

InfluxDB provides command line tools designed to aid in managing and working

@ -21,6 +21,27 @@ influx [flags]
influx [command]
```

{{% note %}}
#### Store your InfluxDB authentication token
To avoid having to pass your InfluxDB [authentication token](/v2.0/users/tokens/)
with each `influx` command, use one of the following methods to store your token:

1. Set the `INFLUX_TOKEN` environment variable using your token.

    ```bash
    export INFLUX_TOKEN=oOooYourAuthTokenOoooOoOO==
    ```

2. Store your token in `~/.influxdbv2/credentials`.
   _The content of the `credentials` file should be only your token._

_**Note:** If you [set up InfluxDB using the CLI](/v2.0/reference/cli/influx/setup),
InfluxDB stores your token in the credentials file automatically._

_See [View tokens](/v2.0/security/tokens/view-tokens/) for information about
retrieving authentication tokens._
{{% /note %}}

## Commands
| Command | Description |
|:------- |:----------- |

@ -28,6 +49,7 @@ influx [command]
| [bucket](/v2.0/reference/cli/influx/bucket) | Bucket management commands |
| [help](/v2.0/reference/cli/influx/help) | Help about any command |
| [org](/v2.0/reference/cli/influx/org) | Organization management commands |
| [ping](/v2.0/reference/cli/influx/ping) | Check the InfluxDB `/health` endpoint |
| [query](/v2.0/reference/cli/influx/query) | Execute a Flux query |
| [repl](/v2.0/reference/cli/influx/repl) | Interactive REPL (read-eval-print-loop) |
| [setup](/v2.0/reference/cli/influx/setup) | Create default username, password, org, bucket, etc. |

@ -21,13 +21,13 @@ influx auth [command]
`auth`, `authorization`

## Subcommands
| Subcommand | Description |
|:---------- |:----------- |
| [active](/v2.0/reference/cli/influx/auth/active) | Active authorization |
| [create](/v2.0/reference/cli/influx/auth/create) | Create authorization |
| [delete](/v2.0/reference/cli/influx/auth/delete) | Delete authorization |
| [find](/v2.0/reference/cli/influx/auth/find) | Find authorization |
| [inactive](/v2.0/reference/cli/influx/auth/inactive) | Inactive authorization |
| Subcommand | Description |
|:---------- |:----------- |
| [active](/v2.0/reference/cli/influx/auth/active) | Activate authorization |
| [create](/v2.0/reference/cli/influx/auth/create) | Create authorization |
| [delete](/v2.0/reference/cli/influx/auth/delete) | Delete authorization |
| [find](/v2.0/reference/cli/influx/auth/find) | Find authorization |
| [inactive](/v2.0/reference/cli/influx/auth/inactive) | Inactivate authorization |

## Flags
| Flag | Description |

@ -20,6 +20,8 @@ influx auth find [flags]
|:---- |:----------- |:----------: |
| `-h`, `--help` | Help for the `find` command | |
| `-i`, `--id` | The authorization ID | string |
| `-o`, `--org` | The organization | string |
| `--org-id` | The organization ID | string |
| `-u`, `--user` | The user | string |
| `--user-id` | The user ID | string |

@ -20,7 +20,6 @@ influx bucket create [flags]
|:---- |:----------- |:----------: |
| `-h`, `--help` | Help for the `create` command | |
| `-n`, `--name` | Name of bucket that will be created | string |
| `-o`, `--org` | Name of the organization that owns the bucket | string |
| `--org-id` | The ID of the organization that owns the bucket | string |
| `-r`, `--retention` | Duration in nanoseconds data will live in bucket | duration |

@ -0,0 +1,33 @@
---
title: influx ping – Check the health of InfluxDB
description: >
  The `influx ping` command checks the health of a running InfluxDB instance by
  querying the `/health` endpoint.
menu:
  v2_0_ref:
    name: influx ping
    parent: influx
weight: 101
v2.0/tags: [ping, health]
---

The `influx ping` command checks the health of a running InfluxDB instance by
querying the `/health` endpoint.
It does not require an authorization token.

## Usage
```
influx ping [flags]
```

## Flags
| Flag | Description |
|:---- |:----------- |
| `-h`, `--help` | Help for the `ping` command |

## Global Flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |
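
For example, a sketch that checks a local and a remote instance (the remote host URL is a placeholder):

```sh
# Check the local instance
influx ping

# Check a remote instance
influx ping --host http://influxdb.example.com:9999
```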

@ -1,5 +1,5 @@
---
title: influx setup – Run the initial Influx DB setup
title: influx setup – Run the initial InfluxDB setup
description: >
  The 'influx setup' command walks through the initial InfluxDB setup process,
  creating a default user, organization, and bucket.

@ -21,6 +21,7 @@ influx task find [flags]
| `-h`, `--help` | Help for `find` | |
| `-i`, `--id` | Task ID | string |
| `--limit` | The number of tasks to find (default `100`) | integer |
| `--org` | Task organization name | string |
| `--org-id` | Task organization ID | string |
| `-n`, `--user-id` | Task owner ID | string |

@ -1,6 +1,5 @@
---
title: influxd - InfluxDB daemon
seotitle: influxd - InfluxDB daemon
description: The influxd daemon starts and runs all the processes necessary for InfluxDB to function.
v2.0/tags: [influxd, cli]
menu:

@ -21,9 +20,11 @@ influxd [command]

## Commands

| Command | Description |
|:------- |:----------- |
| [run](/v2.0/reference/cli/influxd/run) | Start the influxd server (default) |
| Command | Description |
|:------- |:----------- |
| [generate](/v2.0/reference/cli/influxd/generate) | Generate time series data sets using TOML schema. |
| [inspect](/v2.0/reference/cli/influxd/inspect) | Inspect on-disk database data. |
| [run](/v2.0/reference/cli/influxd/run) | Start the influxd server _**(default)**_ |

## Flags

@ -36,7 +37,6 @@ influxd [command]
| `-h`, `--help` | Help for `influxd` | |
| `--http-bind-address` | Bind address for the REST HTTP API (default `:9999`) | string |
| `--log-level` | Supported log levels are debug, info, and error (default `info`) | string |
| `--protos-path` | Path to protos on the filesystem (default `~/.influxdbv2/protos`) | string |
| `--reporting-disabled` | Disable sending telemetry data to https://telemetry.influxdata.com | |
| `--secret-store` | Data store for secrets (bolt or vault) (default `bolt`) | string |
| `--store` | Data store for REST resources (bolt or memory) (default `bolt`) | string |

@ -0,0 +1,46 @@
---
title: influxd generate
description: >
  The `influxd generate` command generates time series data directly to disk using
  a schema defined in a TOML file.
v2.0/tags: [sample-data]
menu:
  v2_0_ref:
    parent: influxd
weight: 201
---

The `influxd generate` command generates time series data directly to disk using a schema defined in a TOML file.

{{% note %}}
#### Important notes
- `influxd generate` cannot run while the `influxd` server is running.
  The `generate` command modifies index and Time-Structured Merge Tree (TSM) data.
- Use `influxd generate` for **development and testing purposes only**.
  Do not run it on a production server.
{{% /note %}}

## Usage
```sh
influxd generate <schema.toml> [flags]
influxd generate [subcommand]
```

## Subcommands
| Subcommand | Description |
|:------- |:----------- |
| [help-schema](/v2.0/reference/cli/influxd/generate/help-schema) | Print a documented example TOML schema to stdout. |
| [simple](/v2.0/reference/cli/influxd/generate/simple) | Generate simple data sets using defaults and CLI flags. |

## Flags
| Flag | Description | Input Type |
|:---- |:----------- |:----------:|
| `--print` | Print data spec and exit | |
| `--org` | Name of organization | string |
| `--bucket` | Name of bucket | string |
| `--start-time` | Start time (`YYYY-MM-DDT00:00:00Z`) (default is 00:00:00 of one week ago) | string |
| `--end-time` | End time (`YYYY-MM-DDT00:00:00Z`) (default is 00:00:00 of current day) | string |
| `--clean` | Clean time series data files (`none`, `tsm` or `all`) (default `none`) | string |
| `--cpuprofile` | Collect a CPU profile | string |
| `--memprofile` | Collect a memory profile | string |
| `-h`, `--help` | Help for `generate` | |
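
For example, a sketch that generates data from a schema file into a hypothetical organization and bucket:

```sh
influxd generate schema.toml --org my-org --bucket example-bucket
```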

@ -0,0 +1,206 @@
---
title: influxd generate help-schema
description: >
  The `influxd generate help-schema` command outputs an example TOML schema to stdout
  that includes documentation describing the available options.
v2.0/tags: [sample-data]
menu:
  v2_0_ref:
    parent: influxd generate
weight: 301
---

The `influxd generate help-schema` command outputs an example TOML schema to stdout that includes
descriptions of available options. _See [example output](#example-output) below_.
Use custom TOML schema files to generate sample data sets with
[`influxd generate`](/v2.0/reference/cli/influxd/generate).

## Usage
```sh
influxd generate help-schema [flags]
```

## Flags
| Flag | Description | Input Type |
|:---- |:----------- |:----------:|
| `--print` | Print data spec and exit | |
| `--org` | Name of organization | string |
| `--bucket` | Name of bucket | string |
| `--start-time` | Start time (`YYYY-MM-DDT00:00:00Z`) (default is 00:00:00 of one week ago) | string |
| `--end-time` | End time (`YYYY-MM-DDT00:00:00Z`) (default is 00:00:00 of current day) | string |
| `--clean` | Clean time series data files (`none`, `tsm` or `all`) (default `none`) | string |
| `--cpuprofile` | Collect a CPU profile | string |
| `--memprofile` | Collect a memory profile | string |
| `-h`, `--help` | Help for `generate help-schema` | |

## Example output
{{% truncate %}}
```toml
title = "Documented schema"

# limit the maximum number of series generated across all measurements
#
# series-limit: integer, optional (default: unlimited)

[[measurements]]

# name of measurement
#
# NOTE:
# Multiple definitions of the same measurement name are allowed and
# will be merged together.
name = "cpu"

# sample: float; where 0 < sample ≤ 1.0 (default: 0.5)
# sample a subset of the tag set
#
# sample 25% of the tags
#
sample = 0.25

# Keys for defining a tag
#
# name: string, required
#   Name of field
#
# source: array<string> or object
#
#   A literal array of string values defines the tag values.
#
#   An object defines more complex generators. The type key determines the
#   type of generator.
#
# source types:
#
#   type: "sequence"
#     generate a sequence of tag values
#
#     format: string
#       a format string for the values (default: "value%s")
#     start: int (default: 0)
#       beginning value
#     count: int, required
#       ending value
#
#   type: "file"
#     generate a sequence of tag values from a file source.
#     The data in the file is sorted, deduplicated and verified as valid UTF-8
#
#     path: string
#       absolute path or relative path to current toml file
tags = [
    # example sequence tag source. The range of values is automatically
    # prefixed with 0s to ensure correct sort behavior.
    { name = "host", source = { type = "sequence", format = "host-%s", start = 0, count = 5 } },

    # tags can also be sourced from a file. The path is relative to the
    # schema.toml.
    # Each value must be on a new line. The file is also sorted, deduplicated
    # and UTF-8 validated.
    { name = "rack", source = { type = "file", path = "files/racks.txt" } },

    # Example string array source, which is also deduplicated and sorted
    { name = "region", source = ["us-west-01","us-west-02","us-east"] },
]

# Keys for defining a field
#
# name: string, required
#   Name of field
#
# count: int, required
#   The maximum number of values to generate. When multiple fields
#   have the same count and time-spec, they will share timestamps.
#
# A time-spec can be either time-precision or time-interval, which
# determines how timestamps are generated and may also influence
# the time range and number of values generated.
#
# time-precision: string [ns, us, ms, s, m, h] (default: ms)
#   Specifies the precision (rounding) for generated timestamps.
#
#   If the precision results in fewer than "count" intervals for the
#   given time range, the number of values will be reduced.
#
#   Example:
#     count = 1000, start = 0s, end = 100s, time-precision = s
#     100 values will be generated at [0s, 1s, 2s, ..., 99s]
#
#   If the precision results in greater than "count" intervals for the
#   given time range, the interval will be rounded to the nearest multiple of
#   time-precision.
#
#   Example:
#     count = 10, start = 0s, end = 100s, time-precision = s
#     10 values will be generated at [0s, 10s, 20s, ..., 90s]
#
# time-interval: Go duration string (eg 90s, 1h30m)
#   Specifies the delta between generated timestamps.
#
#   If the delta results in fewer than "count" intervals for the
#   given time range, the number of values will be reduced.
#
#   Example:
#     count = 100, start = 0s, end = 100s, time-interval = 10s
#     10 values will be generated at [0s, 10s, 20s, ..., 90s]
#
#   If the delta results in greater than "count" intervals for the
#   given time range, the start-time will be adjusted to ensure "count" values.
#
#   Example:
#     count = 20, start = 0s, end = 1000s, time-interval = 10s
#     20 values will be generated at [800s, 810s, ..., 900s, ..., 990s]
#
# source: int, float, boolean, string, array or object
#
#   A literal int, float, boolean or string will produce
#   a constant value of the same data type.
#
#   A literal array of homogeneous values will generate a repeating
#   sequence.
#
#   An object defines more complex generators. The type key determines the
#   type of generator.
#
# source types:
#
#   type: "rand<float>"
#     generate random float values
#     seed: seed to random number generator (default: 0)
#     min: minimum value (default: 0.0)
#     max: maximum value (default: 1.0)
#
#   type: "zipf<integer>"
#     generate random integer values using a Zipf distribution
#     The generator generates values k ∈ [0, imax] such that P(k)
#     is proportional to (v + k) ** (-s). Requirements: s > 1 and v ≥ 1.
#     See https://golang.org/pkg/math/rand/#NewZipf for more information.
#
#     seed: seed to random number generator (default: 0)
#     s: float > 1 (required)
#     v: float ≥ 1 (required)
#     imax: integer (required)
#
fields = [
    # Example constant float
    { name = "system", count = 5000, source = 2.5 },

    # Example random floats
    { name = "user", count = 5000, source = { type = "rand<float>", seed = 10, min = 0.0, max = 1.0 } },
]

# Multiple measurements may be defined.
[[measurements]]
name = "mem"
tags = [
    { name = "host", source = { type = "sequence", format = "host-%s", start = 0, count = 5 } },
    { name = "region", source = ["us-west-01","us-west-02","us-east"] },
]
fields = [
    # An example of a sequence of integer values
    { name = "free", count = 100, source = [10,15,20,25,30,35,30], time-precision = "ms" },
    { name = "low_mem", count = 100, source = [false,true,true], time-precision = "ms" },
]
```
{{% /truncate %}}

@ -0,0 +1,40 @@
---
title: influxd generate simple
description: >
  The `influxd generate simple` command generates and writes a simple data set using
  reasonable defaults and CLI flags.
v2.0/tags: [sample-data]
menu:
  v2_0_ref:
    parent: influxd generate
weight: 301
---

The `influxd generate simple` command generates and writes a simple data set using
reasonable defaults and command line interface (CLI) [flags](#flags).

{{% note %}}
#### Important notes
- `influxd generate simple` cannot run while the `influxd` server is running.
  The `generate` command modifies index and Time-Structured Merge Tree (TSM) data.
- Use `influxd generate simple` for **development and testing purposes only**.
  Do not run it on a production server.
{{% /note %}}

## Usage
```sh
influxd generate simple [flags]
```

## Flags
| Flag | Description | Input Type |
|:---- |:----------- |:----------:|
| `--print` | Print data spec and exit | |
| `--org` | Name of organization | string |
| `--bucket` | Name of bucket | string |
| `--start-time` | Start time (`YYYY-MM-DDT00:00:00Z`) (default is 00:00:00 of one week ago) | string |
| `--end-time` | End time (`YYYY-MM-DDT00:00:00Z`) (default is 00:00:00 of current day) | string |
| `--clean` | Clean time series data files (`none`, `tsm` or `all`) (default `none`) | string |
| `--cpuprofile` | Collect a CPU profile | string |
| `--memprofile` | Collect a memory profile | string |
| `-h`, `--help` | Help for `generate simple` | |
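
For example, a sketch that writes the default simple data set into a hypothetical organization and bucket:

```sh
influxd generate simple --org my-org --bucket example-bucket
```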

@ -0,0 +1,26 @@
---
title: influxd inspect
description: The `influxd inspect` command and its subcommands inspect on-disk InfluxDB time series data.
v2.0/tags: [inspect]
menu:
  v2_0_ref:
    parent: influxd
weight: 201
---

The `influxd inspect` command and its subcommands inspect on-disk InfluxDB time series data.

## Usage
```sh
influxd inspect [command]
```

## Subcommands
| Subcommand | Description |
|:---------- |:----------- |
| [report-tsm](/v2.0/reference/cli/influxd/inspect/report-tsm/) | Run TSM report |

## Flags
| Flag | Description |
|:---- |:----------- |
| `-h`, `--help` | Help for `inspect` |

@ -0,0 +1,57 @@
---
title: influxd inspect report-tsm
description: >
  The `influxd inspect report-tsm` command analyzes Time-Structured Merge Tree (TSM)
  files within a storage engine directory and reports the cardinality within the files
  and the time range the data covers.
v2.0/tags: [tsm, cardinality, inspect]
menu:
  v2_0_ref:
    parent: influxd inspect
weight: 301
---

The `influxd inspect report-tsm` command analyzes Time-Structured Merge Tree (TSM)
files within a storage engine directory and reports the cardinality within the files
and the time range the data covers.

This command only interrogates the index within each file.
It does not read any block data.
To reduce heap requirements, by default `report-tsm` estimates the overall
cardinality in the file set using the HLL++ algorithm.
To calculate exact cardinalities, use the `--exact` flag.

## Usage
```sh
influxd inspect report-tsm [flags]
```

## Output details
`influxd inspect report-tsm` outputs the following for each file:

- The full file name.
- The series cardinality within the file.
- The number of series first encountered within the file.
- The minimum and maximum timestamp associated with TSM data in the file.
- The time to load the TSM index and apply any tombstones.

The summary section then outputs the total time range and series cardinality for
the file set. Depending on the `--detailed` flag, series cardinality is segmented
in the following ways:

- Series cardinality for each organization.
- Series cardinality for each bucket.
- Series cardinality for each measurement.
- Number of field keys for each measurement.
- Number of tag values for each tag key.

## Flags
| Flag | Description | Input Type |
|:---- |:----------- |:----------:|
| `--bucket-id` | Process only data belonging to bucket ID. _Requires `org-id` flag to be set._ | string |
| `--data-dir` | Use provided data directory (defaults to `~/.influxdbv2/engine/data`). | string |
| `--detailed` | Emit series cardinality segmented by measurements, tag keys, and fields. _**May take a while**_. | |
| `--exact` | Calculate an exact cardinality count. _**May use significant memory**_. | |
| `-h`, `--help` | Help for `report-tsm`. | |
| `--org-id` | Process only data belonging to organization ID. | string |
| `--pattern` | Only process TSM files containing pattern. | string |
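
For example, a sketch of a detailed report scoped to a single organization (the ID is a placeholder):

```sh
influxd inspect report-tsm --detailed --org-id 000000000000000a
```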

@ -4,7 +4,6 @@ description: The `influxd run` command is the default `influxd` command and star
v2.0/tags: [influxd, cli]
menu:
  v2_0_ref:
    name: run
    parent: influxd
weight: 201
---

@ -38,7 +37,6 @@ influxd run
| `-h`, `--help` | Help for `run` | |
| `--http-bind-address` | Bind address for the REST HTTP API (default `:9999`) | string |
| `--log-level` | Supported log levels are debug, info, and error (default `info`) | string |
| `--protos-path` | Path to protos on the filesystem (default `~/.influxdbv2/protos`) | string |
| `--reporting-disabled` | Disable sending telemetry data to https://telemetry.influxdata.com | |
| `--secret-store` | Data store for secrets (bolt or vault) (default `bolt`) | string |
| `--store` | Data store for REST resources (bolt or memory) (default `bolt`) | string |
@ -0,0 +1,164 @@
---
title: InfluxDB configuration options
description: >
  Configure InfluxDB when starting the `influxd` service.
  This article outlines the different configuration options available.
menu:
  v2_0_ref:
    name: Configuration options
weight: 2
---

To configure InfluxDB, use the following configuration options when starting the
[`influxd` service](/v2.0/reference/cli/influxd):

- [--assets-path](#assets-path)
- [--bolt-path](#bolt-path)
- [--e2e-testing](#e2e-testing)
- [--engine-path](#engine-path)
- [--http-bind-address](#http-bind-address)
- [--log-level](#log-level)
- [--reporting-disabled](#reporting-disabled)
- [--secret-store](#secret-store)
- [--store](#store)
- [--tracing-type](#tracing-type)

```sh
influxd \
  --assets-path=/path/to/custom/assets-dir \
  --bolt-path=~/.influxdbv2/influxd.bolt \
  --e2e-testing \
  --engine-path=~/.influxdbv2/engine \
  --http-bind-address=:9999 \
  --log-level=info \
  --reporting-disabled \
  --secret-store=bolt \
  --store=bolt \
  --tracing-type=log
```

---

## --assets-path
_Typically, InfluxData internal use only._
Overrides the default InfluxDB user interface (UI) assets by serving assets from the specified directory.

```sh
influxd --assets-path=/path/to/custom/assets-dir
```

---

## --bolt-path
Defines the path to the [BoltDB](https://github.com/boltdb/bolt) database.
BoltDB is a key-value store written in Go.
InfluxDB uses BoltDB to store data including organization and
user information, UI data, REST resources, and other key-value data.

**Default:** `~/.influxdbv2/influxd.bolt`

```sh
influxd --bolt-path=~/.influxdbv2/influxd.bolt
```

---

## --e2e-testing
Adds a `/debug/flush` endpoint to the InfluxDB HTTP API to clear stores.
InfluxData uses this endpoint in end-to-end testing.

```sh
influxd --e2e-testing
```

---

## --engine-path
Defines the path to persistent storage engine files where InfluxDB stores all
Time-Structured Merge Tree (TSM) data on disk.

**Default:** `~/.influxdbv2/engine`

```sh
influxd --engine-path=~/.influxdbv2/engine
```

---

## --http-bind-address
Defines the bind address for the InfluxDB HTTP API.
Customize the URL and port for the InfluxDB API and UI.

**Default:** `:9999`

```sh
influxd --http-bind-address=:9999
```

---

## --log-level
Defines the log output level.
InfluxDB outputs log entries with severity levels greater than or equal to the level specified.

**Options:** `debug`, `info`, `error`
**Default:** `info`

```sh
influxd --log-level=info
```

---

## --reporting-disabled
Disables sending telemetry data to InfluxData.
The [InfluxData telemetry](https://www.influxdata.com/telemetry) page provides
information about what data is collected and how InfluxData uses it.

```sh
influxd --reporting-disabled
```

---

## --secret-store
Specifies the data store for secrets such as passwords and tokens.
Store secrets in either the InfluxDB [internal BoltDB](#bolt-path)
or in [Vault](https://www.vaultproject.io/).

**Options:** `bolt`, `vault`
**Default:** `bolt`

```sh
influxd --secret-store=bolt
```

---

## --store
Specifies the data store for REST resources.

**Options:** `bolt`, `memory`
**Default:** `bolt`

{{% note %}}
`memory` is meant for transient environments, such as testing environments, where
data persistence does not matter.
InfluxData does not recommend using `memory` in production.
{{% /note %}}

```sh
influxd --store=bolt
```

---

## --tracing-type
Enables tracing in InfluxDB and specifies the tracing type.
Tracing is disabled by default.

**Options:** `log`, `jaeger`

```sh
influxd --tracing-type=log
```

@ -5,7 +5,7 @@ v2.0/tags: [flux]
menu:
  v2_0_ref:
    name: Flux query language
weight: 3
weight: 4
---

The following articles are meant as a reference for Flux functions and the

@ -6,7 +6,7 @@ menu:
  v2_0_ref:
    name: Flux packages and functions
    parent: Flux query language
weight: 101
weight: 102
---

Flux's functional syntax allows you to retrieve, transform, process, and output data easily.

@ -0,0 +1,24 @@
---
title: now() function
description: The `now()` function returns the current time (GMT).
menu:
  v2_0_ref:
    name: now
    parent: built-in-misc
weight: 401
---

The `now()` function returns the current time (GMT).

_**Function type:** Date/Time_
_**Output data type:** Time_

```js
now()
```

## Examples
```js
data
  |> range(start: -10h, stop: now())
```

@ -1,25 +0,0 @@
---
title: systemTime() function
description: The `systemTime()` function returns the current system time.
aliases:
  - /v2.0/reference/flux/functions/misc/systemtime
menu:
  v2_0_ref:
    name: systemTime
    parent: built-in-misc
weight: 401
---

The `systemTime()` function returns the current system time.

_**Function type:** Date/Time_
_**Output data type:** Timestamp_

```js
systemTime()
```

## Examples
```js
offsetTime = (offset) => systemTime() |> shift(shift: offset)
```

@ -12,8 +12,7 @@ weight: 401

The `to()` function writes data to an **InfluxDB v2.0** bucket.

_**Function type:** Output_
_**Output data type:** Object_
_**Function type:** Output_

```js
to(

@ -39,6 +38,18 @@ to(
)
```

{{% note %}}
### Output data requirements
The `to()` function converts output data into line protocol and writes it to InfluxDB.
Line protocol requires each record to have a timestamp, a measurement, a field, and a value.
All output data must include the following columns:

- `_time`
- `_measurement`
- `_field`
- `_value`
{{% /note %}}
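
For example, a sketch that copies the last hour of data from one bucket to another (bucket and organization names are placeholders):

```js
from(bucket: "example-bucket")
  |> range(start: -1h)
  |> to(bucket: "example-bucket-copy", org: "my-org")
```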

## Parameters
{{% note %}}
`bucket` OR `bucketID` is **required**.

@ -14,16 +14,15 @@ v2.0/tags: [aggregates, built-in, functions]
---

Flux's built-in aggregate functions take values from an input table and aggregate them in some way.
The output table contains is a single row with the aggregated value.
The output table contains a single row with the aggregated value.

Aggregate operations output a table for every input table they receive.
A list of columns to aggregate must be provided to the operation.
The aggregate function is applied to each column in isolation.
You must provide a column to aggregate.
Any output table will have the following properties:

- It always contains a single record.
- It will have the same group key as the input table.
- It will contain a column for each provided aggregate column.
- It will contain a column for the provided aggregate column.
  The column label will be the same as the input table.
  The type of the column depends on the specific aggregate operation.
  The value of the column will be `null` if the input table is empty or the input column has only `null` values.

@ -18,15 +18,15 @@ _**Function type:** Aggregate_
aggregateWindow(
  every: 1m,
  fn: mean,
  columns: ["_value"],
  timeColumn: "_stop",
  column: "_value",
  timeSrc: "_stop",
  timeDst: "_time",
  createEmpty: true
)
```

As data is windowed into separate tables and aggregated, the `_time` column is dropped from each group key.
This helper copies the timestamp from a remaining column into the `_time` column.
This function copies the timestamp from a remaining column into the `_time` column.
View the [function definition](#function-definition).

## Parameters

@ -37,17 +37,21 @@ The duration of windows.
_**Data type:** Duration_

### fn
The aggregate function used in the operation.
The [aggregate function](/v2.0/reference/flux/functions/built-in/transformations/aggregates) used in the operation.

_**Data type:** Function_

### columns
List of columns on which to operate.
Defaults to `["_value"]`.
{{% note %}}
Only aggregate functions with a `column` parameter (singular) work with `aggregateWindow()`.
{{% /note %}}

_**Data type:** Array of strings_
### column
The column on which to operate.
Defaults to `"_value"`.

### timeColumn
_**Data type:** String_

### timeSrc
The time column from which time is copied for the aggregate record.
Defaults to `"_stop"`.

@ -92,18 +96,19 @@ from(bucket: "telegraf/autogen")
    r._measurement == "mem" and
    r._field == "used_percent")
  |> aggregateWindow(
    column: "_value",
    every: 5m,
    fn: (columns, tables=<-) => tables |> percentile(percentile: 0.99, columns:columns)
    fn: (column, tables=<-) => tables |> quantile(q: 0.99, column:column)
  )
```

## Function definition
```js
aggregateWindow = (every, fn, columns=["_value"], timeColumn="_stop", timeDst="_time", tables=<-) =>
aggregateWindow = (every, fn, column="_value", timeSrc="_stop", timeDst="_time", tables=<-) =>
  tables
    |> window(every:every)
    |> fn(columns:columns)
    |> duplicate(column:timeColumn, as:timeDst)
    |> fn(column:column)
    |> duplicate(column:timeSrc, as:timeDst)
    |> window(every:inf, timeColumn:timeDst)
```

@ -1,6 +1,6 @@
---
title: count() function
description: The `count()` function outputs the number of non-null records in each aggregated column.
description: The `count()` function outputs the number of non-null records in a column.
aliases:
  - /v2.0/reference/flux/functions/transformations/aggregates/count
menu:

@ -10,23 +10,23 @@ menu:
weight: 501
---

The `count()` function outputs the number of records in each aggregated column.
The `count()` function outputs the number of records in a column.
It counts both null and non-null records.

_**Function type:** Aggregate_
_**Output data type:** Integer_

```js
count(columns: ["_value"])
count(column: "_value")
```

## Parameters

### columns
A list of columns on which to operate.
Defaults to `["_value"]`.
### column
The column on which to operate.
Defaults to `"_value"`.

_**Data type: Array of strings**_
_**Data type: String**_

## Examples
```js

@ -38,7 +38,7 @@ from(bucket: "telegraf/autogen")
```js
from(bucket: "telegraf/autogen")
  |> range(start: -5m)
  |> count(columns: ["_value"])
  |> count(column: "_value")
```

<hr style="margin-top:4rem"/>

@ -22,14 +22,10 @@ covariance(columns: ["column_x", "column_y"], pearsonr: false, valueDst: "_value
## Parameters

### columns
A list of columns on which to operate.
A list of **two columns** on which to operate. <span class="required">Required</span>

_**Data type:** Array of strings_

{{% note %}}
Exactly two columns must be provided to the `columns` property.
{{% /note %}}

### pearsonr
Indicates whether the result should be normalized to be the Pearson R coefficient.

@ -12,7 +12,7 @@ weight: 501

The `derivative()` function computes the rate of change per [`unit`](#unit) of time between subsequent non-null records.
It assumes rows are ordered by the `_time` column.
The output table schema will be the same as the input table.
The output table schema is the same as the input table.

_**Function type:** Aggregate_
_**Output data type:** Float_

@ -21,7 +21,7 @@ _**Output data type:** Float_
derivative(
  unit: 1s,
  nonNegative: false,
  columns: ["_value"],
  column: "_value",
  timeSrc: "_time"
)
```

@ -40,11 +40,11 @@ When set to `true`, if a value is less than the previous value, it is assumed th

_**Data type:** Boolean_

### columns
A list of columns on which to compute the derivative.
Defaults to `["_value"]`.
### column
The column to use to compute the derivative.
Defaults to `"_value"`.

_**Data type:** Array of strings_
_**Data type:** String_

### timeSrc
The column containing time values.
|
||||
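A usage sketch combining the parameters above (the bucket and measurement filter are assumptions):

```js
// Per-second rate of change, ignoring decreases such as counter resets.
from(bucket: "telegraf/autogen")
  |> range(start: -5m)
  |> filter(fn: (r) => r._measurement == "cpu")
  |> derivative(unit: 1s, nonNegative: true, column: "_value", timeSrc: "_time")
```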
|
|
|
@ -11,13 +11,13 @@ weight: 501
|
|||
---
|
||||
|
||||
The `difference()` function computes the difference between subsequent records.
|
||||
Every user-specified column of numeric type is subtracted while others are kept intact.
|
||||
The user-specified column of numeric type is subtracted while others are kept intact.
|
||||
|
||||
_**Function type:** Aggregate_
|
||||
_**Output data type:** Float_
|
||||
|
||||
```js
|
||||
difference(nonNegative: false, columns: ["_value"])
|
||||
difference(nonNegative: false, column: "_value")
|
||||
```
|
||||
|
||||
## Parameters
|
||||
|
@ -28,11 +28,11 @@ When set to `true`, if a value is less than the previous value, it is assumed th
|
|||
|
||||
_**Data type:** Boolean_
|
||||
|
||||
### columns
|
||||
A list of columns on which to compute the difference.
|
||||
Defaults to `["_value"]`.
|
||||
### column
|
||||
The column to use to compute the difference.
|
||||
Defaults to `"_value"`.
|
||||
|
||||
_**Data type:** Array of strings_
|
||||
_**Data type:** String_
|
||||
|
||||
## Subtraction rules for numeric types
|
||||
- The difference between two non-null values is their algebraic difference;
|
||||
|
@ -58,37 +58,37 @@ from(bucket: "telegraf/autogen")
|
|||
### Example data transformation
|
||||
|
||||
###### Input table
|
||||
| _time | A | B | C | tag |
|
||||
|:-----:|:----:|:----:|:----:|:---:|
|
||||
| 0001 | null | 1 | 2 | tv |
|
||||
| 0002 | 6 | 2 | null | tv |
|
||||
| 0003 | 4 | 2 | 4 | tv |
|
||||
| 0004 | 10 | 10 | 2 | tv |
|
||||
| 0005 | null | null | 1 | tv |
|
||||
| _time | _value | tag |
|
||||
|:-----:|:------:|:---:|
|
||||
| 0001 | null | tv |
|
||||
| 0002 | 6 | tv |
|
||||
| 0003 | 4 | tv |
|
||||
| 0004 | 10 | tv |
|
||||
| 0005 | null | tv |
|
||||
|
||||
#### With nonNegative set to false
|
||||
```js
|
||||
|> difference(nonNegative: false)
|
||||
```
|
||||
###### Output table
|
||||
| _time | A | B | C | tag |
|
||||
|:-----:|:----:|:----:|:----:|:---:|
|
||||
| 0002 | null | 1 | null | tv |
|
||||
| 0003 | -2 | 0 | 2 | tv |
|
||||
| 0004 | 6 | 8 | -2 | tv |
|
||||
| 0005 | null | null | -1 | tv |
|
||||
| _time | _value | tag |
|
||||
|:-----:|:------:|:---:|
|
||||
| 0002 | null | tv |
|
||||
| 0003 | -2 | tv |
|
||||
| 0004 | 6 | tv |
|
||||
| 0005 | null | tv |
|
||||
|
||||
#### With nonNegative set to true
|
||||
```js
|
||||
|> difference(nonNegative: true)
|
||||
```
|
||||
###### Output table
|
||||
| _time | A | B | C | tag |
|
||||
|:-----:|:----:|:----:|:----:|:---:|
|
||||
| 0002 | null | 1 | null | tv |
|
||||
| 0003 | null | 0 | 2 | tv |
|
||||
| 0004 | 6 | 8 | null | tv |
|
||||
| 0005 | null | null | null | tv |
|
||||
| _time | _value | tag |
|
||||
|:-----:|:------:|:---:|
|
||||
| 0002 | null | tv |
|
||||
| 0003 | null | tv |
|
||||
| 0004 | 6 | tv |
|
||||
| 0005 | null | tv |
|
||||
|
||||
<hr style="margin-top:4rem"/>
|
||||
|
||||
|
|
|
@ -1,6 +1,8 @@
|
|||
---
|
||||
title: histogramQuantile() function
|
||||
description: The `histogramQuantile()` function approximates a quantile given a histogram that approximates the cumulative distribution of the dataset.
|
||||
description: >
|
||||
The `histogramQuantile()` function approximates a quantile given a histogram
|
||||
that approximates the cumulative distribution of the dataset.
|
||||
aliases:
|
||||
- /v2.0/reference/flux/functions/transformations/aggregates/histogramquantile
|
||||
menu:
|
||||
|
|
|
@ -20,16 +20,16 @@ _**Function type:** Aggregate_
|
|||
_**Output data type:** Float_
|
||||
|
||||
```js
|
||||
increase(columns: ["_values"])
|
||||
increase(column: "_value")
|
||||
```
|
||||
|
||||
## Parameters
|
||||
|
||||
### columns
|
||||
The list of columns for which the increase is calculated.
|
||||
Defaults to `["_value"]`.
|
||||
### column
|
||||
The column for which the increase is calculated.
|
||||
Defaults to `"_value"`.
|
||||
|
||||
_**Data type:** Array of strings_
|
||||
_**Data type:** String_
|
||||
|
||||
## Examples
|
||||
```js
|
||||
|
@ -61,8 +61,8 @@ Given the following input table:
|
|||
|
||||
## Function definition
|
||||
```js
|
||||
increase = (tables=<-, columns=["_value"]) =>
|
||||
increase = (tables=<-, column="_value") =>
|
||||
tables
|
||||
|> difference(nonNegative: true, columns:columns)
|
||||
|> difference(nonNegative: true, column:column)
|
||||
|> cumulativeSum()
|
||||
```
|
||||
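A usage sketch of the single-column signature (the measurement and field names are assumptions):

```js
// Total increase of a counter-like field over the last hour.
// The underlying nonNegative difference tolerates counter resets.
from(bucket: "telegraf/autogen")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "net" and r._field == "bytes_recv")
  |> increase(column: "_value")
```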
|
|
|
@ -17,7 +17,7 @@ _**Function type:** Aggregate_
|
|||
_**Output data type:** Float_
|
||||
|
||||
```js
|
||||
integral(unit: 10s, columns: ["_value"])
|
||||
integral(unit: 10s, column: "_value")
|
||||
```
|
||||
|
||||
## Parameters
|
||||
|
@ -27,11 +27,11 @@ The time duration used when computing the integral.
|
|||
|
||||
_**Data type:** Duration_
|
||||
|
||||
### columns
|
||||
A list of columns on which to operate.
|
||||
Defaults to `["_value"]`.
|
||||
### column
|
||||
The column on which to operate.
|
||||
Defaults to `"_value"`.
|
||||
|
||||
_**Data type:** Array of strings_
|
||||
_**Data type:** String_
|
||||
|
||||
## Examples
|
||||
```js
|
||||
|
|
|
@ -16,16 +16,16 @@ _**Function type:** Aggregate_
|
|||
_**Output data type:** Float_
|
||||
|
||||
```js
|
||||
mean(columns: ["_value"])
|
||||
mean(column: "_value")
|
||||
```
|
||||
|
||||
## Parameters
|
||||
|
||||
### columns
|
||||
A list of columns on which to compute the mean.
|
||||
Defaults to `["_value"]`.
|
||||
### column
|
||||
The column to use to compute the mean.
|
||||
Defaults to `"_value"`.
|
||||
|
||||
_**Data type:** Array of strings_
|
||||
_**Data type:** String_
|
||||
|
||||
## Examples
|
||||
```js
|
||||
|
|
|
@ -1,6 +1,8 @@
|
|||
---
|
||||
title: median() function
|
||||
description: The `median()` function returns the median `_value` of an input table or all non-null records in the input table with values that fall within the 50th percentile
|
||||
description: >
|
||||
The `median()` function returns the median `_value` of an input table or all non-null records
|
||||
in the input table with values that fall within the `0.5` quantile or 50th percentile.
|
||||
aliases:
|
||||
- /v2.0/reference/flux/functions/transformations/aggregates/median
|
||||
menu:
|
||||
|
@ -10,9 +12,9 @@ menu:
|
|||
weight: 501
|
||||
---
|
||||
|
||||
The `median()` function is a special application of the [`percentile()` function](/v2.0/reference/flux/functions/built-in/transformations/aggregates/percentile)
|
||||
The `median()` function is a special application of the [`quantile()` function](/v2.0/reference/flux/functions/built-in/transformations/aggregates/quantile)
|
||||
that returns the median `_value` of an input table or all non-null records in the input table
|
||||
with values that fall within the 50th percentile depending on the [method](#method) used.
|
||||
with values that fall within the `0.5` quantile (50th percentile) depending on the [method](#method) used.
|
||||
|
||||
_**Function type:** Selector or Aggregate_
|
||||
_**Output data type:** Object_
|
||||
|
@ -23,15 +25,15 @@ median(method: "estimate_tdigest", compression: 0.0)
|
|||
```
|
||||
|
||||
When using the `estimate_tdigest` or `exact_mean` methods, it outputs non-null
|
||||
records with values that fall within the 50th percentile.
|
||||
records with values that fall within the `0.5` quantile.
|
||||
|
||||
When using the `exact_selector` method, it outputs the non-null record with the
|
||||
value that represents the 50th percentile.
|
||||
value that represents the `0.5` quantile.
|
||||
|
||||
{{% note %}}
|
||||
The `median()` function can only be used with float value types.
|
||||
It is a special application of the [`percentile()` function](/v2.0/reference/flux/functions/built-in/transformations/aggregates/percentile) which
|
||||
uses an approximation implementation that requires floats.
|
||||
It is a special application of the [`quantile()` function](/v2.0/reference/flux/functions/built-in/transformations/aggregates/quantile)
|
||||
which uses an approximation implementation that requires floats.
|
||||
You can convert your value column to a float column using the [`toFloat()` function](/v2.0/reference/flux/functions/built-in/transformations/type-conversions/tofloat).
|
||||
{{% /note %}}
|
||||
|
||||
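For example, a minimal sketch of that conversion ahead of `median()`:

```js
// Cast integer values to floats, then compute the median.
from(bucket: "telegraf/autogen")
  |> range(start: -5m)
  |> toFloat()
  |> median()
```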
|
@ -46,18 +48,18 @@ The available options are:
|
|||
|
||||
##### estimate_tdigest
|
||||
An aggregate method that uses a [t-digest data structure](https://github.com/tdunning/t-digest)
|
||||
to compute an accurate percentile estimate on large data sources.
|
||||
to compute an accurate quantile estimate on large data sources.
|
||||
|
||||
##### exact_mean
|
||||
An aggregate method that takes the average of the two points closest to the percentile value.
|
||||
An aggregate method that takes the average of the two points closest to the quantile value.
|
||||
|
||||
##### exact_selector
|
||||
A selector method that returns the data point for which at least percentile points are less than.
|
||||
A selector method that returns the data point for which at least a fraction `q` of the points are less than that value.
|
||||
|
||||
### compression
|
||||
Indicates how many centroids to use when compressing the dataset.
|
||||
A larger number produces a more accurate result at the cost of increased memory requirements.
|
||||
Defaults to 1000.
|
||||
Defaults to `1000.0`.
|
||||
|
||||
_**Data type:** Float_
|
||||
|
||||
|
@ -90,8 +92,8 @@ from(bucket: "telegraf/autogen")
|
|||
## Function definition
|
||||
```js
|
||||
median = (method="estimate_tdigest", compression=0.0, tables=<-) =>
|
||||
percentile(
|
||||
percentile:0.5,
|
||||
quantile(
|
||||
q:0.5,
|
||||
method:method,
|
||||
compression:compression
|
||||
)
|
||||
|
|
|
@ -1,47 +1,48 @@
|
|||
---
|
||||
title: percentile() function
|
||||
description: The `percentile()` function outputs non-null records with values that fall within the specified percentile or the non-null record with the value that represents the specified percentile.
|
||||
title: quantile() function
|
||||
description: The `quantile()` function outputs non-null records with values that fall within the specified quantile or the non-null record with the value that represents the specified quantile.
|
||||
aliases:
|
||||
- /v2.0/reference/flux/functions/transformations/aggregates/percentile
|
||||
- /v2.0/reference/flux/functions/built-in/transformations/aggregates/percentile
|
||||
menu:
|
||||
v2_0_ref:
|
||||
name: percentile
|
||||
name: quantile
|
||||
parent: built-in-aggregates
|
||||
weight: 501
|
||||
---
|
||||
|
||||
The `percentile()` function returns records from an input table with `_value`s that fall within
|
||||
a specified percentile or it returns the record with the `_value` that represents the specified percentile.
|
||||
The `quantile()` function returns records from an input table with `_value`s that fall within
|
||||
a specified quantile or it returns the record with the `_value` that represents the specified quantile.
|
||||
Which it returns depends on the [method](#method) used.
|
||||
|
||||
_**Function type:** Aggregate or Selector_
|
||||
_**Output data type:** Float or Object_
|
||||
|
||||
```js
|
||||
percentile(
|
||||
columns: ["_value"],
|
||||
percentile: 0.99,
|
||||
quantile(
|
||||
column: "_value",
|
||||
q: 0.99,
|
||||
method: "estimate_tdigest",
|
||||
compression: 1000.0
|
||||
)
|
||||
```
|
||||
|
||||
When using the `estimate_tdigest` or `exact_mean` methods, it outputs non-null
|
||||
records with values that fall within the specified percentile.
|
||||
records with values that fall within the specified quantile.
|
||||
|
||||
When using the `exact_selector` method, it outputs the non-null record with the
|
||||
value that represents the specified percentile.
|
||||
value that represents the specified quantile.
|
||||
|
||||
## Parameters
|
||||
|
||||
### columns
|
||||
A list of columns on which to compute the percentile.
|
||||
Defaults to `["_value"]`.
|
||||
### column
|
||||
The column to use to compute the quantile.
|
||||
Defaults to `"_value"`.
|
||||
|
||||
_**Data type:** Array of strings_
|
||||
_**Data type:** String_
|
||||
|
||||
### percentile
|
||||
A value between 0 and 1 indicating the desired percentile.
|
||||
### q
|
||||
A value between 0 and 1 indicating the desired quantile.
|
||||
|
||||
_**Data type:** Float_
|
||||
|
||||
|
@ -54,13 +55,13 @@ The available options are:
|
|||
|
||||
##### estimate_tdigest
|
||||
An aggregate method that uses a [t-digest data structure](https://github.com/tdunning/t-digest)
|
||||
to compute an accurate percentile estimate on large data sources.
|
||||
to compute an accurate quantile estimate on large data sources.
|
||||
|
||||
##### exact_mean
|
||||
An aggregate method that takes the average of the two points closest to the percentile value.
|
||||
An aggregate method that takes the average of the two points closest to the quantile value.
|
||||
|
||||
##### exact_selector
|
||||
A selector method that returns the data point for which at least percentile points are less than.
|
||||
A selector method that returns the data point for which at least a fraction `q` of the points are less than that value.
|
||||
|
||||
### compression
|
||||
Indicates how many centroids to use when compressing the dataset.
|
||||
|
@ -71,29 +72,29 @@ _**Data type:** Float_
|
|||
|
||||
## Examples
|
||||
|
||||
###### Percentile as an aggregate
|
||||
###### Quantile as an aggregate
|
||||
```js
|
||||
from(bucket: "telegraf/autogen")
|
||||
|> range(start: -5m)
|
||||
|> filter(fn: (r) =>
|
||||
r._measurement == "cpu" and
|
||||
r._field == "usage_system")
|
||||
|> percentile(
|
||||
percentile: 0.99,
|
||||
|> quantile(
|
||||
q: 0.99,
|
||||
method: "estimate_tdigest",
|
||||
compression: 1000.0
|
||||
)
|
||||
```
|
||||
|
||||
###### Percentile as a selector
|
||||
###### Quantile as a selector
|
||||
```js
|
||||
from(bucket: "telegraf/autogen")
|
||||
|> range(start: -5m)
|
||||
|> filter(fn: (r) =>
|
||||
r._measurement == "cpu" and
|
||||
r._field == "usage_system")
|
||||
|> percentile(
|
||||
percentile: 0.99,
|
||||
|> quantile(
|
||||
q: 0.99,
|
||||
method: "exact_selector"
|
||||
)
|
||||
```
|
|
@ -0,0 +1,128 @@
|
|||
---
|
||||
title: reduce() function
|
||||
description: >
|
||||
The `reduce()` function aggregates records in each table according to the reducer,
|
||||
`fn`, providing a way to create custom table aggregations.
|
||||
menu:
|
||||
v2_0_ref:
|
||||
name: reduce
|
||||
parent: built-in-aggregates
|
||||
weight: 501
|
||||
---
|
||||
|
||||
The `reduce()` function aggregates records in each table according to the reducer,
|
||||
`fn`, providing a way to create custom aggregations.
|
||||
The output for each table is the group key of the table with columns corresponding
|
||||
to each field in the reducer object.
|
||||
|
||||
_**Function type:** Transformation_
|
||||
|
||||
```js
|
||||
reduce(
|
||||
fn: (r, accumulator) => ({ sum: r._value + accumulator.sum }),
|
||||
identity: {sum: 0.0}
|
||||
)
|
||||
```
|
||||
|
||||
If the reducer record contains a column with the same name as a group key column,
|
||||
the group key column's value is overwritten, and the outgoing group key is changed.
|
||||
However, if two reduced tables write to the same destination group key, the function will error.
|
||||
|
||||
## Parameters
|
||||
|
||||
### fn
|
||||
Function to apply to each record with a reducer object ([`identity`](#identity)).
|
||||
|
||||
_**Data type:** Function_
|
||||
|
||||
###### fn syntax
|
||||
```js
|
||||
// Pattern
|
||||
fn: (r, accumulator) => ({ identityKey: r.column + accumulator.identityKey })
|
||||
|
||||
// Example
|
||||
fn: (r, accumulator) => ({ sum: r._value + accumulator.sum })
|
||||
```
|
||||
|
||||
{{% note %}}
|
||||
#### Matching output object keys and types
|
||||
The output object from `fn` must have the same key names and value types as the [`identity`](#identity).
|
||||
After operating on a record, the output object is given back to `fn` as the input accumulator.
|
||||
If the output object keys and value types do not match the `identity` keys and value types,
|
||||
it will return a type error.
|
||||
{{% /note %}}
|
||||
|
||||
#### r
|
||||
Object representing each row or record.
|
||||
|
||||
#### accumulator
|
||||
Reducer object defined by [`identity`](#identity).
|
||||
|
||||
### identity
|
||||
Defines the reducer object and provides initial values to use when creating a reducer.
|
||||
May be used more than once in asynchronous processing use cases.
|
||||
_The data type of values in the `identity` object determines the data type of output values._
|
||||
|
||||
_**Data type:** Object_
|
||||
|
||||
###### identity object syntax
|
||||
```js
|
||||
// Pattern
|
||||
identity: {identityKey1: value1, identityKey2: value2}
|
||||
|
||||
// Example
|
||||
identity: {sum: 0.0, count: 0.0}
|
||||
```
|
||||
|
||||
## Examples
|
||||
|
||||
##### Compute the sum of the value column
|
||||
```js
|
||||
from(bucket:"example-bucket")
|
||||
|> filter(fn: (r) =>
|
||||
r._measurement == "cpu" and
|
||||
r._field == "usage_system" and
|
||||
r.service == "app-server"
|
||||
)
|
||||
|> range(start:-12h)
|
||||
|> reduce(
|
||||
fn: (r, accumulator) => ({
|
||||
sum: r._value + accumulator.sum
|
||||
}),
|
||||
identity: {sum: 0.0}
|
||||
)
|
||||
```
|
||||
|
||||
##### Compute the sum and count in a single reducer
|
||||
```js
|
||||
from(bucket:"example-bucket")
|
||||
|> filter(fn: (r) =>
|
||||
r._measurement == "cpu" and
|
||||
r._field == "usage_system" and
|
||||
r.service == "app-server"
|
||||
)
|
||||
|> range(start:-12h)
|
||||
|> reduce(
|
||||
fn: (r, accumulator) => ({
|
||||
sum: r._value + accumulator.sum,
|
||||
count: accumulator.count + 1.0
|
||||
}),
|
||||
identity: {sum: 0.0, count: 0.0}
|
||||
)
|
||||
```
|
||||
|
||||
##### Compute the product of all values
|
||||
```js
|
||||
from(bucket:"example-bucket")
|
||||
|> filter(fn: (r) =>
|
||||
r._measurement == "cpu" and
|
||||
r._field == "usage_system" and
|
||||
r.service == "app-server")
|
||||
|> range(start:-12h)
|
||||
|> reduce(
|
||||
fn: (r, accumulator) => ({
|
||||
prod: r._value * accumulator.prod
|
||||
}),
|
||||
identity: {prod: 1.0}
|
||||
)
|
||||
```
|
|
@ -16,15 +16,16 @@ _**Function type:** Aggregate_
|
|||
_**Output data type:** Float_
|
||||
|
||||
```js
|
||||
skew(columns: ["_value"])
|
||||
skew(column: "_value")
|
||||
```
|
||||
|
||||
## Parameters
|
||||
|
||||
### columns
|
||||
Specifies a list of columns on which to operate. Defaults to `["_value"]`.
|
||||
### column
|
||||
The column on which to operate.
|
||||
Defaults to `"_value"`.
|
||||
|
||||
_**Data type:** Array of strings_
|
||||
_**Data type:** String_
|
||||
|
||||
## Examples
|
||||
```js
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
---
|
||||
title: spread() function
|
||||
description: The `spread()` function outputs the difference between the minimum and maximum values in each specified column.
|
||||
description: The `spread()` function outputs the difference between the minimum and maximum values in a specified column.
|
||||
aliases:
|
||||
- /v2.0/reference/flux/functions/transformations/aggregates/spread
|
||||
menu:
|
||||
|
@ -10,26 +10,26 @@ menu:
|
|||
weight: 501
|
||||
---
|
||||
|
||||
The `spread()` function outputs the difference between the minimum and maximum values in each specified column.
|
||||
The `spread()` function outputs the difference between the minimum and maximum values in a specified column.
|
||||
Only `uint`, `int`, and `float` column types can be used.
|
||||
The type of the output column depends on the type of input column:
|
||||
|
||||
- For input columns with type `uint` or `int`, the output is an `int`
|
||||
- For input columns with type `float` the output is a float.
|
||||
- For columns with type `uint` or `int`, the output is an `int`
|
||||
- For columns with type `float`, the output is a `float`.
|
||||
|
||||
_**Function type:** Aggregate_
|
||||
_**Output data type:** Integer or Float (inherited from input column type)_
|
||||
|
||||
```js
|
||||
spread(columns: ["_value"])
|
||||
spread(column: "_value")
|
||||
```
|
||||
|
||||
## Parameters
|
||||
|
||||
### columns
|
||||
Specifies a list of columns on which to operate. Defaults to `["_value"]`.
|
||||
### column
|
||||
The column on which to operate. Defaults to `"_value"`.
|
||||
|
||||
_**Data type:** Array of strings_
|
||||
_**Data type:** String_
|
||||
|
||||
## Examples
|
||||
```js
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
---
|
||||
title: stddev() function
|
||||
description: The `stddev()` function computes the standard deviation of non-null records in specified columns.
|
||||
description: The `stddev()` function computes the standard deviation of non-null records in a specified column.
|
||||
aliases:
|
||||
- /v2.0/reference/flux/functions/transformations/aggregates/stddev
|
||||
menu:
|
||||
|
@ -10,22 +10,39 @@ menu:
|
|||
weight: 501
|
||||
---
|
||||
|
||||
The `stddev()` function computes the standard deviation of non-null records in specified columns.
|
||||
The `stddev()` function computes the standard deviation of non-null records in a specified column.
|
||||
|
||||
_**Function type:** Aggregate_
|
||||
_**Output data type:** Float_
|
||||
|
||||
```js
|
||||
stddev(columns: ["_value"])
|
||||
stddev(
|
||||
column: "_value",
|
||||
mode: "sample"
|
||||
)
|
||||
```
|
||||
|
||||
## Parameters
|
||||
|
||||
### columns
|
||||
Specifies a list of columns on which to operate.
|
||||
Defaults to `["_value"]`.
|
||||
### column
|
||||
The column on which to operate.
|
||||
Defaults to `"_value"`.
|
||||
|
||||
_**Data type:** Array of strings_
|
||||
_**Data type:** String_
|
||||
|
||||
### mode
|
||||
The standard deviation mode or type of standard deviation to calculate.
|
||||
Defaults to `"sample"`.
|
||||
|
||||
_**Data type:** String_
|
||||
|
||||
The available options are:
|
||||
|
||||
##### sample
|
||||
Calculates the sample standard deviation where the data is considered to be part of a larger population.
|
||||
|
||||
##### population
|
||||
Calculates the population standard deviation where the data is considered a population of its own.
|
||||
|
||||
## Examples
|
||||
```js
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
---
|
||||
title: sum() function
|
||||
description: The `sum()` function computes the sum of non-null records in specified columns.
|
||||
description: The `sum()` function computes the sum of non-null records in a specified column.
|
||||
aliases:
|
||||
- /v2.0/reference/flux/functions/transformations/aggregates/sum
|
||||
menu:
|
||||
|
@ -10,22 +10,22 @@ menu:
|
|||
weight: 501
|
||||
---
|
||||
|
||||
The `sum()` function computes the sum of non-null records in specified columns.
|
||||
The `sum()` function computes the sum of non-null records in a specified column.
|
||||
|
||||
_**Function type:** Aggregate_
|
||||
_**Output data type:** Integer, UInteger, or Float (inherited from column type)_
|
||||
|
||||
```js
|
||||
sum(columns: ["_value"])
|
||||
sum(column: "_value")
|
||||
```
|
||||
|
||||
## Parameters
|
||||
|
||||
### columns
|
||||
Specifies a list of columns on which to operate.
|
||||
Defaults to `["_value"]`.
|
||||
### column
|
||||
The column on which to operate.
|
||||
Defaults to `"_value"`.
|
||||
|
||||
_**Data type:** Array of strings_
|
||||
_**Data type:** String_
|
||||
|
||||
## Examples
|
||||
```js
|
||||
|
|
|
@ -33,6 +33,10 @@ The name assigned to the duplicate column.
|
|||
|
||||
_**Data type:** String_
|
||||
|
||||
{{% note %}}
|
||||
If the `as` column already exists, this function will overwrite the existing values.
|
||||
{{% /note %}}
|
||||
|
||||
## Examples
|
||||
```js
|
||||
from(bucket: "telegraf/autogen")
|
||||
|
|
|
@ -12,6 +12,7 @@ weight: 401
|
|||
|
||||
The `group()` function groups records based on their values for specific columns.
|
||||
It produces tables with new group keys based on provided properties.
|
||||
Specify an empty array of columns to ungroup data or merge all input tables into a single output table.
|
||||
|
||||
_**Function type:** Transformation_
|
||||
|
||||
|
@ -69,8 +70,9 @@ from(bucket: "telegraf/autogen")
|
|||
|> group(columns: ["_time"], mode: "except")
|
||||
```
|
||||
|
||||
###### Remove all grouping
|
||||
###### Ungroup data
|
||||
```js
|
||||
// Merge all tables into a single table
|
||||
from(bucket: "telegraf/autogen")
|
||||
|> range(start: -30m)
|
||||
|> group()
|
||||
|
|
|
@ -46,7 +46,7 @@ The resulting group keys for all tables will be: `[_time, _field_d1, _field_d2]`
|
|||
## Parameters
|
||||
|
||||
### tables
|
||||
The map of streams to be joined. <span style="color:#FF8564; font-weight:700;">Required</span>.
|
||||
The map of streams to be joined. <span class="required">Required</span>
|
||||
|
||||
_**Data type:** Object_
|
||||
|
||||
|
@ -55,7 +55,7 @@ _**Data type:** Object_
|
|||
{{% /note %}}
|
||||
|
||||
### on
|
||||
The list of columns on which to join. <span style="color:#FF8564; font-weight:700;">Required</span>.
|
||||
The list of columns on which to join. <span class="required">Required</span>
|
||||
|
||||
_**Data type:** Array of strings_
|
||||
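A hedged sketch showing both required parameters together; the `cpu` and `mem` streams are assumptions for illustration:

```js
cpu = from(bucket: "telegraf/autogen")
  |> range(start: -5m)
  |> filter(fn: (r) => r._measurement == "cpu")

mem = from(bucket: "telegraf/autogen")
  |> range(start: -5m)
  |> filter(fn: (r) => r._measurement == "mem")

// Join the two streams on their timestamps.
join(tables: {cpu: cpu, mem: mem}, on: ["_time"])
```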
|
||||
|
|
|
@ -20,7 +20,7 @@ _**Function type:** Transformation_
|
|||
_**Output data type:** Object_
|
||||
|
||||
```js
|
||||
range(start: -15m, stop: now)
|
||||
range(start: -15m, stop: now())
|
||||
```
|
||||
|
||||
## Parameters
|
||||
|
@ -43,6 +43,11 @@ Absolute stop times are defined using timestamps.
|
|||
|
||||
_**Data type:** Duration or Timestamp_
|
||||
|
||||
{{% note %}}
|
||||
Flux only honors [RFC3339 timestamps](/v2.0/reference/flux/language/types#timestamp-format)
|
||||
and ignores dates and times provided in other formats.
|
||||
{{% /note %}}
|
||||
|
||||
## Examples
|
||||
|
||||
###### Time range relative to now
|
||||
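A minimal sketch of a relative time range (the bucket is an assumption):

```js
// Query all points from 12 hours ago up to now().
from(bucket: "telegraf/autogen")
  |> range(start: -12h)
```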
|
|
|
@ -26,4 +26,4 @@ The following functions can be used as both selectors or aggregates, but they ar
|
|||
categorized as aggregate functions in this documentation:
|
||||
|
||||
- [median](/v2.0/reference/flux/functions/built-in/transformations/aggregates/median)
|
||||
- [percentile](/v2.0/reference/flux/functions/built-in/transformations/aggregates/percentile)
|
||||
- [quantile](/v2.0/reference/flux/functions/built-in/transformations/aggregates/quantile)
|
||||
|
|
|
@ -11,6 +11,7 @@ weight: 501
|
|||
---
|
||||
|
||||
The `distinct()` function returns the unique values for a given column.
|
||||
The `_value` of each output record is set to the distinct value in the specified column.
|
||||
`null` is considered its own distinct value if it is present.
|
||||
|
||||
_**Function type:** Selector_
|
||||
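A usage sketch (the `host` column is an assumed tag):

```js
// Return the unique values of the "host" tag.
from(bucket: "telegraf/autogen")
  |> range(start: -5m)
  |> distinct(column: "host")
```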
|
|
|
@ -1,6 +1,6 @@
|
|||
---
|
||||
title: highestAverage() function
|
||||
description: The `highestAverage()` function returns the top `n` records from all groups using the average of each group.
|
||||
description: The `highestAverage()` function calculates the average of each table in the input stream and returns the top `n` records.
|
||||
aliases:
|
||||
- /v2.0/reference/flux/functions/transformations/selectors/highestaverage
|
||||
menu:
|
||||
|
@ -10,14 +10,15 @@ menu:
|
|||
weight: 501
|
||||
---
|
||||
|
||||
The `highestAverage()` function returns the top `n` records from all groups using the average of each group.
|
||||
The `highestAverage()` function calculates the average of each table in the input stream and returns the top `n` records.
|
||||
It outputs a single aggregated table containing `n` records.
|
||||
|
||||
_**Function type:** Selector, Aggregate_
|
||||
|
||||
```js
|
||||
highestAverage(
|
||||
n:10,
|
||||
columns: ["_value"],
|
||||
column: "_value",
|
||||
groupColumns: []
|
||||
)
|
||||
```
|
||||
|
@ -29,12 +30,11 @@ Number of records to return.
|
|||
|
||||
_**Data type:** Integer_
|
||||
|
||||
### columns
|
||||
List of columns by which to sort.
|
||||
Sort precedence is determined by list order (left to right).
|
||||
Default is `["_value"]`.
|
||||
### column
|
||||
Column by which to sort.
|
||||
Default is `"_value"`.
|
||||
|
||||
_**Data type:** Array of strings_
|
||||
_**Data type:** String_
|
||||
|
||||
### groupColumns
|
||||
The columns on which to group before performing the aggregation.
|
||||
|
@ -63,22 +63,22 @@ _sortLimit = (n, desc, columns=["_value"], tables=<-) =>
|
|||
|
||||
// _highestOrLowest is a helper function which reduces all groups into a single
|
||||
// group by specific tags and a reducer function. It then selects the highest or
|
||||
// lowest records based on the columns and the _sortLimit function.
|
||||
// lowest records based on the column and the _sortLimit function.
|
||||
// The default reducer assumes no reducing needs to be performed.
|
||||
_highestOrLowest = (n, _sortLimit, reducer, columns=["_value"], groupColumns=[], tables=<-) =>
|
||||
_highestOrLowest = (n, _sortLimit, reducer, column="_value", groupColumns=[], tables=<-) =>
|
||||
tables
|
||||
|> group(columns:groupColumns)
|
||||
|> reducer()
|
||||
|> group(columns:[])
|
||||
|> _sortLimit(n:n, columns:columns)
|
||||
|> _sortLimit(n:n, columns:[column])
|
||||
|
||||
highestAverage = (n, columns=["_value"], groupColumns=[], tables=<-) =>
|
||||
highestAverage = (n, column="_value", groupColumns=[], tables=<-) =>
|
||||
tables
|
||||
|> _highestOrLowest(
|
||||
n:n,
|
||||
columns:columns,
|
||||
column:column,
|
||||
groupColumns:groupColumns,
|
||||
reducer: (tables=<-) => tables |> mean(columns:[columns[0]]),
|
||||
reducer: (tables=<-) => tables |> mean(column:column),
|
||||
_sortLimit: top,
|
||||
)
|
||||
```
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
---
|
||||
title: highestCurrent() function
|
||||
description: The `highestCurrent()` function returns the top `n` records from all groups using the last value of each group.
|
||||
description: The `highestCurrent()` function selects the last record of each table in the input stream and returns the top `n` records.
|
||||
aliases:
|
||||
- /v2.0/reference/flux/functions/transformations/selectors/highestcurrent
|
||||
menu:
|
||||
|
@ -10,14 +10,15 @@ menu:
|
|||
weight: 501
|
||||
---
|
||||
|
||||
The `highestCurrent()` function returns the top `n` records from all groups using the last value of each group.
|
||||
The `highestCurrent()` function selects the last record of each table in the input stream and returns the top `n` records.
|
||||
It outputs a single aggregated table containing `n` records.
|
||||
|
||||
_**Function type:** Selector, Aggregate_
|
||||
|
||||
```js
|
||||
highestCurrent(
|
||||
n:10,
|
||||
columns: ["_value"],
|
||||
column: "_value",
|
||||
groupColumns: []
|
||||
)
|
||||
```
|
||||
|
@ -29,12 +30,11 @@ Number of records to return.
|
|||
|
||||
_**Data type:** Integer_
|
||||
|
||||
### columns
|
||||
List of columns by which to sort.
|
||||
Sort precedence is determined by list order (left to right).
|
||||
Default is `["_value"]`.
|
||||
### column
|
||||
Column by which to sort.
|
||||
Default is `"_value"`.
|
||||
|
||||
_**Data type:** Array of strings_
|
||||
_**Data type:** String_
|
||||
|
||||
### groupColumns
|
||||
The columns on which to group before performing the aggregation.
|
||||
|
@ -63,22 +63,22 @@ _sortLimit = (n, desc, columns=["_value"], tables=<-) =>
|
|||
|
||||
// _highestOrLowest is a helper function which reduces all groups into a single
|
||||
// group by specific tags and a reducer function. It then selects the highest or
|
||||
// lowest records based on the columns and the _sortLimit function.
|
||||
// lowest records based on the column and the _sortLimit function.
|
||||
// The default reducer assumes no reducing needs to be performed.
|
||||
_highestOrLowest = (n, _sortLimit, reducer, columns=["_value"], groupColumns=[], tables=<-) =>
|
||||
_highestOrLowest = (n, _sortLimit, reducer, column="_value", groupColumns=[], tables=<-) =>
|
||||
tables
|
||||
|> group(columns:groupColumns)
|
||||
|> reducer()
|
||||
|> group(columns:[])
|
||||
|> _sortLimit(n:n, columns:columns)
|
||||
|> _sortLimit(n:n, columns:[column])
|
||||
|
||||
highestCurrent = (n, columns=["_value"], groupColumns=[], tables=<-) =>
|
||||
highestCurrent = (n, column="_value", groupColumns=[], tables=<-) =>
|
||||
tables
|
||||
|> _highestOrLowest(
|
||||
n:n,
|
||||
columns:columns,
|
||||
column:column,
|
||||
groupColumns:groupColumns,
|
||||
reducer: (tables=<-) => tables |> last(column:columns[0]),
|
||||
reducer: (tables=<-) => tables |> last(column:column),
|
||||
_sortLimit: top,
|
||||
)
|
||||
```
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
---
|
||||
title: highestMax() function
|
||||
description: The `highestMax()` function returns the top `n` records from all groups using the maximum of each group.
|
||||
description: The `highestMax()` function selects the maximum record from each table in the input stream and returns the top `n` records.
|
||||
aliases:
|
||||
- /v2.0/reference/flux/functions/transformations/selectors/highestmax
|
||||
menu:
|
||||
|
@ -10,14 +10,15 @@ menu:
|
|||
weight: 501
|
||||
---
|
||||
|
||||
The `highestMax()` function returns the top `n` records from all groups using the maximum of each group.
|
||||
The `highestMax()` function selects the maximum record from each table in the input stream and returns the top `n` records.
|
||||
It outputs a single aggregated table containing `n` records.
|
||||
|
||||
_**Function type:** Selector, Aggregate_
|
||||
|
||||
```js
|
||||
highestMax(
|
||||
n:10,
|
||||
columns: ["_value"],
|
||||
column: "_value",
|
||||
groupColumns: []
|
||||
)
|
||||
```
|
||||
|
@ -29,12 +30,11 @@ Number of records to return.
|
|||
|
||||
_**Data type:** Integer_
|
||||
|
||||
### columns
|
||||
List of columns by which to sort.
|
||||
Sort precedence is determined by list order (left to right).
|
||||
Default is `["_value"]`.
|
||||
### column
|
||||
Column by which to sort.
|
||||
Default is `"_value"`.
|
||||
|
||||
_**Data type:** Array of strings_
|
||||
_**Data type:** String_
|
||||
|
||||
### groupColumns
|
||||
The columns on which to group before performing the aggregation.
|
||||
|
@ -63,22 +63,22 @@ _sortLimit = (n, desc, columns=["_value"], tables=<-) =>
|
|||
|
||||
// _highestOrLowest is a helper function which reduces all groups into a single
|
||||
// group by specific tags and a reducer function. It then selects the highest or
|
||||
// lowest records based on the columns and the _sortLimit function.
|
||||
// lowest records based on the column and the _sortLimit function.
|
||||
// The default reducer assumes no reducing needs to be performed.
|
||||
_highestOrLowest = (n, _sortLimit, reducer, columns=["_value"], groupColumns=[], tables=<-) =>
|
||||
_highestOrLowest = (n, _sortLimit, reducer, column="_value", groupColumns=[], tables=<-) =>
|
||||
tables
|
||||
|> group(columns:groupColumns)
|
||||
|> reducer()
|
||||
|> group(columns:[])
|
||||
|> _sortLimit(n:n, columns:columns)
|
||||
|> _sortLimit(n:n, columns:[column])
|
||||
|
||||
highestMax = (n, columns=["_value"], groupColumns=[], tables=<-) =>
|
||||
highestMax = (n, column="_value", groupColumns=[], tables=<-) =>
|
||||
tables
|
||||
|> _highestOrLowest(
|
||||
n:n,
|
||||
columns:columns,
|
||||
column:column,
|
||||
groupColumns:groupColumns,
|
||||
reducer: (tables=<-) => tables |> max(column:columns[0]),
|
||||
reducer: (tables=<-) => tables |> max(column:column),
|
||||
_sortLimit: top
|
||||
)
|
||||
```
|
||||
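A usage sketch of the updated signature (the bucket and group column are assumptions):

```js
// Top 5 records by per-table maximum _value, grouped by an assumed "host" tag.
from(bucket: "telegraf/autogen")
  |> range(start: -1h)
  |> highestMax(n: 5, column: "_value", groupColumns: ["host"])
```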
|
|
|
@ -1,6 +1,6 @@
|
|||
---
|
||||
title: lowestAverage() function
|
||||
description: The `lowestAverage()` function returns the bottom `n` records from all groups using the average of each group.
|
||||
description: The `lowestAverage()` function calculates the average of each table in the input stream and returns the lowest `n` records.
|
||||
aliases:
|
||||
- /v2.0/reference/flux/functions/transformations/selectors/lowestaverage
|
||||
menu:
|
||||
|
@ -10,14 +10,15 @@ menu:
|
|||
weight: 501
|
||||
---
|
||||
|
||||
The `lowestAverage()` function returns the bottom `n` records from all groups using the average of each group.
|
||||
The `lowestAverage()` function calculates the average of each table in the input stream and returns the lowest `n` records.
|
||||
It outputs a single aggregated table containing `n` records.
|
||||
|
||||
_**Function type:** Selector, Aggregate_
|
||||
|
||||
```js
|
||||
lowestAverage(
|
||||
n:10,
|
||||
columns: ["_value"],
|
||||
column: "_value",
|
||||
groupColumns: []
|
||||
)
|
||||
```
|
||||
|
@ -29,12 +30,11 @@ Number of records to return.
|
|||
|
||||
_**Data type:** Integer_
|
||||
|
||||
### columns
|
||||
List of columns by which to sort.
|
||||
Sort precedence is determined by list order (left to right).
|
||||
Default is `["_value"]`.
|
||||
### column
|
||||
Column by which to sort.
|
||||
Default is `"_value"`.
|
||||
|
||||
_**Data type:** Array of strings_
|
||||
_**Data type:** String_
|
||||
|
||||
### groupColumns
|
||||
The columns on which to group before performing the aggregation.
|
||||
|
@ -63,22 +63,22 @@ _sortLimit = (n, desc, columns=["_value"], tables=<-) =>
|
|||
|
||||
// _highestOrLowest is a helper function which reduces all groups into a single
|
||||
// group by specific tags and a reducer function. It then selects the highest or
|
||||
// lowest records based on the columns and the _sortLimit function.
|
||||
// lowest records based on the column and the _sortLimit function.
|
||||
// The default reducer assumes no reducing needs to be performed.
|
||||
_highestOrLowest = (n, _sortLimit, reducer, columns=["_value"], groupColumns=[], tables=<-) =>
|
||||
_highestOrLowest = (n, _sortLimit, reducer, column="_value", groupColumns=[], tables=<-) =>
|
||||
tables
|
||||
|> group(columns:groupColumns)
|
||||
|> reducer()
|
||||
|> group(columns:[])
|
||||
|> _sortLimit(n:n, columns:columns)
|
||||
|> _sortLimit(n:n, columns:[column])
|
||||
|
||||
lowestAverage = (n, columns=["_value"], groupColumns=[], tables=<-) =>
|
||||
lowestAverage = (n, column="_value", groupColumns=[], tables=<-) =>
|
||||
tables
|
||||
|> _highestOrLowest(
|
||||
n:n,
|
||||
columns:columns,
|
||||
column:column,
|
||||
groupColumns:groupColumns,
|
||||
reducer: (tables=<-) => tables |> mean(columns:[columns[0]]),
|
||||
reducer: (tables=<-) => tables |> mean(column:column),
|
||||
_sortLimit: bottom,
|
||||
)
|
||||
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
---
|
||||
title: lowestCurrent() function
|
||||
description: The `lowestCurrent()` function returns the bottom `n` records from all groups using the last value of each group.
|
||||
description: The `lowestCurrent()` function selects the last record of each table in the input stream and returns the lowest `n` records.
|
||||
aliases:
|
||||
- /v2.0/reference/flux/functions/transformations/selectors/lowestcurrent
|
||||
menu:
|
||||
|
@ -10,14 +10,15 @@ menu:
|
|||
weight: 501
|
||||
---
|
||||
|
||||
The `lowestCurrent()` function returns the bottom `n` records from all groups using the last value of each group.
|
||||
The `lowestCurrent()` function selects the last record of each table in the input stream and returns the lowest `n` records.
|
||||
It outputs a single aggregated table containing `n` records.
|
||||
|
||||
_**Function type:** Selector, Aggregate_
|
||||
|
||||
```js
|
||||
lowestCurrent(
|
||||
n:10,
|
||||
columns: ["_value"],
|
||||
column: "_value",
|
||||
groupColumns: []
|
||||
)
|
||||
```
|
||||
|
@ -29,12 +30,11 @@ Number of records to return.
|
|||
|
||||
_**Data type:** Integer_
|
||||
|
||||
### columns
|
||||
List of columns by which to sort.
|
||||
Sort precedence is determined by list order (left to right).
|
||||
Default is `["_value"]`.
|
||||
### column
|
||||
Column by which to sort.
|
||||
Default is `"_value"`.
|
||||
|
||||
_**Data type:** Array of strings_
|
||||
_**Data type:** String_
|
||||
|
||||
### groupColumns
|
||||
The columns on which to group before performing the aggregation.
|
||||
|
@ -63,22 +63,22 @@ _sortLimit = (n, desc, columns=["_value"], tables=<-) =>
|
|||
|
||||
// _highestOrLowest is a helper function which reduces all groups into a single
|
||||
// group by specific tags and a reducer function. It then selects the highest or
|
||||
// lowest records based on the columns and the _sortLimit function.
|
||||
// lowest records based on the column and the _sortLimit function.
|
||||
// The default reducer assumes no reducing needs to be performed.
|
||||
_highestOrLowest = (n, _sortLimit, reducer, columns=["_value"], groupColumns=[], tables=<-) =>
|
||||
_highestOrLowest = (n, _sortLimit, reducer, column="_value", groupColumns=[], tables=<-) =>
|
||||
tables
|
||||
|> group(columns:groupColumns)
|
||||
|> reducer()
|
||||
|> group(columns:[])
|
||||
|> _sortLimit(n:n, columns:columns)
|
||||
|> _sortLimit(n:n, columns:[column])
|
||||
|
||||
lowestCurrent = (n, columns=["_value"], groupColumns=[], tables=<-) =>
|
||||
lowestCurrent = (n, column="_value", groupColumns=[], tables=<-) =>
|
||||
tables
|
||||
|> _highestOrLowest(
|
||||
n:n,
|
||||
columns:columns,
|
||||
column:column,
|
||||
groupColumns:groupColumns,
|
||||
reducer: (tables=<-) => tables |> last(column:columns[0]),
|
||||
reducer: (tables=<-) => tables |> last(column:column),
|
||||
_sortLimit: bottom,
|
||||
)
|
||||
```
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
---
|
||||
title: lowestMin() function
|
||||
description: The `lowestMin()` function returns the bottom `n` records from all groups using the minimum of each group.
|
||||
description: The `lowestMin()` function selects the minimum record from each table in the input stream and returns the lowest `n` records.
|
||||
aliases:
|
||||
- /v2.0/reference/flux/functions/transformations/selectors/lowestmin
|
||||
menu:
|
||||
|
@ -10,14 +10,15 @@ menu:
|
|||
weight: 501
|
||||
---
|
||||
|
||||
The `lowestMin()` function returns the bottom `n` records from all groups using the minimum of each group.
|
||||
The `lowestMin()` function selects the minimum record from each table in the input stream and returns the lowest `n` records.
|
||||
It outputs a single aggregated table containing `n` records.
|
||||
|
||||
_**Function type:** Selector, Aggregate_
|
||||
|
||||
```js
|
||||
lowestMin(
|
||||
n:10,
|
||||
columns: ["_value"],
|
||||
column: "_value",
|
||||
groupColumns: []
|
||||
)
|
||||
```
|
||||
|
@ -29,12 +30,11 @@ Number of records to return.
|
|||
|
||||
_**Data type:** Integer_
|
||||
|
||||
### columns
|
||||
List of columns by which to sort.
|
||||
Sort precedence is determined by list order (left to right).
|
||||
Default is `["_value"]`.
|
||||
### column
|
||||
Column by which to sort.
|
||||
Default is `"_value"`.
|
||||
|
||||
_**Data type:** Array of strings_
|
||||
_**Data type:** String_
|
||||
|
||||
### groupColumns
|
||||
The columns on which to group before performing the aggregation.
|
||||
|
@ -63,23 +63,22 @@ _sortLimit = (n, desc, columns=["_value"], tables=<-) =>
|
|||
|
||||
// _highestOrLowest is a helper function which reduces all groups into a single
|
||||
// group by specific tags and a reducer function. It then selects the highest or
|
||||
// lowest records based on the columns and the _sortLimit function.
|
||||
// lowest records based on the column and the _sortLimit function.
|
||||
// The default reducer assumes no reducing needs to be performed.
|
||||
_highestOrLowest = (n, _sortLimit, reducer, columns=["_value"], groupColumns=[], tables=<-) =>
|
||||
_highestOrLowest = (n, _sortLimit, reducer, column="_value", groupColumns=[], tables=<-) =>
|
||||
tables
|
||||
|> group(columns:groupColumns)
|
||||
|> reducer()
|
||||
|> group(columns:[])
|
||||
|> _sortLimit(n:n, columns:columns)
|
||||
|> _sortLimit(n:n, columns:[column])
|
||||
|
||||
lowestMin = (n, columns=["_value"], groupColumns=[], tables=<-) =>
|
||||
lowestMin = (n, column="_value", groupColumns=[], tables=<-) =>
|
||||
tables
|
||||
|> _highestOrLowest(
|
||||
n:n,
|
||||
columns:columns,
|
||||
column:column,
|
||||
groupColumns:groupColumns,
|
||||
// TODO(nathanielc): Once max/min support selecting based on multiple columns change this to pass all columns.
|
||||
reducer: (tables=<-) => tables |> min(column:columns[0]),
|
||||
reducer: (tables=<-) => tables |> min(column:column),
|
||||
_sortLimit: bottom,
|
||||
)
|
||||
```
|
||||
|
|