Merge branch 'master' into control-dashboards

pull/502/head
Nora 2019-09-30 09:57:53 -07:00
commit d230743488
414 changed files with 6732 additions and 1146 deletions

View File

@ -4,7 +4,7 @@ jobs:
docker:
- image: circleci/node:latest
environment:
HUGO_VERSION: "0.55.1"
HUGO_VERSION: "0.56.3"
S3DEPLOY_VERSION: "2.3.2"
steps:
- checkout

View File

@ -4,6 +4,13 @@ This repository contains the InfluxDB 2.x documentation published at [docs.influ
## Contributing
We welcome and encourage community contributions. For information about contributing to the InfluxData documentation, see [Contribution guidelines](CONTRIBUTING.md).
## Reporting a Vulnerability
InfluxData takes security and our users' trust very seriously.
If you believe you have found a security issue in any of our open source projects,
please responsibly disclose it by contacting security@influxdata.com.
More details about security vulnerability reporting,
including our GPG key, can be found at https://www.influxdata.com/how-to-report-security-vulnerabilities/.
## Run the docs locally
The InfluxData documentation uses [Hugo](https://gohugo.io/), a static site
generator built in Go.

SECURITY.md
View File

@ -0,0 +1,11 @@
# Security Policy
## Reporting a Vulnerability
InfluxData takes security and our users' trust very seriously.
If you believe you have found a security issue in any of our open source projects,
please responsibly disclose it by contacting security@influxdata.com.
More details about security vulnerability reporting, including our GPG key,
can be found at https://www.influxdata.com/how-to-report-security-vulnerabilities/.

View File

@ -18,7 +18,8 @@
}
}
h2,h3,h4,h5,h6 {
& + .highlight pre { margin-top: .5rem; }
& + .highlight pre { margin-top: .5rem }
& + pre { margin-top: .5rem }
& + .code-tabs-wrapper { margin-top: 0; }
}
h1 {

View File

@ -26,22 +26,22 @@
&.ui-toggle {
display: inline-block;
position: relative;
width: 34px;
height: 22px;
background: #1C1C21;
border: 2px solid #383846;
width: 28px;
height: 16px;
background: $b-pool;
border-radius: .7rem;
vertical-align: text-bottom;
vertical-align: text-top;
margin-top: 2px;
.circle {
display: inline-block;
position: absolute;
border-radius: 50%;
height: 12px;
width: 12px;
background: #22ADF6;
top: 3px;
right: 3px;
height: 8px;
width: 8px;
background: $g20-white;
top: 4px;
right: 4px;
}
}
}

View File

@ -56,7 +56,7 @@ pre {
padding: 0;
font-size: .95rem;
line-height: 1.5rem;
white-space: pre-wrap;
white-space: pre;
}
}

View File

@ -86,7 +86,7 @@ $article-note-table-row-alt: #3B2862;
$article-note-table-scrollbar: $np-deepnight;
$article-note-shadow: $np-deepnight;
$article-note-code: $cp-comet;
$article-note-code-bg: $wp-telopea;
$article-note-code-bg: $wp-jaguar;
$article-note-code-accent1: #567375;
$article-note-code-accent2: $b-pool;
$article-note-code-accent3: $gr-viridian;

View File

@ -24,6 +24,7 @@ $g19-ghost: #FAFAFC;
$g20-white: #FFFFFF; // Brand color
// Warm Purples - Magentas
$wp-jaguar: #1d0135;
$wp-telopea: #23043E;
$wp-violentdark: #2d0749;
$wp-violet: #32094E;

View File

@ -1,10 +1,10 @@
@font-face {
font-family: 'icomoon';
src: url('fonts/icomoon.eot?e8u66e');
src: url('fonts/icomoon.eot?e8u66e#iefix') format('embedded-opentype'),
url('fonts/icomoon.ttf?e8u66e') format('truetype'),
url('fonts/icomoon.woff?e8u66e') format('woff'),
url('fonts/icomoon.svg?e8u66e#icomoon') format('svg');
src: url('fonts/icomoon.eot?9r9zke');
src: url('fonts/icomoon.eot?9r9zke#iefix') format('embedded-opentype'),
url('fonts/icomoon.ttf?9r9zke') format('truetype'),
url('fonts/icomoon.woff?9r9zke') format('woff'),
url('fonts/icomoon.svg?9r9zke#icomoon') format('svg');
font-weight: normal;
font-style: normal;
}
@ -24,9 +24,24 @@
-moz-osx-font-smoothing: grayscale;
}
.icon-ui-disks-nav:before {
content: "\e93c";
}
.icon-ui-wrench-nav:before {
content: "\e93d";
}
.icon-ui-eye-closed:before {
content: "\e956";
}
.icon-ui-eye-open:before {
content: "\e957";
}
.icon-ui-chat:before {
content: "\e93a";
}
.icon-ui-bell:before {
content: "\e93b";
}
.icon-ui-cloud:before {
content: "\e93f";
}

View File

@ -0,0 +1,99 @@
---
title: Add payment method and view billing
list_title: Add payment and view billing
description: >
Add your InfluxDB Cloud payment method and view billing information.
weight: 103
menu:
v2_0_cloud:
parent: Account management
name: Add payment and view billing
---
- Hover over the **Usage** icon in the left navigation bar and select **Billing**.
{{< nav-icon "cloud" >}}
Complete the following procedures as needed:
- [Add or update your {{< cloud-name >}} payment method](#add-or-update-your-influxdb-cloud-2-0-payment-method)
- [Add or update your contact information](#add-or-update-your-contact-information)
- [Send notifications when usage exceeds an amount](#send-notifications-when-usage-exceeds-an-amount)
View information about:
- [Pay As You Go billing](#view-pay-as-you-go-billing-information)
- [Free plan](#view-free-plan-information)
- [Exceeded rate limits](#exceeded-rate-limits)
- [Billing cycle](#billing-cycle)
- [Declined or late payments](#declined-or-late-payments)
### Add or update your InfluxDB Cloud 2.0 payment method
1. On the Billing page:
- To update, click the **Change Payment** button.
- In the **Payment Method** section:
- Enter your cardholder name and number
- Select your expiration month and year
- Enter your CVV code and select your card type
- Enter your card billing address
2. Click **Add Card**.
### Add or update your contact information
1. On the Billing page:
- To update, click the **Edit Information** button.
- In the **Contact Information** section, enter your name, company, and address.
2. Click **Save Contact Info**.
### Send notifications when usage exceeds an amount
1. On the Billing page, click **Notification Settings**.
2. Select the **Send email notification** toggle, and then enter the email address to notify.
3. Enter the dollar amount to trigger a notification email. By default, an email is triggered when the amount exceeds $10. (Whole dollar amounts only. For example, $10.50 is not a supported amount.)
### View Pay As You Go billing information
- On the Billing page, view your billing information, including:
- Account balance
- Last billing update (updated hourly)
- Past invoices
- Payment method
- Contact information
- Notification settings
### View Free plan information
- On the Billing page, view the total limits available for the Free plan.
### Exceeded rate limits
If you exceed your plan's [rate limits](/v2.0/cloud/pricing-plans/), {{< cloud-name >}} provides a notification in the {{< cloud-name "short" >}} user interface (UI) and adds a rate limit event to your **Usage** page for review.
All rate-limited requests are rejected, including both read and write requests.
_Rate-limited requests are **not** queued._
_To remove rate limits, [upgrade to a Pay As You Go Plan](/v2.0/cloud/account-management/upgrade-to-payg/)._
#### Rate-limited HTTP response code
When a request exceeds your plan's rate limit, the InfluxDB API returns the following response:
```
HTTP 429 "Too Many Requests"
Retry-After: xxx (seconds to wait before retrying the request)
```
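A client can honor the `Retry-After` header when it receives this response. The helper below is an illustrative sketch, not part of any InfluxDB client library; `send` stands in for whatever function performs the HTTP request and returns the status code and response headers:

```python
import time

def request_with_retry(send, max_retries=3):
    """Call send() and, on HTTP 429, wait for the number of
    seconds given in the Retry-After header before retrying."""
    status, headers = send()
    for _ in range(max_retries):
        if status != 429:
            break
        time.sleep(int(headers.get("Retry-After", "1")))
        status, headers = send()
    return status

# Stub that is rate limited once, then succeeds.
responses = iter([(429, {"Retry-After": "0"}), (200, {})])
result = request_with_retry(lambda: next(responses))  # -> 200
```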
### Billing cycle
Billing occurs on the first day of the month for the previous month. For example, if you start the Pay As You Go plan on September 15, you're billed on October 1 for your usage from September 15-30.
### Declined or late payments
| Timeline | Action |
|:----------------------------|:------------------------------------------------------------------------------------------------------------------------|
| **Initial declined payment**| We'll retry the charge every 72 hours. During this period, update your payment method to successfully process your payment. |
| **One week later** | Account disabled except data writes. Update your payment method to successfully process your payment and enable your account. |
| **10-14 days later** | Account completely disabled. During this period, you must contact us at support@influxdata.com to process your payment and enable your account. |
| **21 days later** | Account suspended. Contact support@influxdata.com to settle your final bill and retrieve a copy of your data or access to InfluxDB Cloud dashboards, tasks, Telegraf configurations, and so on.|

View File

@ -30,15 +30,15 @@ For details, see [Export a task](/v2.0/process-data/manage-tasks/export-task/).
#### Export dashboards
For details, see [Export a dashboard](v2.0/visualize-data/dashboards/export-dashboard/).
For details, see [Export a dashboard](/v2.0/visualize-data/dashboards/export-dashboard/).
#### Telegraf configurations
**To save a Telegraf configuration:**
1. Click in the **Organizations** icon in the navigation bar.
1. Click in the **Settings** icon in the navigation bar.
{{< nav-icon "orgs" >}}
{{< nav-icon "settings" >}}
2. Select the **Telegraf** tab. A list of existing Telegraf configurations appears.
3. Click on the name of a Telegraf configuration.

View File

@ -43,7 +43,7 @@ Once you're ready to grow, [upgrade to the Pay As You Go Plan](/v2.0/cloud/accou
5. Click **Continue**. {{< cloud-name >}} opens with a default organization
and bucket (both created from your email address).
_To update organization and bucket names, see [Update an organtization](/v2.0/organizations/update-org/)
_To update organization and bucket names, see [Update an organization](/v2.0/organizations/update-org/)
and [Update a bucket](/v2.0/organizations/buckets/update-bucket/#update-a-bucket-s-name-in-the-influxdb-ui)._
{{% cloud-msg %}}
@ -117,7 +117,7 @@ The primary differences between InfluxDB OSS 2.0 and InfluxDB Cloud 2.0 are:
targets are not available in {{< cloud-name "short" >}}.
- {{< cloud-name "short" >}} instances are currently limited to a single organization with a single user.
- Retrieving data from a file based CSV source using the `file` parameter of the
[`csv.from()`](/v2/reference/flux/functions/csv/from) function is not supported;
[`csv.from()`](/v2.0/reference/flux/functions/csv/from) function is not supported;
however you can use raw CSV data with the `csv` parameter.
- Multi-organization accounts and multi-user organizations are currently not
available in {{< cloud-name >}}.
@ -139,4 +139,4 @@ The primary differences between InfluxDB OSS 2.0 and InfluxDB Cloud 2.0 are:
the new user interface (InfluxDB UI) offers quick and effortless onboarding,
richer user experiences, and significantly quicker results.
- **Usage-based pricing**: The [Pay As You Go Plan](/v2.0/cloud/pricing-plans/#pay-as-you-go-plan)
offers more flexibility and ensures that you only pay for what you use.
offers more flexibility and ensures that you only pay for what you use. To estimate your projected usage costs, use the [InfluxDB Cloud 2.0 pricing calculator](/v2.0/cloud/pricing-calculator/).

View File

@ -0,0 +1,49 @@
---
title: InfluxDB Cloud 2.0 pricing calculator
description: >
Use the InfluxDB Cloud 2.0 pricing calculator to estimate costs by adjusting the number of devices,
plugins, metrics, and writes for the Pay As You Go Plan.
weight: 2
menu:
v2_0_cloud:
name: Pricing calculator
---
Use the {{< cloud-name >}} pricing calculator to estimate costs for the Pay As You Go plan by adjusting your number of devices,
plugins, users, dashboards, writes, and retention. Default configurations include:
| Configuration | Hobby | Standard | Professional | Enterprise |
|:-----------------------------------|-------:|---------:|-------------:|-----------:|
| **Devices** | 8 | 200 | 500 | 1000 |
| **Plugins per device** | 1 | 4 | 4 | 5 |
| **Users** | 1 | 2 | 10 | 20 |
| **Concurrent dashboards per user** | 2 | 2 | 2 | 2 |
| **Writes per minute** | 6 | 4 | 3 | 3 |
| **Average retention in days** | 7 | 30 | 30 | 30 |
Guidelines used to estimate costs for default configurations:
- Average metrics per plugin = 25
- Average KB per value = 0.01
- Number of cells per dashboard = 10
- Average response KB per cell = 0.5
- Average query duration = 75ms
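As a rough sketch, the default Standard configuration and the guideline values above imply a monthly write volume along these lines (illustrative arithmetic only; actual billing depends on the current Pay As You Go rates):

```python
# Standard configuration defaults (from the table above)
devices = 200
plugins_per_device = 4
writes_per_minute = 4

# Guideline values
metrics_per_plugin = 25
kb_per_value = 0.01

# Data written per write cycle, in KB
kb_per_write = devices * plugins_per_device * metrics_per_plugin * kb_per_value

# Approximate monthly write volume, assuming a 30-day month
kb_per_month = kb_per_write * writes_per_minute * 60 * 24 * 30
gb_per_month = kb_per_month / 1e6
print(round(gb_per_month, 2))  # roughly 34.56 GB written per month
```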
**To estimate costs**
1. Do one of the following:
- **Free plan**: Hover over the **Usage** icon in the left navigation bar and select **Billing**.
{{< nav-icon "cloud" >}}
Then click the **Pricing Calculator** link at the bottom of the page.
- **Pay As You Go plan**: [Open the pricing calculator](https://cloud2.influxdata.com/pricing).
2. Choose your region.
3. Select your configuration:
- **Hobby**. For a single user monitoring a few machines or sensors.
- **Standard**. For a single team requiring real-time visibility and monitoring a single set of use cases.
- **Professional**. For teams monitoring multiple disparate systems or use cases.
- **Enterprise**. For teams monitoring multiple domains and use cases accessing a variety of dashboards.
4. Adjust the default configuration values to match your number of devices, plugins, metrics, and so on. The **Projected Usage** costs are automatically updated as you adjust your configuration.
5. Click **Get started with InfluxDB Cloud** to [get started](https://v2.docs.influxdata.com/v2.0/cloud/get-started/).

View File

@ -16,35 +16,50 @@ InfluxDB Cloud 2.0 offers two pricing plans:
- [Free Plan](#free-plan)
- [Pay As You Go Plan](#pay-as-you-go-plan)
To estimate your projected usage costs, use the [InfluxDB Cloud 2.0 pricing calculator](/v2.0/cloud/pricing-calculator/).
## Free Plan
All new {{< cloud-name >}} accounts start with a rate-limited Free Plan.
Use it as much and as long as you want within the Free Plan rate limits:
Use this plan as much and as long as you want within the Free Plan rate limits:
#### Free Plan rate limits
- **Writes:** 3MB every 5 minutes
- **Query:** 30MB every 5 minutes
- **Storage:** 72-hour data retention
- **Series cardinality:** 10,000
- **Create:**
- Up to 5 dashboards
- Up to 5 tasks
- Up to 2 buckets
- Up to 2 checks
- Up to 2 notification rules
- Unlimited Slack notification endpoints
_To remove rate limits, [upgrade to a Pay As You Go Plan](/v2.0/cloud/account-management/upgrade-to-payg/)._
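The limits above can be expressed as a quick pre-flight check. The function below is a hypothetical sketch, not an InfluxDB API; it only compares a planned workload against the write, query, and cardinality limits listed here:

```python
# Free Plan rate limits (from the list above)
FREE_PLAN_LIMITS = {
    "write_mb_per_5m": 3,
    "query_mb_per_5m": 30,
    "series_cardinality": 10_000,
}

def fits_free_plan(write_mb_per_5m, query_mb_per_5m, series_cardinality):
    """Return True if the planned workload stays within Free Plan rate limits."""
    return (write_mb_per_5m <= FREE_PLAN_LIMITS["write_mb_per_5m"]
            and query_mb_per_5m <= FREE_PLAN_LIMITS["query_mb_per_5m"]
            and series_cardinality <= FREE_PLAN_LIMITS["series_cardinality"])
```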
## Pay As You Go Plan
Pay As You Go Plans offer more flexibility and ensure you only pay for what you use.
The Pay As You Go Plan offers more flexibility and ensures you only pay for what you [use](/v2.0/cloud/account-management/data-usage/).
#### Pay As You Go Plan rate limits
In order to protect against any intentional or unintentional harm,
Pay As You Go Plans include soft rate limits:
To protect against any intentional or unintentional harm, Pay As You Go Plans include soft rate limits:
- **Writes:** 300MB every 5 minutes
- **Ingest batch size:** 50MB
- **Queries:** 3000MB every 5 minutes
- **Storage:** Unlimited retention
- **Series cardinality:** 1,000,000
- **Create:**
- Unlimited dashboards
- Unlimited tasks
- Unlimited buckets
- Unlimited users
- Unlimited checks
- Unlimited notification rules
- Unlimited PagerDuty, Slack, and HTTP notification endpoints
_To request higher rate limits, contact [InfluxData Support](mailto:support@influxdata.com)._

View File

@ -27,7 +27,7 @@ This article describes how to get started with InfluxDB OSS. To get started with
### Download and install InfluxDB v2.0 alpha
Download InfluxDB v2.0 alpha for macOS.
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb_2.0.0-alpha.16_darwin_amd64.tar.gz" download>InfluxDB v2.0 alpha (macOS)</a>
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb_2.0.0-alpha.18_darwin_amd64.tar.gz" download>InfluxDB v2.0 alpha (macOS)</a>
### Unpackage the InfluxDB binaries
Unpackage the downloaded archive.
@ -36,7 +36,7 @@ _**Note:** The following commands are examples. Adjust the file paths to your ow
```sh
# Unpackage contents to the current working directory
gunzip -c ~/Downloads/influxdb_2.0.0-alpha.16_darwin_amd64.tar.gz | tar xopf -
gunzip -c ~/Downloads/influxdb_2.0.0-alpha.18_darwin_amd64.tar.gz | tar xopf -
```
If you choose, you can place `influx` and `influxd` in your `$PATH`.
@ -44,7 +44,7 @@ You can also prefix the executables with `./` to run then in place.
```sh
# (Optional) Copy the influx and influxd binary to your $PATH
sudo cp influxdb_2.0.0-alpha.16_darwin_amd64/{influx,influxd} /usr/local/bin/
sudo cp influxdb_2.0.0-alpha.18_darwin_amd64/{influx,influxd} /usr/local/bin/
```
{{% note %}}
@ -90,8 +90,8 @@ influxd --reporting-disabled
### Download and install InfluxDB v2.0 alpha
Download the InfluxDB v2.0 alpha package appropriate for your chipset.
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb_2.0.0-alpha.16_linux_amd64.tar.gz" download >InfluxDB v2.0 alpha (amd64)</a>
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb_2.0.0-alpha.16_linux_arm64.tar.gz" download >InfluxDB v2.0 alpha (arm)</a>
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb_2.0.0-alpha.18_linux_amd64.tar.gz" download >InfluxDB v2.0 alpha (amd64)</a>
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb_2.0.0-alpha.18_linux_arm64.tar.gz" download >InfluxDB v2.0 alpha (arm)</a>
### Place the executables in your $PATH
Unpackage the downloaded archive and place the `influx` and `influxd` executables in your system `$PATH`.
@ -100,10 +100,10 @@ _**Note:** The following commands are examples. Adjust the file names, paths, an
```sh
# Unpackage contents to the current working directory
tar xvzf path/to/influxdb_2.0.0-alpha.16_linux_amd64.tar.gz
tar xvzf path/to/influxdb_2.0.0-alpha.18_linux_amd64.tar.gz
# Copy the influx and influxd binary to your $PATH
sudo cp influxdb_2.0.0-alpha.16_linux_amd64/{influx,influxd} /usr/local/bin/
sudo cp influxdb_2.0.0-alpha.18_linux_amd64/{influx,influxd} /usr/local/bin/
```
{{% note %}}

View File

@ -0,0 +1,38 @@
---
title: Monitor data and send alerts
seotitle: Monitor data and send alerts
description: >
Monitor your time series data and send alerts by creating checks, notification
rules, and notification endpoints.
menu:
v2_0:
name: Monitor & alert
weight: 6
v2.0/tags: [monitor, alert, checks, notification, endpoints]
---
Monitor your time series data and send alerts by creating checks, notification
rules, and notification endpoints.
## The monitoring workflow
1. A [check](/v2.0/reference/glossary/#check) in InfluxDB queries data and assigns a status with a `_level` based on specific conditions.
2. InfluxDB stores the output of a check in the `statuses` measurement in the `_monitoring` system bucket.
3. [Notification rules](/v2.0/reference/glossary/#notification-rule) check data in the `statuses`
measurement and, based on conditions set in the notification rule, send a message
to a [notification endpoint](/v2.0/reference/glossary/#notification-endpoint).
4. InfluxDB stores notifications in the `notifications` measurement in the `_monitoring` system bucket.
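The four steps above amount to a small data pipeline. The sketch below simulates it with plain Python data structures; the field names mirror the `statuses` measurement columns mentioned here, but this is an illustration of the flow, not an actual InfluxDB query:

```python
# 1-2. A check assigns a _level to each point and writes it to `statuses`.
statuses = [
    {"_check_name": "CPU check", "_level": "ok"},
    {"_check_name": "CPU check", "_level": "crit"},
]

# 3. A notification rule matches statuses and builds messages for an endpoint.
def apply_rule(statuses, level, endpoint):
    return [{"endpoint": endpoint,
             "message": f"{s['_check_name']} is {s['_level']}"}
            for s in statuses if s["_level"] == level]

# 4. InfluxDB stores the sent notifications in `notifications`.
notifications = apply_rule(statuses, level="crit", endpoint="slack")
```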
## Monitor your data
To get started, do the following:
1. [Create checks](/v2.0/monitor-alert/checks/create/) to monitor data and assign a status.
2. [Add notification endpoints](/v2.0/monitor-alert/notification-endpoints/create/)
to send notifications to third parties.
3. [Create notification rules](/v2.0/monitor-alert/notification-rules/create) to check
statuses and send notifications to your notifications endpoints.
## Manage your monitoring and alerting pipeline
{{< children >}}

View File

@ -0,0 +1,19 @@
---
title: Manage checks
seotitle: Manage monitoring checks in InfluxDB
description: >
Checks in InfluxDB query data and apply a status or level to each data point based on specified conditions.
menu:
v2_0:
parent: Monitor & alert
weight: 101
v2.0/tags: [monitor, checks, notifications, alert]
related:
- /v2.0/monitor-alert/notification-rules/
- /v2.0/monitor-alert/notification-endpoints/
---
Checks in InfluxDB query data and apply a status or level to each data point based on specified conditions.
Learn how to create and manage checks:
{{< children >}}

View File

@ -0,0 +1,155 @@
---
title: Create checks
seotitle: Create monitoring checks in InfluxDB
description: >
Create a check in the InfluxDB UI.
menu:
v2_0:
parent: Manage checks
weight: 201
related:
- /v2.0/monitor-alert/notification-rules/
- /v2.0/monitor-alert/notification-endpoints/
---
Create a check in the InfluxDB user interface (UI).
Checks query data and apply a status to each point based on specified conditions.
## Check types
There are two types of checks: a threshold check and a deadman check.
#### Threshold check
A threshold check assigns a status based on a value being above, below,
inside, or outside of defined thresholds.
[Create a threshold check](#create-a-threshold-check).
#### Deadman check
A deadman check assigns a status to data when a series or group doesn't report
in a specified amount of time.
[Create a deadman check](#create-a-deadman-check).
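Conceptually, a deadman check marks a series by how long it has been silent. A minimal sketch of that logic (illustrative only, not the actual Flux implementation):

```python
from datetime import datetime, timedelta

def deadman_level(last_report, now, dead_for=timedelta(seconds=90), level="crit"):
    """Return `level` if the series hasn't reported within `dead_for`, else 'ok'."""
    return level if now - last_report > dead_for else "ok"

now = datetime(2019, 9, 30, 12, 0, 0)
deadman_level(now - timedelta(minutes=5), now)  # series silent for 5m -> "crit"
```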
## Parts of a check
A check consists of two parts: a query and a check configuration.
#### Check query
- Specifies the dataset to monitor.
- May include tags to narrow results.
#### Check configuration
- Defines check properties, including the check interval and status message.
- Evaluates specified conditions and applies a status (if applicable) to each data point:
- `crit`
- `warn`
- `info`
- `ok`
- Stores status in the `_level` column.
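For a simple "is above" threshold check, the status assignment boils down to comparing each value against the configured thresholds from most to least severe. A hypothetical sketch of that evaluation:

```python
def assign_level(value, crit=None, warn=None, info=None):
    """Return the most severe status whose 'is above' threshold the value exceeds."""
    if crit is not None and value > crit:
        return "crit"
    if warn is not None and value > warn:
        return "warn"
    if info is not None and value > info:
        return "info"
    return "ok"

assign_level(92.5, crit=90, warn=75, info=50)  # -> "crit"
```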
## Create a check in the InfluxDB UI
1. Click **Monitoring & Alerting** in the sidebar in the InfluxDB UI.
{{< nav-icon "alerts" >}}
2. In the top right corner of the **Checks** column, click **{{< icon "plus" >}} Create**
and select the [type of check](#check-types) to create.
3. Click **Name this check** in the top left corner and provide a unique name for the check.
#### Configure the check query
1. Select the **bucket**, **measurement**, **field**, and **tag sets** to query.
2. If creating a threshold check, select an **aggregate function**.
Aggregate functions aggregate data between the specified check intervals and
return a single value for the check to process.
In the **Aggregate functions** column, select an interval from the interval drop-down list
(for example, "Every 5 minutes") and an aggregate function from the list of functions.
3. Click **Submit** to run the query and preview the results.
To see the raw query results, click the **{{< icon "toggle" >}} View Raw Data** toggle.
#### Configure the check
1. Click **2. Check** near the top of the window.
2. In the **Properties** column, configure the following:
##### Schedule Every
Select the interval to run the check (for example, "Every 5 minutes").
This interval matches the aggregate function interval for the check query.
_Changing the interval here will update the aggregate function interval._
##### Offset
Delay the execution of a task to account for any late data.
Offset queries do not change the queried time range.
{{% note %}}Your offset must be shorter than your [check interval](#schedule-every).
{{% /note %}}
##### Tags
Add custom tags to the query output.
Each custom tag appends a new column to each row in the query output.
The column label is the tag key and the column value is the tag value.
Use custom tags to associate additional metadata with the check.
Common metadata tags across different checks lets you easily group and organize checks.
You can also use custom tags in [notification rules](/v2.0/monitor-alert/notification-rules/create/).
3. In the **Status Message Template** column, enter the status message template for the check.
Use [Flux string interpolation](/v2.0/reference/flux/language/string-interpolation/)
to populate the message with data from the query.
{{% note %}}
#### Flux only interpolates string values
Flux currently interpolates only string values.
Use the [string() function](/v2.0/reference/flux/stdlib/built-in/transformations/type-conversions/string/)
to convert non-string values to strings.
```js
count = 12
"I currently have ${string(v: count)} cats."
```
{{% /note %}}
Check data is represented as an object, `r`.
Access specific column values using dot notation: `r.columnName`.
Use data from the following columns:
- columns included in the query output
- [custom tags](#tags) added to the query output
- `_check_id`
- `_check_name`
- `_level`
- `_source_measurement`
- `_type`
###### Example status message template
```
From ${r._check_name}:
${r._field} is ${r._level}.
Its value is ${string(v: r._value)}.
```
When a check generates a status, it stores the message in the `_message` column.
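To see how interpolation resolves, the sketch below renders the example template against a sample row `r` using plain string substitution (an illustration of the behavior, not how Flux implements it):

```python
def render(template, r):
    # Replace each ${r.column} placeholder with the row's value.
    for key, value in r.items():
        template = template.replace("${r.%s}" % key, str(value))
    return template

r = {"_check_name": "CPU check", "_field": "usage_user", "_level": "crit"}
render("From ${r._check_name}: ${r._field} is ${r._level}.", r)
# -> "From CPU check: usage_user is crit."
```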
4. Define check conditions that assign statuses to points.
Condition options depend on your check type.
##### Configure a threshold check
1. In the **Thresholds** column, click the status name (CRIT, WARN, INFO, or OK)
to define conditions for that specific status.
2. From the **When value** drop-down list, select a threshold: is above, is below,
is inside of, is outside of.
3. Enter a value or values for the threshold.
You can also use the threshold sliders in the data visualization to define threshold values.
##### Configure a deadman check
1. In the **Deadman** column, enter a duration for the deadman check in the **for** field.
For example, `90s`, `5m`, `2h30m`, etc.
2. Use the **set status to** drop-down list to select a status to set on a dead series.
3. In the **And stop checking after** field, enter the time to stop monitoring the series.
For example, `30m`, `2h`, `3h15m`, etc.
5. Click the green **{{< icon "check" >}}** in the top right corner to save the check.
## Clone a check
Create a new check by cloning an existing check.
1. In the **Checks** column, hover over the check you want to clone.
2. Click the **{{< icon "clone" >}}** icon, then **Clone**.

View File

@ -0,0 +1,34 @@
---
title: Delete checks
seotitle: Delete monitoring checks in InfluxDB
description: >
Delete checks in the InfluxDB UI.
menu:
v2_0:
parent: Manage checks
weight: 204
related:
- /v2.0/monitor-alert/notification-rules/
- /v2.0/monitor-alert/notification-endpoints/
---
If you no longer need a check, use the InfluxDB user interface (UI) to delete it.
{{% warn %}}
Deleting a check cannot be undone.
{{% /warn %}}
1. Click **Monitoring & Alerting** in the sidebar.
{{< nav-icon "alerts" >}}
2. In the **Checks** column, hover over the check you want to delete, click the
**{{< icon "delete" >}}** icon, then **Delete**.
After a check is deleted, all statuses generated by the check remain in the `_monitoring`
bucket until the retention period for the bucket expires.
{{% note %}}
You can also [disable a check](/v2.0/monitor-alert/checks/update/#enable-or-disable-a-check)
without having to delete it.
{{% /note %}}

View File

@ -0,0 +1,62 @@
---
title: Update checks
seotitle: Update monitoring checks in InfluxDB
description: >
Update, rename, enable or disable checks in the InfluxDB UI.
menu:
v2_0:
parent: Manage checks
weight: 203
related:
- /v2.0/monitor-alert/notification-rules/
- /v2.0/monitor-alert/notification-endpoints/
---
Update checks in the InfluxDB user interface (UI).
Common updates include:
- [Update check queries and logic](#update-check-queries-and-logic)
- [Enable or disable a check](#enable-or-disable-a-check)
- [Rename a check](#rename-a-check)
- [Add or update a check description](#add-or-update-a-check-description)
- [Add a label to a check](#add-a-label-to-a-check)
To update checks, click **Monitoring & Alerting** in the InfluxDB UI sidebar.
{{< nav-icon "alerts" >}}
## Update check queries and logic
1. In the **Checks** column, click the name of the check you want to update.
The check builder appears.
2. To edit the check query, click **1. Query** at the top of the check builder window.
3. To edit the check logic, click **2. Check** at the top of the check builder window.
_For details about using the check builder, see [Create checks](/v2.0/monitor-alert/checks/create/)._
## Enable or disable a check
In the **Checks** column, click the {{< icon "toggle" >}} toggle next to a check
to enable or disable it.
## Rename a check
1. In the **Checks** column, hover over the name of the check you want to update.
2. Click the **{{< icon "edit" >}}** icon that appears next to the check name.
3. Enter a new name and click out of the name field or press enter to save.
_You can also rename a check in the [check builder](#update-check-queries-and-logic)._
## Add or update a check description
1. In the **Checks** column, hover over the check description you want to update.
2. Click the **{{< icon "edit" >}}** icon that appears next to the description.
3. Enter a new description and click out of the description field or press enter to save.
## Add a label to a check
1. In the **Checks** column, click **Add a label** next to the check you want to add a label to.
The **Add Labels** box opens.
2. To add an existing label, select the label from the list.
3. To create and add a new label:
- In the search field, enter the name of the new label. The **Create Label** box opens.
- In the **Description** field, enter an optional description for the label.
- Select a color for the label.
- Click **Create Label**.
4. To remove a label, hover over the label and click **{{< icon "x" >}}**.

View File

@ -0,0 +1,43 @@
---
title: View checks
seotitle: View monitoring checks in InfluxDB
description: >
View check details and statuses and notifications generated by checks in the InfluxDB UI.
menu:
v2_0:
parent: Manage checks
weight: 202
related:
- /v2.0/monitor-alert/notification-rules/
- /v2.0/monitor-alert/notification-endpoints/
---
View check details and statuses and notifications generated by checks in the InfluxDB user interface (UI).
- [View a list of all checks](#view-a-list-of-all-checks)
- [View check details](#view-check-details)
- [View statuses generated by a check](#view-statuses-generated-by-a-check)
- [View notifications triggered by a check](#view-notifications-triggered-by-a-check)
To view checks, click **Monitoring & Alerting** in the InfluxDB UI sidebar.
{{< nav-icon "alerts" >}}
## View a list of all checks
The **Checks** column on the Monitoring & Alerting landing page displays all existing checks.
## View check details
In the **Checks** column, click the name of the check you want to view.
The check builder appears.
Here you can view the check query and logic.
## View statuses generated by a check
1. In the **Checks** column, hover over the check, click the **{{< icon "view" >}}**
icon, then **View History**.
The Statuses History page displays statuses generated by the selected check.
## View notifications triggered by a check
1. In the **Checks** column, hover over the check, click the **{{< icon "view" >}}**
icon, then **View History**.
2. In the top left corner, click **Notifications**.
The Notifications History page displays notifications initiated by the selected check.

View File

@ -0,0 +1,20 @@
---
title: Manage notification endpoints
list_title: Manage notification endpoints
description: >
Create, read, update, and delete endpoints in the InfluxDB UI.
v2.0/tags: [monitor, endpoints, notifications, alert]
menu:
v2_0:
parent: Monitor & alert
weight: 102
related:
- /v2.0/monitor-alert/checks/
- /v2.0/monitor-alert/notification-rules/
---
Notification endpoints store information to connect to a third party service.
If you're using the Free plan, create a Slack endpoint.
If you're using the Pay As You Go plan, create a connection to an HTTP, Slack, or PagerDuty endpoint.
{{< children >}}

View File

@ -0,0 +1,45 @@
---
title: Create notification endpoints
description: >
Create notification endpoints to send alerts on your time series data.
menu:
v2_0:
name: Create endpoints
parent: Manage notification endpoints
weight: 201
related:
- /v2.0/monitor-alert/checks/
- /v2.0/monitor-alert/notification-rules/
---
To send notifications about changes in your data, start by creating a notification endpoint to a third party service. After creating notification endpoints, [create notification rules](/v2.0/monitor-alert/notification-rules/create) to send alerts to third party services on [check statuses](/v2.0/monitor-alert/checks/create).
## Create a notification endpoint in the UI
1. Select the **Monitoring and Alerting** icon from the sidebar.
{{< nav-icon "alerts" >}}
2. Next to **Notification Endpoints**, click **Create**.
3. From the **Destination** drop-down list, select a destination endpoint to send notifications.
The following endpoints are available for InfluxDB 2.0 OSS, the InfluxDB Cloud 2.0 Free Plan,
and the InfluxDB Cloud 2.0 Pay As You Go (PAYG) Plan:
| Endpoint | OSS | Free Plan _(Cloud)_ | PAYG Plan _(Cloud)_ |
|:-------- |:--------: |:-------------------: |:----------------------------:|
| **Slack** | **{{< icon "check" >}}** | **{{< icon "check" >}}** | **{{< icon "check" >}}** |
| **PagerDuty** | **{{< icon "check" >}}** | | **{{< icon "check" >}}** |
| **HTTP** | **{{< icon "check" >}}** | | **{{< icon "check" >}}** |
4. In the **Name** and **Description** fields, enter a name and description for the endpoint.
5. Enter the information required to connect to the endpoint:
- For HTTP, enter the **URL** to send the notification. Select the **auth method** to use: **None** for no authentication. To authenticate with a username and password, select **Basic** and then enter credentials in the **Username** and **Password** fields. To authenticate with a token, select **Bearer**, and then enter the authentication token in the **Token** field.
    - For Slack, create an [Incoming WebHook](https://api.slack.com/incoming-webhooks#posting_with_webhooks) in Slack, and then enter your webhook URL in the **Slack Incoming WebHook URL** field.
- For PagerDuty:
- [Create a new service](https://support.pagerduty.com/docs/services-and-integrations#section-create-a-new-service), [add an integration for your service](https://support.pagerduty.com/docs/services-and-integrations#section-add-integrations-to-an-existing-service), and then enter the PagerDuty integration key for your new service in the **Routing Key** field.
      - The **Client URL** provides a useful link in your PagerDuty notification. Enter any URL that you'd like to use to investigate issues. This URL is sent as the `client_url` property in the PagerDuty trigger event. By default, the **Client URL** is set to your Monitoring & Alerting History page, and the following is included in the PagerDuty trigger event: `"client_url": "https://twodotoh.a.influxcloud.net/orgs/<your-org-ID>/alert-history"`
6. Click **Create Notification Endpoint**.
@ -0,0 +1,24 @@
---
title: Delete notification endpoints
description: >
Delete a notification endpoint in the InfluxDB UI.
menu:
v2_0:
name: Delete endpoints
parent: Manage notification endpoints
weight: 204
related:
- /v2.0/monitor-alert/checks/
- /v2.0/monitor-alert/notification-rules/
---
If notifications are no longer sent to an endpoint, complete the steps below to delete the endpoint, and then [update notification rules](/v2.0/monitor-alert/notification-rules/update) with a new notification endpoint as needed.
## Delete a notification endpoint in the UI
1. Select the **Monitoring and Alerting** icon from the sidebar.
{{< nav-icon "alerts" >}}
2. Under **Notification Endpoints**, find the endpoint you want to delete.
3. Click the delete icon, then click **Delete** to confirm.
@ -0,0 +1,65 @@
---
title: Update notification endpoints
description: >
Update notification endpoints in the InfluxDB UI.
menu:
v2_0:
name: Update endpoints
parent: Manage notification endpoints
weight: 203
related:
- /v2.0/monitor-alert/checks/
- /v2.0/monitor-alert/notification-rules/
---
To update the notification endpoint details, complete the procedures below as needed. To update the notification endpoint selected for a notification rule, see [update notification rules](/v2.0/monitor-alert/notification-rules/update/).
## Add a label to notification endpoint
1. Select the **Monitoring and Alerting** icon from the sidebar.
{{< nav-icon "alerts" >}}
2. Under **Notification Endpoints**, click **Add a label** next to the endpoint you want to add a label to. The **Add Labels** box opens.
3. To add an existing label, select the label from the list.
4. To create and add a new label:
- In the search field, enter the name of the new label. The **Create Label** box opens.
- In the **Description** field, enter an optional description for the label.
- Select a color for the label.
- Click **Create Label**.
5. To remove a label, hover over the label under an endpoint and click X.
## Disable notification endpoint
1. Select the **Monitoring and Alerting** icon from the sidebar.
{{< nav-icon "alerts" >}}
2. Under **Notification Endpoints**, find the endpoint you want to disable.
3. Click the blue toggle to disable the notification endpoint.
## Update the name or description for notification endpoint
1. Select the **Monitoring and Alerting** icon from the sidebar.
{{< nav-icon "alerts" >}}
2. Under **Notification Endpoints**, hover over the name or description of the endpoint.
3. Click the pencil icon to edit the field.
4. Click outside of the field to save your changes.
## Change endpoint details
1. Select the **Monitoring and Alerting** icon from the sidebar.
{{< nav-icon "alerts" >}}
2. Under **Notification Endpoints**, click the endpoint to update.
3. Update details as needed, and then click **Edit a Notification Endpoint**. For details about each field, see [Create notification endpoints](/v2.0/monitor-alert/notification-endpoints/create/).
@ -0,0 +1,43 @@
---
title: View notification endpoint history
seotitle: View notification endpoint details and history
description: >
View notification endpoint details and history in the InfluxDB UI.
menu:
v2_0:
name: View endpoint history
parent: Manage notification endpoints
weight: 202
related:
- /v2.0/monitor-alert/checks/
- /v2.0/monitor-alert/notification-rules/
---
View notification endpoint details and history in the InfluxDB user interface (UI).
- [View notification endpoints](#view-notification-endpoints)
- [View notification endpoint details](#view-notification-endpoint-details)
- [View notification endpoint history](#view-notification-endpoint-history), including statuses and notifications sent to the endpoint
## View notification endpoints
- Click **Monitoring & Alerting** in the InfluxDB UI sidebar.
{{< nav-icon "alerts" >}}
In the **Notification Endpoints** column, view existing notification endpoints.
## View notification endpoint details
1. Click **Monitoring & Alerting** in the InfluxDB UI sidebar.
2. In the **Notification Endpoints** column, click the name of the notification endpoint you want to view.
3. View the notification endpoint destination, name, and information to connect to the endpoint.
## View notification endpoint history
1. Click **Monitoring & Alerting** in the InfluxDB UI sidebar.
2. In the **Notification Endpoints** column, hover over the notification endpoint, click the **{{< icon "view" >}}** icon, then **View History**.
The Check Statuses History page displays:
- Statuses generated for the selected notification endpoint
- Notifications sent to the selected notification endpoint
@ -0,0 +1,17 @@
---
title: Manage notification rules
description: >
Manage notification rules in InfluxDB.
weight: 103
v2.0/tags: [monitor, notifications, alert]
menu:
v2_0:
parent: Monitor & alert
related:
- /v2.0/monitor-alert/checks/
- /v2.0/monitor-alert/notification-endpoints/
---
The following articles provide information on managing your notification rules:
{{< children >}}
@ -0,0 +1,42 @@
---
title: Create notification rules
description: >
Create notification rules to send alerts on your time series data.
weight: 201
menu:
v2_0:
parent: Manage notification rules
related:
- /v2.0/monitor-alert/checks/
- /v2.0/monitor-alert/notification-endpoints/
---
Once you've set up checks and notification endpoints, create notification rules to alert you.
_For details, see [Manage checks](/v2.0/monitor-alert/checks/) and
[Manage notification endpoints](/v2.0/monitor-alert/notification-endpoints/)._
## Create a new notification rule in the UI
1. Select the **Monitoring and Alerting** icon from the sidebar.
{{< nav-icon "alerts" >}}
2. Under **Notification Rules**, click **+Create**.
3. Complete the **About** section:
1. In the **Name** field, enter a name for the notification rule.
2. In the **Schedule Every** field, enter how frequently the rule should run.
   3. In the **Offset** field, enter an offset time. For example, if a task runs on the hour, a 10m offset delays the task to 10 minutes after the hour. Time ranges defined in the task are relative to the specified execution time.
4. In the **Conditions** section, build a condition using a combination of status and tag keys.
- Next to **When status is equal to**, select a status from the drop-down field.
- Next to **AND When**, enter one or more tag key-value pairs to filter by.
5. In the **Message** section, select an endpoint to notify.
6. Click **Create Notification Rule**.
## Clone an existing notification rule in the UI
1. Select the **Monitoring and Alerting** icon from the sidebar.
{{< nav-icon "alerts" >}}
2. Under **Notification Rules**, hover over the rule you want to clone.
3. Click the clone icon and select **Clone**. The cloned rule appears.
@ -0,0 +1,21 @@
---
title: Delete notification rules
description: >
If you no longer need to receive an alert, delete the associated notification rule.
weight: 204
menu:
v2_0:
parent: Manage notification rules
related:
- /v2.0/monitor-alert/checks/
- /v2.0/monitor-alert/notification-endpoints/
---
## Delete a notification rule in the UI
1. Select the **Monitoring and Alerting** icon from the sidebar.
{{< nav-icon "alerts" >}}
2. Under **Notification Rules**, find the rule you want to delete.
3. Click the delete icon, then click **Delete** to confirm.
@ -0,0 +1,51 @@
---
title: Update notification rules
description: >
Update notification rules to update the notification message or change the schedule or conditions.
weight: 203
menu:
v2_0:
parent: Manage notification rules
related:
- /v2.0/monitor-alert/checks/
- /v2.0/monitor-alert/notification-endpoints/
---
## Add a label to notification rules
1. Select the **Monitoring and Alerting** icon from the sidebar.
{{< nav-icon "alerts" >}}
2. Under **Notification Rules**, click **Add a label** next to the rule you want to add a label to. The **Add Labels** box opens.
3. To add an existing label, select the label from the list.
4. To create and add a new label:
- In the search field, enter the name of the new label. The **Create Label** box opens.
- In the **Description** field, enter an optional description for the label.
- Select a color for the label.
- Click **Create Label**.
5. To remove a label, hover over the label under a rule and click X.
## Disable notification rules
1. Select the **Monitoring and Alerting** icon from the sidebar.
{{< nav-icon "alerts" >}}
2. Under **Notification Rules**, find the rule you want to disable.
3. Click the blue toggle to disable the notification rule.
## Update the name or description for notification rules
1. Select the **Monitoring and Alerting** icon from the sidebar.
{{< nav-icon "alerts" >}}
2. Under **Notification Rules**, hover over the name or description of a rule.
3. Click the pencil icon to edit the field.
4. Click outside of the field to save your changes.
@ -0,0 +1,42 @@
---
title: View notification rules
description: >
  View notification rules and the statuses and notifications they generate in the InfluxDB UI.
weight: 202
menu:
v2_0:
parent: Manage notification rules
related:
- /v2.0/monitor-alert/checks/
- /v2.0/monitor-alert/notification-endpoints/
---
View notification rule details, as well as the statuses and notifications generated by notification rules, in the InfluxDB user interface (UI).
- [View a list of all notification rules](#view-a-list-of-all-notification-rules)
- [View notification rule details](#view-notification-rule-details)
- [View statuses generated by a notification rule](#view-statuses-generated-by-a-notification-rule)
- [View notifications triggered by a notification rule](#view-notifications-triggered-by-a-notification-rule)
To view notification rules, click **Monitoring & Alerting** in the InfluxDB UI sidebar.
{{< nav-icon "alerts" >}}
## View a list of all notification rules
The **Notification Rules** column on the Monitoring & Alerting landing page displays all existing notification rules.
## View notification rule details
In the **Notification Rules** column, click the name of the notification rule you want to view.
The notification rule builder appears.
Here you can view the rule's schedule, conditions, and message.
## View statuses generated by a notification rule
1. In the **Notification Rules** column, hover over the notification rule, click the **{{< icon "view" >}}**
   icon, then **View History**.
   The Statuses History page displays statuses generated by the selected notification rule.
## View notifications triggered by a notification rule
1. In the **Notification Rules** column, hover over the notification rule, click the **{{< icon "view" >}}**
icon, then **View History**.
2. In the top left corner, click **Notifications**.
The Notifications History page displays notifications initiated by the selected notification rule.
@ -14,16 +14,16 @@ to create a bucket.
## Create a bucket in the InfluxDB UI
1. Click the **Settings** tab in the navigation bar.
1. Click **Load Data** in the navigation bar.
{{< nav-icon "settings" >}}
{{< nav-icon "load data" >}}
2. Select the **Buckets** tab.
2. Select **Buckets**.
3. Click **{{< icon "plus" >}} Create Bucket** in the upper right.
4. Enter a **Name** for the bucket.
5. Select **How often to clear data?**:
Select **Never** to retain data forever.
Select **Periodically** to define a specific retention policy.
5. Select when to **Delete Data**:
- **Never** to retain data forever.
- **Older than** to choose a specific retention policy.
5. Click **Create** to create the bucket.
## Create a bucket using the influx CLI
@ -32,7 +32,7 @@ Use the [`influx bucket create` command](/v2.0/reference/cli/influx/bucket/creat
to create a new bucket. A bucket requires the following:
- A name
- The name or ID of the organization to which it belongs
- The name or ID of the organization the bucket belongs to
- A retention period in nanoseconds
```sh
@ -14,13 +14,13 @@ to delete a bucket.
## Delete a bucket in the InfluxDB UI
1. Click the **Settings** tab in the navigation bar.
1. Click **Load Data** in the navigation bar.
{{< nav-icon "settings" >}}
{{< nav-icon "load data" >}}
2. Select the **Buckets** tab.
2. Select **Buckets**.
3. Hover over the bucket you would like to delete.
4. Click **Delete** and **Confirm** to delete the bucket.
4. Click **{{< icon "delete" >}} Delete Bucket** and **Confirm** to delete the bucket.
## Delete a bucket using the influx CLI
@ -8,6 +8,7 @@ menu:
parent: Manage buckets
weight: 202
---
Use the `influx` command line interface (CLI) or the InfluxDB user interface (UI) to update a bucket.
Note that updating a bucket's name will affect any assets that reference the bucket by name, including the following:
@ -23,23 +24,22 @@ If you change a bucket name, be sure to update the bucket in the above places as
## Update a bucket's name in the InfluxDB UI
1. Click the **Settings** tab in the navigation bar.
1. Click **Load Data** in the navigation bar.
{{< nav-icon "settings" >}}
{{< nav-icon "load data" >}}
2. Select the **Buckets** tab.
3. Hover over the name of the bucket you want to rename in the list.
4. Click **Rename**.
5. Review the information in the window that appears and click **I understand, let's rename my bucket**.
6. Update the bucket's name and click **Change Bucket Name**.
2. Select **Buckets**.
3. Click **Rename** under the bucket you want to rename.
4. Review the information in the window that appears and click **I understand, let's rename my bucket**.
5. Update the bucket's name and click **Change Bucket Name**.
## Update a bucket's retention policy in the InfluxDB UI
1. Click the **Settings** tab in the navigation bar.
1. Click **Load Data** in the navigation bar.
{{< nav-icon "settings" >}}
{{< nav-icon "load data" >}}
2. Select the **Buckets** tab.
2. Select **Buckets**.
3. Click the name of the bucket you want to update from the list.
4. In the window that appears, edit the bucket's retention policy.
5. Click **Save Changes**.
@ -50,7 +50,7 @@ Use the [`influx bucket update` command](/v2.0/reference/cli/influx/bucket/updat
to update a bucket. Updating a bucket requires the following:
- The bucket ID _(provided in the output of `influx bucket find`)_
- The name or ID of the organization to which the bucket belongs
- The name or ID of the organization the bucket belongs to.
##### Update the name of a bucket
```sh
@ -11,18 +11,17 @@ weight: 202
## View buckets in the InfluxDB UI
1. Click the **Settings** tab in the navigation bar.
1. Click **Load Data** in the navigation bar.
{{< nav-icon "settings" >}}
{{< nav-icon "load data" >}}
2. Select the **Buckets** tab.
2. Select **Buckets**.
3. Click on a bucket to view details.
## View buckets using the influx CLI
Use the [`influx bucket find` command](/v2.0/reference/cli/influx/bucket/find)
to view a buckets in an organization. Viewing bucket requires the following:
to view buckets in an organization.
```sh
influx bucket find
@ -32,7 +32,7 @@ A separate bucket where aggregated, downsampled data is stored.
To downsample data, it must be aggregated in some way.
The method of aggregation you use depends on your specific use case,
but examples include mean, median, top, bottom, etc.
View [Flux's aggregate functions](/v2.0/reference/flux/functions/built-in/transformations/aggregates/)
View [Flux's aggregate functions](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/)
for more information and ideas.
## Create a destination bucket
@ -47,7 +47,7 @@ The example task script below is a very basic form of data downsampling that doe
1. Defines a task named "cq-mem-data-1w" that runs once a week.
2. Defines a `data` variable that represents all data from the last 2 weeks in the
`mem` measurement of the `system-data` bucket.
3. Uses the [`aggregateWindow()` function](/v2.0/reference/flux/functions/built-in/transformations/aggregates/aggregatewindow/)
3. Uses the [`aggregateWindow()` function](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/aggregatewindow/)
to window the data into 1 hour intervals and calculate the average of each interval.
4. Stores the aggregated data in the `system-data-downsampled` bucket under the
`my-org` organization.
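
Assembled from the steps above, a minimal sketch of the task might look like the following Flux script (the task, bucket, and organization names are the examples used in this description):

```js
// Task options: run this task once a week
option task = {name: "cq-mem-data-1w", every: 1w}

// All data from the last 2 weeks in the "mem" measurement
data = from(bucket: "system-data")
  |> range(start: -2w)
  |> filter(fn: (r) => r._measurement == "mem")

// Window the data into 1 hour intervals, average each interval,
// and store the aggregated data in the downsampled bucket
data
  |> aggregateWindow(every: 1h, fn: mean)
  |> to(bucket: "system-data-downsampled", org: "my-org")
```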
@ -54,8 +54,8 @@ in form fields when creating the task.
{{% /note %}}
## Define a data source
Define a data source using Flux's [`from()` function](/v2.0/reference/flux/functions/built-in/inputs/from/)
or any other [Flux input functions](/v2.0/reference/flux/functions/built-in/inputs/).
Define a data source using Flux's [`from()` function](/v2.0/reference/flux/stdlib/built-in/inputs/from/)
or any other [Flux input functions](/v2.0/reference/flux/stdlib/built-in/inputs/).
For convenience, consider creating a variable that includes the sourced data with
the required time range and any relevant filters.
@ -88,7 +88,7 @@ specific use case.
The example below illustrates a task that downsamples data by calculating the average of set intervals.
It uses the `data` variable defined [above](#define-a-data-source) as the data source.
It then windows the data into 5 minute intervals and calculates the average of each
window using the [`aggregateWindow()` function](/v2.0/reference/flux/functions/built-in/transformations/aggregates/aggregatewindow/).
window using the [`aggregateWindow()` function](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/aggregatewindow/).
```js
data
@ -104,7 +104,7 @@ _See [Common tasks](/v2.0/process-data/common-tasks) for examples of tasks commo
In the vast majority of task use cases, once data is transformed, it needs to be sent and stored somewhere.
This could be a separate bucket with a different retention policy, another measurement, or even an alert endpoint _(Coming)_.
The example below uses Flux's [`to()` function](/v2.0/reference/flux/functions/built-in/outputs/to)
The example below uses Flux's [`to()` function](/v2.0/reference/flux/stdlib/built-in/outputs/to)
to send the transformed data to another bucket:
```js
@ -55,8 +55,6 @@ The InfluxDB UI provides multiple ways to create a task:
6. In the right panel, enter your task script.
7. Click **Save** in the upper right.
{{< img-hd src="/img/2-0-tasks-create-edit.png" title="Create a task" />}}
### Import a task
1. Click on the **Tasks** icon in the left navigation menu.
@ -60,35 +60,50 @@ In your request, set the following:
- Your organization via the `org` or `orgID` URL parameters.
- `Authorization` header to `Token ` + your authentication token.
- `accept` header to `application/csv`.
- `content-type` header to `application/vnd.flux`.
- `Accept` header to `application/csv`.
- `Content-type` header to `application/vnd.flux`.
- Your plain text query as the request's raw data.
This lets you POST the Flux query in plain text and receive the annotated CSV response.
InfluxDB returns the query results in [annotated CSV](/v2.0/reference/annotated-csv/).
{{% note %}}
#### Use gzip to compress the query response
To compress the query response, set the `Accept-Encoding` header to `gzip`.
This saves network bandwidth, but increases server-side load.
{{% /note %}}
Below is an example `curl` command that queries InfluxDB:
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[Multi-line](#)
[Single-line](#)
[Without compression](#)
[With compression](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```bash
curl http://localhost:9999/api/v2/query?org=my-org -XPOST -sS \
-H 'Authorization: Token YOURAUTHTOKEN' \
-H 'accept:application/csv' \
-H 'content-type:application/vnd.flux' \
  -d 'from(bucket:"test")
  |> range(start:-1000h)
  |> group(columns:["_measurement"], mode:"by")
  |> sum()'
-H 'Authorization: Token YOURAUTHTOKEN' \
-H 'Accept: application/csv' \
-H 'Content-type: application/vnd.flux' \
  -d 'from(bucket:"test")
  |> range(start:-1000h)
  |> group(columns:["_measurement"], mode:"by")
  |> sum()'
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```bash
curl http://localhost:9999/api/v2/query?org=my-org -XPOST -sS -H 'Authorization: Token TOKENSTRINGHERE' -H 'accept:application/csv' -H 'content-type:application/vnd.flux' -d 'from(bucket:"test") |> range(start:-1000h) |> group(columns:["_measurement"], mode:"by") |> sum()'
curl http://localhost:9999/api/v2/query?org=my-org -XPOST -sS \
-H 'Authorization: Token YOURAUTHTOKEN' \
-H 'Accept: application/csv' \
-H 'Content-type: application/vnd.flux' \
-H 'Accept-Encoding: gzip' \
  -d 'from(bucket:"test")
  |> range(start:-1000h)
  |> group(columns:["_measurement"], mode:"by")
  |> sum()'
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
@ -11,7 +11,7 @@ menu:
parent: Query data
related:
- /v2.0/reference/flux/
- /v2.0/reference/flux/functions/
- /v2.0/reference/flux/stdlib/
---
Flux is InfluxData's functional data scripting language designed for querying,
@ -9,9 +9,9 @@ menu:
weight: 201
related:
- /v2.0/query-data/guides/
- /v2.0/reference/flux/functions/built-in/inputs/from
- /v2.0/reference/flux/functions/built-in/transformations/range
- /v2.0/reference/flux/functions/built-in/transformations/filter
- /v2.0/reference/flux/stdlib/built-in/inputs/from
- /v2.0/reference/flux/stdlib/built-in/transformations/range
- /v2.0/reference/flux/stdlib/built-in/transformations/filter
---
This guide walks through the basics of using Flux to query data from InfluxDB.
@ -23,8 +23,8 @@ Every Flux query needs the following:
## 1. Define your data source
Flux's [`from()`](/v2.0/reference/flux/functions/built-in/inputs/from) function defines an InfluxDB data source.
It requires a [`bucket`](/v2.0/reference/flux/functions/built-in/inputs/from#bucket) parameter.
Flux's [`from()`](/v2.0/reference/flux/stdlib/built-in/inputs/from) function defines an InfluxDB data source.
It requires a [`bucket`](/v2.0/reference/flux/stdlib/built-in/inputs/from#bucket) parameter.
The following examples use `example-bucket` as the bucket name.
```js
@ -36,7 +36,7 @@ Flux requires a time range when querying time series data.
"Unbounded" queries are very resource-intensive and as a protective measure,
Flux will not query the database without a specified range.
Use the pipe-forward operator (`|>`) to pipe data from your data source into the [`range()`](/v2.0/reference/flux/functions/built-in/transformations/range)
Use the pipe-forward operator (`|>`) to pipe data from your data source into the [`range()`](/v2.0/reference/flux/stdlib/built-in/transformations/range)
function, which specifies a time range for your query.
It accepts two properties: `start` and `stop`.
Ranges can be **relative** using negative [durations](/v2.0/reference/flux/language/lexical-elements#duration-literals)
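
For example, a relative range and an absolute range might look like this (a sketch using the example bucket above):

```js
// Relative range: query data from the last hour
from(bucket:"example-bucket")
  |> range(start: -1h)

// Absolute range: query a fixed window of time
from(bucket:"example-bucket")
  |> range(start: 2019-09-01T00:00:00Z, stop: 2019-09-02T00:00:00Z)
```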
@ -8,15 +8,15 @@ menu:
parent: Get started with Flux
weight: 202
related:
- /v2.0/reference/flux/functions/built-in/transformations/aggregates/aggregatewindow
- /v2.0/reference/flux/functions/built-in/transformations/window
- /v2.0/reference/flux/stdlib/built-in/transformations/aggregates/aggregatewindow
- /v2.0/reference/flux/stdlib/built-in/transformations/window
---
When [querying data from InfluxDB](/v2.0/query-data/get-started/query-influxdb),
you often need to transform that data in some way.
Common examples are aggregating data into averages, downsampling data, etc.
This guide demonstrates using [Flux functions](/v2.0/reference/flux/functions) to transform your data.
This guide demonstrates using [Flux functions](/v2.0/reference/flux/stdlib) to transform your data.
It walks through creating a Flux script that partitions data into windows of time,
averages the `_value`s in each window, and outputs the averages as a new table.
@ -39,13 +39,13 @@ from(bucket:"example-bucket")
## Flux functions
Flux provides a number of functions that perform specific operations, transformations, and tasks.
You can also [create custom functions](/v2.0/query-data/guides/custom-functions) in your Flux queries.
_Functions are covered in detail in the [Flux functions](/v2.0/reference/flux/functions) documentation._
_Functions are covered in detail in the [Flux functions](/v2.0/reference/flux/stdlib) documentation._
A common type of function used when transforming data queried from InfluxDB is an aggregate function.
Aggregate functions take a set of `_value`s in a table, aggregate them, and transform
them into a new value.
This example uses the [`mean()` function](/v2.0/reference/flux/functions/built-in/transformations/aggregates/mean)
This example uses the [`mean()` function](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/mean)
to average values within each time window.
{{% note %}}
@ -55,7 +55,7 @@ It's just good to understand the steps in the process.
{{% /note %}}
## Window your data
Flux's [`window()` function](/v2.0/reference/flux/functions/built-in/transformations/window) partitions records based on a time value.
Flux's [`window()` function](/v2.0/reference/flux/stdlib/built-in/transformations/window) partitions records based on a time value.
Use the `every` parameter to define a duration of each window.
For this example, window data in five minute intervals (`5m`).
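
A sketch of the windowing step described above (the time range and filter are illustrative):

```js
from(bucket:"example-bucket")
  |> range(start: -12h)
  |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent")
  // Partition the results into 5 minute windows
  |> window(every: 5m)
```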
@ -78,7 +78,7 @@ When visualized, each table is assigned a unique color.
## Aggregate windowed data
Flux aggregate functions take the `_value`s in each table and aggregate them in some way.
Use the [`mean()` function](/v2.0/reference/flux/functions/built-in/transformations/aggregates/mean) to average the `_value`s of each table.
Use the [`mean()` function](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/mean) to average the `_value`s of each table.
```js
from(bucket:"example-bucket")
@ -104,7 +104,7 @@ Aggregate functions don't infer what time should be used for the aggregate value
Therefore the `_time` column is dropped.
A `_time` column is required in the [next operation](#unwindow-aggregate-tables).
To add one, use the [`duplicate()` function](/v2.0/reference/flux/functions/built-in/transformations/duplicate)
To add one, use the [`duplicate()` function](/v2.0/reference/flux/stdlib/built-in/transformations/duplicate)
to duplicate the `_stop` column as the `_time` column for each windowed table.
```js
@ -149,7 +149,7 @@ process helps to understand how data changes "shape" as it is passed through eac
Flux provides (and allows you to create) "helper" functions that abstract many of these steps.
The same operation performed in this guide can be accomplished using the
[`aggregateWindow()` function](/v2.0/reference/flux/functions/built-in/transformations/aggregates/aggregatewindow).
[`aggregateWindow()` function](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/aggregatewindow).
```js
from(bucket:"example-bucket")
@ -27,9 +27,9 @@ Conditional expressions are most useful in the following contexts:
- When defining variables.
- When using functions that operate on a single row at a time (
[`filter()`](/v2.0/reference/flux/functions/built-in/transformations/filter/),
[`map()`](/v2.0/reference/flux/functions/built-in/transformations/map/),
[`reduce()`](/v2.0/reference/flux/functions/built-in/transformations/aggregates/reduce) ).
[`filter()`](/v2.0/reference/flux/stdlib/built-in/transformations/filter/),
[`map()`](/v2.0/reference/flux/stdlib/built-in/transformations/map/),
[`reduce()`](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/reduce) ).
## Examples
@ -72,7 +72,7 @@ from(bucket: "example-bucket")
### Conditionally transform column values with map()
The following example uses the [`map()` function](/v2.0/reference/flux/functions/built-in/transformations/map/)
The following example uses the [`map()` function](/v2.0/reference/flux/stdlib/built-in/transformations/map/)
to conditionally transform column values.
It sets the `level` column to a specific string based on `_value` column.
@ -119,8 +119,8 @@ from(bucket: "example-bucket")
{{< /code-tabs-wrapper >}}
### Conditionally increment a count with reduce()
The following example uses the [`aggregateWindow()`](/v2.0/reference/flux/functions/built-in/transformations/aggregates/aggregatewindow/)
and [`reduce()`](/v2.0/reference/flux/functions/built-in/transformations/aggregates/reduce/)
The following example uses the [`aggregateWindow()`](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/aggregatewindow/)
and [`reduce()`](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/reduce/)
functions to count the number of records in every five minute window that exceed a defined threshold.
{{< code-tabs-wrapper >}}
@ -70,14 +70,14 @@ functionName = (tables=<-) => tables |> functionOperations
###### Multiply row values by x
The example below defines a `multByX` function that multiplies the `_value` column
of each row in the input table by the `x` parameter.
It uses the [`map()` function](/v2.0/reference/flux/functions/built-in/transformations/map)
It uses the [`map()` function](/v2.0/reference/flux/stdlib/built-in/transformations/map)
to modify each `_value`.
```js
// Function definition
multByX = (tables=<-, x) =>
tables
|> map(fn: (r) => r._value * x)
|> map(fn: (r) => ({ r with _value: r._value * x}))
// Function usage
from(bucket: "example-bucket")
@ -104,9 +104,9 @@ Defaults are overridden by explicitly defining the parameter in the function cal
###### Get the winner or the "winner"
The example below defines a `getWinner` function that returns the record with the highest
or lowest `_value` (winner versus "winner") depending on the `noSarcasm` parameter which defaults to `true`.
It uses the [`sort()` function](/v2.0/reference/flux/functions/built-in/transformations/sort)
It uses the [`sort()` function](/v2.0/reference/flux/stdlib/built-in/transformations/sort)
to sort records in either descending or ascending order.
It then uses the [`limit()` function](/v2.0/reference/flux/functions/built-in/transformations/limit)
It then uses the [`limit()` function](/v2.0/reference/flux/stdlib/built-in/transformations/limit)
to return the first record from the sorted table.
```js

View File

@ -10,9 +10,9 @@ weight: 301
---
To aggregate your data, use the Flux
[built-in aggregate functions](/v2.0/reference/flux/functions/built-in/transformations/aggregates/)
[built-in aggregate functions](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/)
or create custom aggregate functions using the
[`reduce()`function](/v2.0/reference/flux/functions/built-in/transformations/aggregates/reduce/).
[`reduce()` function](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/reduce/).
## Aggregate function characteristics
Aggregate functions all have the same basic characteristics:
@ -22,7 +22,7 @@ Aggregate functions all have the same basic characteristics:
## How reduce() works
The `reduce()` function operates on one row at a time using the function defined in
the [`fn` parameter](/v2.0/reference/flux/functions/built-in/transformations/aggregates/reduce/#fn).
the [`fn` parameter](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/reduce/#fn).
The `fn` function maps keys to specific values using two [objects](/v2.0/query-data/get-started/syntax-basics/#objects)
specified by the following parameters:
@ -32,7 +32,7 @@ specified by the following parameters:
| `accumulator` | An object that contains values used in each row's aggregate calculation. |
{{% note %}}
The `reduce()` function's [`identity` parameter](/v2.0/reference/flux/functions/built-in/transformations/aggregates/reduce/#identity)
The `reduce()` function's [`identity` parameter](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/reduce/#identity)
defines the initial `accumulator` object.
{{% /note %}}
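As a minimal sketch (the bucket name and range are placeholders), the following sums the `_value` column in each input table, with `identity` supplying the initial `sum`:

```js
from(bucket: "example-bucket")
  |> range(start: -5m)
  |> reduce(
      identity: {sum: 0.0},
      fn: (r, accumulator) => ({sum: r._value + accumulator.sum})
  )
```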
@ -50,7 +50,7 @@ in an input table.
```
{{% note %}}
To preserve existing columns, [use the `with` operator](/v2.0/reference/flux/functions/built-in/transformations/aggregates/reduce/#preserve-columns)
To preserve existing columns, [use the `with` operator](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/reduce/#preserve-columns)
when mapping values in the `r` object.
{{% /note %}}
@ -150,7 +150,7 @@ and the `reduce()` function to aggregate rows in each input table.
### Create a custom average function
This example illustrates how to create a function that averages values in a table.
_This is meant for demonstration purposes only.
The built-in [`mean()` function](/v2.0/reference/flux/functions/built-in/transformations/aggregates/mean/)
The built-in [`mean()` function](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/mean/)
does the same thing and is much more performant._
{{< code-tabs-wrapper >}}

View File

@ -0,0 +1,69 @@
---
title: Check if a value exists
seotitle: Use Flux to check if a value exists
description: >
Use the Flux `exists` operator to check if an object contains a key or if that
key's value is `null`.
v2.0/tags: [exists]
menu:
v2_0:
name: Check if a value exists
parent: How-to guides
weight: 209
---
Use the Flux `exists` operator to check if an object contains a key or if that
key's value is `null`.
```js
p = {firstName: "John", lastName: "Doe", age: 42}
exists p.firstName
// Returns true
exists p.height
// Returns false
```
Use `exists` with row functions (
[`filter()`](/v2.0/reference/flux/stdlib/built-in/transformations/filter/),
[`map()`](/v2.0/reference/flux/stdlib/built-in/transformations/map/),
[`reduce()`](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/reduce/))
to check if a row includes a column or if the value for that column is `null`.
#### Filter out null values
```js
from(bucket: "example-bucket")
|> range(start: -5m)
|> filter(fn: (r) => exists r._value)
```
#### Map values based on existence
```js
from(bucket: "default")
|> range(start: -30s)
|> map(fn: (r) => ({
r with
human_readable:
if exists r._value then "${r._field} is ${string(v:r._value)}."
else "${r._field} has no value."
}))
```
#### Ignore null values in a custom aggregate function
```js
customSumProduct = (tables=<-) =>
tables
|> reduce(
identity: {sum: 0.0, product: 1.0},
fn: (r, accumulator) => ({
r with
sum:
if exists r._value then r._value + accumulator.sum
else accumulator.sum,
product:
if exists r._value then r._value * accumulator.product
else accumulator.product
})
)
```

View File

@ -28,7 +28,7 @@ Understanding how modifying group keys shapes output data is key to successfully
grouping and transforming data into your desired output.
## group() Function
Flux's [`group()` function](/v2.0/reference/flux/functions/built-in/transformations/group) defines the
Flux's [`group()` function](/v2.0/reference/flux/stdlib/built-in/transformations/group) defines the
group key for output tables, i.e. grouping records based on values for specific columns.
###### group() example

View File

@ -14,7 +14,7 @@ Histograms provide valuable insight into the distribution of your data.
This guide walks through using Flux's `histogram()` function to transform your data into a **cumulative histogram**.
## histogram() function
The [`histogram()` function](/v2.0/reference/flux/functions/built-in/transformations/histogram) approximates the
The [`histogram()` function](/v2.0/reference/flux/stdlib/built-in/transformations/histogram) approximates the
cumulative distribution of a dataset by counting data frequencies for a list of "bins."
A **bin** is simply a range in which a data point falls.
All data points that are less than or equal to the bound are counted in the bin.
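As a sketch (the measurement, field, and bin boundaries here are assumptions for illustration), a query that bins memory usage percentages might look like:

```js
from(bucket: "example-bucket")
  |> range(start: -5m)
  |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent")
  |> histogram(bins: [0.0, 25.0, 50.0, 75.0, 100.0])
```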
@ -41,7 +41,7 @@ Flux provides two helper functions for generating histogram bins.
Each generates an array of floats designed to be used in the `histogram()` function's `bins` parameter.
### linearBins()
The [`linearBins()` function](/v2.0/reference/flux/functions/built-in/misc/linearbins) generates a list of linearly separated floats.
The [`linearBins()` function](/v2.0/reference/flux/stdlib/built-in/misc/linearbins) generates a list of linearly separated floats.
```js
linearBins(start: 0.0, width: 10.0, count: 10)
@ -50,10 +50,10 @@ linearBins(start: 0.0, width: 10.0, count: 10)
```
### logarithmicBins()
The [`logarithmicBins()` function](/v2.0/reference/flux/functions/built-in/misc/logarithmicbins) generates a list of exponentially separated floats.
The [`logarithmicBins()` function](/v2.0/reference/flux/stdlib/built-in/misc/logarithmicbins) generates a list of exponentially separated floats.
```js
logarithmicBins(start: 1.0, factor: 2.0, count: 10, infinty: true)
logarithmicBins(start: 1.0, factor: 2.0, count: 10, infinity: true)
// Generated list: [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, +Inf]
```
@ -74,7 +74,7 @@ Because the Histogram visualization uses visualization controls to creates bins
{{% note %}}
Output of the [`histogram()` function](#histogram-function) is **not** compatible
with the Histogram visualization type.
View the example [below](#visualize-error-counts-by-severity-over-time).
View the example [below](#visualize-errors-by-severity).
{{% /note %}}
## Examples
@ -160,7 +160,8 @@ Table: keys: [_start, _stop, _field, _measurement, host]
```
### Visualize errors by severity
Use the [Telegraf Syslog plugin](Telegraf Syslog plugin) to collect error information from your system.
Use the [Telegraf Syslog plugin](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/syslog)
to collect error information from your system.
Query the `severity_code` field in the `syslog` measurement:
```js

View File

@ -10,7 +10,7 @@ menu:
weight: 205
---
The [`join()` function](/v2.0/reference/flux/functions/built-in/transformations/join) merges two or more
The [`join()` function](/v2.0/reference/flux/stdlib/built-in/transformations/join) merges two or more
input streams, whose values are equal on a set of common columns, into a single output stream.
Flux allows you to join on any columns common between two data streams and opens the door
for operations such as cross-measurement joins and math across measurements.
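A minimal sketch of such a join (the bucket, measurements, and join columns are assumptions):

```js
cpu = from(bucket: "example-bucket")
  |> range(start: -5m)
  |> filter(fn: (r) => r._measurement == "cpu")

mem = from(bucket: "example-bucket")
  |> range(start: -5m)
  |> filter(fn: (r) => r._measurement == "mem")

// Merge the two streams on their shared time and host columns
join(tables: {cpu: cpu, mem: mem}, on: ["_time", "host"])
```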
@ -205,7 +205,7 @@ These represent the columns with values unique to the two input tables.
## Calculate and create a new table
With the two streams of data joined into a single table, use the
[`map()` function](/v2.0/reference/flux/functions/built-in/transformations/map)
[`map()` function](/v2.0/reference/flux/stdlib/built-in/transformations/map)
to build a new table by mapping the existing `_time` column to a new `_time`
column and dividing `_value_mem` by `_value_proc` and mapping it to a
new `_value` column.
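A sketch of that mapping (the `_value_mem` and `_value_proc` column names come from the joined streams described above):

```js
|> map(fn: (r) => ({
    _time: r._time,
    _value: r._value_mem / r._value_proc
}))
```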

View File

@ -0,0 +1,108 @@
---
title: Manipulate timestamps with Flux
description: >
Use Flux to process and manipulate timestamps.
menu:
v2_0:
name: Manipulate timestamps
parent: How-to guides
weight: 209
---
Every point stored in InfluxDB has an associated timestamp.
Use Flux to process and manipulate timestamps to suit your needs.
- [Convert timestamp format](#convert-timestamp-format)
- [Time-related Flux functions](#time-related-flux-functions)
## Convert timestamp format
### Convert nanosecond epoch timestamp to RFC3339
Use the [`time()` function](/v2.0/reference/flux/stdlib/built-in/transformations/type-conversions/time/)
to convert a **nanosecond** epoch timestamp to an RFC3339 timestamp.
```js
time(v: 1568808000000000000)
// Returns 2019-09-18T12:00:00.000000000Z
```
### Convert RFC3339 to nanosecond epoch timestamp
Use the [`uint()` function](/v2.0/reference/flux/stdlib/built-in/transformations/type-conversions/uint/)
to convert an RFC3339 timestamp to a nanosecond epoch timestamp.
```js
uint(v: 2019-09-18T12:00:00.000000000Z)
// Returns 1568808000000000000
```
### Calculate the duration between two timestamps
Flux doesn't support mathematical operations using [time type](/v2.0/reference/flux/language/types/#time-types) values.
To calculate the duration between two timestamps:
1. Use the `uint()` function to convert each timestamp to a nanosecond epoch timestamp.
2. Subtract one nanosecond epoch timestamp from the other.
3. Use the `duration()` function to convert the result into a duration.
```js
time1 = uint(v: 2019-09-17T21:12:05Z)
time2 = uint(v: 2019-09-18T22:16:35Z)
duration(v: time2 - time1)
// Returns 25h4m30s
```
{{% note %}}
Flux doesn't support duration column types.
To store a duration in a column, use the [`string()` function](/v2.0/reference/flux/stdlib/built-in/transformations/type-conversions/string/)
to convert the duration to a string.
{{% /note %}}
## Time-related Flux functions
### Retrieve the current time
Use the [`now()` function](/v2.0/reference/flux/stdlib/built-in/misc/now/) to
return the current UTC time in RFC3339 format.
```js
now()
```
### Add a duration to a timestamp
The [`experimental.addDuration()` function](/v2.0/reference/flux/stdlib/experimental/addduration/)
adds a duration to a specified time and returns the resulting time.
{{% warn %}}
By using `experimental.addDuration()`, you accept the
[risks of experimental functions](/v2.0/reference/flux/stdlib/experimental/#use-experimental-functions-at-your-own-risk).
{{% /warn %}}
```js
import "experimental"
experimental.addDuration(
d: 6h,
to: 2019-09-16T12:00:00Z,
)
// Returns 2019-09-16T18:00:00.000000000Z
```
### Subtract a duration from a timestamp
The [`experimental.subDuration()` function](/v2.0/reference/flux/stdlib/experimental/subduration/)
subtracts a duration from a specified time and returns the resulting time.
{{% warn %}}
By using `experimental.subDuration()`, you accept the
[risks of experimental functions](/v2.0/reference/flux/stdlib/experimental/#use-experimental-functions-at-your-own-risk).
{{% /warn %}}
```js
import "experimental"
experimental.subDuration(
d: 6h,
from: 2019-09-16T12:00:00Z,
)
// Returns 2019-09-16T06:00:00.000000000Z
```

View File

@ -40,7 +40,7 @@ Otherwise, you will get an error similar to:
Error: type error: float != int
```
To convert operands to the same type, use [type-conversion functions](/v2.0/reference/flux/functions/built-in/transformations/type-conversions/)
To convert operands to the same type, use [type-conversion functions](/v2.0/reference/flux/stdlib/built-in/transformations/type-conversions/)
or manually format operands.
The operand data type determines the output data type.
For example:
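A sketch of how operand types drive the output type (illustrative values):

```js
100 + 1              // Integer operands return an integer: 101
100.0 + 1.0          // Float operands return a float: 101.0
float(v: 100) + 1.0  // Convert the integer first, then add: 101.0
```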
@ -82,7 +82,7 @@ percent(sample: 20.0, total: 80.0)
To transform multiple values in an input stream, your function needs to:
- [Handle piped-forward data](/v2.0/query-data/guides/custom-functions/#functions-that-manipulate-piped-forward-data).
- Use the [`map()` function](/v2.0/reference/flux/functions/built-in/transformations/map) to iterate over each row.
- Use the [`map()` function](/v2.0/reference/flux/stdlib/built-in/transformations/map) to iterate over each row.
The example `multiplyByX()` function below includes:
@ -146,7 +146,7 @@ data
#### Include partial gigabytes
Because the original metric (bytes) is an integer, the output of the operation is an integer and does not include partial GBs.
To calculate partial GBs, convert the `_value` column and its values to floats using the
[`float()` function](/v2.0/reference/flux/functions/built-in/transformations/type-conversions/float)
[`float()` function](/v2.0/reference/flux/stdlib/built-in/transformations/type-conversions/float)
and format the denominator in the division operation as a float.
```js

View File

@ -12,7 +12,7 @@ menu:
weight: 206
---
The [`sort()`function](/v2.0/reference/flux/functions/built-in/transformations/sort)
The [`sort()` function](/v2.0/reference/flux/stdlib/built-in/transformations/sort)
orders the records within each table.
The following example orders system uptime first by region, then host, then value.
@ -26,7 +26,7 @@ from(bucket:"example-bucket")
|> sort(columns:["region", "host", "_value"])
```
The [`limit()` function](/v2.0/reference/flux/functions/built-in/transformations/limit)
The [`limit()` function](/v2.0/reference/flux/stdlib/built-in/transformations/limit)
limits the number of records in output tables to a fixed number, `n`.
The following example shows up to 10 records from the past hour.
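A minimal sketch of such a query (the bucket name is assumed):

```js
from(bucket: "example-bucket")
  |> range(start: -1h)
  |> limit(n: 10)
```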
@ -52,6 +52,6 @@ from(bucket:"example-bucket")
```
You now have created a Flux query that sorts and limits data.
Flux also provides the [`top()`](/v2.0/reference/flux/functions/built-in/transformations/selectors/top)
and [`bottom()`](/v2.0/reference/flux/functions/built-in/transformations/selectors/bottom)
Flux also provides the [`top()`](/v2.0/reference/flux/stdlib/built-in/transformations/selectors/top)
and [`bottom()`](/v2.0/reference/flux/stdlib/built-in/transformations/selectors/bottom)
functions to perform both of these operations at the same time.
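For example, a sketch using `top()` to sort and limit in one call (the bucket name is assumed; `top()` sorts on `_value` by default):

```js
from(bucket: "example-bucket")
  |> range(start: -1h)
  |> top(n: 10)
```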

View File

@ -12,7 +12,7 @@ weight: 207
---
The [Flux](/v2.0/reference/flux) `sql` package provides functions for working with SQL data sources.
[`sql.from()`](/v2.0/reference/flux/functions/sql/from/) lets you query SQL data sources
[`sql.from()`](/v2.0/reference/flux/stdlib/sql/from/) lets you query SQL data sources
like [PostgreSQL](https://www.postgresql.org/) and [MySQL](https://www.mysql.com/)
and use the results with InfluxDB dashboards, tasks, and other operations.
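A minimal sketch of a PostgreSQL query (the driver string, credentials, and table name are placeholders):

```js
import "sql"

sql.from(
  driverName: "postgres",
  dataSourceName: "postgresql://user:password@localhost",
  query: "SELECT * FROM example_table"
)
```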
@ -59,7 +59,7 @@ sql.from(
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
_See the [`sql.from()` documentation](/v2.0/reference/flux/functions/sql/from/) for
_See the [`sql.from()` documentation](/v2.0/reference/flux/stdlib/sql/from/) for
information about required function parameters._
## Join SQL data with data in InfluxDB
@ -94,7 +94,7 @@ join(tables: {metric: sensorMetrics, info: sensorInfo}, on: ["sensor_id"])
## Use SQL results to populate dashboard variables
Use `sql.from()` to [create dashboard variables](/v2.0/visualize-data/variables/create-variable/)
from SQL query results.
The following example uses the [air sensor sample data](#sample-data) below to
The following example uses the [air sensor sample data](#sample-sensor-data) below to
create a variable that lets you select the location of a sensor.
```js
@ -167,7 +167,7 @@ To use `air-sensor-data.rb`:
_**Note:** Use the `--help` flag to view other configuration options._
5. [Query your target bucket](v2.0/query-data/execute-queries/) to ensure the
5. [Query your target bucket](/v2.0/query-data/execute-queries/) to ensure the
generated data is writing successfully.
The generator doesn't catch errors from write requests, so it will continue running
even if data is not writing to InfluxDB successfully.
@ -212,6 +212,6 @@ To use `air-sensor-data.rb`:
#### Import the sample data dashboard
Download and import the Air Sensors dashboard to visualize the generated data:
<a class="btn download" href="/downloads/air_sensors_dashboard.json" download>Download Air Sensors dashboard</a>
<a class="btn download" href="/downloads/air-sensors-dashboard.json" download>Download Air Sensors dashboard</a>
_For information about importing a dashboard, see [Create a dashboard](/v2.0/visualize-data/dashboards/create-dashboard/#create-a-new-dashboard)._

View File

@ -86,7 +86,7 @@ Table: keys: [_start, _stop, _field, _measurement]
{{% /truncate %}}
## Windowing data
Use the [`window()` function](/v2.0/reference/flux/functions/built-in/transformations/window)
Use the [`window()` function](/v2.0/reference/flux/stdlib/built-in/transformations/window)
to group your data based on time bounds.
The most common parameter passed with `window()` is `every`, which
defines the duration of time between windows.
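A minimal sketch (the bucket name and window duration are placeholders):

```js
from(bucket: "example-bucket")
  |> range(start: -30m)
  |> window(every: 5m)
```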
@ -170,14 +170,14 @@ When visualized in the InfluxDB UI, each window table is displayed in a differen
![Windowed data](/img/simple-windowed-data.png)
## Aggregate data
[Aggregate functions](/v2.0/reference/flux/functions/built-in/transformations/aggregates) take the values
[Aggregate functions](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates) take the values
of all rows in a table and use them to perform an aggregate operation.
The result is output as a new value in a single-row table.
Since windowed data is split into separate tables, aggregate operations run against
each table separately and output new tables containing only the aggregated value.
For this example, use the [`mean()` function](/v2.0/reference/flux/functions/built-in/transformations/aggregates/mean)
For this example, use the [`mean()` function](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/mean)
to output the average of each window:
```js
@ -241,7 +241,7 @@ These represent the lower and upper bounds of the time window.
Many Flux functions rely on the `_time` column.
To further process your data after an aggregate function, you need to re-add `_time`.
Use the [`duplicate()` function](/v2.0/reference/flux/functions/built-in/transformations/duplicate) to
Use the [`duplicate()` function](/v2.0/reference/flux/stdlib/built-in/transformations/duplicate) to
duplicate either the `_start` or `_stop` column as a new `_time` column.
```js
@ -329,7 +329,7 @@ With the aggregate values in a single table, data points in the visualization ar
You have now created a Flux query that windows and aggregates data.
The data transformation process outlined in this guide should be used for all aggregation operations.
Flux also provides the [`aggregateWindow()` function](/v2.0/reference/flux/functions/built-in/transformations/aggregates/aggregatewindow)
Flux also provides the [`aggregateWindow()` function](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/aggregatewindow)
which performs all these separate functions for you.
The following Flux query will return the same results:
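A sketch of the `aggregateWindow()` form (the bucket, range, and window parameters are placeholders to match the example above):

```js
from(bucket: "example-bucket")
  |> range(start: -30m)
  |> aggregateWindow(every: 5m, fn: mean)
```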

View File

@ -8,11 +8,12 @@ menu:
name: Annotated CSV
---
Annotated CSV (comma-separated values) format is used to encode HTTP responses and results returned to the Flux [`csv.from()` function](https://v2.docs.influxdata.com/v2.0/reference/flux/functions/csv/from/).
Annotated CSV (comma-separated values) format is used to encode HTTP responses and results returned to the Flux [`csv.from()` function](https://v2.docs.influxdata.com/v2.0/reference/flux/stdlib/csv/from/).
CSV tables must be encoded in UTF-8 and Unicode Normal Form C as defined in [UAX15](http://www.unicode.org/reports/tr15/). Line endings must be CRLF (Carriage Return Line Feed) as defined by the `text/csv` MIME type in [RFC 4180](https://tools.ietf.org/html/rfc4180).
## Examples
In this topic, you'll find examples of valid CSV syntax for responses to the following query:
```js
@ -23,12 +24,15 @@ from(bucket:"mydb/autogen")
```
## CSV response format
Flux supports the encodings listed below.
### Tables
A table may have the following rows and columns.
#### Rows
- **Annotation rows**: describe column properties.
- **Header row**: defines column labels (one header row per table).
@ -36,6 +40,7 @@ A table may have the following rows and columns.
- **Record row**: describes data in the table (one record per row).
##### Example
Encoding of a table with and without a header row.
{{< code-tabs-wrapper >}}
@ -63,6 +68,7 @@ my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:40Z,east,
{{< /code-tabs-wrapper >}}
#### Columns
In addition to the data columns, a table may include the following columns:
- **Annotation column**: Only used in annotation rows. Always the first column. Displays the name of an annotation. Value can be empty or a supported [annotation](#annotations). You'll notice a space for this column for the entire length of the table, so rows appear to start with `,`.
@ -72,6 +78,7 @@ In addition to the data columns, a table may include the following columns:
- **Table column**: Contains a unique ID for each table in a result.
### Multiple tables and results
If a file or data stream contains multiple tables or results, the following requirements must be met:
- A table column indicates which table a row belongs to.
@ -82,6 +89,7 @@ If a file or data stream contains multiple tables or results, the following requ
- Each new table boundary starts with new annotation and header rows.
##### Example
Encoding of two tables in the same result with the same schema (header row) and different schema.
{{< code-tabs-wrapper >}}
@ -119,6 +127,7 @@ my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:40Z,west,
{{< /code-tabs-wrapper >}}
### Dialect options
Flux supports the following dialect options for `text/csv` format.
| Option | Description| Default |
@ -130,27 +139,28 @@ Flux supports the following dialect options for `text/csv` format.
| **commentPrefix** | String prefix to identify a comment. Always added to annotations. |`#`|
### Annotations
Annotation rows are optional, describe column properties, and start with `#` (or commentPrefix value). The first column in an annotation row always contains the annotation name. Subsequent columns contain annotation values as shown in the table below.
Annotation rows describe column properties, and start with `#` (or commentPrefix value). The first column in an annotation row always contains the annotation name. Subsequent columns contain annotation values as shown in the table below.
|Annotation name | Values| Description |
| :-------- | :--------- | :-------|
| **datatype** | a [valid data type](#Valid-data-types) | Describes the type of data. |
| **datatype** | a [valid data type](#valid-data-types) | Describes the type of data. |
| **group** | boolean flag `true` or `false` | Indicates the column is part of the group key.|
| **default** | a [valid data type](#Valid-data-types) |Value to use for rows with an empty string value.|
| **default** | a [valid data type](#valid-data-types) |Value to use for rows with an empty string value.|
{{% note %}}
To encode a table with its group key, the `datatype`, `group`, and `default` annotations must be included. If a table has no rows, the `default` annotation provides the group key values.
{{% /note %}}
##### Example
Encoding of datatype and group annotations for two tables.
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[Datatype annotation](#)
[Group annotation](#)
[Datatype and group annotations](#)
{{% /code-tabs %}}
Example encoding of datatype, group, and default annotations.
{{% code-tab-content %}}
```js
#datatype,string,long,dateTime:RFC3339,dateTime:RFC3339,dateTime:RFC3339,string,string,double
import "csv"
a = "#datatype,string,long,dateTime:RFC3339,dateTime:RFC3339,dateTime:RFC3339,string,string,double
#group,false,false,false,false,false,false,false,false
#default,,,,,,,,
,result,table,_start,_stop,_time,region,host,_value
,my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:00Z,east,A,15.43
,my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:20Z,east,B,59.25
@ -158,46 +168,9 @@ Encoding of datatype and group annotations for two tables.
,my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:00Z,west,A,62.73
,my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:20Z,west,B,12.83
,my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:40Z,west,C,51.62
"
csv.from(csv:a) |> yield()
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```js
#group,false,false,true,true,false,true,false,false
,result,table,_start,_stop,_time,region,host,_value
,my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:00Z,east,A,15.43
,my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:20Z,east,B,59.25
,my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:40Z,east,C,52.62
,my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:00Z,west,A,62.73
,my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:20Z,west,B,12.83
,my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:40Z,west,C,51.62
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```js
#datatype,string,long,dateTime:RFC3339,dateTime:RFC3339,dateTime:RFC3339,string,string,double
,result,table,_start,_stop,_time,region,host,_value
#group,false,false,true,true,false,true,false,false
,result,table,_start,_stop,_time,region,host,_value
,my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:00Z,east,A,15.43
,my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:20Z,east,B,59.25
,my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:40Z,east,C,52.62
,my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:00Z,west,A,62.73
,my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:20Z,west,B,12.83
,my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:40Z,west,C,51.62
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
**Notes:**
{{% note %}}
To encode a table with its group key, the `datatype`, `group`, and `default` annotations must be included.
If a table has no rows, the `default` annotation provides the group key values.
{{% /note %}}
### Valid data types
@ -213,6 +186,7 @@ If a table has no rows, the `default` annotation provides the group key values.
| duration | duration | a length of time represented as an unsigned 64-bit integer number of nanoseconds |
## Errors
If an error occurs during execution, a table returns with:
- An error column that contains an error message.
@ -225,6 +199,7 @@ If an error occurs:
- After partial results are sent to the client, the error is encoded as the next table and remaining results are discarded. In this case, the HTTP status code remains 200 OK.
##### Example
Encoding for an error with the datatype annotation:
```js
#datatype,string,long

View File

@ -16,12 +16,18 @@ influxd inspect [subcommand]
```
## Subcommands
| Subcommand | Description |
|:---------- |:----------- |
| [export-blocks](/v2.0/reference/cli/influxd/inspect/export-blocks/) | Export block data |
| [report-tsm](/v2.0/reference/cli/influxd/inspect/report-tsm/) | Run TSM report |
| [verify-tsm](/v2.0/reference/cli/influxd/inspect/verify-tsm/) | Check the consistency of TSM files |
| [verify-wal](/v2.0/reference/cli/influxd/inspect/verify-wal/) | Check for corrupt WAL files |
| Subcommand | Description |
|:---------- |:----------- |
| [build-tsi](/v2.0/reference/cli/influxd/inspect/build-tsi/) | Rebuild the TSI index and series file. |
| [dump-tsi](/v2.0/reference/cli/influxd/inspect/dump-tsi/) | Output low level TSI information |
| [dumpwal](/v2.0/reference/cli/influxd/inspect/dumpwal/) | Output TSM data from WAL files |
| [export-blocks](/v2.0/reference/cli/influxd/inspect/export-blocks/) | Export block data |
| [export-index](/v2.0/reference/cli/influxd/inspect/export-index/) | Export TSI index data |
| [report-tsi](/v2.0/reference/cli/influxd/inspect/report-tsi/) | Report the cardinality of TSI files |
| [report-tsm](/v2.0/reference/cli/influxd/inspect/report-tsm/) | Run TSM report |
| [verify-seriesfile](/v2.0/reference/cli/influxd/inspect/verify-seriesfile/) | Verify the integrity of series files |
| [verify-tsm](/v2.0/reference/cli/influxd/inspect/verify-tsm/) | Check the consistency of TSM files |
| [verify-wal](/v2.0/reference/cli/influxd/inspect/verify-wal/) | Check for corrupt WAL files |
## Flags
| Flag | Description |

View File

@ -0,0 +1,58 @@
---
title: influxd inspect build-tsi
description: >
The `influxd inspect build-tsi` command rebuilds the TSI index and, if necessary,
the series file.
v2.0/tags: [tsi]
menu:
v2_0_ref:
parent: influxd inspect
weight: 301
---
The `influxd inspect build-tsi` command rebuilds the TSI index and, if necessary,
the series file.
## Usage
```sh
influxd inspect build-tsi [flags]
```
InfluxDB builds the index by reading all Time-Structured Merge tree (TSM) indexes
and Write Ahead Log (WAL) entries in the TSM and WAL data directories.
If the series file directory is missing, it rebuilds the series file.
If the TSI index directory already exists, the command will fail.
### Adjust performance
Use the following options to adjust the performance of the indexing process:
##### --max-log-file-size
`--max-log-file-size` determines how much of an index to store in memory before
compacting it into memory-mappable index files.
If you find the memory requirements of your TSI index are too high, consider
decreasing this setting.
##### --max-cache-size
`--max-cache-size` defines the maximum cache size.
The indexing process replays WAL files into a `tsm1.Cache`.
If the maximum cache size is too low, the indexing process will fail.
Increase `--max-cache-size` to account for the size of your WAL files.
##### --batch-size
`--batch-size` defines the size of the batches written into the index.
Altering the batch size can improve performance but may result in significantly
higher memory usage.
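For example, an invocation tuning all three options (the values shown are the defaults from the flag table below; adjust them for your workload):

```sh
# Rebuild the TSI index with explicit performance limits
influxd inspect build-tsi \
  --max-log-file-size 1048576 \
  --max-cache-size 1073741824 \
  --batch-size 10000
```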
## Flags
| Flag | Description | Input Type |
|:---- |:----------- |:----------:|
| `--batch-size` | The size of the batches to write to the index. Defaults to `10000`. [See above](#batch-size). | integer |
| `--concurrency` | Number of workers to dedicate to shard index building. Defaults to `GOMAXPROCS` (8 by default). | integer |
| `-h`, `--help` | Help for `build-tsi`. | |
| `--max-cache-size` | Maximum cache size. Defaults to `1073741824`. [See above](#max-cache-size). | uinteger |
| `--max-log-file-size` | Maximum log file size. Defaults to `1048576`. [See above](#max-log-file-size). | integer |
| `--sfile-path` | Path to the series file directory. Defaults to `~/.influxdbv2/engine/_series`. | string |
| `--tsi-path` | Path to the TSI index directory. Defaults to `~/.influxdbv2/engine/index`. | string |
| `--tsm-path` | Path to the TSM data directory. Defaults to `~/.influxdbv2/engine/data`. | string |
| `-v`, `--verbose` | Enable verbose output. | |
| `--wal-path` | Path to the WAL data directory. Defaults to `~/.influxdbv2/engine/wal`. | string |

View File

@ -0,0 +1,33 @@
---
title: influxd inspect dump-tsi
description: >
The `influxd inspect dump-tsi` command outputs low-level information about `tsi1` files.
v2.0/tags: [tsi, inspect]
menu:
v2_0_ref:
parent: influxd inspect
weight: 301
---
The `influxd inspect dump-tsi` command outputs low-level information about
Time Series Index (`tsi1`) files.
## Usage
```sh
influxd inspect dump-tsi [flags]
```
## Flags
| Flag | Description | Input Type |
|:---- |:----------- |:----------:|
| `-h`, `--help` | Help for `dump-tsi`. | |
| `--index-path` | Path to data engine index directory (defaults to `~/.influxdbv2/engine/index`). | string |
| `--measurement-filter` | Regular expression measurement filter. | string |
| `--measurements` | Show raw measurement data. | |
| `--series` | Show raw series data. | |
| `--series-path` | Path to series file (defaults to `~/.influxdbv2/engine/_series`). | string |
| `--tag-key-filter` | Regular expression tag key filter. | string |
| `--tag-keys` | Show raw tag key data. | |
| `--tag-value-filter` | Regular expression tag value filter. | string |
| `--tag-value-series` | Show raw series data for each value. | |
| `--tag-values` | Show raw tag value data. | |
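For example, to show raw series data for a subset of measurements (the filter pattern is illustrative):
```sh
# Show raw series data for measurements whose names start with "cpu"
influxd inspect dump-tsi --series --measurement-filter "^cpu"
```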

View File

@ -0,0 +1,68 @@
---
title: influxd inspect dumpwal
description: >
The `influxd inspect dumpwal` command outputs data from WAL files.
v2.0/tags: [wal, inspect]
menu:
v2_0_ref:
parent: influxd inspect
weight: 301
---
The `influxd inspect dumpwal` command outputs data from Write Ahead Log (WAL) files.
Given a list of file path globs (patterns that match `.wal` file paths),
the command parses and prints out entries in each file.
## Usage
```sh
influxd inspect dumpwal [flags] <globbing-patterns>
```
## Output details
The `--find-duplicates` flag determines the `influxd inspect dumpwal` output.
**Without `--find-duplicates`**, the command outputs the following for each file
that matches the specified [globbing patterns](#globbing-patterns):
- The file name
- For each entry in a file:
- The type of the entry (`[write]` or `[delete-bucket-range]`)
- The formatted entry contents
**With `--find-duplicates`**, the command outputs the following for each file
that matches the specified [globbing patterns](#globbing-patterns):
- The file name
- A list of keys with timestamps in the wrong order
## Arguments
### Globbing patterns
Globbing patterns provide partial paths used to match file paths and names.
##### Example globbing patterns
```sh
# Match any file or folder starting with "foo"
foo*
# Match any file or folder starting with "foo" and ending with .txt
foo*.txt
# Match any file or folder ending with "foo"
*foo
# Match foo/bar/baz but not foo/bar/bin/baz
foo/*/baz
# Match foo/baz and foo/bar/baz and foo/bar/bin/baz
foo/**/baz
# Match cat but not can or c/t
/c?t
```
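As a sketch, patterns like these are passed directly to the command (the paths assume the default WAL directory):
```sh
# Parse and print every entry in each matching WAL file
influxd inspect dumpwal "$HOME/.influxdbv2/engine/wal/*.wal"

# Report only keys with out-of-order timestamps in the same files
influxd inspect dumpwal --find-duplicates "$HOME/.influxdbv2/engine/wal/*.wal"
```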
## Flags
| Flag | Description |
|:---- |:----------- |
| `--find-duplicates` | Ignore dumping entries; only report keys in the WAL that are out of order. |
| `-h`, `--help` | Help for `dumpwal`. |

View File

@ -3,7 +3,7 @@ title: influxd inspect export-blocks
description: >
The `influxd inspect export-blocks` command exports all blocks in one or more
TSM1 files to another format for easier inspection and debugging.
v2.0/tags: [wal, inspect]
v2.0/tags: [inspect]
menu:
v2_0_ref:
parent: influxd inspect

View File

@ -0,0 +1,26 @@
---
title: influxd inspect export-index
description: >
The `influxd inspect export-index` command exports all series in a TSI index to
SQL format for inspection and debugging.
v2.0/tags: [inspect]
menu:
v2_0_ref:
parent: influxd inspect
weight: 301
---
The `influxd inspect export-index` command exports all series in a TSI index to
SQL format for inspection and debugging.
## Usage
```sh
influxd inspect export-index [flags]
```
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------:|
| `-h`, `--help` | Help for `export-index`. | |
| `--index-path` | Path to the index directory. Defaults to `~/.influxdbv2/engine/index`. | string |
| `--series-path` | Path to series file. Defaults to `~/.influxdbv2/engine/_series`. | string |

View File

@ -0,0 +1,44 @@
---
title: influxd inspect report-tsi
description: >
The `influxd inspect report-tsi` command analyzes Time Series Index (TSI) files
in a storage directory and reports the cardinality of data stored in the files.
v2.0/tags: [tsi, cardinality, inspect]
menu:
v2_0_ref:
parent: influxd inspect
weight: 301
---
The `influxd inspect report-tsi` command analyzes Time Series Index (TSI) files
in a storage directory and reports the cardinality of data stored in the files
by organization and bucket.
## Output details
`influxd inspect report-tsi` outputs the following:
- All organizations and buckets in the index.
- The series cardinality within each organization and bucket.
- Time to read the index.
When the `--measurements` flag is included, series cardinality is grouped by:
- organization
- bucket
- measurement
## Usage
```sh
influxd inspect report-tsi [flags]
```
## Flags
| Flag | Description | Input Type |
|:---- |:----------- |:----------:|
| `--bucket-id` | Process data for specified bucket ID. _Requires `org-id` flag to be set._ | string |
| `-h`, `--help` | View help for `report-tsi`. | |
| `-m`, `--measurements` | Group cardinality by measurements. | |
| `-o`, `--org-id` | Process data for specified organization ID. | string |
| `--path` | Specify path to index. Defaults to `~/.influxdbv2/engine/index`. | string |
| `--series-file` | Specify path to series file. Defaults to `~/.influxdbv2/engine/_series`. | string |
| `-t`, `--top` | Limit results to the top `n`. | integer |
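For example, to report cardinality grouped by measurement for a single organization (the organization ID is a placeholder):
```sh
influxd inspect report-tsi --org-id 000xxxxxxx000xxx --measurements
```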

View File

@ -49,7 +49,7 @@ in the following ways:
| Flag | Description | Input Type |
|:---- |:----------- |:----------:|
| `--bucket-id` | Process only data belonging to bucket ID. _Requires `org-id` flag to be set._ | string |
| `--data-dir` | Use provided data directory (defaults to ~/.influxdbv2/engine/data). | string |
| `--data-dir` | Use provided data directory (defaults to `~/.influxdbv2/engine/data`). | string |
| `--detailed` | Emit series cardinality segmented by measurements, tag keys, and fields. _**May take a while**_. | |
| `--exact` | Calculate an exact cardinality count. _**May use significant memory**_. | |
| `-h`, `--help` | Help for `report-tsm`. | |

View File

@ -0,0 +1,25 @@
---
title: influxd inspect verify-seriesfile
description: >
The `influxd inspect verify-seriesfile` command verifies the integrity of series files.
v2.0/tags: [inspect]
menu:
v2_0_ref:
parent: influxd inspect
weight: 301
---
The `influxd inspect verify-seriesfile` command verifies the integrity of series files.
## Usage
```sh
influxd inspect verify-seriesfile [flags]
```
## Flags
| Flag | Description | Input Type |
|:---- |:----------- |:----------:|
| `-c`, `--c` | Number of workers to run concurrently (defaults to 8). | integer |
| `-h`, `--help` | Help for `verify-seriesfile`. | |
| `--series-file` | Path to series file (defaults to `~/.influxdbv2/engine/_series`). | string |
| `-v`, `--verbose` | Enable verbose output. | |

View File

@ -3,7 +3,7 @@ title: influxd inspect verify-tsm
description: >
The `influxd inspect verify-tsm` command analyzes a set of TSM files for inconsistencies
between the TSM index and the blocks.
v2.0/tags: [wal, inspect]
v2.0/tags: [tsm, inspect]
menu:
v2_0_ref:
parent: influxd inspect

View File

@ -18,5 +18,8 @@ These client libraries are in active development and may not be feature-complete
This list will continue to grow as more client libraries are released.
{{% /note %}}
- [C#](https://github.com/influxdata/influxdb-client-csharp)
- [Go](https://github.com/influxdata/influxdb-client-go)
- [Java](https://github.com/influxdata/influxdb-client-java)
- [JavaScript/Node.js](https://github.com/influxdata/influxdb-client-js)
- [Python](https://github.com/influxdata/influxdb-client-python)

View File

@ -8,7 +8,7 @@ menu:
weight: 4
---
The following articles are meant as a reference for Flux functions and the
Flux language specification.
The following articles are meant as a reference for the Flux standard library and
the Flux language specification.
{{< children >}}

View File

@ -1,15 +0,0 @@
---
title: Flux packages and functions
description: Flux packages and functions allows you to retrieve, transform, process, and output data easily.
v2.0/tags: [flux, functions, package]
menu:
v2_0_ref:
name: Flux packages and functions
parent: Flux query language
weight: 102
---
Flux's functional syntax allows you to retrieve, transform, process, and output data easily.
There is a large library of built-in functions and importable packages:
{{< children >}}

View File

@ -1,48 +0,0 @@
---
title: limit() function
description: The `limit()` function limits the number of records in output tables to a fixed number (n).
aliases:
- /v2.0/reference/flux/functions/transformations/limit
menu:
v2_0_ref:
name: limit
parent: built-in-transformations
weight: 401
---
The `limit()` function limits the number of records in output tables to a fixed number ([`n`](#n)).
One output table is produced for each input table.
Each output table contains the first `n` records after the first `offset` records of the input table.
If the input table has fewer than `offset + n` records, all records except the first `offset` ones are output.
_**Function type:** Filter_
_**Output data type:** Object_
```js
limit(n:10, offset: 0)
```
## Parameters
### n
The maximum number of records to output.
_**Data type:** Integer_
### offset
The number of records to skip per table before limiting to `n`.
Defaults to `0`.
_**Data type:** Integer_
## Examples
```js
from(bucket:"example-bucket")
|> range(start:-1h)
|> limit(n:10, offset: 1)
```
<hr style="margin-top:4rem"/>
##### Related InfluxQL functions and statements:
[LIMIT](https://docs.influxdata.com/influxdb/latest/query_language/data_exploration/#the-limit-and-slimit-clauses)

View File

@ -8,12 +8,6 @@ menu:
weight: 202
---
{{% note %}}
This document is a living document and may not represent the current implementation of Flux.
Any section that is not currently implemented is commented with a **[IMPL#XXX]** where
**XXX** is an issue number tracking discussion and progress towards implementation.
{{% /note %}}
An assignment binds an identifier to a variable, option, or function.
Every identifier in a program must be assigned.
@ -32,10 +26,6 @@ Note that the package clause is not an assignment.
The package name does not appear in any scope.
Its purpose is to identify the files belonging to the same package and to specify the default package name for import declarations.
{{% note %}}
[IMPL#247](https://github.com/influxdata/platform/issues/247) Add package/namespace support.
{{% /note %}}
## Variable assignment
A variable assignment creates a variable bound to an identifier and gives it a type and value.
A variable keeps the same type and value for the remainder of its lifetime.

View File

@ -269,4 +269,8 @@ PostfixOperator = MemberExpression
| IndexExpression .
```
{{% warn %}}
Dividing by 0 or using the mod operator with a divisor of 0 will result in an error.
{{% /warn %}}
_Also see [Flux Operators](/v2.0/reference/flux/language/operators)._

View File

@ -269,8 +269,7 @@ String literals support several escape sequences.
\t U+0009 horizontal tab
\" U+0022 double quote
\\ U+005C backslash
\{ U+007B open curly bracket
\} U+007D close curly bracket
\${ U+0024 U+007B dollar sign and opening curly bracket
```
Additionally, any byte value may be specified via a hex encoding using `\x` as the prefix.
@ -281,15 +280,9 @@ byte_value = `\` "x" hex_digit hex_digit .
hex_digit = "0" … "9" | "A" … "F" | "a" … "f" .
unicode_value = unicode_char | escaped_char .
escaped_char = `\` ( "n" | "r" | "t" | `\` | `"` ) .
StringExpression = "{" Expression "}" .
StringExpression = "${" Expression "}" .
```
{{% note %}}
To be added: TODO: With string interpolation `string_lit` is not longer a lexical token as part of a literal, but an entire expression in and of itself.
[IMPL#252](https://github.com/influxdata/platform/issues/252) Parse string literals.
{{% /note %}}
##### Examples of string literals
```js
@ -301,12 +294,12 @@ To be added: TODO: With string interpolation `string_lit` is not longer a lexica
```
String literals are also interpolated for embedded expressions to be evaluated as strings.
Embedded expressions are enclosed in curly brackets (`{}`).
Embedded expressions are enclosed in a dollar sign and curly braces (`${}`).
The expressions are evaluated in the scope containing the string literal.
The result of an expression is formatted as a string and replaces the string content between the brackets.
The result of an expression is formatted as a string and replaces the string content between the braces.
All types are formatted as strings according to their literal representation.
A function `printf` exists to allow more precise control over formatting of various types.
To include the literal curly brackets within a string they must be escaped.
To include the literal `${` within a string, it must be escaped.
{{% note %}}
[IMPL#248](https://github.com/influxdata/platform/issues/248) Add printf function.
@ -316,14 +309,13 @@ To include the literal curly brackets within a string they must be escaped.
```js
n = 42
"the answer is {n}" // the answer is 42
"the answer is not {n+1}" // the answer is not 43
"opening curly bracket \{" // opening curly bracket {
"closing curly bracket \}" // closing curly bracket }
"the answer is ${n}" // the answer is 42
"the answer is not ${n+1}" // the answer is not 43
"dollar sign opening curly bracket \${" // dollar sign opening curly bracket ${
```
{{% note %}}
[IMPL#251](https://github.com/influxdata/platform/issues/251) Add string interpolation support
[IMPL#1775](https://github.com/influxdata/flux/issues/1775) Interpolate arbitrary expressions in string literals
{{% /note %}}
### Regular expression literals

View File

@ -13,12 +13,6 @@ menu:
weight: 207
---
{{% note %}}
This document is a living document and may not represent the current implementation of Flux.
Any section that is not currently implemented is commented with a **[IMPL#XXX]** where
**XXX** is an issue number tracking discussion and progress towards implementation.
{{% /note %}}
Flux source is organized into packages.
A package consists of one or more source files.
Each source file is parsed individually and composed into a single package.
@ -41,10 +35,6 @@ All files in the same package must declare the same package name.
When a file does not declare a package clause, all identifiers in that
file will belong to the special `main` package.
{{% note %}}
[IMPL#247](https://github.com/influxdata/platform/issues/247) Add package/namespace support.
{{% /note %}}
### Package main
The `main` package is special for a few reasons:

View File

@ -119,6 +119,7 @@ duration // duration of time
time // time
string // utf-8 encoded string
regexp // regular expression
bytes // sequence of byte values
type // a type that itself describes a type
```

View File

@ -0,0 +1,96 @@
---
title: String interpolation
description: >
Flux string interpolation evaluates string literals containing one or more placeholders
and returns a result with placeholders replaced with their corresponding values.
menu:
v2_0_ref:
parent: Flux specification
name: String interpolation
weight: 211
---
Flux string interpolation evaluates string literals containing one or more placeholders
and returns a result with placeholders replaced with their corresponding values.
## String interpolation syntax
To use Flux string interpolation, enclose embedded [expressions](/v2.0/reference/flux/language/expressions/)
in a dollar sign and curly braces `${}`.
Flux replaces the content between the braces with the result of the expression and
returns a string literal.
```js
name = "John"
"My name is ${name}."
// My name is John.
```
{{% note %}}
#### Flux only interpolates string values
Flux currently interpolates only string values ([IMPL#1775](https://github.com/influxdata/flux/issues/1775)).
Use the [string() function](/v2.0/reference/flux/stdlib/built-in/transformations/type-conversions/string/)
to convert non-string values to strings.
```js
count = 12
"I currently have ${string(v: count)} cats."
```
{{% /note %}}
## Use dot notation to interpolate object values
[Objects](/v2.0/reference/flux/language/expressions/#object-literals) consist of key-value pairs.
Use [dot notation](/v2.0/reference/flux/language/expressions/#member-expressions)
to interpolate values from an object.
```js
person = {
name: "John",
age: 42
}
"My name is ${person.name} and I'm ${string(v: person.age)} years old."
// My name is John and I'm 42 years old.
```
Flux returns each record in query results as an object.
In Flux row functions, each row object is represented by `r`.
Use dot notation to interpolate specific column values from the `r` object.
##### Use string interpolation to add a human-readable message
```js
from(bucket: "example-bucket")
|> range(start: -30m)
|> map(fn: (r) => ({
r with
human_readable: "${r._field} is ${r._value} at ${string(v: r._time)}."
}))
```
## String interpolation versus concatenation
Flux supports both string interpolation and string concatenation.
String interpolation is a more concise method for achieving the same result.
```js
person = {
name: "John",
age: 42
}
// String interpolation
"My name is ${person.name} and I'm ${string(v: person.age)} years old."
// String concatenation
"My name is " + person.name + " and I'm " + string(v: person.age) + " years old."
// Both return: My name is John and I'm 42 years old.
```
{{% note %}}
Check and notification message templates configured in the InfluxDB user interface
**do not** support string concatenation.
{{% /note %}}

View File

@ -105,7 +105,12 @@ The string type is nullable.
An empty string is **not** a _null_ value.
{{% /note %}}
The length of a string is its size in bytes, not the number of characters, since a single character may be multiple bytes.
The length of a string is its size in bytes, not the number of characters,
since a single character may be multiple bytes.
### Bytes types
A _bytes type_ represents a sequence of byte values.
The bytes type name is `bytes`.
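As a sketch, a `bytes` value can be produced from a string, assuming the built-in `bytes()` conversion function is available in your Flux version:
```js
// Convert a string literal to a sequence of byte values
b = bytes(v: "hello")
```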
## Regular expression types
A _regular expression type_ represents the set of all patterns for regular expressions.

View File

@ -0,0 +1,18 @@
---
title: Flux standard library
description: >
The Flux standard library includes built-in functions and importable packages
that retrieve, transform, process, and output data.
aliases:
- /v2.0/reference/flux/functions/
v2.0/tags: [flux, functions, package]
menu:
v2_0_ref:
parent: Flux query language
weight: 102
---
The Flux standard library includes built-in functions and importable packages
that retrieve, transform, process, and output data.
{{< children >}}

View File

@ -1,10 +1,12 @@
---
title: Complete list of Flux functions
description: View the full library of documented Flux functions.
aliases:
- /v2.0/reference/flux/functions/all-functions/
menu:
v2_0_ref:
name: View all functions
parent: Flux packages and functions
parent: Flux standard library
weight: 299
---

View File

@ -4,10 +4,12 @@ list_title: Built-in functions
description: >
Built-in functions provide a foundation for working with data using Flux.
They do not require an import statement and are usable without any extra setup.
aliases:
- /v2.0/reference/flux/functions/built-in/
menu:
v2_0_ref:
name: Built-in
parent: Flux packages and functions
parent: Flux standard library
weight: 201
v2.0/tags: [built-in, functions, package]
---

View File

@ -3,7 +3,8 @@ title: Flux built-in input functions
list_title: Built-in input functions
description: Flux's built-in input functions define sources of data or display information about data sources.
aliases:
- /v2.0/reference/flux/functions/inputs
- /v2.0/reference/flux/functions/inputs
- /v2.0/reference/flux/functions/built-in/inputs/
menu:
v2_0_ref:
parent: Built-in

View File

@ -3,6 +3,7 @@ title: buckets() function
description: The `buckets()` function returns a list of buckets in the organization.
aliases:
- /v2.0/reference/flux/functions/inputs/buckets
- /v2.0/reference/flux/functions/built-in/inputs/buckets/
menu:
v2_0_ref:
name: buckets

View File

@ -3,6 +3,7 @@ title: from() function
description: The `from()` function retrieves data from an InfluxDB data source.
aliases:
- /v2.0/reference/flux/functions/inputs/from
- /v2.0/reference/flux/functions/built-in/inputs/from/
menu:
v2_0_ref:
name: from

View File

@ -6,6 +6,7 @@ description: >
retrieving, transforming, or outputting data.
aliases:
- /v2.0/reference/flux/functions/misc
- /v2.0/reference/flux/functions/built-in/misc/
menu:
v2_0_ref:
parent: Built-in

View File

@ -3,11 +3,13 @@ title: intervals() function
description: The `intervals()` function generates a set of time intervals over a range of time.
aliases:
- /v2.0/reference/flux/functions/misc/intervals
- /v2.0/reference/flux/functions/built-in/misc/intervals/
menu:
v2_0_ref:
name: intervals
parent: built-in-misc
weight: 401
draft: true
---
The `intervals()` function generates a set of time intervals over a range of time.
@ -19,7 +21,7 @@ The set of intervals includes all intervals that intersect with the initial rang
{{% note %}}
The `intervals()` function is designed to be used with the intervals parameter
of the [`window()` function](/v2.0/reference/flux/functions/built-in/transformations/window).
of the [`window()` function](/v2.0/reference/flux/stdlib/built-in/transformations/window).
{{% /note %}}
By default the end boundary of an interval will align with the Unix epoch (zero time)

View File

@ -3,6 +3,7 @@ title: linearBins() function
description: The `linearBins()` function generates a list of linearly separated floats.
aliases:
- /v2.0/reference/flux/functions/misc/linearbins
- /v2.0/reference/flux/functions/built-in/misc/linearbins/
menu:
v2_0_ref:
name: linearBins
@ -12,7 +13,7 @@ weight: 401
The `linearBins()` function generates a list of linearly separated floats.
It is a helper function meant to generate bin bounds for the
[`histogram()` function](/v2.0/reference/flux/functions/built-in/transformations/histogram).
[`histogram()` function](/v2.0/reference/flux/stdlib/built-in/transformations/histogram).
_**Function type:** Miscellaneous_
_**Output data type:** Array of floats_

View File

@ -3,6 +3,7 @@ title: logarithmicBins() function
description: The `logarithmicBins()` function generates a list of exponentially separated floats.
aliases:
- /v2.0/reference/flux/functions/misc/logarithmicbins
- /v2.0/reference/flux/functions/built-in/misc/logarithmicbins/
menu:
v2_0_ref:
name: logarithmicBins
@ -12,7 +13,7 @@ weight: 401
The `logarithmicBins()` function generates a list of exponentially separated floats.
It is a helper function meant to generate bin bounds for the
[`histogram()` function](/v2.0/reference/flux/functions/built-in/transformations/histogram).
[`histogram()` function](/v2.0/reference/flux/stdlib/built-in/transformations/histogram).
_**Function type:** Miscellaneous_
_**Output data type:** Array of floats_
@ -46,7 +47,7 @@ _**Data type:** Boolean_
## Examples
```js
logarithmicBins(start: 1.0, factor: 2.0, count: 10, infinty: true)
logarithmicBins(start: 1.0, factor: 2.0, count: 10, infinity: true)
// Generated list: [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, +Inf]
```

View File

@ -1,6 +1,8 @@
---
title: now() function
description: The `now()` function returns the current time (UTC).
aliases:
- /v2.0/reference/flux/functions/built-in/misc/now/
menu:
v2_0_ref:
name: now

View File

@ -1,6 +1,8 @@
---
title: sleep() function
description: The `sleep()` function delays execution by a specified duration.
aliases:
- /v2.0/reference/flux/functions/built-in/misc/sleep/
menu:
v2_0_ref:
name: sleep

View File

@ -4,6 +4,7 @@ list_title: Built-in output functions
description: Flux's built-in output functions yield results or send data to a specified output destination.
aliases:
- /v2.0/reference/flux/functions/outputs
- /v2.0/reference/flux/functions/built-in/outputs/
menu:
v2_0_ref:
parent: Built-in

View File

@ -3,6 +3,7 @@ title: to() function
description: The `to()` function writes data to an InfluxDB v2.0 bucket.
aliases:
- /v2.0/reference/flux/functions/outputs/to
- /v2.0/reference/flux/functions/built-in/outputs/to/
menu:
v2_0_ref:
name: to
@ -51,38 +52,30 @@ All output data must include the following columns:
## Parameters
{{% note %}}
`bucket` OR `bucketID` is **required**.
You must provide a `bucket` or `bucketID` and an `org` or `orgID`.
{{% /note %}}
### bucket
The bucket to which data is written. Mutually exclusive with `bucketID`.
The bucket to write data to.
`bucket` and `bucketID` are mutually exclusive.
_**Data type:** String_
### bucketID
The ID of the bucket to which data is written. Mutually exclusive with `bucket`.
The ID of the bucket to write data to.
`bucketID` and `bucket` are mutually exclusive.
_**Data type:** String_
### org
The organization name of the specified [`bucket`](#bucket).
Only required when writing to a remote host.
Mutually exclusive with `orgID`
`org` and `orgID` are mutually exclusive.
_**Data type:** String_
{{% note %}}
Specify either an `org` or an `orgID`, but not both.
{{% /note %}}
### orgID
The organization ID of the specified [`bucket`](#bucket).
Only required when writing to a remote host.
Mutually exclusive with `org`.
`orgID` and `org` are mutually exclusive.
_**Data type:** String_
@ -108,21 +101,24 @@ _**Data type:** String_
### tagColumns
The tag columns of the output.
Defaults to all columns with type `string`, excluding all value columns and the `_field` column if present.
Defaults to all columns with type `string`, excluding all value columns and the
`_field` column if present.
_**Data type:** Array of strings_
### fieldFn
Function that takes a record from the input table and returns an object.
For each record from the input table, `fieldFn` returns an object that maps output the field key to the output value.
For each record from the input table, `fieldFn` returns an object that maps the
output field key to the output value.
Default is `(r) => ({ [r._field]: r._value })`
_**Data type:** Function_
_**Output data type:** Object_
{{% note %}}
Make sure `fieldFn` parameter names match each specified parameter. To learn why, see [Match parameter names](/v2.0/reference/flux/language/data-model/#match-parameter-names).
Make sure `fieldFn` parameter names match each specified parameter.
To learn why, see [Match parameter names](/v2.0/reference/flux/language/data-model/#match-parameter-names).
{{% /note %}}
## Examples

View File

@ -3,6 +3,7 @@ title: yield() function
description: The `yield()` function indicates the input tables received should be delivered as a result of the query.
aliases:
- /v2.0/reference/flux/functions/outputs/yield
- /v2.0/reference/flux/functions/built-in/outputs/yield/
menu:
v2_0_ref:
name: yield

View File

@ -2,6 +2,8 @@
title: Flux built-in testing functions
list_title: Built-in testing functions
description: Flux's built-in testing functions test various aspects of piped-forward data.
aliases:
- /v2.0/reference/flux/functions/built-in/tests/
menu:
v2_0_ref:
name: Tests

View File

@ -1,6 +1,8 @@
---
title: contains() function
description: The `contains()` function tests whether a value is a member of a set.
aliases:
- /v2.0/reference/flux/functions/built-in/tests/contains/
menu:
v2_0_ref:
name: contains

View File

@ -4,6 +4,7 @@ list_title: Built-in transformation functions
description: Flux's built-in transformation functions transform and shape your data in specific ways.
aliases:
- /v2.0/reference/flux/functions/transformations
- /v2.0/reference/flux/functions/built-in/transformations/
menu:
v2_0_ref:
parent: Built-in

View File

@ -4,6 +4,7 @@ list_title: Built-in aggregate functions
description: Flux's built-in aggregate functions take values from an input table and aggregate them in some way.
aliases:
- /v2.0/reference/flux/functions/transformations/aggregates
- /v2.0/reference/flux/functions/built-in/transformations/aggregates/
menu:
v2_0_ref:
parent: built-in-transformations
@ -29,7 +30,7 @@ Any output table will have the following properties:
- It will not have a `_time` column.
### aggregateWindow helper function
The [`aggregateWindow()` function](/v2.0/reference/flux/functions/built-in/transformations/aggregates/aggregatewindow)
The [`aggregateWindow()` function](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/aggregatewindow)
does most of the work needed when aggregating data.
It windows and aggregates the data, then combines windowed tables into a single output table.
@ -43,9 +44,9 @@ The following functions are both aggregates and selectors.
Each returns `n` values after performing an aggregate operation.
They are categorized as selector functions in this documentation:
- [highestAverage](/v2.0/reference/flux/functions/transformations/selectors/highestaverage)
- [highestCurrent](/v2.0/reference/flux/functions/transformations/selectors/highestcurrent)
- [highestMax](/v2.0/reference/flux/functions/transformations/selectors/highestmax)
- [lowestAverage](/v2.0/reference/flux/functions/transformations/selectors/lowestaverage)
- [lowestCurrent](/v2.0/reference/flux/functions/transformations/selectors/lowestcurrent)
- [lowestMin](/v2.0/reference/flux/functions/transformations/selectors/lowestmin)
- [highestAverage](/v2.0/reference/flux/stdlib/built-in/transformations/selectors/highestaverage)
- [highestCurrent](/v2.0/reference/flux/stdlib/built-in/transformations/selectors/highestcurrent)
- [highestMax](/v2.0/reference/flux/stdlib/built-in/transformations/selectors/highestmax)
- [lowestAverage](/v2.0/reference/flux/stdlib/built-in/transformations/selectors/lowestaverage)
- [lowestCurrent](/v2.0/reference/flux/stdlib/built-in/transformations/selectors/lowestcurrent)
- [lowestMin](/v2.0/reference/flux/stdlib/built-in/transformations/selectors/lowestmin)

View File

@ -3,6 +3,7 @@ title: aggregateWindow() function
description: The `aggregateWindow()` function applies an aggregate function to fixed windows of time.
aliases:
- /v2.0/reference/flux/functions/transformations/aggregates/aggregatewindow
- /v2.0/reference/flux/functions/built-in/transformations/aggregates/aggregatewindow/
menu:
v2_0_ref:
name: aggregateWindow
@ -10,7 +11,8 @@ menu:
weight: 501
---
The `aggregateWindow()` function applies an aggregate function to fixed windows of time.
The `aggregateWindow()` function applies an aggregate or selector function
(any function with a `column` parameter) to fixed windows of time.
_**Function type:** Aggregate_
@ -25,10 +27,14 @@ aggregateWindow(
)
```
As data is windowed into separate tables and aggregated, the `_time` column is dropped from each group key.
As data is windowed into separate tables and processed, the `_time` column is dropped from each group key.
This function copies the timestamp from a remaining column into the `_time` column.
View the [function definition](#function-definition).
`aggregateWindow()` restores the original `_start` and `_stop` values of input data
and, by default, uses `_stop` to set the `_time` value for each aggregated window.
Each row in the output of `aggregateWindow` represents an aggregated window ending at `_time`.
## Parameters
{{% note %}}
@ -43,12 +49,12 @@ _**Data type:** Duration_
### fn
The [aggregate function](/v2.0/reference/flux/functions/built-in/transformations/aggregates) used in the operation.
The [aggregate function](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates) used in the operation.
_**Data type:** Function_
{{% note %}}
Only aggregate functions with a `column` parameter (singular) work with `aggregateWindow()`.
Only aggregate and selector functions with a `column` parameter (singular) work with `aggregateWindow()`.
{{% /note %}}
### column
@ -95,11 +101,10 @@ from(bucket: "example-bucket")
fn: mean
)
```
###### Specifying parameters of the aggregate function
To use `aggregateWindow()` aggregate functions that don't provide defaults for required parameters,
for the `fn` parameter, define an anonymous function with `columns` and `tables` parameters
that pipe-forwards tables into the aggregate function with all required parameters defined:
###### Specify parameters of the aggregate function
To use functions that don't provide defaults for required parameters with `aggregateWindow()`,
define an anonymous function with `column` and `tables` parameters that pipe-forwards
tables into the aggregate or selector function with all required parameters defined:
```js
from(bucket: "example-bucket")

View File

@@ -0,0 +1,112 @@
---
title: chandeMomentumOscillator() function
description: >
The `chandeMomentumOscillator()` function applies the technical momentum indicator
developed by Tushar Chande.
aliases:
- /v2.0/reference/flux/functions/built-in/transformations/aggregates/chandemomentumoscillator/
menu:
v2_0_ref:
name: chandeMomentumOscillator
parent: built-in-aggregates
weight: 501
related:
- https://docs.influxdata.com/influxdb/latest/query_language/functions/#chande-momentum-oscillator, InfluxQL CHANDE_MOMENTUM_OSCILLATOR()
---
The `chandeMomentumOscillator()` function applies the technical momentum indicator
developed by Tushar Chande.
_**Function type:** Aggregate_
```js
chandeMomentumOscillator(
n: 10,
columns: ["_value"]
)
```
The Chande Momentum Oscillator (CMO) indicator calculates the difference between
the sum of all recent data points with values greater than the median value of the data set
and the sum of all recent data points with values lower than the median value of the data set,
then divides the result by the sum of all data movement over a given time period.
It then multiplies the result by 100 and returns a value between -100 and +100.
## Parameters
### n
The period or number of points to use in the calculation.
_**Data type:** Integer_
### columns
The columns to operate on.
Defaults to `["_value"]`.
_**Data type:** Array of strings_
## Examples
#### Table transformation with a ten-point Chande Momentum Oscillator
###### Input table
| _time | _value |
|:-----:|:------:|
| 0001 | 1 |
| 0002 | 2 |
| 0003 | 3 |
| 0004 | 4 |
| 0005 | 5 |
| 0006 | 6 |
| 0007 | 7 |
| 0008 | 8 |
| 0009 | 9 |
| 0010 | 10 |
| 0011 | 11 |
| 0012 | 12 |
| 0013 | 13 |
| 0014 | 14 |
| 0015 | 15 |
| 0016 | 14 |
| 0017 | 13 |
| 0018 | 12 |
| 0019 | 11 |
| 0020 | 10 |
| 0021 | 9 |
| 0022 | 8 |
| 0023 | 7 |
| 0024 | 6 |
| 0025 | 5 |
| 0026 | 4 |
| 0027 | 3 |
| 0028 | 2 |
| 0029 | 1 |
###### Query
```js
// ...
|> chandeMomentumOscillator(n: 10)
```
###### Output table
| _time | _value |
|:-----:|:------:|
| 0011 | 100 |
| 0012 | 100 |
| 0013 | 100 |
| 0014 | 100 |
| 0015 | 100 |
| 0016 | 80 |
| 0017 | 60 |
| 0018 | 40 |
| 0019 | 20 |
| 0020 | 0 |
| 0021 | -20 |
| 0022 | -40 |
| 0023 | -60 |
| 0024 | -80 |
| 0025 | -100 |
| 0026 | -100 |
| 0027 | -100 |
| 0028 | -100 |
| 0029 | -100 |
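The output above follows the standard CMO formula, which compares the sums of upward and downward point-to-point changes over the last `n` intervals. A minimal Python sketch (a hypothetical helper for illustration, not part of Flux) reproduces the table:

```python
def chande_momentum_oscillator(values, n=10):
    """Standard CMO: 100 * (sum of upward changes - sum of downward changes)
    / (total absolute change), over the last n point-to-point changes."""
    diffs = [b - a for a, b in zip(values, values[1:])]
    out = []
    for i in range(n - 1, len(diffs)):
        window = diffs[i - n + 1 : i + 1]
        up = sum(d for d in window if d > 0)
        down = sum(-d for d in window if d < 0)
        # Assumes at least some movement in the window (up + down > 0).
        out.append(100 * (up - down) / (up + down))
    return out

# Triangle series from the input table: 1..15 rising, then 14..1 falling.
values = list(range(1, 16)) + list(range(14, 0, -1))
print(chande_momentum_oscillator(values, n=10))
# → [100.0, 100.0, 100.0, 100.0, 100.0, 80.0, 60.0, 40.0, 20.0, 0.0,
#    -20.0, -40.0, -60.0, -80.0, -100.0, -100.0, -100.0, -100.0, -100.0]
```

While the series rises every change is upward, so the oscillator pins at +100; as falling changes enter the ten-change window it steps down by 20 per point until it pins at -100, matching the output table row for row.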

View File

@@ -3,6 +3,7 @@ title: count() function
description: The `count()` function outputs the number of non-null records in a column.
aliases:
- /v2.0/reference/flux/functions/transformations/aggregates/count
- /v2.0/reference/flux/functions/built-in/transformations/aggregates/count/
menu:
v2_0_ref:
name: count

View File

@@ -3,6 +3,7 @@ title: cov() function
description: The `cov()` function computes the covariance between two streams by first joining the streams, then performing the covariance operation.
aliases:
- /v2.0/reference/flux/functions/transformations/aggregates/cov
- /v2.0/reference/flux/functions/built-in/transformations/aggregates/cov/
menu:
v2_0_ref:
name: cov

View File

@@ -3,6 +3,7 @@ title: covariance() function
description: The `covariance()` function computes the covariance between two columns.
aliases:
- /v2.0/reference/flux/functions/transformations/aggregates/covariance
- /v2.0/reference/flux/functions/built-in/transformations/aggregates/covariance/
menu:
v2_0_ref:
name: covariance

Some files were not shown because too many files have changed in this diff.