Merge pull request #209 from influxdata/alpha-9

Alpha 9
Scott Anderson 2019-05-01 15:16:53 -06:00 committed by GitHub
commit aba4442e90
76 changed files with 1014 additions and 338 deletions

View File

@ -3,6 +3,7 @@
.inline {
margin: 0 .15rem;
&.middle:before { vertical-align: middle; }
&.top:before { vertical-align: text-top; }
&.xsmall:before { font-size: .8rem; }
&.small:before { font-size: .9rem; }
&.large:before { font-size: 1.1rem; }
@ -21,6 +22,28 @@
padding-left: .28rem;
line-height: 1.25rem;
}
&.ui-toggle {
display: inline-block;
position: relative;
width: 34px;
height: 22px;
background: #1C1C21;
border: 2px solid #383846;
border-radius: .7rem;
vertical-align: text-bottom;
.circle {
display: inline-block;
position: absolute;
border-radius: 50%;
height: 12px;
width: 12px;
background: #22ADF6;
top: 3px;
right: 3px;
}
}
}
.nav-icon {

View File

@ -135,6 +135,7 @@
&:not(:last-child) {
> p:only-child{ margin-bottom: 0; }
}
ul,ol { margin: -.5rem 0 1rem;}
}
//////////////////////////////////// Code ////////////////////////////////////
@ -703,6 +704,14 @@
font-size: .8rem;
}
}
//////////////////////////////// Miscellaneous ///////////////////////////////
.required {
color:#FF8564;
font-weight:700;
font-style: italic;
}
}

View File

@ -15,8 +15,8 @@ To delete a Telegraf configuration:
{{< nav-icon "settings" >}}
2. Click the **Telegraf** tab.
3. Hover over the configuration you want to delete and click **Delete** on the far right.
4. Click **Confirm**.
3. Hover over the configuration you want to delete, click the **{{< icon "trash" >}}**
icon, and then click **Delete**.
{{< img-hd src="/img/2-0-telegraf-config-delete.png" />}}

View File

@ -17,10 +17,6 @@ View Telegraf configuration information in the InfluxDB user interface (UI):
{{< nav-icon "settings" >}}
2. Click the **Telegraf** tab.
3. Hover over a configuration to view options.
{{< img-hd src="/img/2-0-telegraf-config-view.png" />}}
### View and download the telegraf.conf
To view the actual `telegraf.conf` associated with the configuration,

View File

@ -27,7 +27,7 @@ This article describes how to get started with InfluxDB OSS. To get started with
### Download and install InfluxDB v2.0 alpha
Download InfluxDB v2.0 alpha for macOS.
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb_2.0.0-alpha.8_darwin_amd64.tar.gz" download>InfluxDB v2.0 alpha (macOS)</a>
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb_2.0.0-alpha.9_darwin_amd64.tar.gz" download>InfluxDB v2.0 alpha (macOS)</a>
### Unpackage the InfluxDB binaries
Unpackage the downloaded archive.
@ -90,8 +90,8 @@ influxd --reporting-disabled
### Download and install InfluxDB v2.0 alpha
Download the InfluxDB v2.0 alpha package appropriate for your chipset.
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb_2.0.0-alpha.8_linux_amd64.tar.gz" download >InfluxDB v2.0 alpha (amd64)</a>
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb_2.0.0-alpha.8_linux_arm64.tar.gz" download >InfluxDB v2.0 alpha (arm)</a>
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb_2.0.0-alpha.9_linux_amd64.tar.gz" download >InfluxDB v2.0 alpha (amd64)</a>
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb_2.0.0-alpha.9_linux_arm64.tar.gz" download >InfluxDB v2.0 alpha (arm)</a>
### Place the executables in your $PATH
Unpackage the downloaded archive and place the `influx` and `influxd` executables in your system `$PATH`.
@ -100,10 +100,10 @@ _**Note:** The following commands are examples. Adjust the file names, paths, an
```sh
# Unpackage contents to the current working directory
tar xvzf path/to/influxdb_2.0.0-alpha.8_linux_amd64.tar.gz
tar xvzf path/to/influxdb_2.0.0-alpha.9_linux_amd64.tar.gz
# Copy the influx and influxd binary to your $PATH
sudo cp influxdb_2.0.0-alpha.8_linux_amd64/{influx,influxd} /usr/local/bin/
sudo cp influxdb_2.0.0-alpha.9_linux_amd64/{influx,influxd} /usr/local/bin/
```
{{% note %}}

View File

@ -8,19 +8,41 @@ menu:
parent: Manage buckets
weight: 202
---
Use the `influx` command line interface (CLI) or the InfluxDB user interface (UI) to update a bucket.
Use the InfluxDB user interface (UI) or the `influx` command line interface (CLI)
to update a bucket.
Note that updating a bucket's name will affect any assets that reference the bucket by name, including the following:
## Update a bucket in the InfluxDB UI
- Queries
- Dashboards
- Tasks
- Telegraf configurations
- Templates
If you change a bucket name, be sure to update the bucket in the above places as well.
## Update a bucket's name in the InfluxDB UI
1. Click the **Settings** tab in the navigation bar.
{{< nav-icon "settings" >}}
2. Select the **Buckets** tab.
3. To update a bucket's name or retention policy, click the name of the bucket from the list.
4. Click **Update** to save.
3. Hover over the name of the bucket you want to rename in the list.
4. Click **Rename**.
5. Review the information in the window that appears and click **I understand, let's rename my bucket**.
6. Update the bucket's name and click **Change Bucket Name**.
## Update a bucket's retention policy in the InfluxDB UI
1. Click the **Settings** tab in the navigation bar.
{{< nav-icon "settings" >}}
2. Select the **Buckets** tab.
3. Click the name of the bucket you want to update from the list.
4. In the window that appears, edit the bucket's retention policy.
5. Click **Save Changes**.
## Update a bucket using the influx CLI
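For example, a minimal sketch using the `influx bucket update` command. The bucket ID, new name, and flags shown here are assumptions based on the other `influx` subcommands documented in this release:
```sh
# Rename a bucket (hypothetical bucket ID and name)
influx bucket update -i 034ad714fdd6f000 -n "new-bucket-name"

# Update a bucket's retention period (assumes -r accepts a duration such as 72h)
influx bucket update -i 034ad714fdd6f000 -r 72h
```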

View File

@ -9,19 +9,30 @@ menu:
weight: 103
---
Use the `influx` command line interface (CLI) to update an organization.
Use the `influx` command line interface (CLI) or the InfluxDB user interface (UI) to update an organization.
Note that updating an organization's name will affect any assets that reference the organization by name, including the following:
- Queries
- Dashboards
- Tasks
- Telegraf configurations
- Templates
If you change an organization name, be sure to update the organization in the above places as well.
<!---
## Update an organization in the InfluxDB UI
1. Click the **Influx** icon in the navigation bar.
1. Click the **Settings** icon in the navigation bar.
{{< nav-icon "admin" >}}
{{< nav-icon "settings" >}}
2. Click the **Org Profile** tab.
3. Click **Rename**.
4. In the window that appears, review the information and click **I understand, let's rename my organization**.
5. Enter a new name for your organization.
6. Click **Change organization name**.
2. Click on the organization you want to update in the list.
3. To update the organization's name, select the **Options** tab.
4. To manage the organization's members, buckets, dashboards, and tasks, click on the corresponding tabs.
-->
## Update an organization using the influx CLI
Use the [`influx org update` command](/v2.0/reference/cli/influx/org/update)
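For example, a minimal sketch of renaming an organization with this command. The organization ID and flags shown are assumptions based on the other `influx` subcommands documented in this release:
```sh
# Rename an organization (hypothetical organization ID and name)
influx org update -i 034ad714fdd6f000 -n "new-org-name"
```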

View File

@ -11,14 +11,16 @@ weight: 5
v2.0/tags: [tasks]
---
Process and analyze your data with tasks in the InfluxDB _**task engine**. Use tasks (scheduled Flux queries)
Process and analyze your data with tasks in the InfluxDB **task engine**. Use tasks (scheduled Flux queries)
to input a data stream and then analyze, modify, and act on the data accordingly.
Discover how to configure and build tasks using the InfluxDB user interface (UI) and the `influx` command line interface (CLI).
Find examples of data downsampling, anomaly detection_(Coming)_, alerting _(Coming)_, and other common tasks.
Discover how to create and manage tasks using the InfluxDB user interface (UI)
and the `influx` command line interface (CLI).
Find examples of data downsampling, anomaly detection _(Coming)_, alerting
_(Coming)_, and other common tasks.
{{% note %}}
Tasks replace InfluxDB v1.x's continuous queries.
Tasks replace InfluxDB v1.x continuous queries.
{{% /note %}}
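For example, a task is simply a Flux script with task options defined. The following sketch assumes an `example-bucket` source bucket and a `downsampled` destination bucket exist:
```js
// Define the task name and run interval
option task = {name: "downsample-mem", every: 1h}

// Downsample memory data from the last run interval
// and write the results to another bucket
from(bucket: "example-bucket")
  |> range(start: -task.every)
  |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent")
  |> aggregateWindow(every: 5m, fn: mean)
  |> to(bucket: "downsampled", org: "example-org")
```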
{{< children >}}

View File

@ -1,13 +1,15 @@
---
title: Write an InfluxDB task
seotitle: Write an InfluxDB task that processes data
title: Get started with InfluxDB tasks
list_title: Get started with tasks
description: >
How to write an InfluxDB task that processes data in some way, then performs an action
Learn the basics of writing an InfluxDB task that processes data, and then performs an action,
such as storing the modified data in a new bucket or sending an alert.
aliases:
- /v2.0/process-data/write-a-task/
v2.0/tags: [tasks]
menu:
v2_0:
name: Write a task
name: Get started with tasks
parent: Process data
weight: 101
---

View File

@ -1,9 +1,10 @@
---
title: Manage tasks in InfluxDB
seotitle: Manage data processing tasks in InfluxDB
list_title: Manage tasks
description: >
InfluxDB provides options for managing the creation, reading, updating, and deletion
of tasks using both the 'influx' CLI and the InfluxDB UI.
InfluxDB provides options for creating, reading, updating, and deleting tasks
using both the `influx` CLI and the InfluxDB UI.
v2.0/tags: [tasks]
menu:
v2_0:

View File

@ -3,7 +3,7 @@ title: Create a task
seotitle: Create a task for processing data in InfluxDB
description: >
How to create a task that processes data in InfluxDB using the InfluxDB user
interface or the 'influx' command line interface.
interface or the `influx` command line interface.
menu:
v2_0:
name: Create a task
@ -14,7 +14,10 @@ weight: 201
InfluxDB provides multiple ways to create tasks both in the InfluxDB user interface (UI)
and the `influx` command line interface (CLI).
_This article assumes you have already [written a task](/v2.0/process-data/write-a-task)._
_Before creating a task, review the [basics of writing a task](/v2.0/process-data/get-started)._
- [InfluxDB UI](#create-a-task-in-the-influxdb-ui)
- [`influx` CLI](#create-a-task-using-the-influx-cli)
## Create a task in the InfluxDB UI
The InfluxDB UI provides multiple ways to create a task:
@ -22,6 +25,8 @@ The InfluxDB UI provides multiple ways to create a task:
- [Create a task from the Data Explorer](#create-a-task-from-the-data-explorer)
- [Create a task in the Task UI](#create-a-task-in-the-task-ui)
- [Import a task](#import-a-task)
- [Create a task from a template](#create-a-task-from-a-template)
- [Clone a task](#clone-a-task)
### Create a task from the Data Explorer
1. Click on the **Data Explorer** icon in the left navigation menu.
@ -41,11 +46,12 @@ The InfluxDB UI provides multiple ways to create a task:
{{< nav-icon "tasks" >}}
2. Click **+ Create Task** in the upper right.
3. In the left panel, specify the task options.
See [Task options](/v2.0/process-data/task-options)for detailed information about each option.
4. In the right panel, enter your task script.
5. Click **Save** in the upper right.
2. Click **{{< icon "plus" >}} Create Task** in the upper right.
3. Select **New Task**.
4. In the left panel, specify the task options.
See [Task options](/v2.0/process-data/task-options) for detailed information about each option.
5. In the right panel, enter your task script.
6. Click **Save** in the upper right.
{{< img-hd src="/img/2-0-tasks-create-edit.png" title="Create a task" />}}
@ -56,15 +62,27 @@ The InfluxDB UI provides multiple ways to create a task:
2. Click **+ Create Task** in the upper right.
3. Select **Import Task**.
3. Drag and drop or select a file to upload.
4. Click **Import JSON as Task**.
4. Upload a JSON task file using one of the following options:
- Drag and drop a JSON task file in the specified area.
- Click the upload area and select the JSON task file from your file manager.
- Select the **JSON** option and paste in raw task JSON.
5. Click **Import JSON as Task**.
### Create a task from a template
1. Click on the **Settings** icon in the left navigation menu.
{{< nav-icon "settings" >}}
2. Select **Templates**.
3. Hover over the template you want to use to create the task and click **Create**.
### Clone a task
1. Click on the **Tasks** icon in the left navigation menu.
{{< nav-icon "tasks" >}}
2. Hover over the task you would like to clone and click the **{{< icon "duplicate" >}}** that appears.
2. Hover over the task you would like to clone and click the **{{< icon "duplicate" >}}** icon that appears.
3. Click **Clone**.
## Create a task using the influx CLI
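For example, a minimal sketch that passes a Flux task script to `influx task create` (the organization name and file path are hypothetical; the script must define its own task options):
```sh
# Create a task from a Flux script
influx task create --org my-org @/tasks/cq-mean-1h.flux
```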

View File

@ -3,7 +3,7 @@ title: Delete a task
seotitle: Delete a task for processing data in InfluxDB
description: >
How to delete a task in InfluxDB using the InfluxDB user interface or using
the 'influx' command line interface.
the `influx` command line interface.
menu:
v2_0:
name: Delete a task

View File

@ -18,9 +18,9 @@ Tasks are exported as downloadable JSON files.
{{< nav-icon "tasks" >}}
2. In the list of tasks, hover over the task you would like to export and click
the **{{< icon "gear" >}}** that appears.
the **{{< icon "gear" >}}** icon that appears.
3. Select **Export**.
4. There are multiple options for downloading or saving the task export file:
4. Download or save the task export file using one of the following options:
- Click **Download JSON** to download the exported JSON file.
- Click **Save as template** to save the export file as a task template.
- Click **Copy to Clipboard** to copy the raw JSON content to your machine's clipboard.

View File

@ -3,7 +3,7 @@ title: Update a task
seotitle: Update a task for processing data in InfluxDB
description: >
How to update a task that processes data in InfluxDB using the InfluxDB user
interface or the 'influx' command line interface.
interface or the `influx` command line interface.
menu:
v2_0:
name: Update a task
@ -17,7 +17,7 @@ To view your tasks, click the **Tasks** icon in the left navigation menu.
{{< nav-icon "tasks" >}}
#### Update a task's Flux script
1. In the list of tasks, click the **Name** of the task you would like to update.
1. In the list of tasks, click the **Name** of the task you want to update.
2. In the left panel, modify the task options.
3. In the right panel, modify the task script.
4. Click **Save** in the upper right.
@ -25,9 +25,8 @@ To view your tasks, click the **Tasks** icon in the left navigation menu.
{{< img-hd src="/img/2-0-tasks-create-edit.png" alt="Update a task" />}}
#### Update the status of a task
In the list of tasks, click the toggle in the **Active** column of the task you
would like to activate or inactivate.
In the list of tasks, click the {{< icon "toggle" >}} toggle to the left of the
task you want to activate or inactivate.
## Update a task with the influx CLI
Use the `influx task update` command to update or change the status of an existing task.
@ -36,7 +35,7 @@ _This command requires a task ID, which is available in the output of `influx ta
#### Update a task's Flux script
Pass the file path of your updated Flux script to the `influx task update` command
with the ID of the task you would like to update.
with the ID of the task you want to update.
Modified [task options](/v2.0/process-data/task-options) defined in the Flux
script are also updated.
@ -49,7 +48,7 @@ influx task update -i 0343698431c35000 @/tasks/cq-mean-1h.flux
```
#### Update the status of a task
Pass the ID of the task you would like to update to the `influx task update`
Pass the ID of the task you want to update to the `influx task update`
command with the `--status` flag.
_Possible arguments of the `--status` flag are `active` or `inactive`._
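For example, reusing the task ID from the example above:
```sh
# Inactivate a task
influx task update -i 0343698431c35000 --status inactive

# Reactivate a task
influx task update -i 0343698431c35000 --status active
```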

View File

@ -3,7 +3,7 @@ title: View tasks
seotitle: View created tasks that process data in InfluxDB
description: >
How to view all created data processing tasks using the InfluxDB user interface
or the 'influx' command line interface.
or the `influx` command line interface.
menu:
v2_0:
name: View tasks
@ -18,10 +18,10 @@ Click the **Tasks** icon in the left navigation to view the lists of tasks.
### Filter the list of tasks
1. Enable the **Show Inactive** option to include inactive tasks in the list.
2. Enter text in the **Filter tasks by name** field to search for tasks by name.
3. Select an organization from the **All Organizations** dropdown to filter the list by organization.
4. Click on the heading of any column to sort by that field.
1. Click the **Show Inactive** {{< icon "toggle" >}} toggle to include or exclude
inactive tasks in the list.
2. Enter text in the **Filter tasks** field to search for tasks by name or label.
3. Click on the heading of any column to sort by that field.
## View tasks with the influx CLI
Use the `influx task find` command to return a list of created tasks.
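For example (the organization name is hypothetical; the `--org` flag is listed in the `influx task find` reference):
```sh
# List all tasks
influx task find

# List tasks belonging to a specific organization
influx task find --org my-org
```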

View File

@ -62,7 +62,14 @@ shaping your data for the desired output.
###### Example group key
```js
[_start, _stop, _field, _measurement, host]
Group key: [_start, _stop, _field]
_start:time _stop:time _field:string _time:time _value:float
------------------------------ ------------------------------ ---------------------- ------------------------------ ----------------------------
2019-04-25T17:33:55.196959000Z 2019-04-25T17:34:55.196959000Z used_percent 2019-04-25T17:33:56.000000000Z 65.55318832397461
2019-04-25T17:33:55.196959000Z 2019-04-25T17:34:55.196959000Z used_percent 2019-04-25T17:34:06.000000000Z 65.52391052246094
2019-04-25T17:33:55.196959000Z 2019-04-25T17:34:55.196959000Z used_percent 2019-04-25T17:34:16.000000000Z 65.49603939056396
2019-04-25T17:33:55.196959000Z 2019-04-25T17:34:55.196959000Z used_percent 2019-04-25T17:34:26.000000000Z 65.51754474639893
2019-04-25T17:33:55.196959000Z 2019-04-25T17:34:55.196959000Z used_percent 2019-04-25T17:34:36.000000000Z 65.536737442016
```
Note that `_time` and `_value` are excluded from the example group key because they

View File

@ -0,0 +1,179 @@
---
title: Query using conditional logic
seotitle: Query using conditional logic in Flux
description: >
This guide describes how to use Flux conditional expressions, such as `if`,
`else`, and `then`, to query and transform data.
v2.0/tags: [conditionals, flux]
menu:
v2_0:
name: Query using conditionals
parent: How-to guides
weight: 209
---
Flux provides `if`, `then`, and `else` conditional expressions that allow for powerful and flexible Flux queries.
##### Conditional expression syntax
```js
// Pattern
if <condition> then <action> else <alternative-action>
// Example
if color == "green" then "008000" else "ffffff"
```
Conditional expressions are most useful in the following contexts:
- When defining variables.
- When using functions that operate on a single row at a time, such as
  [`filter()`](/v2.0/reference/flux/functions/built-in/transformations/filter/),
  [`map()`](/v2.0/reference/flux/functions/built-in/transformations/map/), and
  [`reduce()`](/v2.0/reference/flux/functions/built-in/transformations/aggregates/reduce/).
## Examples
- [Conditionally set the value of a variable](#conditionally-set-the-value-of-a-variable)
- [Create conditional filters](#create-conditional-filters)
- [Conditionally transform column values with map()](#conditionally-transform-column-values-with-map)
- [Conditionally increment a count with reduce()](#conditionally-increment-a-count-with-reduce)
### Conditionally set the value of a variable
The following example sets the `overdue` variable based on the
`dueDate` variable's relation to `now()`.
```js
dueDate = 2019-05-01
overdue = if dueDate < now() then true else false
```
### Create conditional filters
The following example uses an example `metric` [dashboard variable](/v2.0/visualize-data/variables/)
to change how the query filters data.
`metric` has three possible values:
- Memory
- CPU
- Disk
```js
from(bucket: "example-bucket")
|> range(start: -1h)
|> filter(fn: (r) =>
if v.metric == "Memory"
then r._measurement == "mem" and r._field == "used_percent"
else if v.metric == "CPU"
then r._measurement == "cpu" and r._field == "usage_user"
else if v.metric == "Disk"
then r._measurement == "disk" and r._field == "used_percent"
else r._measurement != ""
)
```
### Conditionally transform column values with map()
The following example uses the [`map()` function](/v2.0/reference/flux/functions/built-in/transformations/map/)
to conditionally transform column values.
It sets the `level` column to a specific string based on the `_value` column.
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[No Comments](#)
[Comments](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```js
from(bucket: "example-bucket")
|> range(start: -5m)
|> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent" )
|> map(fn: (r) => ({
_time: r._time,
_value: r._value,
level:
if r._value >= 95.0000001 and r._value <= 100.0 then "critical"
else if r._value >= 85.0000001 and r._value <= 95.0 then "warning"
else if r._value >= 70.0000001 and r._value <= 85.0 then "high"
else "normal"
})
)
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```js
from(bucket: "example-bucket")
|> range(start: -5m)
|> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent" )
|> map(fn: (r) => ({
// Retain the _time column in the mapped row
_time: r._time,
// Retain the _value column in the mapped row
_value: r._value,
// Set the level column value based on the _value column
level:
if r._value >= 95.0000001 and r._value <= 100.0 then "critical"
else if r._value >= 85.0000001 and r._value <= 95.0 then "warning"
else if r._value >= 70.0000001 and r._value <= 85.0 then "high"
else "normal"
})
)
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
### Conditionally increment a count with reduce()
The following example uses the [`aggregateWindow()`](/v2.0/reference/flux/functions/built-in/transformations/aggregates/aggregatewindow/)
and [`reduce()`](/v2.0/reference/flux/functions/built-in/transformations/aggregates/reduce/)
functions to count the number of records in every five minute window that exceed a defined threshold.
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[No Comments](#)
[Comments](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```js
threshold = 65.0
data = from(bucket: "example-bucket")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent" )
|> aggregateWindow(
every: 5m,
fn: (column, tables=<-) => tables |> reduce(
identity: {above_threshold_count: 0.0},
fn: (r, accumulator) => ({
above_threshold_count:
if r._value >= threshold then accumulator.above_threshold_count + 1.0
else accumulator.above_threshold_count + 0.0
})
)
)
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```js
threshold = 65.0
data = from(bucket: "example-bucket")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent" )
// Aggregate data into 5 minute windows using a custom reduce() function
|> aggregateWindow(
every: 5m,
// Use a custom function in the fn parameter.
// The aggregateWindow fn parameter requires 'column' and 'tables' parameters.
fn: (column, tables=<-) => tables |> reduce(
identity: {above_threshold_count: 0.0},
fn: (r, accumulator) => ({
// Conditionally increment above_threshold_count if
// r.value exceeds the threshold
above_threshold_count:
if r._value >= threshold then accumulator.above_threshold_count + 1.0
else accumulator.above_threshold_count + 0.0
})
)
)
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}

View File

@ -49,6 +49,7 @@ retrieving authentication tokens._
| [bucket](/v2.0/reference/cli/influx/bucket) | Bucket management commands |
| [help](/v2.0/reference/cli/influx/help) | Help about any command |
| [org](/v2.0/reference/cli/influx/org) | Organization management commands |
| [ping](/v2.0/reference/cli/influx/ping) | Check the InfluxDB `/health` endpoint |
| [query](/v2.0/reference/cli/influx/query) | Execute a Flux query |
| [repl](/v2.0/reference/cli/influx/repl) | Interactive REPL (read-eval-print-loop) |
| [setup](/v2.0/reference/cli/influx/setup) | Create default username, password, org, bucket, etc. |

View File

@ -21,13 +21,13 @@ influx auth [command]
`auth`, `authorization`
## Subcommands
| Subcommand | Description |
|:---------- |:----------- |
| [active](/v2.0/reference/cli/influx/auth/active) | Active authorization |
| [create](/v2.0/reference/cli/influx/auth/create) | Create authorization |
| [delete](/v2.0/reference/cli/influx/auth/delete) | Delete authorization |
| [find](/v2.0/reference/cli/influx/auth/find) | Find authorization |
| [inactive](/v2.0/reference/cli/influx/auth/inactive) | Inactive authorization |
| Subcommand | Description |
|:---------- |:----------- |
| [active](/v2.0/reference/cli/influx/auth/active) | Activate authorization |
| [create](/v2.0/reference/cli/influx/auth/create) | Create authorization |
| [delete](/v2.0/reference/cli/influx/auth/delete) | Delete authorization |
| [find](/v2.0/reference/cli/influx/auth/find) | Find authorization |
| [inactive](/v2.0/reference/cli/influx/auth/inactive) | Inactivate authorization |
## Flags
| Flag | Description |

View File

@ -20,6 +20,8 @@ influx auth find [flags]
|:---- |:----------- |:----------: |
| `-h`, `--help` | Help for the `find` command | |
| `-i`, `--id` | The authorization ID | string |
| `-o`, `--org` | The organization | string |
| `--org-id` | The organization ID | string |
| `-u`, `--user` | The user | string |
| `--user-id` | The user ID | string |
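For example, a minimal sketch using the organization and user flags (the names shown are hypothetical):
```sh
# Find authorizations for a specific organization
influx auth find -o my-org

# Find authorizations for a specific user
influx auth find -u john
```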

View File

@ -20,7 +20,6 @@ influx bucket create [flags]
|:---- |:----------- |:----------: |
| `-h`, `--help` | Help for the `create` command | |
| `-n`, `--name` | Name of bucket that will be created | string |
| `-o`, `--org` | Name of the organization that owns the bucket | string |
| `--org-id` | The ID of the organization that owns the bucket | string |
| `-r`, `--retention` | Duration in nanoseconds data will live in bucket | duration |
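For example, a minimal sketch using the flags above (the bucket name and organization ID are hypothetical, and the retention value assumes the duration flag accepts values such as `72h`):
```sh
# Create a bucket with a 72-hour retention period
influx bucket create -n my-bucket --org-id 034ad714fdd6f000 -r 72h
```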

View File

@ -0,0 +1,33 @@
---
title: influx ping - Check the health of InfluxDB
description: >
The `influx ping` command checks the health of a running InfluxDB instance by
querying the `/health` endpoint.
menu:
v2_0_ref:
name: influx ping
parent: influx
weight: 101
v2.0/tags: [ping, health]
---
The `influx ping` command checks the health of a running InfluxDB instance by
querying the `/health` endpoint.
It does not require an authorization token.
## Usage
```
influx ping [flags]
```
## Flags
| Flag | Description |
|:---- |:----------- |
| `-h`, `--help` | Help for the `ping` command |
## Global Flags
| Global flag | Description | Input type |
|:----------- |:----------- |:----------:|
| `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string |
| `--local` | Run commands locally against the filesystem | |
| `-t`, `--token` | API token to be used throughout client calls | string |
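For example:
```sh
# Check the health of the local InfluxDB instance
influx ping

# Check the health of a remote InfluxDB instance (hypothetical host)
influx ping --host http://example-influxdb:9999
```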

View File

@ -21,6 +21,7 @@ influx task find [flags]
| `-h`, `--help` | Help for `find` | |
| `-i`, `--id` | Task ID | string |
| `--limit` | The number of tasks to find (default `100`) | integer |
| `--org` | Task organization name | string |
| `--org-id` | Task organization ID | string |
| `-n`, `--user-id` | Task owner ID | string |

View File

@ -0,0 +1,24 @@
---
title: now() function
description: The `now()` function returns the current time (GMT).
menu:
v2_0_ref:
name: now
parent: built-in-misc
weight: 401
---
The `now()` function returns the current time (GMT).
_**Function type:** Date/Time_
_**Output data type:** Time_
```js
now()
```
## Examples
```js
data
|> range(start: -10h, stop: now())
```

View File

@ -1,25 +0,0 @@
---
title: systemTime() function
description: The `systemTime()` function returns the current system time.
aliases:
- /v2.0/reference/flux/functions/misc/systemtime
menu:
v2_0_ref:
name: systemTime
parent: built-in-misc
weight: 401
---
The `systemTime()` function returns the current system time.
_**Function type:** Date/Time_
_**Output data type:** Timestamp_
```js
systemTime()
```
## Examples
```js
offsetTime = (offset) => systemTime() |> shift(shift: offset)
```

View File

@ -14,16 +14,15 @@ v2.0/tags: [aggregates, built-in, functions]
---
Flux's built-in aggregate functions take values from an input table and aggregate them in some way.
The output table contains is a single row with the aggregated value.
The output table contains a single row with the aggregated value.
Aggregate operations output a table for every input table they receive.
A list of columns to aggregate must be provided to the operation.
The aggregate function is applied to each column in isolation.
You must provide a column to aggregate.
Any output table will have the following properties:
- It always contains a single record.
- It will have the same group key as the input table.
- It will contain a column for each provided aggregate column.
- It will contain a column for the provided aggregate column.
The column label will be the same as the input table.
The type of the column depends on the specific aggregate operation.
The value of the column will be `null` if the input table is empty or the input column has only `null` values.
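For example, a minimal sketch applying the `mean()` aggregate to the default `_value` column, using the example bucket and filter that appear elsewhere in these docs:
```js
from(bucket: "telegraf/autogen")
  |> range(start: -5m)
  |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent")
  |> mean(column: "_value")
```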

View File

@ -18,15 +18,15 @@ _**Function type:** Aggregate_
aggregateWindow(
every: 1m,
fn: mean,
columns: ["_value"],
timeColumn: "_stop",
column: "_value",
timeSrc: "_stop",
timeDst: "_time",
createEmpty: true
)
```
As data is windowed into separate tables and aggregated, the `_time` column is dropped from each group key.
This helper copies the timestamp from a remaining column into the `_time` column.
This function copies the timestamp from a remaining column into the `_time` column.
View the [function definition](#function-definition).
## Parameters
@ -37,17 +37,21 @@ The duration of windows.
_**Data type:** Duration_
### fn
The aggregate function used in the operation.
The [aggregate function](/v2.0/reference/flux/functions/built-in/transformations/aggregates) used in the operation.
_**Data type:** Function_
### columns
List of columns on which to operate.
Defaults to `["_value"]`.
{{% note %}}
Only aggregate functions with a `column` parameter (singular) work with `aggregateWindow()`.
{{% /note %}}
_**Data type:** Array of strings_
### column
The column on which to operate.
Defaults to `"_value"`.
### timeColumn
_**Data type:** String_
### timeSrc
The time column from which time is copied for the aggregate record.
Defaults to `"_stop"`.
@ -92,18 +96,19 @@ from(bucket: "telegraf/autogen")
r._measurement == "mem" and
r._field == "used_percent")
|> aggregateWindow(
column: "_value",
every: 5m,
fn: (columns, tables=<-) => tables |> quantile(q: 0.99, columns:columns)
fn: (column, tables=<-) => tables |> quantile(q: 0.99, column:column)
)
```
## Function definition
```js
aggregateWindow = (every, fn, columns=["_value"], timeColumn="_stop", timeDst="_time", tables=<-) =>
aggregateWindow = (every, fn, column="_value", timeSrc="_stop", timeDst="_time", tables=<-) =>
tables
|> window(every:every)
|> fn(columns:columns)
|> duplicate(column:timeColumn, as:timeDst)
|> fn(column:column)
|> duplicate(column:timeSrc, as:timeDst)
|> window(every:inf, timeColumn:timeDst)
```

View File

@ -1,6 +1,6 @@
---
title: count() function
description: The `count()` function outputs the number of non-null records in each aggregated column.
description: The `count()` function outputs the number of records in a column.
aliases:
- /v2.0/reference/flux/functions/transformations/aggregates/count
menu:
@ -10,23 +10,23 @@ menu:
weight: 501
---
The `count()` function outputs the number of records in each aggregated column.
The `count()` function outputs the number of records in a column.
It counts both null and non-null records.
_**Function type:** Aggregate_
_**Output data type:** Integer_
```js
count(columns: ["_value"])
count(column: "_value")
```
## Parameters
### columns
A list of columns on which to operate
Defaults to `["_value"]`.
### column
The column on which to operate.
Defaults to `"_value"`.
_**Data type: Array of strings**_
_**Data type:** String_
## Examples
```js
@ -38,7 +38,7 @@ from(bucket: "telegraf/autogen")
```js
from(bucket: "telegraf/autogen")
|> range(start: -5m)
|> count(columns: ["_value"])
|> count(column: "_value")
```
<hr style="margin-top:4rem"/>

View File

@ -22,14 +22,10 @@ covariance(columns: ["column_x", "column_y"], pearsonr: false, valueDst: "_value
## Parameters
### columns
A list of columns on which to operate.
A list of **two columns** on which to operate. <span class="required">Required</span>
_**Data type:** Array of strings_
{{% note %}}
Exactly two columns must be provided to the `columns` property.
{{% /note %}}
### pearsonr
Indicates whether the result should be normalized to be the Pearson R coefficient.

View File

@ -12,7 +12,7 @@ weight: 501
The `derivative()` function computes the rate of change per [`unit`](#unit) of time between subsequent non-null records.
It assumes rows are ordered by the `_time` column.
The output table schema will be the same as the input table.
The output table schema is the same as the input table.
_**Function type:** Aggregate_
_**Output data type:** Float_
@ -21,7 +21,7 @@ _**Output data type:** Float_
derivative(
unit: 1s,
nonNegative: false,
columns: ["_value"],
column: "_value",
timeSrc: "_time"
)
```
@ -40,11 +40,11 @@ When set to `true`, if a value is less than the previous value, it is assumed th
_**Data type:** Boolean_
### columns
A list of columns on which to compute the derivative.
Defaults to `["_value"]`.
### column
The column to use to compute the derivative.
Defaults to `"_value"`.
_**Data type:** Array of strings_
_**Data type:** String_
### timeSrc
The column containing time values.

View File

@ -11,13 +11,13 @@ weight: 501
---
The `difference()` function computes the difference between subsequent records.
Every user-specified column of numeric type is subtracted while others are kept intact.
The user-specified column of numeric type is subtracted while others are kept intact.
_**Function type:** Aggregate_
_**Output data type:** Float_
```js
difference(nonNegative: false, columns: ["_value"])
difference(nonNegative: false, column: "_value")
```
## Parameters
@ -28,11 +28,11 @@ When set to `true`, if a value is less than the previous value, it is assumed th
_**Data type:** Boolean_
### columns
A list of columns on which to compute the difference.
Defaults to `["_value"]`.
### column
The column to use to compute the difference.
Defaults to `"_value"`.
_**Data type:** Array of strings_
_**Data type:** String_
## Subtraction rules for numeric types
- The difference between two non-null values is their algebraic difference;
@ -58,37 +58,37 @@ from(bucket: "telegraf/autogen")
### Example data transformation
###### Input table
| _time | A | B | C | tag |
|:-----:|:----:|:----:|:----:|:---:|
| 0001 | null | 1 | 2 | tv |
| 0002 | 6 | 2 | null | tv |
| 0003 | 4 | 2 | 4 | tv |
| 0004 | 10 | 10 | 2 | tv |
| 0005 | null | null | 1 | tv |
| _time | _value | tag |
|:-----:|:------:|:---:|
| 0001 | null | tv |
| 0002 | 6 | tv |
| 0003 | 4 | tv |
| 0004 | 10 | tv |
| 0005 | null | tv |
#### With nonNegative set to false
```js
|> difference(nonNegative: false)
```
###### Output table
| _time | A | B | C | tag |
|:-----:|:----:|:----:|:----:|:---:|
| 0002 | null | 1 | null | tv |
| 0003 | -2 | 0 | 2 | tv |
| 0004 | 6 | 8 | -2 | tv |
| 0005 | null | null | -1 | tv |
| _time | _value | tag |
|:-----:|:------:|:---:|
| 0002 | null | tv |
| 0003 | -2 | tv |
| 0004 | 6 | tv |
| 0005 | null | tv |
#### With nonNegative set to true
```js
|> difference(nonNegative: true)
```
###### Output table
| _time | A | B | C | tag |
|:-----:|:----:|:----:|:----:|:---:|
| 0002 | null | 1 | null | tv |
| 0003 | null | 0 | 2 | tv |
| 0004 | 6 | 8 | null | tv |
| 0005 | null | null | null | tv |
| _time | _value | tag |
|:-----:|:------:|:---:|
| 0002 | null | tv |
| 0003 | null | tv |
| 0004 | 6 | tv |
| 0005 | null | tv |
<hr style="margin-top:4rem"/>

View File

@ -1,6 +1,8 @@
---
title: histogramQuantile() function
description: The `histogramQuantile()` function approximates a quantile given a histogram that approximates the cumulative distribution of the dataset.
description: >
The `histogramQuantile()` function approximates a quantile given a histogram
that approximates the cumulative distribution of the dataset.
aliases:
- /v2.0/reference/flux/functions/transformations/aggregates/histogramquantile
menu:

View File

@ -20,16 +20,16 @@ _**Function type:** Aggregate_
_**Output data type:** Float_
```js
increase(columns: ["_values"])
increase(column: "_value")
```
## Parameters
### columns
The list of columns for which the increase is calculated.
Defaults to `["_value"]`.
### column
The column for which the increase is calculated.
Defaults to `"_value"`.
_**Data type:** Array of strings_
_**Data type:** String_
## Examples
```js
@ -61,8 +61,8 @@ Given the following input table:
## Function definition
```js
increase = (tables=<-, columns=["_value"]) =>
increase = (tables=<-, column="_value") =>
tables
|> difference(nonNegative: true, columns:columns)
|> difference(nonNegative: true, column:column)
|> cumulativeSum()
```

View File

@ -17,7 +17,7 @@ _**Function type:** Aggregate_
_**Output data type:** Float_
```js
integral(unit: 10s, columns: ["_value"])
integral(unit: 10s, column: "_value")
```
## Parameters
@ -27,11 +27,11 @@ The time duration used when computing the integral.
_**Data type:** Duration_
### columns
A list of columns on which to operate.
Defaults to `["_value"]`.
### column
The column on which to operate.
Defaults to `"_value"`.
_**Data type:** Array of strings_
_**Data type:** String_
## Examples
```js

View File

@ -16,16 +16,16 @@ _**Function type:** Aggregate_
_**Output data type:** Float_
```js
mean(columns: ["_value"])
mean(column: "_value")
```
## Parameters
### columns
A list of columns on which to compute the mean.
Defaults to `["_value"]`.
### column
The column to use to compute the mean.
Defaults to `"_value"`.
_**Data type:** Array of strings_
_**Data type:** String_
## Examples
```js

View File

@ -20,7 +20,7 @@ _**Output data type:** Float or Object_
```js
quantile(
columns: ["_value"],
column: "_value",
q: 0.99,
method: "estimate_tdigest",
compression: 1000.0
@ -35,11 +35,11 @@ value that represents the specified quantile.
## Parameters
### columns
A list of columns on which to compute the quantile.
Defaults to `["_value"]`.
### column
The column to use to compute the quantile.
Defaults to `"_value"`.
_**Data type:** Array of strings_
_**Data type:** String_
### q
A value between 0 and 1 indicating the desired quantile.
@ -72,7 +72,7 @@ _**Data type:** Float_
## Examples
###### Percentile as an aggregate
###### Quantile as an aggregate
```js
from(bucket: "telegraf/autogen")
|> range(start: -5m)
@ -86,7 +86,7 @@ from(bucket: "telegraf/autogen")
)
```
###### Percentile as a selector
###### Quantile as a selector
```js
from(bucket: "telegraf/autogen")
|> range(start: -5m)

View File

@ -25,8 +25,8 @@ reduce(
```
If the reducer record contains a column with the same name as a group key column,
the group key column's value is overwritten and the resulting record is regrouped
into the appropriate table.
the group key column's value is overwritten, and the outgoing group key is changed.
However, if two reduced tables write to the same destination group key, the function will error.
## Parameters

View File

@ -16,15 +16,16 @@ _**Function type:** Aggregate_
_**Output data type:** Float_
```js
skew(columns: ["_value"])
skew(column: "_value")
```
## Parameters
### columns
Specifies a list of columns on which to operate. Defaults to `["_value"]`.
### column
The column on which to operate.
Defaults to `"_value"`.
_**Data type:** Array of strings_
_**Data type:** String_
## Examples
```js

View File

@ -1,6 +1,6 @@
---
title: spread() function
description: The `spread()` function outputs the difference between the minimum and maximum values in each specified column.
description: The `spread()` function outputs the difference between the minimum and maximum values in a specified column.
aliases:
- /v2.0/reference/flux/functions/transformations/aggregates/spread
menu:
@ -10,26 +10,26 @@ menu:
weight: 501
---
The `spread()` function outputs the difference between the minimum and maximum values in each specified column.
The `spread()` function outputs the difference between the minimum and maximum values in a specified column.
Only `uint`, `int`, and `float` column types can be used.
The type of the output column depends on the type of input column:
- For input columns with type `uint` or `int`, the output is an `int`
- For input columns with type `float` the output is a float.
- For columns with type `uint` or `int`, the output is an `int`
- For columns with type `float`, the output is a `float`.
_**Function type:** Aggregate_
_**Output data type:** Integer or Float (inherited from input column type)_
```js
spread(columns: ["_value"])
spread(column: "_value")
```
## Parameters
### columns
Specifies a list of columns on which to operate. Defaults to `["_value"]`.
### column
The column on which to operate. Defaults to `"_value"`.
_**Data type:** Array of strings_
_**Data type:** String_
## Examples
```js

View File

@ -1,6 +1,6 @@
---
title: stddev() function
description: The `stddev()` function computes the standard deviation of non-null records in specified columns.
description: The `stddev()` function computes the standard deviation of non-null records in a specified column.
aliases:
- /v2.0/reference/flux/functions/transformations/aggregates/stddev
menu:
@ -10,22 +10,39 @@ menu:
weight: 501
---
The `stddev()` function computes the standard deviation of non-null records in specified columns.
The `stddev()` function computes the standard deviation of non-null records in a specified column.
_**Function type:** Aggregate_
_**Output data type:** Float_
```js
stddev(columns: ["_value"])
stddev(
column: "_value",
mode: "sample"
)
```
## Parameters
### columns
Specifies a list of columns on which to operate.
Defaults to `["_value"]`.
### column
The column on which to operate.
Defaults to `"_value"`.
_**Data type:** Array of strings_
_**Data type:** String_
### mode
The standard deviation mode or type of standard deviation to calculate.
Defaults to `"sample"`.
_**Data type:** String_
The available options are:
##### sample
Calculates the sample standard deviation where the data is considered to be part of a larger population.
##### population
Calculates the population standard deviation where the data is considered a population of its own.
## Examples
```js

View File

@ -1,6 +1,6 @@
---
title: sum() function
description: The `sum()` function computes the sum of non-null records in specified columns.
description: The `sum()` function computes the sum of non-null records in a specified column.
aliases:
- /v2.0/reference/flux/functions/transformations/aggregates/sum
menu:
@ -10,22 +10,22 @@ menu:
weight: 501
---
The `sum()` function computes the sum of non-null records in specified columns.
The `sum()` function computes the sum of non-null records in a specified column.
_**Function type:** Aggregate_
_**Output data type:** Integer, UInteger, or Float (inherited from column type)_
```js
sum(columns: ["_value"])
sum(column: "_value")
```
## Parameters
### columns
Specifies a list of columns on which to operate.
Defaults to `["_value"]`.
### column
The column on which to operate.
Defaults to `"_value"`.
_**Data type:** Array of strings_
_**Data type:** String_
## Examples
```js

View File

@ -33,6 +33,10 @@ The name assigned to the duplicate column.
_**Data type:** String_
{{% note %}}
If the `as` column already exists, this function will overwrite the existing values.
{{% /note %}}
## Examples
```js
from(bucket: "telegraf/autogen")

View File

@ -12,6 +12,7 @@ weight: 401
The `group()` function groups records based on their values for specific columns.
It produces tables with new group keys based on provided properties.
Specify an empty array of columns to ungroup data or merge all input tables into a single output table.
_**Function type:** Transformation_
@ -69,8 +70,9 @@ from(bucket: "telegraf/autogen")
|> group(columns: ["_time"], mode: "except")
```
###### Remove all grouping
###### Ungroup data
```js
// Merge all tables into a single table
from(bucket: "telegraf/autogen")
|> range(start: -30m)
|> group()

View File

@ -46,7 +46,7 @@ The resulting group keys for all tables will be: `[_time, _field_d1, _field_d2]`
## Parameters
### tables
The map of streams to be joined. <span style="color:#FF8564; font-weight:700;">Required</span>.
The map of streams to be joined. <span class="required">Required</span>
_**Data type:** Object_
@ -55,7 +55,7 @@ _**Data type:** Object_
{{% /note %}}
### on
The list of columns on which to join. <span style="color:#FF8564; font-weight:700;">Required</span>.
The list of columns on which to join. <span class="required">Required</span>
_**Data type:** Array of strings_

View File

@ -20,7 +20,7 @@ _**Function type:** Transformation_
_**Output data type:** Object_
```js
range(start: -15m, stop: now)
range(start: -15m, stop: now())
```
## Parameters
@ -45,7 +45,7 @@ _**Data type:** Duration or Timestamp_
{{% note %}}
Flux only honors [RFC3339 timestamps](/v2.0/reference/flux/language/types#timestamp-format)
and ignores dates and times provided in other formats.
and ignores dates and times provided in other formats.
{{% /note %}}
## Examples

View File

@ -1,6 +1,6 @@
---
title: highestAverage() function
description: The `highestAverage()` function returns the top `n` records from all groups using the average of each group.
description: The `highestAverage()` function calculates the average of each table in the input stream and returns the top `n` records.
aliases:
- /v2.0/reference/flux/functions/transformations/selectors/highestaverage
menu:
@ -10,14 +10,15 @@ menu:
weight: 501
---
The `highestAverage()` function returns the top `n` records from all groups using the average of each group.
The `highestAverage()` function calculates the average of each table in the input stream and returns the top `n` records.
It outputs a single aggregated table containing `n` records.
_**Function type:** Selector, Aggregate_
```js
highestAverage(
n:10,
columns: ["_value"],
column: "_value",
groupColumns: []
)
```
@ -29,12 +30,11 @@ Number of records to return.
_**Data type:** Integer_
### columns
List of columns by which to sort.
Sort precedence is determined by list order (left to right).
Default is `["_value"]`.
### column
Column by which to sort.
Default is `"_value"`.
_**Data type:** Array of strings_
_**Data type:** String_
### groupColumns
The columns on which to group before performing the aggregation.
@ -63,22 +63,22 @@ _sortLimit = (n, desc, columns=["_value"], tables=<-) =>
// _highestOrLowest is a helper function which reduces all groups into a single
// group by specific tags and a reducer function. It then selects the highest or
// lowest records based on the columns and the _sortLimit function.
// lowest records based on the column and the _sortLimit function.
// The default reducer assumes no reducing needs to be performed.
_highestOrLowest = (n, _sortLimit, reducer, columns=["_value"], groupColumns=[], tables=<-) =>
_highestOrLowest = (n, _sortLimit, reducer, column="_value", groupColumns=[], tables=<-) =>
tables
|> group(columns:groupColumns)
|> reducer()
|> group(columns:[])
|> _sortLimit(n:n, columns:columns)
|> _sortLimit(n:n, columns:[column])
highestAverage = (n, columns=["_value"], groupColumns=[], tables=<-) =>
highestAverage = (n, column="_value", groupColumns=[], tables=<-) =>
tables
|> _highestOrLowest(
n:n,
columns:columns,
groupColumns:groupColumns,
reducer: (tables=<-) => tables |> mean(columns:[columns[0]]),
reducer: (tables=<-) => tables |> mean(column:column),
_sortLimit: top,
)
```

View File

@ -1,6 +1,6 @@
---
title: highestCurrent() function
description: The `highestCurrent()` function returns the top `n` records from all groups using the last value of each group.
description: The `highestCurrent()` function selects the last record of each table in the input stream and returns the top `n` records.
aliases:
- /v2.0/reference/flux/functions/transformations/selectors/highestcurrent
menu:
@ -10,14 +10,15 @@ menu:
weight: 501
---
The `highestCurrent()` function returns the top `n` records from all groups using the last value of each group.
The `highestCurrent()` function selects the last record of each table in the input stream and returns the top `n` records.
It outputs a single aggregated table containing `n` records.
_**Function type:** Selector, Aggregate_
```js
highestCurrent(
n:10,
columns: ["_value"],
column: "_value",
groupColumns: []
)
```
@ -29,12 +30,11 @@ Number of records to return.
_**Data type:** Integer_
### columns
List of columns by which to sort.
Sort precedence is determined by list order (left to right).
Default is `["_value"]`.
### column
Column by which to sort.
Default is `"_value"`.
_**Data type:** Array of strings_
_**Data type:** String_
### groupColumns
The columns on which to group before performing the aggregation.
@ -63,22 +63,22 @@ _sortLimit = (n, desc, columns=["_value"], tables=<-) =>
// _highestOrLowest is a helper function which reduces all groups into a single
// group by specific tags and a reducer function. It then selects the highest or
// lowest records based on the columns and the _sortLimit function.
// lowest records based on the column and the _sortLimit function.
// The default reducer assumes no reducing needs to be performed.
_highestOrLowest = (n, _sortLimit, reducer, columns=["_value"], groupColumns=[], tables=<-) =>
_highestOrLowest = (n, _sortLimit, reducer, column="_value", groupColumns=[], tables=<-) =>
tables
|> group(columns:groupColumns)
|> reducer()
|> group(columns:[])
|> _sortLimit(n:n, columns:columns)
|> _sortLimit(n:n, columns:[column])
highestCurrent = (n, columns=["_value"], groupColumns=[], tables=<-) =>
highestCurrent = (n, column="_value", groupColumns=[], tables=<-) =>
tables
|> _highestOrLowest(
n:n,
columns:columns,
column:column,
groupColumns:groupColumns,
reducer: (tables=<-) => tables |> last(column:columns[0]),
reducer: (tables=<-) => tables |> last(column:column),
_sortLimit: top,
)
```

View File

@ -1,6 +1,6 @@
---
title: highestMax() function
description: The `highestMax()` function returns the top `n` records from all groups using the maximum of each group.
description: The `highestMax()` function selects the maximum record from each table in the input stream and returns the top `n` records.
aliases:
- /v2.0/reference/flux/functions/transformations/selectors/highestmax
menu:
@ -10,14 +10,15 @@ menu:
weight: 501
---
The `highestMax()` function returns the top `n` records from all groups using the maximum of each group.
The `highestMax()` function selects the maximum record from each table in the input stream and returns the top `n` records.
It outputs a single aggregated table containing `n` records.
_**Function type:** Selector, Aggregate_
```js
highestMax(
n:10,
columns: ["_value"],
column: "_value",
groupColumns: []
)
```
@ -29,12 +30,11 @@ Number of records to return.
_**Data type:** Integer_
### columns
List of columns by which to sort.
Sort precedence is determined by list order (left to right).
Default is `["_value"]`.
### column
Column by which to sort.
Default is `"_value"`.
_**Data type:** Array of strings_
_**Data type:** String_
### groupColumns
The columns on which to group before performing the aggregation.
@ -63,22 +63,22 @@ _sortLimit = (n, desc, columns=["_value"], tables=<-) =>
// _highestOrLowest is a helper function which reduces all groups into a single
// group by specific tags and a reducer function. It then selects the highest or
// lowest records based on the columns and the _sortLimit function.
// lowest records based on the column and the _sortLimit function.
// The default reducer assumes no reducing needs to be performed.
_highestOrLowest = (n, _sortLimit, reducer, columns=["_value"], groupColumns=[], tables=<-) =>
_highestOrLowest = (n, _sortLimit, reducer, column="_value", groupColumns=[], tables=<-) =>
tables
|> group(columns:groupColumns)
|> reducer()
|> group(columns:[])
|> _sortLimit(n:n, columns:columns)
|> _sortLimit(n:n, columns:[column])
highestMax = (n, columns=["_value"], groupColumns=[], tables=<-) =>
highestMax = (n, column="_value", groupColumns=[], tables=<-) =>
tables
|> _highestOrLowest(
n:n,
columns:columns,
column:column,
groupColumns:groupColumns,
reducer: (tables=<-) => tables |> max(column:columns[0]),
reducer: (tables=<-) => tables |> max(column:column),
_sortLimit: top
)
```

View File

@ -1,6 +1,6 @@
---
title: lowestAverage() function
description: The `lowestAverage()` function returns the bottom `n` records from all groups using the average of each group.
description: The `lowestAverage()` function calculates the average of each table in the input stream and returns the lowest `n` records.
aliases:
- /v2.0/reference/flux/functions/transformations/selectors/lowestaverage
menu:
@ -10,14 +10,15 @@ menu:
weight: 501
---
The `lowestAverage()` function returns the bottom `n` records from all groups using the average of each group.
The `lowestAverage()` function calculates the average of each table in the input stream and returns the lowest `n` records.
It outputs a single aggregated table containing `n` records.
_**Function type:** Selector, Aggregate_
```js
lowestAverage(
n:10,
columns: ["_value"],
column: "_value",
groupColumns: []
)
```
@ -29,12 +30,11 @@ Number of records to return.
_**Data type:** Integer_
### columns
List of columns by which to sort.
Sort precedence is determined by list order (left to right).
Default is `["_value"]`.
### column
Column by which to sort.
Default is `"_value"`.
_**Data type:** Array of strings_
_**Data type:** String_
### groupColumns
The columns on which to group before performing the aggregation.
@ -63,22 +63,22 @@ _sortLimit = (n, desc, columns=["_value"], tables=<-) =>
// _highestOrLowest is a helper function which reduces all groups into a single
// group by specific tags and a reducer function. It then selects the highest or
// lowest records based on the columns and the _sortLimit function.
// lowest records based on the column and the _sortLimit function.
// The default reducer assumes no reducing needs to be performed.
_highestOrLowest = (n, _sortLimit, reducer, columns=["_value"], groupColumns=[], tables=<-) =>
_highestOrLowest = (n, _sortLimit, reducer, column="_value", groupColumns=[], tables=<-) =>
tables
|> group(columns:groupColumns)
|> reducer()
|> group(columns:[])
|> _sortLimit(n:n, columns:columns)
|> _sortLimit(n:n, columns:[column])
lowestAverage = (n, columns=["_value"], groupColumns=[], tables=<-) =>
lowestAverage = (n, column="_value", groupColumns=[], tables=<-) =>
tables
|> _highestOrLowest(
n:n,
columns:columns,
column:column,
groupColumns:groupColumns,
reducer: (tables=<-) => tables |> mean(columns:[columns[0]]),
reducer: (tables=<-) => tables |> mean(column:column),
_sortLimit: bottom,
)
```

View File

@ -1,6 +1,6 @@
---
title: lowestCurrent() function
description: The `lowestCurrent()` function returns the bottom `n` records from all groups using the last value of each group.
description: The `lowestCurrent()` function selects the last record of each table in the input stream and returns the lowest `n` records.
aliases:
- /v2.0/reference/flux/functions/transformations/selectors/lowestcurrent
menu:
@ -10,14 +10,15 @@ menu:
weight: 501
---
The `lowestCurrent()` function returns the bottom `n` records from all groups using the last value of each group.
The `lowestCurrent()` function selects the last record of each table in the input stream and returns the lowest `n` records.
It outputs a single aggregated table containing `n` records.
_**Function type:** Selector, Aggregate_
```js
lowestCurrent(
n:10,
columns: ["_value"],
column: "_value",
groupColumns: []
)
```
@ -29,12 +30,11 @@ Number of records to return.
_**Data type:** Integer_
### columns
List of columns by which to sort.
Sort precedence is determined by list order (left to right).
Default is `["_value"]`.
### column
Column by which to sort.
Default is `"_value"`.
_**Data type:** Array of strings_
_**Data type:** String_
### groupColumns
The columns on which to group before performing the aggregation.
@ -63,22 +63,22 @@ _sortLimit = (n, desc, columns=["_value"], tables=<-) =>
// _highestOrLowest is a helper function which reduces all groups into a single
// group by specific tags and a reducer function. It then selects the highest or
// lowest records based on the columns and the _sortLimit function.
// lowest records based on the column and the _sortLimit function.
// The default reducer assumes no reducing needs to be performed.
_highestOrLowest = (n, _sortLimit, reducer, columns=["_value"], groupColumns=[], tables=<-) =>
_highestOrLowest = (n, _sortLimit, reducer, column="_value", groupColumns=[], tables=<-) =>
tables
|> group(columns:groupColumns)
|> reducer()
|> group(columns:[])
|> _sortLimit(n:n, columns:columns)
|> _sortLimit(n:n, columns:[column])
lowestCurrent = (n, columns=["_value"], groupColumns=[], tables=<-) =>
lowestCurrent = (n, column="_value", groupColumns=[], tables=<-) =>
tables
|> _highestOrLowest(
n:n,
columns:columns,
column:column,
groupColumns:groupColumns,
reducer: (tables=<-) => tables |> last(column:columns[0]),
reducer: (tables=<-) => tables |> last(column:column),
_sortLimit: bottom,
)
```
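A minimal usage sketch, assuming a `telegraf/autogen` bucket with a `cpu` tag (the bucket, measurement, and field names are illustrative):
```js
from(bucket: "telegraf/autogen")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_idle")
  |> lowestCurrent(n: 3, groupColumns: ["cpu"])
```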

View File

@ -1,6 +1,6 @@
---
title: lowestMin() function
description: The `lowestMin()` function returns the bottom `n` records from all groups using the minimum of each group.
description: The `lowestMin()` function selects the minimum record from each table in the input stream and returns the lowest `n` records.
aliases:
- /v2.0/reference/flux/functions/transformations/selectors/lowestmin
menu:
@ -10,14 +10,15 @@ menu:
weight: 501
---
The `lowestMin()` function returns the bottom `n` records from all groups using the minimum of each group.
The `lowestMin()` function selects the minimum record from each table in the input stream and returns the lowest `n` records.
It outputs a single aggregated table containing `n` records.
_**Function type:** Selector, Aggregate_
```js
lowestMin(
n:10,
columns: ["_value"],
column: "_value",
groupColumns: []
)
```
@ -29,12 +30,11 @@ Number of records to return.
_**Data type:** Integer_
### columns
List of columns by which to sort.
Sort precedence is determined by list order (left to right).
Default is `["_value"]`.
### column
Column by which to sort.
Default is `"_value"`.
_**Data type:** Array of strings_
_**Data type:** String_
### groupColumns
The columns on which to group before performing the aggregation.
@ -63,23 +63,22 @@ _sortLimit = (n, desc, columns=["_value"], tables=<-) =>
// _highestOrLowest is a helper function which reduces all groups into a single
// group by specific tags and a reducer function. It then selects the highest or
// lowest records based on the columns and the _sortLimit function.
// lowest records based on the column and the _sortLimit function.
// The default reducer assumes no reducing needs to be performed.
_highestOrLowest = (n, _sortLimit, reducer, columns=["_value"], groupColumns=[], tables=<-) =>
_highestOrLowest = (n, _sortLimit, reducer, column="_value", groupColumns=[], tables=<-) =>
tables
|> group(columns:groupColumns)
|> reducer()
|> group(columns:[])
|> _sortLimit(n:n, columns:columns)
|> _sortLimit(n:n, columns:[column])
lowestMin = (n, columns=["_value"], groupColumns=[], tables=<-) =>
lowestMin = (n, column="_value", groupColumns=[], tables=<-) =>
tables
|> _highestOrLowest(
n:n,
columns:columns,
column:column,
groupColumns:groupColumns,
// TODO(nathanielc): Once max/min support selecting based on multiple columns change this to pass all columns.
reducer: (tables=<-) => tables |> min(column:columns[0]),
reducer: (tables=<-) => tables |> min(column:column),
_sortLimit: bottom,
)
```
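A minimal usage sketch, assuming a `telegraf/autogen` bucket with a `host` tag (the bucket, measurement, and field names are illustrative):
```js
from(bucket: "telegraf/autogen")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "mem" and r._field == "available_percent")
  |> lowestMin(n: 3, groupColumns: ["host"])
```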

View File

@ -16,9 +16,17 @@ _**Function type:** Selector_
_**Output data type:** Object_
```js
max()
max(column: "_value")
```
## Parameters
### column
The column to use to calculate the maximum value.
Default is `"_value"`.
_**Data type:** String_
## Examples
```js
from(bucket:"telegraf/autogen")

View File

@ -16,9 +16,17 @@ _**Function type:** Selector_
_**Output data type:** Object_
```js
min()
min(column: "_value")
```
## Parameters
### column
The column to use to calculate the minimum value.
Default is `"_value"`.
_**Data type:** String_
## Examples
```js
from(bucket:"telegraf/autogen")

View File

@ -61,6 +61,7 @@ tagKeys = (bucket, predicate=(r) => true, start=-30d) =>
|> filter(fn: predicate)
|> keys()
|> keep(columns: ["_value"])
|> distinct()
```
_**Used functions:**
@ -68,4 +69,5 @@ _**Used functions:**
[range](/v2.0/reference/flux/functions/built-in/transformations/range/),
[filter](/v2.0/reference/flux/functions/built-in/transformations/filter/),
[keys](/v2.0/reference/flux/functions/built-in/transformations/keys/),
[keep](/v2.0/reference/flux/functions/built-in/transformations/keep/)_
[keep](/v2.0/reference/flux/functions/built-in/transformations/keep/),
[distinct](/v2.0/reference/flux/functions/built-in/transformations/selectors/distinct/)_
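A hedged usage sketch of the `tagKeys` function defined above (the bucket name and measurement predicate are illustrative):
```js
tagKeys(
  bucket: "telegraf/autogen",
  predicate: (r) => r._measurement == "mem",
  start: -1d
)
```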

View File

@ -18,20 +18,20 @@ _**Output data format:** Object_
```js
import "math"
math.modf(x: 3.14)
math.modf(f: 3.14)
// Returns {int: 3, frac: 0.14000000000000012}
```
## Parameters
### x
### f
The value used in the operation.
_**Data type:** Float_
## Special cases
```js
math.modf(x: ±Inf) // Returns {int: ±Inf, frac: NaN}
math.modf(x: NaN) // Returns {int: NaN, frac: NaN}
math.modf(f: ±Inf) // Returns {int: ±Inf, frac: NaN}
math.modf(f: NaN) // Returns {int: NaN, frac: NaN}
```

View File

@ -27,7 +27,7 @@ The numerator used in the operation.
_**Data type:** Float_
### x
### y
The denominator used in the operation.
_**Data type:** Float_

View File

@ -23,6 +23,6 @@ math.signbit(x: -1.2)
## Parameters
### x
The value used in the operation.
The value used in the evaluation.
_**Data type:** Float_

View File

@ -0,0 +1,46 @@
---
title: strings.trimPrefix() function
description: >
The `strings.trimPrefix()` function removes a prefix from a string.
Strings that do not start with the prefix are returned unchanged.
menu:
v2_0_ref:
name: strings.trimPrefix
parent: Strings
weight: 301
---
The `strings.trimPrefix()` function removes a prefix from a string.
Strings that do not start with the prefix are returned unchanged.
_**Output data type:** String_
```js
import "strings"
strings.trimPrefix(v: "123_abc", prefix: "123")
// returns "_abc"
```
## Parameters
### v
The string value to trim.
_**Data type:** String_
### prefix
The prefix to remove.
_**Data type:** String_
## Examples
###### Remove a prefix from all values in a column
```js
import "strings"
data
  |> map(fn: (r) => ({ r with sensorId: strings.trimPrefix(v: r.sensorId, prefix: "s12_") }))
```

View File

@ -0,0 +1,46 @@
---
title: strings.trimSuffix() function
description: >
The `strings.trimSuffix()` function removes a suffix from a string.
Strings that do not end with the suffix are returned unchanged.
menu:
v2_0_ref:
name: strings.trimSuffix
parent: Strings
weight: 301
---
The `strings.trimSuffix()` function removes a suffix from a string.
Strings that do not end with the suffix are returned unchanged.
_**Output data type:** String_
```js
import "strings"
strings.trimSuffix(v: "123_abc", suffix: "abc")
// returns "123_"
```
## Parameters
### v
The string value to trim.
_**Data type:** String_
### suffix
The suffix to remove.
_**Data type:** String_
## Examples
###### Remove a suffix from all values in a column
```js
import "strings"
data
  |> map(fn: (r) => ({ r with sensorId: strings.trimSuffix(v: r.sensorId, suffix: "_s12") }))
```

View File

@ -0,0 +1,22 @@
---
title: Flux system package
list_title: System package
description: >
The Flux system package provides functions for reading values from the system.
Import the `system` package.
menu:
v2_0_ref:
name: System
parent: Flux packages and functions
weight: 204
v2.0/tags: [system, functions, package]
---
The Flux system package provides functions for reading values from the system.
Import the `system` package:
```js
import "system"
```
{{< children type="functions" show="pages" >}}

View File

@ -0,0 +1,31 @@
---
title: system.time() function
description: The `system.time()` function returns the current system time.
aliases:
- /v2.0/reference/flux/functions/misc/systemtime
- /v2.0/reference/flux/functions/built-in/misc/systemtime
menu:
v2_0_ref:
name: system.time
parent: System
weight: 401
---
The `system.time()` function returns the current system time.
_**Function type:** Date/Time_
_**Output data type:** Timestamp_
```js
import "system"
system.time()
```
## Examples
```js
import "system"
data
|> set(key: "processed_at", value: string(v: system.time() ))
```

View File

@ -152,30 +152,48 @@ DotExpression = "." identifer
MemberBracketExpression = "[" string_lit "]" .
```
### Operators
## Conditional expressions
Conditional expressions evaluate a boolean-valued condition.
If the result is _true_, the expression that follows the `then` keyword is evaluated and returned.
If the result is _false_, the expression that follows the `else` keyword is evaluated and returned.
In either case, only the branch taken is evaluated and only side effects associated with this branch will occur.
```js
ConditionalExpression = "if" Expression "then" Expression "else" Expression .
```
##### Conditional expression example
```js
color = if code == 0 then "green" else if code == 1 then "yellow" else "red"
```
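As a further illustrative sketch (not part of the specification), conditional expressions are often used inside `map()` to derive a value per record; the column names below are hypothetical:
```js
data
  |> map(fn: (r) => ({
      _time: r._time,
      _value: r._value,
      level: if r._value > 95.0 then "warn" else "ok"
    }))
```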
## Operators
Operators combine operands into expressions.
The precedence of the operators is given in the table below.
Operators with a lower number have higher precedence.
|Precedence| Operator | Description |
|----------|----------|---------------------------|
| 1 | `a()` | Function call |
| | `a[]` | Member or index access |
| | `.` | Member access |
| 2 | `*` `/` |Multiplication and division|
| 3 | `+` `-` | Addition and subtraction |
| 4 |`==` `!=` | Comparison operators |
| | `<` `<=` | |
| | `>` `>=` | |
| |`=~` `!~` | |
| 5 | `not` | Unary logical expression |
| 6 | `and` | Logical AND |
| 7 | `or` | Logical OR |
| Precedence | Operator | Description |
|:----------:|:--------: |:--------------------------|
| 1 | `a()` | Function call |
| | `a[]` | Member or index access |
| | `.` | Member access |
| 2 | `*` `/` |Multiplication and division|
| 3 | `+` `-` | Addition and subtraction |
| 4 |`==` `!=` | Comparison operators |
| | `<` `<=` | |
| | `>` `>=` | |
| |`=~` `!~` | |
| 5 | `not` | Unary logical expression |
| 6 | `and` | Logical AND |
| 7 | `or` | Logical OR |
| 8 | `if` `then` `else` | Conditional |
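For illustration (a sketch, with `a`–`e` assumed to be values defined elsewhere), these precedence rules mean the following two expressions are equivalent:
```js
// Comparisons bind tighter than `not`, which binds tighter than `and`, then `or`.
x = not a == b and c < d or e
y = ((not (a == b)) and (c < d)) or e
```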
The operator precedence is encoded directly into the grammar as the following.
```js
Expression = LogicalExpression .
Expression = ConditionalExpression .
ConditionalExpression = LogicalExpression
| "if" Expression "then" Expression "else" Expression .
LogicalExpression = UnaryLogicalExpression
| LogicalExpression LogicalOperator UnaryLogicalExpression .
LogicalOperator = "and" | "or" .

View File

@ -9,11 +9,88 @@ menu:
---
{{% note %}}
_The latest release of InfluxDB v2.0 alpha includes **Flux v0.25.0**.
_The latest release of InfluxDB v2.0 alpha includes **Flux v0.28.3**.
Though newer versions of Flux may be available, they will not be included with
InfluxDB until the next InfluxDB v2.0 release._
{{% /note %}}
## v0.28.3 [2019-05-01]
### Bug fixes
- Fix request results labels to count runtime errors.
- An error when joining could result in two calls to finish.
---
## v0.28.2 [2019-04-26]
### Bug fixes
- Preallocate data when constructing a new string array.
---
## v0.28.1 [2019-04-25]
### Bug fixes
- Make executor respect memory limit from caller.
---
## v0.28.0 [2019-04-24]
### Features
- Allow choosing sample/population mode in `stddev()`.
### Bug fixes
- Fix `reduce()` so it resets the reduce value to the neutral element value for each new group key
and reports an error when two reducers write to the same destination group key.
---
## v0.27.0 [2019-04-22]
### Features
- Add `trimSuffix` and `trimPrefix` functions to the strings package.
- Add support for conditional expressions to compiler.
- Add conditional expression handling to interpreter.
### Bug fixes
- Enforce memory and concurrency limits in controller.
- Format conditional expression.
- `tagKeys` should include a call to `distinct`.
---
## v0.26.0 [2019-04-18]
### Breaking changes
- Aggregates now accept only a `column` parameter; the `columns` parameter is no longer used.
### Features
- Add handling for conditional expressions to type inference.
- Add `if`/`then`/`else` syntax to Flux parser.
- Add a `WalkIR` function that external programs can use to traverse an opSpec structure.
- Add planner options to compile options.
- Add example on how to use Flux as a library.
- `duplicate()` now overwrites a column if the `as` label already exists.
### Bug fixes
- Format right child with good parentheses.
- Make staticcheck pass.
- Rename `json` tag so go vet passes.
- The controller pump could reference a nil pointer.
- Create a DependenciesAwareProgram so controller can assign dependencies.
- Make `Program.Start` start execution synchronously.
- Read the metadata channel in a separate goroutine.
- Remove dead code in controller so `staticcheck` passes.
- Allow Flux unit tests to pass.
- Require a GitHub token to perform a release.
- Change example name to make go vet pass.
- Make `csv.from` return decode error.
---
## v0.25.0 [2019-04-08]
### Breaking changes

View File

@ -7,6 +7,41 @@ menu:
weight: 1
---
## v2.0.0-alpha.9 [2019-05-01]
{{% warn %}}
**This will remove all tasks from your InfluxDB v2.0 instance.**
Before upgrading, [export all existing tasks](/v2.0/process-data/manage-tasks/export-task/). After upgrading, [reimport your exported tasks](/v2.0/process-data/manage-tasks/create-task/#import-a-task).
{{% /warn %}}
### Features
- Set autorefresh of dashboard to pause if absolute time range is selected.
- Switch task back end to a more modular and flexible system.
- Add org profile tab with ability to edit organization name.
- Add org name to dashboard page title.
- Add cautioning to bucket renaming.
- Add option to generate an all-access token in the tokens tab.
- Add option to generate a read/write token in the tokens tab.
- Add new Local Metrics Dashboard template that is created during Quick Start.
### Bug Fixes
- Fixed scroll clipping found in label editing flow.
- Prevent overlapping text and dot in time range dropdown.
- Updated link in notes cell to a more useful site.
- Show error message when adding line protocol.
- Update UI Flux function documentation.
- Update System template to support math with floats.
- Fix the `window` function documentation.
- Fix typo in the `range` Flux function example.
- Update the `systemTime` function to use `system.time`.
### UI Improvements
- Add general polish and empty states to Create Dashboard from Template overlay.
---
## v2.0.0-alpha.8 [2019-04-12]
### Features

View File

@ -9,7 +9,6 @@ menu:
name: Create a token
parent: Manage tokens
weight: 201
draft: true
---
Create authentication tokens using the InfluxDB user interface (UI) or the `influx`
@ -22,7 +21,12 @@ command line interface (CLI).
{{< nav-icon "settings" >}}
2. Click **Tokens**.
3. _Full instructions coming soon._
3. Click the **+ Generate** dropdown in the upper right and select a token type (**Read/Write Token** or **All Access Token**).
4. In the window that appears, enter a description for your token in the **Description** field.
5. If you're generating a read/write token:
- Search for and select buckets to read from in the **Read** pane.
- Search for and select buckets to write to in the **Write** pane.
6. Click **Save**.
## Create a token using the influx CLI

View File

@ -10,11 +10,17 @@ weight: 201
---
Create dashboard variables in the Data Explorer or from the Organization page, or import a variable.
**Variable names must be unique.**
_For information about variable types, see [Variable types](/v2.0/visualize-data/variables/variable-types/)._
### Create a variable in the Data Explorer
{{% note %}}
Only [Query variables](/v2.0/visualize-data/variables/variable-types/#query)
can be created from the Data Explorer.
{{% /note %}}
1. Click the **Data Explorer** icon in the sidebar.
{{< nav-icon "data-explorer" >}}
@ -35,8 +41,9 @@ _For information about variable types, see [Variable types](/v2.0/visualize-data
2. Select the **Variables** tab.
3. Click **+Create Variable**.
4. Enter a name for your variable.
5. Enter your variable.
6. Click **Create**.
5. Select your [variable type](/v2.0/visualize-data/variables/variable-types/).
6. Enter the appropriate variable information.
7. Click **Create**.
## Import a variable

View File

@ -8,6 +8,7 @@ menu:
weight: 205
"v2.0/tags": [variables]
---
Delete an existing variable in the InfluxDB user interface (UI).
### Delete a variable
@ -17,4 +18,9 @@ Delete an existing variable in the InfluxDB user interface (UI).
{{< nav-icon "settings" >}}
2. Select the **Variables** tab.
3. Hover over a variable and click the trash can icon.
3. Hover over a variable, click the **{{< icon "trash" >}}** icon, and **Delete**.
{{% warn %}}
Once deleted, any dashboards with queries that use the variable will no
longer function correctly.
{{% /warn %}}

View File

@ -18,7 +18,8 @@ Variables are exported as downloadable JSON files.
{{< nav-icon "settings" >}}
2. Select the **Variables** tab.
3. Hover over a variable in the list, then click the gear icon ({{< icon "gear" >}}) and select **Export**.
3. Hover over a variable in the list, then click the gear icon (**{{< icon "gear" >}}**)
and select **Export**.
4. Review the JSON in the window that appears.
5. Select one of the following options:
* **Download JSON**: Download the dashboard as a JSON file.

View File

@ -19,5 +19,5 @@ Update an existing dashboard variable's name or JSON content in the InfluxDB use
2. Select the **Variables** tab.
3. Click on a variable's name from the list.
4. Update the variable's name and query.
4. Update the variable's name, type, and associated information.
5. Click **Submit**.

View File

@ -9,26 +9,43 @@ weight: 207
"v2.0/tags": [variables]
---
{{% note %}}
In the current version of InfluxDB v2.0 alpha, only [query-populated variables](#query) are available.
{{% /note %}}
Variable types determine how a variable's list of possible values is populated.
The following variable types are available:
- [Map](#map)
- [Query](#query)
- [CSV](#csv)
## Map
Map variables use a list of key value pairs in CSV format to map keys to specific values.
Keys populate the variable's value list in the InfluxDB user interface (UI), but
values are used when actually processing the query.
The most common use case for map variables is aliasing simple, human-readable keys
to complex values.
##### Map variable CSV example
```
Juanito MacNeil,"5TKl6l8i4idg15Fxxe4P"
Astrophel Chaudhary,"bDhZbuVj5RV94NcFXZPm"
Ochieng Benes,"YIhg6SoMKRUH8FMlHs3V"
Mila Emile,"o61AhpOGr5aO3cYVArC0"
```
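A hedged sketch of how the mapped value might then be referenced in a cell query, assuming the variable above is named `userKeys` and the data has a `userID` column (both names are illustrative):
```js
from(bucket: "telegraf/autogen")
  |> range(start: v.timeRangeStart)
  |> filter(fn: (r) => r.userID == v.userKeys)
```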
## Query
Variable values are populated using the `_value` column of a Flux query.
Query variable values are populated using the `_value` column of a Flux query.
##### Variable query example
##### Query variable example
```js
// List all buckets
buckets()
|> rename(columns: {"name": "_value"})
|> rename(columns: {"name": "_value"})
|> keep(columns: ["_value"])
```
_For examples of dashboard variable queries, see [Common variable queries](/v2.0/visualize-data/variables/common-variables)._
{{% note %}}
#### Important things to note about variable queries
- The variable will only use values from the `_value` column.
If the data you're looking for is in a column other than `_value`, use the
@ -39,3 +56,20 @@ _For examples of dashboard variable queries, see [Common variable queries](/v2.0
Use the [`group()` function](/v2.0/reference/flux/functions/built-in/transformations/group)
to group everything into a single table.
- Do not use any [predefined dashboard variables](/v2.0/visualize-data/variables/#predefined-dashboard-variables) in variable queries.
{{% /note %}}
## CSV
CSV variables use a CSV-formatted list to populate variable values.
A common use case is when the list of potential values is static and cannot be
queried from InfluxDB.
##### CSV variable examples
```
value1, value2, value3, value4
```
```
value1
value2
value3
value4
```

View File

@ -45,7 +45,7 @@
{{ else if eq $icon "search" }}
<span class="inline icon-ui-search middle small"></span>
{{ else if or (eq $icon "trash") (eq $icon "trashcan") (eq $icon "delete") }}
<span class="inline icon-ui-trash small"></span>
<span class="inline icon-ui-trash top small"></span>
{{ else if eq $icon "triangle" }}
<span class="inline icon-ui-triangle middle"></span>
{{ else if eq $icon "cloud" }}
@ -56,4 +56,6 @@
<span class="inline icon-ui-chat large"></span>
{{ else if eq $icon "add-label" }}
<span class="inline add-btn-round">&#59696;</span>
{{ else if eq $icon "toggle" }}
<span class="inline ui-toggle"><span class="circle"></span></span>
{{ end }}

Binary file not shown.

Before

Width:  |  Height:  |  Size: 62 KiB

After

Width:  |  Height:  |  Size: 61 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 60 KiB

After

Width:  |  Height:  |  Size: 59 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 37 KiB

After

Width:  |  Height:  |  Size: 35 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 34 KiB

After

Width:  |  Height:  |  Size: 30 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 34 KiB