Merge pull request #1393 from influxdata/flux-0.83

Flux 0.83
pull/1405/head
Scott Anderson 2020-09-04 09:56:21 -06:00 committed by GitHub
commit f90d7a5aa8
12 changed files with 446 additions and 36 deletions


@ -5,7 +5,7 @@ list_title: Query SQL data
description: >
  The Flux `sql` package provides functions for working with SQL data sources.
  Use `sql.from()` to query SQL databases like PostgreSQL, MySQL, Snowflake,
  SQLite, Microsoft SQL Server, Amazon Athena, and Google BigQuery.
influxdb/v2.0/tags: [query, flux, sql]
menu:
  influxdb_2_0:
@ -33,8 +33,8 @@ The [Flux](/influxdb/v2.0/reference/flux) `sql` package provides functions for w
like [PostgreSQL](https://www.postgresql.org/), [MySQL](https://www.mysql.com/),
[Snowflake](https://www.snowflake.com/), [SQLite](https://www.sqlite.org/index.html),
[Microsoft SQL Server](https://www.microsoft.com/en-us/sql-server/default.aspx),
[Amazon Athena](https://aws.amazon.com/athena/), and [Google BigQuery](https://cloud.google.com/bigquery),
and use the results with InfluxDB dashboards, tasks, and other operations.

- [Query a SQL data source](#query-a-sql-data-source)
- [Join SQL data with data in InfluxDB](#join-sql-data-with-data-in-influxdb)
@ -61,6 +61,8 @@ To query a SQL data source:
[Snowflake](#)
[SQLite](#)
[SQL Server](#)
[Athena](#)
[BigQuery](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
@ -129,6 +131,33 @@ sql.from(
_For information about authenticating with SQL Server using ADO-style parameters,
see [SQL Server ADO authentication](/influxdb/v2.0/reference/flux/stdlib/sql/from/#sql-server-ado-authentication)._
{{% /code-tab-content %}}
{{% code-tab-content %}}
```js
import "sql"
sql.from(
  driverName: "awsathena",
  dataSourceName: "s3://myorgqueryresults/?accessID=12ab34cd56ef&region=region-name&secretAccessKey=y0urSup3rs3crEtT0k3n",
  query: "SELECT * FROM Example.Table"
)
```
_For information about parameters to include in the Athena DSN,
see [Athena connection string](/influxdb/v2.0/reference/flux/stdlib/sql/from/#athena-connection-string)._
{{% /code-tab-content %}}
{{% code-tab-content %}}
```js
import "sql"
sql.from(
  driverName: "bigquery",
  dataSourceName: "bigquery://projectid/?apiKey=mySuP3r5ecR3tAP1K3y",
  query: "SELECT * FROM exampleTable"
)
```
_For information about authenticating with BigQuery, see
[BigQuery authentication parameters](/influxdb/v2.0/reference/flux/stdlib/sql/from/#bigquery-authentication-parameters)._
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}

_See the [`sql.from()` documentation](/influxdb/v2.0/reference/flux/stdlib/sql/from/) for


@ -223,18 +223,19 @@ Operators with a lower number have higher precedence.
| 1          | `a()`              | Function call                        |
|            | `a[]`              | Member or index access               |
|            | `.`                | Member access                        |
| 2          | <code>\|></code>   | Pipe forward                         |
| 3          | `^`                | Exponentiation                       |
| 4          | `*` `/` `%`        | Multiplication, division, and modulo |
| 5          | `+` `-`            | Addition and subtraction             |
| 6          | `==` `!=`          | Comparison operators                 |
|            | `<` `<=`           |                                      |
|            | `>` `>=`           |                                      |
|            | `=~` `!~`          |                                      |
| 7          | `not`              | Unary logical operator               |
|            | `exists`           | Null check operator                  |
| 8          | `and`              | Logical AND                          |
| 9          | `or`               | Logical OR                           |
| 10         | `if` `then` `else` | Conditional                          |

The operator precedence is encoded directly into the grammar as the following.


@ -132,18 +132,20 @@ The table below outlines operator precedence.
Operators with a lower number have higher precedence.

| Precedence | Operator           | Description                          |
|:----------:|:------------------:|:-------------------------------------|
| 1          | `a()`              | Function call                        |
|            | `a[]`              | Member or index access               |
|            | `.`                | Member access                        |
| 2          | <code>\|></code>   | Pipe forward                         |
| 3          | `^`                | Exponentiation                       |
| 4          | `*` `/` `%`        | Multiplication, division, and modulo |
| 5          | `+` `-`            | Addition and subtraction             |
| 6          | `==` `!=`          | Comparison operators                 |
|            | `<` `<=`           |                                      |
|            | `>` `>=`           |                                      |
|            | `=~` `!~`          |                                      |
| 7          | `not`              | Unary logical operator               |
|            | `exists`           | Null check operator                  |
| 8          | `and`              | Logical AND                          |
| 9          | `or`               | Logical OR                           |
| 10         | `if` `then` `else` | Conditional                          |
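The table above describes how tightly each operator binds. As an illustration of how such tiers drive parsing, here is a minimal precedence-climbing evaluator, a sketch in Python rather than Flux, covering only the arithmetic tiers (`^` right-associative and tighter than `*` `/` `%`, which are tighter than `+` `-`):

```python
import operator
import re

# Precedence tiers for a few binary operators. Note the numbering here is
# inverted relative to the table above: higher number = binds tighter.
PREC = {"^": 3, "*": 2, "/": 2, "%": 2, "+": 1, "-": 1}
RIGHT_ASSOC = {"^"}  # exponentiation groups right-to-left
OPS = {"^": operator.pow, "*": operator.mul, "/": operator.truediv,
       "%": operator.mod, "+": operator.add, "-": operator.sub}

def tokenize(s):
    return re.findall(r"\d+|[-+*/%^()]", s)

def parse(tokens, min_prec=1):
    # Parse a primary expression: a number or a parenthesized subexpression.
    tok = tokens.pop(0)
    if tok == "(":
        lhs = parse(tokens, 1)
        tokens.pop(0)  # consume ")"
    else:
        lhs = float(tok)
    # Climb: keep consuming operators at or above the current precedence floor.
    while tokens and tokens[0] in PREC and PREC[tokens[0]] >= min_prec:
        op = tokens.pop(0)
        next_min = PREC[op] if op in RIGHT_ASSOC else PREC[op] + 1
        lhs = OPS[op](lhs, parse(tokens, next_min))
    return lhs

print(parse(tokenize("2+3*4^2")))  # 50.0: ^ before *, * before +
```

Flux's actual parser encodes the same ordering directly in its grammar rules; this sketch only demonstrates the binding behavior the table specifies.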


@ -20,23 +20,47 @@ _**Function type:** Aggregate_
_**Output data type:** Float_

```js
integral(
  unit: 10s,
  column: "_value",
  timeColumn: "_time",
  interpolate: ""
)
```

## Parameters

### unit
Time duration used when computing the integral.

_**Data type:** Duration_

### column
Column on which to operate.
Defaults to `"_value"`.

_**Data type:** String_
### timeColumn
Column that contains time values to use in the operation.
Defaults to `"_time"`.
_**Data type:** String_
### interpolate
Type of interpolation to use.
Defaults to `""`.
Use one of the following interpolation options:
- _empty string for no interpolation_
- linear
_**Data type:** String_
## Examples

##### Calculate the integral
```js
from(bucket: "example-bucket")
  |> range(start: -5m)
@ -46,3 +70,14 @@ from(bucket: "example-bucket")
  )
  |> integral(unit: 10s)
```
##### Calculate the integral with linear interpolation
```js
from(bucket: "example-bucket")
  |> range(start: -5m)
  |> filter(fn: (r) =>
    r._measurement == "cpu" and
    r._field == "usage_system"
  )
  |> integral(unit: 10s, interpolate: "linear")
```
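To make the arithmetic concrete: `integral()` is, roughly, a trapezoidal sum over `(time, value)` points normalized by `unit`. The following Python sketch (a hypothetical helper, not Flux's implementation, and ignoring the boundary handling that `interpolate` controls) shows the idea with times in seconds:

```python
def integral(points, unit):
    """Trapezoidal integral of (time, value) points, expressed per `unit` seconds."""
    total = 0.0
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        total += (v0 + v1) / 2 * (t1 - t0)  # area of one trapezoid
    return total / unit

# A constant value of 2.0 held for 60 s, in 10 s units: 2.0 * 60 / 10
pts = [(0, 2.0), (30, 2.0), (60, 2.0)]
print(integral(pts, unit=10))  # 12.0
```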


@ -0,0 +1,53 @@
---
title: timeWeightedAvg() function
description: The `timeWeightedAvg()` function outputs the time-weighted average of non-null records as a float.
menu:
influxdb_2_0_ref:
name: timeWeightedAvg
parent: built-in-aggregates
weight: 501
related:
- /influxdb/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/integral/
---
The `timeWeightedAvg()` function outputs the time-weighted average of non-null records
in a table as a float.
Time is weighted using the linearly interpolated integral of values in the table.
_**Function type:** Aggregate_
_**Output data type:** Float_
```js
timeWeightedAvg(unit: 1m)
```
## Parameters
### unit
Time duration used when computing the time-weighted average.
_**Data type:** Duration_
## Examples
```js
from(bucket: "example-bucket")
  |> range(start: -5m)
  |> filter(fn: (r) =>
    r._measurement == "cpu" and
    r._field == "usage_system"
  )
  |> timeWeightedAvg(unit: 1m)
```
## Function definition
```js
timeWeightedAvg = (tables=<-, unit) => tables
  |> integral(
    unit: unit,
    interpolate: "linear"
  )
  |> map(fn: (r) => ({
    r with
    _value: (r._value * float(v: uint(v: unit))) / float(v: int(v: r._stop) - int(v: r._start))
  }))
```
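The definition above computes a linearly interpolated integral and divides by the window length `_stop - _start`. The same arithmetic in a Python sketch (illustrative only; assumes the points span the whole window, with times in seconds):

```python
def time_weighted_avg(points, start, stop):
    """Time-weighted average of (time, value) points over [start, stop].

    Mirrors the Flux definition's arithmetic: a trapezoidal (linearly
    interpolated) integral divided by the window length.
    """
    total = 0.0
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        total += (v0 + v1) / 2 * (t1 - t0)
    return total / (stop - start)

# A value ramping linearly from 0 to 10 over 100 s averages to 5.0.
print(time_weighted_avg([(0, 0.0), (100, 10.0)], start=0, stop=100))  # 5.0
```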


@ -0,0 +1,44 @@
---
title: Flux InfluxDB Tasks package
list_title: InfluxDB Tasks package
description: >
The Flux InfluxDB Tasks package provides options and functions for working with
[InfluxDB tasks](/influxdb/v2.0/process-data/get-started/).
Import the `influxdata/influxdb/tasks` package.
aliases:
- /influxdb/v2.0/reference/flux/functions/influxdb-v1/
menu:
influxdb_2_0_ref:
name: InfluxDB Tasks
parent: Flux standard library
weight: 202
influxdb/v2.0/tags: [functions, tasks, package]
related:
- /influxdb/v2.0/process-data/get-started/
---
The Flux InfluxDB Tasks package provides options and functions for working with
[InfluxDB tasks](/influxdb/v2.0/process-data/get-started/).
Import the `influxdata/influxdb/tasks` package:
```js
import "influxdata/influxdb/tasks"
```
## Options
The InfluxDB Tasks package provides the following options:
#### lastSuccessTime
Define the time of the last successful task run.
_Only use this option to override the time of the last successful run provided by
the InfluxDB task engine._
```js
import "influxdata/influxdb/tasks"
option tasks.lastSuccessTime = 0000-01-01T00:00:00Z
```
## Functions
{{< children type="functions" show="pages" >}}


@ -0,0 +1,41 @@
---
title: tasks.lastSuccess() function
description: The `tasks.lastSuccess()` function returns ...
menu:
influxdb_2_0_ref:
name: tasks.lastSuccess
parent: InfluxDB Tasks
weight: 301
---
The `tasks.lastSuccess()` function returns the time of last successful run of the
InfluxDB task or the value of the `orTime` parameter if the task has never successfully run.
```js
import "influxdata/influxdb/tasks"
tasks.lastSuccess(orTime: 2020-01-01T00:00:00Z)
```
## Parameters
### orTime
The default time value returned if the task has never successfully run.
_**Data type:** Time_
## Examples
##### Query data since the last successful task run
```js
import "influxdata/influxdb/tasks"
option task = {
  name: "Example task",
  every: 30m
}

from(bucket: "example-bucket")
  |> range(start: tasks.lastSuccess(orTime: 2020-01-01T00:00:00Z))
// ...
```
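The behavior described above is a simple fallback: use the engine-supplied time of the last successful run if one exists, otherwise `orTime`. A Python sketch of that logic (hypothetical names, not part of any InfluxDB client library):

```python
from datetime import datetime, timezone

def last_success(or_time, last_success_time=None):
    """Return the last successful run time, or `or_time` if the task never ran.

    `last_success_time` stands in for the value the InfluxDB task engine
    would normally supply; both names here are illustrative.
    """
    return last_success_time if last_success_time is not None else or_time

# A task that has never run falls back to the supplied orTime.
fallback = datetime(2020, 1, 1, tzinfo=timezone.utc)
print(last_success(or_time=fallback))  # 2020-01-01 00:00:00+00:00
```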


@ -0,0 +1,66 @@
---
title: Flux Profiler package
list_title: Profiler package
description: >
The Flux Profiler package provides performance profiling tools for Flux queries and operations.
Import the `profiler` package.
menu:
influxdb_2_0_ref:
name: Profiler
parent: Flux standard library
weight: 202
influxdb/v2.0/tags: [functions, optimize, package]
related:
- /influxdb/v2.0/query-data/optimize-queries/
---
The Flux Profiler package provides performance profiling tools for Flux queries and operations.
Import the `profiler` package:
```js
import "profiler"
```
## Options
The Profiler package includes the following options:
### enabledProfilers
Enable Flux profilers.
_**Data type:** Array of strings_
```js
import "profiler"
option profiler.enabledProfilers = [""]
```
#### Available profilers
##### query
The `query` profiler provides statistics about the execution of an entire Flux script.
When enabled, results returned by [`yield()`](/influxdb/v2.0/reference/flux/stdlib/built-in/outputs/yield/)
include a table with the following columns:
- **TotalDuration**: total query duration in nanoseconds.
- **CompileDuration**: number of nanoseconds spent compiling the query.
- **QueueDuration**: number of nanoseconds spent queueing.
- **RequeueDuration**: number of nanoseconds spent requeueing.
- **PlanDuration**: number of nanoseconds spent planning the query.
- **ExecuteDuration**: number of nanoseconds spent executing the query.
- **Concurrency**: number of goroutines allocated to process the query.
- **MaxAllocated**: maximum number of bytes the query allocated.
- **TotalAllocated**: total number of bytes the query allocated (includes memory that was freed and then used again).
- **RuntimeErrors**: error messages returned during query execution.
- **flux/query-plan**: Flux query plan.
- **influxdb/scanned-values**: number of values scanned by InfluxDB.
- **influxdb/scanned-bytes**: number of bytes scanned by InfluxDB.
#### Use the query profiler to output statistics about query execution
```js
import "profiler"

option profiler.enabledProfilers = ["query"]
// ... Query to profile
```


@ -3,7 +3,8 @@ title: Flux SQL package
list_title: SQL package
description: >
  The Flux SQL package provides tools for working with data in SQL databases such
  as MySQL, PostgreSQL, Snowflake, SQLite, Microsoft SQL Server, Amazon Athena,
  and Google BigQuery.
  Import the `sql` package.
aliases:
  - /influxdb/v2.0/reference/flux/functions/sql/
@ -20,6 +21,7 @@ related:
SQL Flux functions provide tools for working with data in SQL databases such as:

- Amazon Athena
- Google BigQuery
- Microsoft SQL Server
- MySQL
- PostgreSQL


@ -36,6 +36,7 @@ _**Data type:** String_
The following drivers are available:

- awsathena
- bigquery
- mysql
- postgres
- snowflake
@ -73,6 +74,10 @@ sqlserver://username:password@localhost:1234?database=examplebdb
server=localhost;user id=username;database=examplebdb;
server=localhost;user id=username;database=examplebdb;azure auth=ENV
server=localhost;user id=username;database=examplebdbr;azure tenant id=77e7d537;azure client id=58879ce8;azure client secret=0123456789
# Google BigQuery DSNs
bigquery://projectid/?param1=value&param2=value
bigquery://projectid/location?param1=value&param2=value
```

### query
@ -88,6 +93,7 @@ _**Data type:** String_
- [SQLite](#query-an-sqlite-database)
- [Amazon Athena](#query-an-amazon-athena-database)
- [SQL Server](#query-a-sql-server-database)
- [Google BigQuery](#query-a-bigquery-database)

{{% note %}}
The examples below use [InfluxDB secrets](/influxdb/v2.0/security/secrets/) to populate
@ -250,3 +256,40 @@ _For information about managed identities, see [Microsoft managed identities](ht
```
azure auth=MSI
```
### Query a BigQuery database
```js
import "sql"
import "influxdata/influxdb/secrets"
projectID = secrets.get(key: "BIGQUERY_PROJECT_ID")
apiKey = secrets.get(key: "BIGQUERY_APIKEY")
sql.from(
  driverName: "bigquery",
  dataSourceName: "bigquery://${projectID}/?apiKey=${apiKey}",
  query: "SELECT * FROM exampleTable"
)
```
#### Common BigQuery URL parameters
- **dataset** - BigQuery dataset ID. When set, you can use unqualified table names in queries.
#### BigQuery authentication parameters
The Flux BigQuery implementation uses the Google Cloud Go SDK.
Provide your authentication credentials using one of the following methods:
- The `GOOGLE_APPLICATION_CREDENTIALS` environment variable that identifies the
location of your credential JSON file.
- Provide your BigQuery API key using the **apiKey** URL parameter in your BigQuery DSN.
###### Example apiKey URL parameter
```
bigquery://projectid/?apiKey=AIzaSyB6XK8IO5AzKZXoioQOVNTFYzbDBjY5hy4
```
- Provide your base-64 encoded service account, refresh token, or JSON credentials
using the **credentials** URL parameter in your BigQuery DSN.
###### Example credentials URL parameter
```
bigquery://projectid/?credentials=eyJ0eXBlIjoiYXV0...
```
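Because every credential style above travels as a URL parameter, assembling a DSN safely mostly means percent-encoding the values. A Python sketch follows; `bigquery_dsn` is a hypothetical helper, and only the parameter names (`dataset`, `apiKey`, `credentials`) come from the lists above:

```python
from urllib.parse import quote, urlencode

def bigquery_dsn(project_id, location=None, **params):
    """Build a DSN of the form bigquery://projectid/[location]?param=value.

    Keyword arguments become URL parameters (e.g. dataset, apiKey,
    credentials) with their values percent-encoded.
    """
    path = quote(location) if location else ""
    query = f"?{urlencode(params)}" if params else ""
    return f"bigquery://{quote(project_id)}/{path}{query}"

print(bigquery_dsn("my-project", dataset="examples", apiKey="s3cret"))
# bigquery://my-project/?dataset=examples&apiKey=s3cret
```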


@ -34,12 +34,18 @@ _**Data type:** String_
The following drivers are available:

- bigquery
- mysql
- postgres
- snowflake
- sqlite3 _Does not work with InfluxDB OSS or InfluxDB Cloud. More information [below](#write-data-to-an-sqlite-database)._
- sqlserver, mssql
{{% warn %}}
#### sql.to does not support Amazon Athena
The `sql.to` function does not support writing data to [Amazon Athena](https://aws.amazon.com/athena/).
{{% /warn %}}
### dataSourceName
The data source name (DSN) or connection string used to connect to the SQL database.
The string's form and structure depend on the [driver](#drivername) used.
@ -67,6 +73,10 @@ sqlserver://username:password@localhost:1234?database=examplebdb
server=localhost;user id=username;database=examplebdb;
server=localhost;user id=username;database=examplebdb;azure auth=ENV
server=localhost;user id=username;database=examplebdbr;azure tenant id=77e7d537;azure client id=58879ce8;azure client secret=0123456789
# Google BigQuery DSNs
bigquery://projectid/?param1=value&param2=value
bigquery://projectid/location?param1=value&param2=value
```

### table
@ -91,6 +101,7 @@ If writing to a **SQLite** database, set `batchSize` to `999` or less.
- [Snowflake](#write-data-to-a-snowflake-database)
- [SQLite](#write-data-to-an-sqlite-database)
- [SQL Server](#write-data-to-a-sql-server-database)
- [Google BigQuery](#write-to-a-bigquery-database)

{{% note %}}
The examples below use [InfluxDB secrets](/influxdb/v2.0/security/secrets/) to populate
@ -223,7 +234,39 @@ _For information about managed identities, see [Microsoft managed identities](ht
```
azure auth=MSI
```
### Write to a BigQuery database

```js
import "sql"
import "influxdata/influxdb/secrets"
projectID = secrets.get(key: "BIGQUERY_PROJECT_ID")
apiKey = secrets.get(key: "BIGQUERY_APIKEY")
sql.to(
  driverName: "bigquery",
  dataSourceName: "bigquery://${projectID}/?apiKey=${apiKey}",
  table: "exampleTable"
)
```
#### Common BigQuery URL parameters
- **dataset** - BigQuery dataset ID. When set, you can use unqualified table names in queries.
#### BigQuery authentication parameters
The Flux BigQuery implementation uses the Google Cloud Go SDK.
Provide your authentication credentials using one of the following methods:
- The `GOOGLE_APPLICATION_CREDENTIALS` environment variable that identifies the
location of your credential JSON file.
- Provide your BigQuery API key using the **apiKey** URL parameter in your BigQuery DSN.
###### Example apiKey URL parameter
```
bigquery://projectid/?apiKey=AIzaSyB6XK8IO5AzKZXoioQOVNTFYzbDBjY5hy4
```
- Provide your base-64 encoded service account, refresh token, or JSON credentials
using the **credentials** URL parameter in your BigQuery DSN.
###### Example credentials URL parameter
```
bigquery://projectid/?credentials=eyJ0eXBlIjoiYXV0...
```


@ -9,11 +9,62 @@ menu:
--- ---
{{% note %}} {{% note %}}
_The latest release of InfluxDB v2.0 beta includes **Flux v0.77.1**.
Though newer versions of Flux may be available, they will not be included with
InfluxDB until the next InfluxDB v2.0 release._
{{% /note %}} {{% /note %}}
## v0.83.1 [2020-09-02]
### Bug fixes
- Single value integral interpolation.
---
## v0.83.0 [2020-09-01]
### Features
- Improve window errors.
- Add [BigQuery](https://cloud.google.com/bigquery) support to
[`sql` package](/influxdb/v2.0/reference/flux/stdlib/sql/).
- Add `TypeExpression` to `BuiltinStmt` and fix tests.
- Add time-weighted average ([`timeWeightedAvg()` function](/influxdb/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/timeweightedavg/)).
- Update [`integral()`](/influxdb/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/integral/)
with linear interpolation.
- Make experimental tracing an attribute of the context.
### Bug fixes
- Update builtin statement for `integral()`.
- Add Rust JSON tests.
- CSV no longer deadlocks when next transformation does not consume table.
---
## v0.82.2 [2020-08-25]
### Features
- Add [`tasks.lastSuccess` function](/influxdb/v2.0/reference/flux/stdlib/influxdb-tasks/lastsuccess/)
to retrieve the time of the last successful run of an InfluxDB task.
---
## v0.82.1 [2020-08-25]
- _Internal code cleanup._
---
## v0.82.0 [2020-08-24]
### Features
- Add the [`profiler` package](/influxdb/v2.0/reference/flux/stdlib/profiler/).
- Add a documentation URL field to Flux errors.
- Check InfluxDB schema compatibility.
### Bug fixes
- Panic when a map object property contains an invalid type.
---
## v0.81.0 [2020-08-17]

### Features