Merge branch 'master' into alpha-18

pull/490/head
Scott Anderson 2019-09-20 09:46:08 -06:00
commit 40d26d7c19
17 changed files with 320 additions and 82 deletions

View File

@ -14,18 +14,29 @@ cloud_all: true
---
Create a check in the InfluxDB user interface (UI).
Checks query data and apply a status to each point based on specified conditions.
## Check types
There are two types of checks: a threshold check and a deadman check.
#### Threshold check
A threshold check assigns a status based on a value being above, below,
inside, or outside of defined thresholds.
[Create a threshold check](#create-a-threshold-check).
#### Deadman check
A deadman check assigns a status to data when a series or group doesn't report
in a specified amount of time.
[Create a deadman check](#create-a-deadman-check).
## Parts of a check
A check consists of two parts: a query and a check configuration.
##### Check query
#### Check query
- Specifies the dataset to monitor.
- Requires a bucket, measurement, field, and an aggregate function.
{{% note %}}The aggregate function aggregates data points between the specified check intervals
and returns a single value for the check to process.
{{% /note %}}
- May include tags to narrow results.
##### Check configuration
#### Check configuration
- Defines check properties, including the check interval and status message.
- Evaluates specified conditions and applies a status (if applicable) to each data point:
- `crit`
@ -35,23 +46,28 @@ A check consists of two parts a query and check configuration.
- Stores status in the `_level` column.
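Combined, the two parts boil down to a Flux query plus status evaluation. A minimal sketch of the query portion (the bucket, measurement, and field names here are hypothetical):

```js
from(bucket: "example-bucket")
  |> range(start: -5m)
  |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_user")
  // The aggregate function returns a single value per check interval
  |> mean()
```

The check configuration then evaluates the aggregated value against the conditions you define and writes the resulting status to the `_level` column.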
## Create a check in the InfluxDB UI
1. Click **Monitoring & Alerting** in the sidebar.
1. Click **Monitoring & Alerting** in the sidebar in the InfluxDB UI.
{{< nav-icon "alerts" >}}
2. In the top right corner of the **Checks** column, click **{{< icon "plus" >}} Create**.
2. In the top right corner of the **Checks** column, click **{{< icon "plus" >}} Create**
and select the [type of check](#check-types) to create.
3. Click **Name this check** in the top left corner and provide a unique name for the check.
#### Configure the check query
1. Select the **bucket**, **measurement**, **field** and **tag sets** to query.
2. If creating a threshold check, select an **aggregate function**.
Aggregate functions aggregate data between the specified check intervals and
return a single value for the check to process.
In the **Aggregate functions** column, select an interval from the interval drop-down list
(for example, "Every 5 minutes") and an aggregate function from the list of functions.
### Configure the query
1. In the **Query view**, select the bucket, measurement, field and tag sets to query.
2. In the **Aggregate functions** column, select an interval from the interval drop-down list
(for example, "Every 5 minutes") and an aggregate function from the list of functions.
3. Click **Submit** to run the query and preview the results.
To see the raw query results, click the **{{< icon "toggle" >}} View Raw Data** toggle.
### Configure the check
1. Click **2. Check** near the top of the window to display the **Check view**.
#### Configure the check
1. Click **2. Check** near the top of the window.
2. In the **Properties** column, configure the following:
##### Schedule Every
@ -113,32 +129,23 @@ count = 12
When a check generates a status, it stores the message in the `_message` column.
4. In the **Conditions** column, define the logic that assigns a status or level to data.
Select the type of check to configure:
##### Threshold
A threshold check assigns a status based on a value being above, below,
inside, or outside of defined thresholds.
[Configure a threshold check](#configure-a-threshold-check).
##### Deadman
A deadman check assigns a status to data when a series or group has not
reported in a specified amount of time.
[Configure a deadman check](#configure-a-deadman-check).
4. Define check conditions that assign statuses to points.
Condition options depend on your check type.
##### Configure a threshold check
1. For each status you want to configure, click the status name (CRIT, WARN, INFO, or OK).
1. In the **Thresholds** column, click the status name (CRIT, WARN, INFO, or OK)
to define conditions for that specific status.
2. From the **When value** drop-down list, select a threshold: is above, is below,
is inside of, is outside of.
3. Enter a value or values for the threshold.
You can also use the threshold sliders in the data visualization to define threshold values.
##### Configure a deadman check
1. In the **for** field, enter a duration for the deadman check.
For example, `5m`, `1h`, or `2h30m`.
1. In the **Deadman** column, enter a duration for the deadman check in the **for** field.
For example, `90s`, `5m`, `2h30m`, etc.
2. Use the **set status to** drop-down list to select a status to set on a dead series.
3. In the **And stop checking after** field, enter the time to stop monitoring the series.
For example, `30m`, `2h`, `3h15m`.
For example, `30m`, `2h`, `3h15m`, etc.
5. Click the green **{{< icon "check" >}}** in the top right corner to save the check.

View File

@ -21,9 +21,9 @@ to create a bucket.
2. Select **Buckets**.
3. Click **{{< icon "plus" >}} Create Bucket** in the upper right.
4. Enter a **Name** for the bucket.
5. Select **Delete Data older than**:
Select **Never** to retain data forever.
Select **Periodically** to define a specific retention policy.
5. Select when to **Delete Data**:
- **Never** to retain data forever.
- **Older than** to choose a specific retention policy.
6. Click **Create** to create the bucket.
## Create a bucket using the influx CLI
@ -32,7 +32,7 @@ Use the [`influx bucket create` command](/v2.0/reference/cli/influx/bucket/creat
to create a new bucket. A bucket requires the following:
- A name
- The name or ID of the organization to which it belongs
- The name or ID of the organization the bucket belongs to
- A retention period in nanoseconds
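For example, a create command might look like the following sketch (flag names are an assumption for this CLI version; run `influx bucket create -h` to confirm):

```sh
# Create a bucket named "my-bucket" in the "my-org" organization
# with a 72-hour retention period
influx bucket create -n my-bucket -o my-org -r 72h
```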

View File

@ -20,7 +20,7 @@ to delete a bucket.
2. Select **Buckets**.
3. Hover over the bucket you would like to delete.
4. Click **{{< icon "delete" >}} Delete Bucket** and **Delete** to delete the bucket.
4. Click **{{< icon "delete" >}} Delete Bucket** and **Confirm** to delete the bucket.
## Delete a bucket using the influx CLI

View File

@ -8,6 +8,7 @@ menu:
parent: Manage buckets
weight: 202
---
Use the `influx` command line interface (CLI) or the InfluxDB user interface (UI) to update a bucket.
Note that updating a bucket's name will affect any assets that reference the bucket by name, including the following:
@ -28,10 +29,9 @@ If you change a bucket name, be sure to update the bucket in the above places as
{{< nav-icon "load data" >}}
2. Select **Buckets**.
3. Hover over the name of the bucket you want to rename in the list.
4. Click **Rename**.
5. Review the information in the window that appears and click **I understand, let's rename my bucket**.
6. Update the bucket's name and click **Change Bucket Name**.
3. Click **Rename** under the bucket you want to rename.
4. Review the information in the window that appears and click **I understand, let's rename my bucket**.
5. Update the bucket's name and click **Change Bucket Name**.
## Update a bucket's retention policy in the InfluxDB UI
@ -50,7 +50,7 @@ Use the [`influx bucket update` command](/v2.0/reference/cli/influx/bucket/updat
to update a bucket. Updating a bucket requires the following:
- The bucket ID _(provided in the output of `influx bucket find`)_
- The name or ID of the organization to which the bucket belongs
- The name or ID of the organization the bucket belongs to.
##### Update the name of a bucket
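A rename command might look like this sketch (flag names and the bucket ID are assumptions for illustration; confirm with `influx bucket update -h`):

```sh
# Rename the bucket with the given ID (find the ID with `influx bucket find`)
influx bucket update -i 034ad714fdd6f000 -n new-bucket-name
```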

View File

@ -21,8 +21,7 @@ weight: 202
## View buckets using the influx CLI
Use the [`influx bucket find` command](/v2.0/reference/cli/influx/bucket/find)
to view a buckets in an organization. Viewing bucket requires the following:
to view buckets in an organization.
```sh
influx bucket find
```

View File

@ -60,35 +60,50 @@ In your request, set the following:
- Your organization via the `org` or `orgID` URL parameters.
- `Authorization` header to `Token ` + your authentication token.
- `accept` header to `application/csv`.
- `content-type` header to `application/vnd.flux`.
- `Accept` header to `application/csv`.
- `Content-type` header to `application/vnd.flux`.
- Your plain text query as the request's raw data.
This lets you POST the Flux query in plain text and receive the annotated CSV response.
InfluxDB returns the query results in [annotated CSV](/v2.0/reference/annotated-csv/).
{{% note %}}
#### Use gzip to compress the query response
To compress the query response, set the `Accept-Encoding` header to `gzip`.
This saves network bandwidth, but increases server-side load.
{{% /note %}}
Below is an example `curl` command that queries InfluxDB:
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[Without compression](#)
[With compression](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```bash
curl http://localhost:9999/api/v2/query?org=my-org -XPOST -sS \
  -H 'Authorization: Token YOURAUTHTOKEN' \
  -H 'Accept: application/csv' \
  -H 'Content-type: application/vnd.flux' \
  -d 'from(bucket:"test")
    |> range(start:-1000h)
    |> group(columns:["_measurement"], mode:"by")
    |> sum()'
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```bash
curl http://localhost:9999/api/v2/query?org=my-org -XPOST -sS \
  -H 'Authorization: Token YOURAUTHTOKEN' \
  -H 'Accept: application/csv' \
  -H 'Content-type: application/vnd.flux' \
  -H 'Accept-Encoding: gzip' \
  -d 'from(bucket:"test")
    |> range(start:-1000h)
    |> group(columns:["_measurement"], mode:"by")
    |> sum()'
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}

View File

@ -0,0 +1,69 @@
---
title: Check if a value exists
seotitle: Use Flux to check if a value exists
description: >
Use the Flux `exists` operator to check if an object contains a key or if that
key's value is `null`.
v2.0/tags: [exists]
menu:
v2_0:
name: Check if a value exists
parent: How-to guides
weight: 209
---
Use the Flux `exists` operator to check if an object contains a key or if that
key's value is `null`.
```js
p = {firstName: "John", lastName: "Doe", age: 42}
exists p.firstName
// Returns true
exists p.height
// Returns false
```
Use `exists` with row functions (
[`filter()`](/v2.0/reference/flux/stdlib/built-in/transformations/filter/),
[`map()`](/v2.0/reference/flux/stdlib/built-in/transformations/map/),
[`reduce()`](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/reduce/))
to check if a row includes a column or if the value for that column is `null`.
#### Filter out null values
```js
from(bucket: "example-bucket")
|> range(start: -5m)
|> filter(fn: (r) => exists r._value)
```
#### Map values based on existence
```js
from(bucket: "default")
|> range(start: -30s)
|> map(fn: (r) => ({
r with
human_readable:
if exists r._value then "${r._field} is ${string(v:r._value)}."
else "${r._field} has no value."
}))
```
#### Ignore null values in a custom aggregate function
```js
customSumProduct = (tables=<-) =>
tables
|> reduce(
identity: {sum: 0.0, product: 1.0},
fn: (r, accumulator) => ({
r with
sum:
if exists r._value then r._value + accumulator.sum
else accumulator.sum,
product:
if exists r._value then r._value * accumulator.product
else accumulator.product
})
)
```
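Once defined, the custom function drops into a query pipeline like any built-in transformation (bucket and measurement names hypothetical):

```js
from(bucket: "example-bucket")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "cpu")
  |> customSumProduct()
```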

View File

@ -0,0 +1,108 @@
---
title: Manipulate timestamps with Flux
description: >
Use Flux to process and manipulate timestamps.
menu:
v2_0:
name: Manipulate timestamps
parent: How-to guides
weight: 209
---
Every point stored in InfluxDB has an associated timestamp.
Use Flux to process and manipulate timestamps to suit your needs.
- [Convert timestamp format](#convert-timestamp-format)
- [Time-related Flux functions](#time-related-flux-functions)
## Convert timestamp format
### Convert nanosecond epoch timestamp to RFC3339
Use the [`time()` function](/v2.0/reference/flux/stdlib/built-in/transformations/type-conversions/time/)
to convert a **nanosecond** epoch timestamp to an RFC3339 timestamp.
```js
time(v: 1568808000000000000)
// Returns 2019-09-18T12:00:00.000000000Z
```
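As a sanity check outside Flux, the seconds portion of that epoch timestamp converts the same way with GNU `date` (assumes GNU coreutils; the `-d @` syntax is not portable to BSD `date`):

```shell
# 1568808000000000000 ns = 1568808000 s; format as RFC3339 in UTC
date -u -d @1568808000 +%Y-%m-%dT%H:%M:%SZ
# → 2019-09-18T12:00:00Z
```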
### Convert RFC3339 to nanosecond epoch timestamp
Use the [`uint()` function](/v2.0/reference/flux/stdlib/built-in/transformations/type-conversions/uint/)
to convert an RFC3339 timestamp to a nanosecond epoch timestamp.
```js
uint(v: 2019-09-18T12:00:00.000000000Z)
// Returns 1568808000000000000
```
### Calculate the duration between two timestamps
Flux doesn't support mathematical operations using [time type](/v2.0/reference/flux/language/types/#time-types) values.
To calculate the duration between two timestamps:
1. Use the `uint()` function to convert each timestamp to a nanosecond epoch timestamp.
2. Subtract one nanosecond epoch timestamp from the other.
3. Use the `duration()` function to convert the result into a duration.
```js
time1 = uint(v: 2019-09-17T21:12:05Z)
time2 = uint(v: 2019-09-18T22:16:35Z)
duration(v: time2 - time1)
// Returns 25h4m30s
```
{{% note %}}
Flux doesn't support duration column types.
To store a duration in a column, use the [`string()` function](/v2.0/reference/flux/stdlib/built-in/transformations/type-conversions/string/)
to convert the duration to a string.
{{% /note %}}
## Time-related Flux functions
### Retrieve the current time
Use the [`now()` function](/v2.0/reference/flux/stdlib/built-in/misc/now/) to
return the current UTC time in RFC3339 format.
```js
now()
```
### Add a duration to a timestamp
The [`experimental.addDuration()` function](/v2.0/reference/flux/stdlib/experimental/addduration/)
adds a duration to a specified time and returns the resulting time.
{{% warn %}}
By using `experimental.addDuration()`, you accept the
[risks of experimental functions](/v2.0/reference/flux/stdlib/experimental/#use-experimental-functions-at-your-own-risk).
{{% /warn %}}
```js
import "experimental"
experimental.addDuration(
d: 6h,
to: 2019-09-16T12:00:00Z,
)
// Returns 2019-09-16T18:00:00.000000000Z
```
### Subtract a duration from a timestamp
The [`experimental.subDuration()` function](/v2.0/reference/flux/stdlib/experimental/subduration/)
subtracts a duration from a specified time and returns the resulting time.
{{% warn %}}
By using `experimental.subDuration()`, you accept the
[risks of experimental functions](/v2.0/reference/flux/stdlib/experimental/#use-experimental-functions-at-your-own-risk).
{{% /warn %}}
```js
import "experimental"
experimental.subDuration(
d: 6h,
from: 2019-09-16T12:00:00Z,
)
// Returns 2019-09-16T06:00:00.000000000Z
```

View File

@ -10,6 +10,7 @@ menu:
name: reduce
parent: built-in-aggregates
weight: 501
v2.0/tags: [exists]
---
The `reduce()` function aggregates records in each table according to the reducer,
@ -96,7 +97,7 @@ creates a new column if it doesn't exist, and includes all existing columns in
the output table.
```js
recduce(fn: (r) => ({ r with newColumn: r._value * 2 }))
reduce(fn: (r) => ({ r with newColumn: r._value * 2 }))
```

View File

@ -9,6 +9,7 @@ menu:
name: filter
parent: built-in-transformations
weight: 401
v2.0/tags: [exists]
---
The `filter()` function filters data based on conditions defined in a predicate function ([`fn`](#fn)).
@ -42,6 +43,7 @@ Objects evaluated in `fn` functions are represented by `r`, short for "record" o
## Examples
##### Filter based on measurement, field, and tag
```js
from(bucket:"example-bucket")
|> range(start:-1h)
@ -52,6 +54,20 @@ from(bucket:"example-bucket")
)
```
##### Filter out null values
```js
from(bucket:"example-bucket")
|> range(start:-1h)
|> filter(fn: (r) => exists r._value )
```
##### Filter values based on thresholds
```js
from(bucket:"example-bucket")
|> range(start:-1h)
|> filter(fn: (r) => r._value > 50.0 and r._value < 65.0 )
```
<hr style="margin-top:4rem"/>
##### Related InfluxQL functions and statements:

View File

@ -9,6 +9,7 @@ menu:
name: map
parent: built-in-transformations
weight: 401
v2.0/tags: [exists]
---
The `map()` function applies a function to each record in the input tables.

View File

@ -93,7 +93,12 @@ The Unix nanosecond timestamp for the data point.
InfluxDB accepts one timestamp per point.
If no timestamp is provided, InfluxDB uses the system time (UTC) of its host machine.
_**Data type:** [Unix timestamp](#unix-timestamp)_
{{% note %}}
To ensure a data point includes the time a metric is observed (not received by InfluxDB),
include the timestamp.
{{% /note %}}
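For example, a point in line protocol with an explicit timestamp recording the observed time (measurement, tag, and field names are illustrative):

```
cpu,host=host1 usage_user=24.5 1568808000000000000
```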
{{% note %}}
_Use the default nanosecond precision timestamp or specify an alternative precision

View File

@ -119,24 +119,41 @@ influx write -b bucketName -o orgName -p s @/path/to/line-protocol.txt
{{< nav-icon "load data" >}}
2. Select **Buckets**.
3. Hover over the bucket to write data to and click **{{< icon "plus" >}} Add Data**.
4. Select **Line Protocol**.
_You can also [use Telegraf](/v2.0/write-data/use-telegraf/) or
[scrape data](/v2.0/write-data/scrape-data/)._
5. Select **Upload File** or **Enter Manually**.
3. Under the bucket you want to write data to, click **{{< icon "plus" >}} Add Data**.
4. Select from the following options:
- **Upload File:**
Select the time precision of your data.
Drag and drop the line protocol file into the UI or click to select the
file from your file manager.
- **Enter Manually:**
Select the time precision of your data.
Manually enter line protocol.
- [Configure Telegraf Agent](#configure-telegraf-agent)
- [Line Protocol](#line-protocol-1)
- [Scrape Metrics](#scrape-metrics)
6. Click **Continue**.
A message indicates whether data is successfully written to InfluxDB.
7. To add more data or correct line protocol, click **Previous**.
8. Click **Finish**.
---
#### Configure Telegraf Agent
1. To configure a Telegraf agent, see [Automatically create a Telegraf configuration](/v2.0/write-data/use-telegraf/auto-config/#create-a-telegraf-configuration).
---
#### Line Protocol
1. Select **Upload File** or **Enter Manually**.
- **Upload File:**
Select the time precision of your data.
Drag and drop the line protocol file into the UI or click to select the
file from your file manager.
- **Enter Manually:**
Select the time precision of your data.
Manually enter line protocol.
2. Click **Continue**.
A message indicates whether data is successfully written to InfluxDB.
3. To add more data or correct line protocol, click **Previous**.
4. Click **Finish**.
---
#### Scrape Metrics
1. To scrape metrics, see [Create a scraper](/v2.0/write-data/scrape-data/manage-scrapers/create-a-scraper/#create-a-scraper-in-the-influxdb-ui).
{{% cloud-msg %}}{{< cloud-name >}} does not support scrapers.
{{% /cloud-msg %}}
## Other ways to write data to InfluxDB

View File

@ -21,7 +21,8 @@ Create a new scraper in the InfluxDB user interface (UI).
3. Click **{{< icon "plus" >}} Create Scraper**.
4. Enter a **Name** for the scraper.
5. Select a **Bucket** to store the scraped data.
6. Enter the **Target URL** to scrape. The default URL value is `http://localhost:9999/metrics`,
6. Enter the **Target URL** to scrape.
The default URL value is `http://localhost:9999/metrics`,
which provides InfluxDB-specific metrics in the [Prometheus data format](https://prometheus.io/docs/instrumenting/exposition_formats/).
7. Click **Create**.
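The target endpoint is expected to serve metrics in the Prometheus exposition format, which looks like the following (metric name and labels illustrative):

```
# HELP http_requests_total Total number of HTTP requests
# TYPE http_requests_total counter
http_requests_total{method="get"} 1027
```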

View File

@ -17,7 +17,6 @@ Delete a scraper from the InfluxDB user interface (UI).
{{< nav-icon "load data" >}}
2. Click **Scrapers**. A listing of any existing scrapers appears with the
**Name**, **URL**, and **Bucket** for each scraper.
3. Hover over the scraper you want to delete and click **Delete**.
4. Click **Confirm**.
2. Click **Scrapers**.
3. Hover over the scraper you want to delete and click the **{{< icon "delete" >}}** icon.
4. Click **Delete**.

View File

@ -22,7 +22,7 @@ To modify either, [create a new scraper](/v2.0/write-data/scrape-data/manage-scr
{{< nav-icon "load data" >}}
2. Click **Scrapers**. A list of existing scrapers appears.
2. Click **Scrapers**.
3. Hover over the scraper you would like to update and click the **{{< icon "pencil" >}}**
that appears next to the scraper name.
4. Enter a new name for the scraper. Press Return or click out of the name field to save the change.

View File

@ -21,7 +21,7 @@ Its vast library of input plugins and "plug-and-play" architecture lets you quic
and easily collect metrics from many different sources.
This article describes how to use Telegraf to collect and store data in InfluxDB v2.0.
_See [Telegraf plugins](/v2.0/reference/telegraf-plugins/) for a list of available plugins._
For a list of available plugins, see [Telegraf plugins](/v2.0/reference/telegraf-plugins/).
#### Requirements
- **Telegraf 1.9.2 or greater**.