ported telegraf 1.15 and kapacitor 1.5

pull/1345/head^2
Scott Anderson 2020-07-30 10:34:24 -06:00
parent fc4506367c
commit f73709e52d
155 changed files with 56635 additions and 1 deletions

View File

@ -0,0 +1,26 @@
---
title: Kapacitor 1.5 documentation
menu:
kapacitor:
name: v1.5
identifier: kapacitor_1_5
weight: 1
---
Kapacitor is an open source data processing framework that makes it easy to create
alerts, run ETL jobs and detect anomalies.
Kapacitor is the final piece of the [TICK stack](https://influxdata.com/time-series-platform/).
## Key features
Here are some of the features that Kapacitor currently supports that make it a
great choice for data processing.
* Process both streaming data and batch data.
* Query data from InfluxDB on a schedule, and receive data via the
[line protocol](/influxdb/v1.4/write_protocols/line/) and any other method InfluxDB supports.
* Perform any transformation currently possible in [InfluxQL](/influxdb/v1.7/query_language/spec/).
* Store transformed data back in InfluxDB.
* Add custom user-defined functions to detect anomalies.
* Integrate with HipChat, OpsGenie, Alerta, Sensu, PagerDuty, Slack, and more.
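
For example, a short TICKscript that streams data and fires a critical alert might look like the following sketch (the `cpu` measurement, `usage_idle` field, and Slack channel are hypothetical):

```
stream
    |from()
        .measurement('cpu')
    |alert()
        .crit(lambda: "usage_idle" < 10)
        .slack()
        .channel('#alerts')
```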

View File

@ -0,0 +1,31 @@
---
title: About the project
aliases:
- kapacitor/v1.5/contributing/
menu:
kapacitor_1_5_ref:
name: About the project
weight: 1
---
Kapacitor is open source and we welcome contributions from the community.
If you want Kapacitor to be able to output to your own endpoint, see this [How To](/kapacitor/v1.5/about_the_project/custom_output/).
## [Release Notes/Changelog](/kapacitor/v1.5/about_the_project/releasenotes-changelog/)
## [Contributing](https://github.com/influxdata/kapacitor/blob/master/CONTRIBUTING.md)
## [CLA](https://influxdata.com/community/cla/)
## [Licenses](https://github.com/influxdata/kapacitor/blob/master/LICENSE)
## Third Party Software
InfluxData products contain third party software, which means the copyrighted, patented, or otherwise legally protected
software of third parties that is incorporated in InfluxData products.
Third party suppliers make no representation or warranty with respect to such third party software or any portion thereof.
Third party suppliers assume no liability for any claim that might arise with respect to such third party software, nor for a
customer's use of or inability to use the third party software.
The [list of third party software components, including references to associated licenses and other materials](https://github.com/influxdata/kapacitor/blob/master/LICENSE_OF_DEPENDENCIES.md), is maintained on a version-by-version basis.

View File

@ -0,0 +1,10 @@
---
title: CLA
menu:
kapacitor_1_5_ref:
name: CLA
weight: 30
parent: About the project
url: https://influxdb.com/community/cla.html
---

View File

@ -0,0 +1,10 @@
---
title: Contributing
menu:
kapacitor_1_5_ref:
name: Contributing
weight: 10
parent: About the project
url: https://github.com/influxdata/kapacitor/blob/master/CONTRIBUTING.md
---

View File

@ -0,0 +1,10 @@
---
title: License
menu:
kapacitor_1_5_ref:
name: License
weight: 40
parent: About the project
url: https://github.com/influxdata/kapacitor/blob/master/LICENSE
---

View File

@ -0,0 +1,578 @@
---
title: Release Notes/Changelog
menu:
kapacitor_1_5_ref:
parent: About the project
---
## v1.5.6 [2020-07-17]
### Features
- Add [Microsoft Teams event handler](/kapacitor/v1.5/event_handlers/microsoftteams/), thanks @mmindenhall!
- Add [Discord event handler](/kapacitor/v1.5/event_handlers/discord/), thanks @mattnotmitt!
- Add [support for TLS 1.3](/kapacitor/v1.5/administration/configuration/#transport-layer-security-tls-settings).
### Bug fixes
- Fix UDF agent Python 3.0 issues, thanks @elohmeier!
- Add `scraper_test` package to fix discovery service lost configuration (`discovery.Config`), thanks @flisky!
- Use `systemd` for Amazon Linux 2.
- Correct issue with `go vet` invocation in `.hooks/pre-commit` file that caused the hook to fail, thanks @mattnotmitt!
- Update `build.py` to support `arm64`, thanks @povlhp!
- Fix panic when setting a zero interval for ticker, which affected deadman and stats nodes.
- Fix a panic on int div-by-zero and return an error instead.
- Fix issue that caused Kapacitor to ignore the `pushover().userKey('')` TICKScript operation.
## v1.5.5 [2020-04-20]
### Breaking changes
- Update release checksums (used to verify release bits haven't been tampered with) from MD5 (Message Digest, 128-bit digest) to SHA-256 (Secure Hash Algorithm 2, 256-bit digest).
### Bug fixes
- Update the Kafka client to ensure errors are added to Kapacitor logs.
## v1.5.4 [2020-01-16]
### Features
- Add the ability to use templates when specifying MQTT (message queue telemetry transport) topic.
- Upgrade to support Python 3.0 for user defined functions (UDFs).
### Bug fixes
- Upgrade the Kafka library to set the timestamp correctly.
- Upgrade to Go 1.13, fixes various `go vet` issues.
## v1.5.3 [2019-06-18]
{{% warn %}}
### Authentication and shared secret
If using Kapacitor v1.5.3 or newer and InfluxDB with [authentication enabled](/influxdb/v1.7/administration/authentication_and_authorization/),
set the `[http].shared-secret` option in your `kapacitor.conf` to the shared secret of your InfluxDB instances.
```toml
# ...
[http]
# ...
shared-secret = "youramazingsharedsecret"
```
If this option is not set, is set to an empty string, or does not match InfluxDB's shared secret,
the integration with InfluxDB will fail and Kapacitor will not start.
Kapacitor will output an error similar to:
```
kapacitord[4313]: run: open server: open service *influxdb.Service: failed to link subscription on startup: signature is invalid
```
{{% /warn %}}
#### Important update [2019-07-11]
- Some customers have reported a high number of CLOSE_WAIT connections.
Upgrade to this release to resolve this issue.
### Features
- Add ability to skip SSL verification with an alert post node.
- Add TLS configuration options.
### Bug fixes
- Use default transport consistently.
- Fix deadlock in barrier node when delete is used.
- Make RPM create files with correct ownership on install.
- Delete group stats when a group is deleted.
- Avoid extra allocation when building GroupID.
## v1.5.2 [2018-12-12]
### Features
- Add barrier node support to JoinNode.
- Add ability to expire groups using the BarrierNode.
- Add alert/persist-topics to config.
- Add multiple field support to the ChangeDetectNode.
- Add links to PagerDuty v2 alerts.
- Add additional metadata to Sensu alerts.
### Bug fixes
- Fix join not catching up fast enough after a pause in the data stream.
## v1.5.1 [2018-08-06]
### Bug fixes
- `pagerduty2` should use `routingKey` rather than `serviceKey`.
- Fix KafkaTopic not working from TICKscript.
- Improve Kafka alert throughput.
## v1.5.0 [2018-05-17]
### Features
- Add alert inhibitors that allow an alert to suppress events from other matching alerts.
- Config format updated to allow for more than one slack configuration.
- Added a new Kapacitor node changeDetect that emits a value for each time a series field changes.
- Add recoverable field to JSON alert response to indicate whether the alert will auto-recover.
- Update OpsGenie integration to use the v2 API.
To upgrade to the new API, update your configuration and TICKscripts to use `opsgenie2` instead of `opsgenie`.
If your `opsgenie` configuration uses the `recovery_url` option, for `opsgenie2` you will need to change it to the `recovery_action` option.
This is because the new v2 API is not structured with static URLs, and so only the action can be defined and not the entire URL.
- Add https-private-key option to httpd config.
- Add `.quiet` to all nodes to silence any errors reported by the node.
- Add Kafka event handler.
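
As a sketch of the new `.quiet` property mentioned above (the measurement and field names here are hypothetical), errors reported by a single node can be silenced like this:

```
stream
    |from()
        .measurement('requests')
    |eval(lambda: "errors" / "total")
        .as('error_ratio')
        // Silence errors reported by this node (for example, divide-by-zero).
        .quiet()
```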
### Bug fixes
- Fix Kapacitor tasks generating a hash instead of using their actual given name.
- Fix deadlock in load service when task has an error.
- Support PagerDuty API v2.
- Fix bug where you could not delete a topic handler with the same name as its topic.
- Adjust PagerDuty v2 service-test names and capture detailed error messages.
- Fix Kafka configuration.
## v1.4.1 [2018-03-13]
### Bug fixes
- Fix bug where task type was invalid when using var for stream/batch.
## v1.4.0 [2017-12-08]
### Release notes
Kapacitor v1.4.0 adds many new features, highlighted here:
- Load directory service for adding topic handlers, tasks, and templates from `dir`.
- Structured logging with logging API endpoints that can be used to tail logs for specified tasks.
- Autoscale support for Docker Swarm and AWS EC2.
- Sideload data into your TICKscript streams from external sources.
- Fully-customizable HTTP Post body for the alert Post handler and the HTTP Post node.
### Breaking changes
#### Change over internal API to use message passing semantics.
The `Combine` and `Flatten` nodes previously operated (erroneously) across batch boundaries: this has been fixed.
### Features
- Added service for loading topic handlers, tasks, and templates from `dir`.
- Topic handler file format modified to include TopicID and HandlerID.
- Task descriptions can now be defined exclusively through a TICKscript.
- Task types (batch or stream) no longer need to be specified.
- `dbrp` expressions were added to TICKscript.
- Added support for AWS EC2 autoscaling services.
- Added support for Docker Swarm autoscaling services.
- Added `BarrierNode` to emit `BarrierMessage` periodically.
- Added `Previous` state.
- Added support to persist replay status after it finishes.
- Added `alert.post` and `https_post` timeouts to ensure cleanup of hung connections.
- Added subscriptions modes to InfluxDB subscriptions.
- Added linear fill support for `QueryNode`.
- Added MQTT alert handler.
- Added built-in functions for converting timestamps to integers.
- Added `bools` field types to UDFs.
- Added stateless `now()` function to get the current local time.
- Added support for timeout, tags, and service templates in the Alerta AlertNode.
- Added support for custom HTTP Post bodies via a template system.
- Added support allowing for the addition of the HTTP status code as a field when using HTTP Post.
- Added `logfmt` support and refactor logging.
- Added support for exposing logs via the API. API is released as a technical preview.
- Added support for `{{ .Duration }}` on Alert Message property.
- Added support for [JSON lines](https://en.wikipedia.org/wiki/JSON_Streaming#Line-delimited_JSON) for streaming HTTP logs.
- Added new node `Sideload` that allows loading data from files into the stream of data. Data can be loaded using a hierarchy.
- Promote Alert API to stable v1 path.
- Change `WARN` level logs to `INFO` level.
- Updated Go version to 1.9.2.
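
The `dbrp` expressions added to TICKscript in this release let a script declare its own database and retention policy; a minimal sketch (the `telegraf` database and `cpu` measurement are assumptions) looks like:

```
dbrp "telegraf"."autogen"

stream
    |from()
        .measurement('cpu')
    |alert()
        .crit(lambda: "usage_user" > 90)
```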
### Bug fixes
- Fixed issues where log API checked the wrong header for the desired content type.
- Fixed VictorOps "data" field being a string instead of actual JSON.
- Fixed panic with `MQTT.toml` configuration generation.
- Fix oddly-generated TOML for MQTT & HTTPpost.
- Address Idle Barrier dropping all messages when source has clock offset.
- Address crash of Kapacitor on Windows x64 when starting a recording.
- Allow for `.yml` file extensions in `define-topic-handler`.
- Fix HTTP server error logging.
- Fixed bugs with stopping a running UDF agent.
- Clarified unclear error messages for missing fields that are arguments to functions.
- Fixed bad PagerDuty test that required server info.
- Added SNMP sysUpTime to SNMP Trap service.
- Fixed panic on recording replay with HTTPPostHandler.
- Fixed Kubernetes incluster master API DNS resolution.
- Remove the pidfile after the server has exited.
- Logs API writes multiple HTTP headers.
- Fixed missing dependency in RPM package.
- Force tar owner/group to be `root`.
- Fixed install/remove of Kapacitor on non-systemd Debian/Ubuntu systems.
- Fixed packaging to not enable services on RHEL systems.
- Fixed issues with recursive symlinks on systemd systems.
- Fixed invalid default MQTT config.
## v1.3.3 [2017-08-11]
### Bug fixes
- Expose pprof without authentication, if enabled.
## v1.3.2 [2017-08-08]
### Bug fixes
- Use details field from alert node in PagerDuty.
## v1.3.1 [2017-06-02]
### Bug fixes
- Proxy from environment for HTTP request to Slack
- Fix derivative node preserving fields from previous point in stream tasks
## v1.3.0 [2017-05-22]
### Release Notes
This release has two major features.
1. Addition of scraping and discovery for Prometheus-style data collection.
2. Updates to the Alert Topic system.
Here is a quick example of how to configure Kapacitor to scrape discovered targets.
First, configure a discoverer, here we use the file-discovery discoverer.
Next, configure a scraper to use that discoverer.
```toml
# Configure file discoverer
[[file-discovery]]
enabled = true
id = "discover_files"
refresh-interval = "10s"
##### This will look for prometheus json files
##### File format is here https://prometheus.io/docs/operating/configuration/#%3Cfile_sd_config%3E
files = ["/tmp/prom/*.json"]
# Configure scraper
[[scraper]]
enabled = true
name = "node_exporter"
discoverer-id = "discover_files"
discoverer-service = "file-discovery"
db = "prometheus"
rp = "autogen"
type = "prometheus"
scheme = "http"
metrics-path = "/metrics"
scrape-interval = "2s"
scrape-timeout = "10s"
```
Add the above snippet to your `kapacitor.conf` file.
Create the below snippet as the file `/tmp/prom/localhost.json`:
```json
[{
"targets": ["localhost:9100"]
}]
```
Start the Prometheus `node_exporter` locally.
Now start Kapacitor; it will discover the `localhost:9100` `node_exporter` target and begin scraping it for metrics.
For more details on the scraping and discovery systems, see the full documentation [here](/kapacitor/v1.3/pull_metrics/scraping-and-discovery/).
The second major feature in this release is a set of changes to the alert topic system.
The previous release introduced this system as a technical preview; with this release, the alerting service has been simplified.
Alert handlers now only have a single action and belong to a single topic.
The handler definition has been simplified as a result.
Here are some example alert handlers using the new structure:
```yaml
id: my_handler
kind: pagerDuty
options:
serviceKey: XXX
```
```yaml
id: aggregate_by_1m
kind: aggregate
options:
interval: 1m
topic: aggregated
```
```yaml
id: publish_to_system
kind: publish
options:
topics: [ system ]
```
To define a handler now you must specify which topic the handler belongs to.
For example, to define the above aggregate handler on the system topic, use this command:
```sh
kapacitor define-handler system aggregate_by_1m.yaml
```
For more details on the alerting system, see the full documentation [here](https://docs.influxdata.com/kapacitor/v1.3/alerts).
### Breaking Change
#### Fixed inconsistency with JSON data from alerts.
The alert handlers Alerta, Log, OpsGenie, PagerDuty, Post, and VictorOps allow extra opaque data to be attached to alert notifications.
That opaque data was inconsistent, and this change fixes that.
Depending on how that data was consumed, this could result in a breaking change. Since the original behavior
was inconsistent, we decided it would be best to fix the issue now and make it consistent for all future builds.
Specifically, in the JSON result data, the old key `Series` is now always `series`, and the old key `Err` is now
always `error`, rather than only for some of the outputs.
#### Refactor the Alerting service.
The change is completely breaking for the technical preview alerting service, a.k.a. the new alert topic
handler features. The change boils down to simplifying how you define and interact with topics.
Alert handlers now only ever have a single action and belong to a single topic.
An automatic migration from old to new handler definitions will be performed during startup.
See the updated API docs.
#### Add generic error counters to every node type.
Renamed `query_errors` to `errors` in batch node.
Renamed `eval_errors` to `errors` in eval node.
#### The UDF agent Go API has changed.
The changes now make it so that the agent package is self contained.
#### A bug was fixed around missing fields in the derivative node.
The behavior of the node changes slightly in order to provide a consistent fix to the bug.
The breaking change is that now, the time of the points returned are from the right hand or current point time,
instead of the left hand or previous point time.
### Features
- Allow Sensu handler to be specified.
- Added type signatures to Kapacitor functions.
- Added `isPresent` operator for verifying whether a value is present (part of [#1284](https://github.com/influxdata/kapacitor/pull/1284)).
- Added Kubernetes scraping support.
- Added `groupBy exclude` and added `dropOriginalFieldName` to `flatten`.
- Added KapacitorLoopback node to be able to send data from a task back into Kapacitor.
- Added headers to alert POST requests.
- TLS configuration in Slack service for Mattermost compatibility.
- Added generic HTTP Post node.
- Expose server specific information in alert templates.
- Added Pushover integration.
- Added `working_cardinality` stat to each node type that tracks the number of groups per node.
- Added StateDuration node.
- Default HipChat URL should be blank.
- Add API endpoint for performing Kapacitor database backups.
- Adding source for sensu alert as parameter.
- Added discovery and scraping services for metrics collection (pull model).
- Updated Go version to 1.7.5.
### Bug fixes
- Fixed broken ENV var configuration overrides for the Kubernetes section.
- Copy batch points slice before modification, fixes potential panics and data corruption.
- Use the Prometheus metric name as the measurement name by default for scrape data.
- Fixed possible deadlock for scraper configuration updating.
- Fixed panic with concurrent writes to same points in state tracking nodes.
- Simplified static-discovery configuration.
- Fixed panic in InfluxQL node with missing field.
- Fixed missing working_cardinality stats on stateDuration and stateCount nodes.
- Fixed panic in scraping TargetManager.
- Use ProxyFromEnvironment for all outgoing HTTP traffic.
- Fixed bug where batch queries would be missing all fields after the first nil field.
- Fix case-sensitivity for Telegram `parseMode` value.
- Fix pprof debug endpoint.
- Fixed hang in configuration API to update a configuration section.
Now if the service update process takes too long the request will timeout and return an error.
Previously the request would block forever.
- Make the Alerta auth token prefix configurable and default it to Bearer.
- Fixed logrotate file to correctly rotate error log.
- Fixed bug with alert duration being incorrect after restoring alert state.
- Fixed bug parsing dbrp values with quotes.
- Fixed panic on loading replay files without a file extension.
- Fixed bug in Default Node not updating batch tags and groupID.
Also empty string on a tag value is now a sufficient condition for the default conditions to be applied.
See [#1233](https://github.com/influxdata/kapacitor/pull/1233) for more information.
- Fixed dot view syntax to use xlabels and not create invalid quotes.
- Fixed corruption of recordings list after deleting all recordings.
- Fixed missing "vars" key when listing tasks.
- Fixed bug where aggregates would not be able to change type.
- Fixed panic when the process cannot stat the data dir.
## v1.2.0 [2017-01-23]
### Release Notes
A new system for working with alerts has been introduced.
This alerting system allows you to configure topics for alert events and then configure handlers for various topics.
This way alert generation is decoupled from alert handling.
Existing TICKscripts will continue to work without modification.
To use this new alerting system remove any explicit alert handlers from your TICKscript and specify a topic.
Then configure the handlers for the topic.
```
stream
|from()
.measurement('cpu')
.groupBy('host')
|alert()
// Specify the topic for the alert
.topic('cpu')
.info(lambda: "value" > 60)
.warn(lambda: "value" > 70)
.crit(lambda: "value" > 80)
// No handlers are configured in the script, they are instead defined on the topic via the API.
```
The API exposes endpoints to query the state of each alert and endpoints for configuring alert handlers.
See the [API docs](https://docs.influxdata.com/kapacitor/latest/api/api/) for more details.
The kapacitor CLI has been updated with commands for defining alert handlers.
This release introduces a new feature where you can window based off the number of points instead of their time.
For example:
```
stream
|from()
.measurement('my-measurement')
// Emit window for every 10 points with 100 points per window.
|window()
.periodCount(100)
.everyCount(10)
|mean('value')
|alert()
.crit(lambda: "mean" > 100)
.slack()
.channel('#alerts')
```
With this change alert nodes will have an anonymous topic created for them.
This topic is managed like all other topics preserving state etc. across restarts.
As a result, existing alert nodes will now remember the state of alerts after restarts and disabling/enabling a task.
>NOTE: The new alerting features are being released under technical preview.
This means breaking changes may be made in later releases until the feature is considered complete.
See the [API docs on technical preview](https://docs.influxdata.com/kapacitor/v1.2/api/api/#technical-preview) for specifics of how this affects the API.
### Features
- Add new query property for aligning group by intervals to start times.
- Add new alert API, with support for configuring handlers and topics.
- Move alerta api token to header and add option to skip TLS verification.
- Add SNMP trap service for alerting.
- Add fillPeriod option to Window node, so that the first emit waits till the period has elapsed before emitting.
- Now, when the Window node's `every` value is zero, the window is emitted immediately for each new point.
- Preserve alert state across restarts and disable/enable actions.
- You can now window based on count in addition to time.
- Enable markdown in slack attachments.
### Bug fixes
- Fix issue with the Union node buffering more points than necessary.
- Fix panic during close of failed startup when connecting to InfluxDB.
- Fix panic during replays.
- logrotate.d ignores kapacitor configuration due to bad file mode.
- Fix panic during failed aggregate results.
## v1.1.1 [2016-12-02]
### Release Notes
No changes to Kapacitor, only upgrading to GoLang 1.7.4 for security patches.
## v1.1.0 [2016-10-07]
### Release Notes
New K8sAutoscale node that allows you to automatically scale Kubernetes deployments driven by any metrics Kapacitor consumes.
For example, to scale a deployment `myapp` based off requests per second:
```
// The target requests per second per host
var target = 100.0
stream
|from()
.measurement('requests')
.where(lambda: "deployment" == 'myapp')
// Compute the moving average of the last 5 minutes
|movingAverage('requests', 5*60)
.as('mean_requests_per_second')
|k8sAutoscale()
.resourceName('app')
.kind('deployments')
.min(4)
.max(100)
// Compute the desired number of replicas based on target.
.replicas(lambda: int(ceil("mean_requests_per_second" / target)))
```
New API endpoints have been added to be able to configure InfluxDB clusters and alert handlers dynamically without needing to restart the Kapacitor daemon.
Along with the ability to dynamically configure a service, API endpoints have been added to test the configurable services.
See the [API docs](https://docs.influxdata.com/kapacitor/latest/api/api/) for more details.
>NOTE: The `connect_errors` stat from the query node was removed since the client changed, all errors are now counted in the `query_errors` stat.
### Features
- Add a Kubernetes autoscaler node. You can now autoscale your Kubernetes deployments via Kapacitor.
- Add new API endpoint for dynamically overriding sections of the configuration.
- Upgrade to using GoLang 1.7
- Add API endpoints for testing service integrations.
- Add support for Slack icon emojis and custom usernames.
- Bring Kapacitor up to parity with available InfluxQL functions in 1.1.
### Bug fixes
- Fix bug where keeping a list of fields that were not referenced in the eval expressions would cause an error.
- Fix the number of subscriptions statistic.
- Fix inconsistency with InfluxDB by adding configuration option to set a default retention policy.
- Sort and dynamically adjust column width in CLI output.
- Add missing `strLength` function.
## v1.0.2 [2016-10-06]
### Bug fixes
- Fix bug where errors to save cluster/server ID files were ignored.
- Create data_dir on startup if it does not exist.
## v1.0.1 [2016-09-26]
### Features
- Add TCP alert handler
- Add ability to set alert message as a field
- Add `.create` property to InfluxDBOut node, which when set will create the database and retention policy on task start.
- Allow duration / duration in TICKscript.
- Add support for string manipulation functions.
- Add ability to set specific HTTP port and hostname per configured InfluxDB cluster.
### Bug fixes
- Fixed typo in the default configuration file
- Change `|log()` output to be in JSON format so it has a self-documenting structure.
- Fix issue with TMax and the Holt-Winters method.
- Fix bug with TMax and group by time.
## v1.0.0 [2016-09-02]
### Release Notes
First release of Kapacitor v1.0.0.

View File

@ -0,0 +1,10 @@
---
title: Administration
menu:
kapacitor_1_5:
name: Administration
weight: 80
---
## [Upgrading to Kapacitor 1.5](/kapacitor/v1.5/administration/upgrading/)

View File

@ -0,0 +1,935 @@
---
title: Configuring Kapacitor
menu:
kapacitor_1_5:
weight: 10
parent: Administration
---
* [Startup](#startup)
* [Kapacitor configuration file](#the-kapacitor-configuration-file)
* [Kapacitor environment variables](#kapacitor-environment-variables)
* [Configuring with the HTTP API](#configuring-with-the-http-api)
Basic installation and startup of the Kapacitor service is covered in
[Getting started with Kapacitor](/kapacitor/v1.5/introduction/getting-started/).
The basic principles of working with Kapacitor described there should be understood before continuing here.
This document presents Kapacitor configuration in greater detail.
Kapacitor service properties are configured using key-value pairs organized
into groups.
Any property key can be located by following its path in the configuration file (for example, `[http].https-enabled` or `[slack].channel`).
Values for configuration keys are declared in the configuration file.
On POSIX systems this file is located by default at the following location: `/etc/kapacitor/kapacitor.conf`. On Windows systems a sample configuration file can be found in the same directory as the `kapacitord.exe`.
The location of this file can be defined at startup with the `-config` argument.
The path to the configuration file can also be declared using the environment variable `KAPACITOR_CONFIG_PATH`.
Values declared in this file can be overridden by environment variables beginning with the token `KAPACITOR_`.
Some values can also be dynamically altered using the HTTP API when the key `[config-override].enabled` is set to `true`.
Four primary mechanisms for configuring different aspects of the Kapacitor service are available and listed here in the descending order by which they may be overridden:
* The configuration file.
* Environment variables.
* The HTTP API (for optional services and the InfluxDB connection).
* Command line arguments (for changing hostname and logging).
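
As a sketch of the environment-variable mechanism, a configuration key such as `[http].bind-address` maps to a `KAPACITOR_`-prefixed variable: dashes become underscores, and array-of-table entries (such as `[[influxdb]]`) are indexed numerically:

```shell
# Override [http].bind-address and the URL of the first [[influxdb]] entry.
export KAPACITOR_HTTP_BIND_ADDRESS=":9092"
export KAPACITOR_INFLUXDB_0_URLS_0="http://localhost:8086"

# kapacitord picks these up at startup, e.g.:
# kapacitord -config /etc/kapacitor/kapacitor.conf

echo "$KAPACITOR_HTTP_BIND_ADDRESS"
```
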
> ***Note:*** Setting the property `skip-config-overrides` in the configuration file to `true` will disable configuration overrides at startup.
## Startup
To specify how to load and run the Kapacitor daemon, set the following command line options:
* `-config`: Path to the configuration file.
* `-hostname`: Hostname that will override the hostname specified in the configuration file.
* `-pidfile`: File where the process ID will be written.
* `-log-file`: File where logs will be written.
* `-log-level`: Threshold for writing messages to the log file. Valid values include `debug, info, warn, error`.
### Systemd
On POSIX systems, when the Kapacitor daemon starts as part of `systemd`, environment variables can be set in the file `/etc/default/kapacitor`.
1. To start Kapacitor as part of `systemd`, do one of the following:
- ```sh
$ sudo systemctl enable kapacitor
```
- ```sh
$ sudo systemctl enable kapacitor --now
```
2. Define where the PID file and log file will be written:
a. Add a line like the following into the `/etc/default/kapacitor` file:
```sh
KAPACITOR_OPTS="-pidfile=/home/kapacitor/kapacitor.pid -log-file=/home/kapacitor/logs/kapacitor.log"
```
b. Restart Kapacitor:
```sh
sudo systemctl restart kapacitor
```
The environment variable `KAPACITOR_OPTS` is one of a few special variables used
by Kapacitor at startup.
For more information on working with environment variables,
see [Kapacitor environment variables](#kapacitor-environment-variables)
below.
## Kapacitor configuration file
The default configuration can be displayed using the `config` command of the Kapacitor daemon.
```bash
kapacitord config
```
A sample configuration file is also available in the Kapacitor code base.
The most current version can be accessed on [github](https://github.com/influxdata/kapacitor/blob/master/etc/kapacitor/kapacitor.conf).
Use the Kapacitor HTTP API to get current configuration settings and values that can be changed while the Kapacitor service is running. See [Retrieving the current configuration](/kapacitor/v1.5/working/api/#retrieving-the-current-configuration).
### TOML
The configuration file is based on [TOML](https://github.com/toml-lang/toml).
Important configuration properties are identified by case-sensitive keys
to which values are assigned.
Key-value pairs are grouped into tables whose identifiers are delineated by brackets.
Tables can also be grouped into table arrays.
The most common value types found in the Kapacitor configuration file include
the following:
* **String** (declared in double quotes)
- Examples: `host = "localhost"`, `id = "myconsul"`, `refresh-interval = "30s"`.
* **Integer**
- Examples: `port = 80`, `timeout = 0`, `udp-buffer = 1000`.
* **Float**
- Example: `threshold = 0.0`.
* **Boolean**
- Examples: `enabled = true`, `global = false`, `no-verify = false`.
* **Array**
- Examples: `my_database = [ "default", "longterm" ]`, `urls = ["http://localhost:8086"]`
* **Inline Table**
- Example: `basic-auth = { username = "my-user", password = "my-pass" }`
Table grouping identifiers are declared within brackets.
For example, `[http]`, `[deadman]`,`[kubernetes]`.
An array of tables is declared within double brackets.
For example, `[[influxdb]]`, `[[mqtt]]`, `[[dns]]`.
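As a brief illustration of the difference (the values shown are placeholders), a table is declared once, while an array of tables may repeat:

```toml
# A single table: one HTTP listener for the service.
[http]
  bind-address = ":9092"

# An array of tables: more than one InfluxDB connection may be defined.
[[influxdb]]
  enabled = true
  name = "localhost"
  urls = ["http://localhost:8086"]

[[influxdb]]
  enabled = false
  name = "remote"
  urls = ["http://influxdb.example.com:8086"]
```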
### Organization
Most keys are declared in the context of a table grouping, but the basic properties of the Kapacitor system are defined in the root context of the configuration file.
The four basic properties of the Kapacitor service include:
* `hostname`: String declaring the DNS hostname where the Kapacitor daemon runs.
* `data_dir`: String declaring the file system directory where core Kapacitor data is stored.
* `skip-config-overrides`: Boolean indicating whether or not to skip configuration overrides.
* `default-retention-policy`: String declaring the default retention policy to be used on the InfluxDB database.
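
A minimal sketch of these root-level properties at the top of `kapacitor.conf` (the values are illustrative):

```toml
hostname = "localhost"
data_dir = "/var/lib/kapacitor"
skip-config-overrides = false
default-retention-policy = ""
```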
Table groupings and arrays of tables follow the basic properties and include essential and optional features,
including specific alert handlers and mechanisms for service discovery and data scraping.
### Essential tables
#### HTTP
The Kapacitor service requires an HTTP connection. Important
HTTP properties, such as a bind address and the path to an HTTPS certificate,
are defined in the `[http]` table.
**Example: The HTTP grouping**
```toml
...
[http]
# HTTP API Server for Kapacitor
# This server is always on,
# it serves both as a write endpoint
# and as the API endpoint for all other
# Kapacitor calls.
bind-address = ":9092"
log-enabled = true
write-tracing = false
pprof-enabled = false
https-enabled = false
https-certificate = "/etc/ssl/influxdb-selfsigned.pem"
### Use a separate private key location.
# https-private-key = ""
...
```
#### Transport Layer Security (TLS) settings
If the TLS configuration settings are not specified, Kapacitor supports all of the cipher suite IDs listed and all of the TLS versions implemented in the [Constants section of the Go `crypto/tls` package documentation](https://golang.org/pkg/crypto/tls/#pkg-constants), depending on the version of Go used to build Kapacitor.
Use the `SHOW DIAGNOSTICS` command to see the version of Go used to build Kapacitor.
##### `ciphers = [ "TLS_AES_128_GCM_SHA256", "TLS_AES_256_GCM_SHA384", "TLS_CHACHA20_POLY1305_SHA256" ]`
Determines the available set of cipher suites. For a list of available ciphers, which depends on the version of Go, see https://golang.org/pkg/crypto/tls/#pkg-constants.
You can use the query `SHOW DIAGNOSTICS` to see the version of Go used to build Kapacitor.
If not specified, uses the default settings from Go's crypto/tls package.
##### `min-version = "tls1.3"`
Minimum version of the TLS protocol that will be negotiated. Valid values include: `tls1.0`, `tls1.1`, `tls1.2`, and `tls1.3`. If not specified, uses the default settings from the [Go `crypto/tls` package](https://golang.org/pkg/crypto/tls/#pkg-constants).
In this example, `tls1.3` specifies the minimum version as TLS 1.3.
##### `max-version = "tls1.3"`
Maximum version of the TLS protocol that will be negotiated. Valid values include: `tls1.0`, `tls1.1`, `tls1.2`, and `tls1.3`. If not specified, uses the default settings from the [Go `crypto/tls` package](https://golang.org/pkg/crypto/tls/#pkg-constants).
##### Recommended configuration for "modern compatibility"
InfluxData recommends configuring your Kapacitor server's TLS settings for "modern compatibility" — this provides a higher level of security and assumes that backward compatibility is not required.
Our recommended TLS configuration settings for `ciphers`, `min-version`, and `max-version` are based on Mozilla's "modern compatibility" TLS server configuration described in [Security/Server Side TLS](https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility).
InfluxData's recommended TLS settings for "modern compatibility" are specified in the following configuration settings example.
```toml
ciphers = [ "TLS_AES_128_GCM_SHA256",
"TLS_AES_256_GCM_SHA384",
"TLS_CHACHA20_POLY1305_SHA256"
]
min-version = "tls1.3"
max-version = "tls1.3"
```
> **Important:** The order of the cipher suite IDs in the `ciphers` setting determines which algorithms are selected by priority. The TLS `min-version` and the `max-version` settings in the example above restrict support to TLS 1.3.
##### Config override
The `[config-override]` table contains only one key which enables or disables the ability to
override certain values through the HTTP API. It is enabled by default.
**Example: The Config Override grouping**
```toml
...
[config-override]
# Enable/Disable the service for overriding configuration via the HTTP API.
enabled = true
...
```
##### Logging
The Kapacitor service uses logging to monitor and inspect its behavior.
The path to the log file and the logging level are defined in the `[logging]` table.
**Example: The Logging grouping**
```toml
...
[logging]
# Destination for logs
# Can be a path to a file or 'STDOUT', 'STDERR'.
file = "/var/log/kapacitor/kapacitor.log"
# Logging level can be one of:
# DEBUG, INFO, WARN, ERROR, or OFF
level = "INFO"
...
```
##### Load
Starting with Kapacitor 1.4, the Kapacitor service includes a feature
that enables the loading of TICKscript tasks when the service loads.
The path to these scripts is defined in this table.
**Example: The Load grouping**
```toml
...
[load]
# Enable/Disable the service for loading tasks/templates/handlers
# from a directory
enabled = true
# Directory where task/template/handler files are set
dir = "/etc/kapacitor/load"
...
```
##### Replay
The Kapacitor client application can record data streams and batches for testing
tasks before they are enabled.
This table contains one key which declares the path to the directory where the replay files are to be stored.
**Example: The Replay grouping**
```toml
...
[replay]
# Where to store replay files, aka recordings.
dir = "/var/lib/kapacitor/replay"
...
```
##### Task
Prior to Kapacitor 1.4, tasks were written to a special task database.
This table and its associated keys are _deprecated_ and should only be used for
migration purposes.
##### Storage
The Kapacitor service stores its configuration and other information in the key-value [Bolt](https://github.com/boltdb/bolt) database.
The location of this database on the file system is defined in the storage table
grouping.
**Example: The Storage grouping**
```toml
...
[storage]
# Where to store the Kapacitor boltdb database
boltdb = "/var/lib/kapacitor/kapacitor.db"
...
```
##### Deadman
Kapacitor provides a deadman's switch alert which can be configured globally
in this table grouping.
See the [Deadman](/kapacitor/v1.5/nodes/alert_node/#deadman) helper function topic in the AlertNode documentation.
For a deadman's switch to work, it needs a threshold below which the switch will
be triggered. It also needs a polling interval, as well as an ID and a message
which will be passed to the alert handler.
**Example: The Deadman grouping**
```toml
...
[deadman]
# Configure a deadman's switch
# Globally configure deadman's switches on all tasks.
# NOTE: for this to be of use you must also globally configure at least one alerting method.
global = false
# Threshold, if globally configured the alert will be triggered if the throughput in points/interval is <= threshold.
threshold = 0.0
# Interval, if globally configured the frequency at which to check the throughput.
interval = "10s"
# Id: the alert Id, NODE_NAME will be replaced with the name of the node being monitored.
id = "node 'NODE_NAME' in task '{{ .TaskName }}'"
# The message of the alert. INTERVAL will be replaced by the interval.
message = "{{ .ID }} is {{ if eq .Level \"OK\" }}alive{{ else }}dead{{ end }}: {{ index .Fields \"collected\" | printf \"%0.3f\" }} points/INTERVAL."
...
```
#### InfluxDB
Kapacitor's main purpose is processing data between nodes within an InfluxDB Enterprise cluster or between multiple clusters.
You must define at least one `[[influxdb]]` table array configuration for an InfluxDB connection.
Multiple InfluxDB table array configurations can be specified,
but one InfluxDB table array configuration must be flagged as the `default`.
**Example: An InfluxDB connection grouping**
{{% note %}}
To use Kapacitor with an InfluxDB instance that requires authentication,
it must authenticate using an InfluxDB user with **read and write** permissions.
{{% /note %}}
```toml
...
[[influxdb]]
# Connect to an InfluxDB cluster
# Kapacitor can subscribe, query and write to this cluster.
# Using InfluxDB is not required and can be disabled.
enabled = true
default = true
name = "localhost"
urls = ["http://localhost:8086"]
username = ""
password = ""
timeout = 0
# Absolute path to pem encoded CA file.
# A CA can be provided without a key/cert pair
# ssl-ca = "/etc/kapacitor/ca.pem"
# Absolute paths to pem encoded key and cert files.
# ssl-cert = "/etc/kapacitor/cert.pem"
# ssl-key = "/etc/kapacitor/key.pem"
# Do not verify the TLS/SSL certificate.
# This is insecure.
insecure-skip-verify = false
# Maximum time to try and connect to InfluxDB during startup
startup-timeout = "5m"
# Turn off all subscriptions
disable-subscriptions = false
# Subscription mode is either "cluster" or "server"
subscription-mode = "server"
# Which protocol to use for subscriptions
# one of 'udp', 'http', or 'https'.
subscription-protocol = "http"
# Subscriptions resync time interval
# Useful if you want to subscribe to newly created databases
# without restarting Kapacitor
subscriptions-sync-interval = "1m0s"
# Override the global hostname option for this InfluxDB cluster.
# Useful if the InfluxDB cluster is in a separate network and
# needs special configuration to connect back to this Kapacitor instance.
# Defaults to `hostname` if empty.
kapacitor-hostname = ""
# Override the global http port option for this InfluxDB cluster.
# Useful if the InfluxDB cluster is in a separate network and
# needs special configuration to connect back to this Kapacitor instance.
# Defaults to the port from `[http] bind-address` if 0.
http-port = 0
# Host part of a bind address for UDP listeners.
# For example if a UDP listener is using port 1234
# and `udp-bind = "hostname_or_ip"`,
# then the UDP port will be bound to `hostname_or_ip:1234`
# The default empty value will bind to all addresses.
udp-bind = ""
# Subscriptions use the UDP network protocol.
# The following options are for the created UDP listeners for each subscription.
# Number of packets to buffer when reading packets off the socket.
udp-buffer = 1000
# The size in bytes of the OS read buffer for the UDP socket.
# A value of 0 indicates use the OS default.
udp-read-buffer = 0
[influxdb.subscriptions]
# Set of databases and retention policies to subscribe to.
# If empty will subscribe to all, minus the list in
# influxdb.excluded-subscriptions
#
# Format
# db_name = <list of retention policies>
#
# Example:
# my_database = [ "default", "longterm" ]
[influxdb.excluded-subscriptions]
# Set of databases and retention policies to exclude from the subscriptions.
# If influxdb.subscriptions is empty it will subscribe to all
# except databases listed here.
#
# Format
# db_name = <list of retention policies>
#
# Example:
# my_database = [ "default", "longterm" ]
...
```
#### Internals
Kapacitor includes internal services that can be enabled or disabled and
that have properties that need to be defined.
##### HTTP Post
The HTTP Post service configuration is commented out by default. It is used for
POSTing alerts to an HTTP endpoint.
##### Reporting
Kapacitor will send usage statistics back to InfluxData.
This feature can be disabled or enabled in the `[reporting]` table grouping.
**Example: Reporting configuration**
```toml
...
[reporting]
# Send usage statistics
# every 12 hours to Enterprise.
enabled = true
url = "https://usage.influxdata.com"
...
```
##### Stats
Internal statistics about Kapacitor can also be emitted to an InfluxDB database.
The collection frequency and the database to which the statistics are emitted
can be configured in the `[stats]` table grouping.
**Example: Stats configuration**
```toml
...
[stats]
# Emit internal statistics about Kapacitor.
# To consume these stats create a stream task
# that selects data from the configured database
# and retention policy.
#
# Example:
# stream|from().database('_kapacitor').retentionPolicy('autogen')...
#
enabled = true
stats-interval = "10s"
database = "_kapacitor"
retention-policy= "autogen"
# ...
```
##### Alert
Kapacitor includes global alert configuration options that apply to all alerts
created by the [alertNode](/kapacitor/v1.5/nodes/alert_node).
```toml
[alert]
# Persisting topics can become an I/O bottleneck under high load.
# This setting disables them entirely.
persist-topics = false
```
#### Optional table groupings
Optional table groupings are disabled by default and relate to specific features that can be leveraged by TICKscript nodes or used to discover and scrape information from remote locations.
In the default configuration, these optional table groupings may be commented out or include a key `enabled` set to `false` (i.e., `enabled = false`).
A feature defined by an optional table should be enabled whenever a relevant node or a handler for a relevant node is required by a task, or when an input source is needed.
For example, if alerts are to be sent via email, then the SMTP service should
be enabled and configured in the `[smtp]` properties table.
**Example: Enabling SMTP**
```toml
...
[smtp]
# Configure an SMTP email server
# Will use TLS and authentication if possible
# Only necessary for sending emails from alerts.
enabled = true
host = "192.168.1.24"
port = 25
username = "schwartz.pudel"
password = "f4usT!1808"
# From address for outgoing mail
from = "kapacitor@test.org"
# List of default To addresses.
to = ["heinrich@urfaust.versuch.de","valentin@urfaust.versuch.de","wagner@urfaust.versuch.de"]
# Skip TLS certificate verify when connecting to SMTP server
no-verify = false
# Close idle connections after timeout
idle-timeout = "30s"
# If true, all alerts will be sent via email
# without explicitly marking them in the TICKscript.
global = false
# Only applies if global is true.
# Sets all alerts in state-changes-only mode,
# meaning alerts will only be sent if the alert state changes.
state-changes-only = false
# ...
```
Optional features include supported alert handlers, Docker services, user defined functions, input services, and discovery services.
##### Supported event handlers
Event handlers manage communications from Kapacitor to third party services or
across Internet standard messaging protocols.
They are activated through chaining methods on the [Alert](/kapacitor/v1.5/nodes/alert_node/) node.
Most of the handler configurations include common properties.
Every handler has the property `enabled`. They also need an endpoint to which
messages can be sent.
Endpoints may include single properties (e.g., `url` and `addr`) or property pairs (e.g., `host` and `port`).
Most also include an authentication mechanism such as a `token` or a pair of properties like `username` and `password`.
A sample SMTP configuration is shown in the example above.
Specific properties are included directly in the configuration file and
discussed along with the specific handler information in the [Alert](/kapacitor/v1.5/nodes/alert_node/)
document.
The following handlers are currently supported:
* [Alerta](/kapacitor/v1.5/event_handlers/alerta/): Sending alerts to Alerta.
* [Discord](/kapacitor/v1.5/event_handlers/discord/): Sending alerts to Discord.
* [Email](/kapacitor/v1.5/event_handlers/email/): Sending alerts by email.
* [HipChat](/kapacitor/v1.5/event_handlers/hipchat/): Sending alerts to the HipChat service.
* [Kafka](/kapacitor/v1.5/event_handlers/kafka/): Sending alerts to an Apache Kafka cluster.
* [MQTT](/kapacitor/v1.5/event_handlers/mqtt/): Publishing alerts to an MQTT broker.
* [OpsGenie](/kapacitor/v1.5/event_handlers/opsgenie/v2/): Sending alerts to the OpsGenie service.
* [PagerDuty](/kapacitor/v1.5/event_handlers/pagerduty/v2/): Sending alerts to the PagerDuty service.
* [Pushover](/kapacitor/v1.5/event_handlers/pushover/): Sending alerts to the Pushover service.
* [Sensu](/kapacitor/v1.5/event_handlers/sensu/): Sending alerts to Sensu.
* [Slack](/kapacitor/v1.5/event_handlers/slack/): Sending alerts to Slack.
* [SNMP Trap](/kapacitor/v1.5/event_handlers/snmptrap/): Posting to SNMP traps.
* [Talk](/kapacitor/v1.5/event_handlers/talk/): Sending alerts to the Talk service.
* [Telegram](/kapacitor/v1.5/event_handlers/telegram/): Sending alerts to Telegram.
* [VictorOps](/kapacitor/v1.5/event_handlers/victorops/): Sending alerts to the VictorOps service.
##### Docker services
Kapacitor can be used to trigger changes in Docker clusters. This
is activated by the [SwarmAutoScale](/kapacitor/v1.5/nodes/swarm_autoscale_node/)
and the [K8sAutoScale](/kapacitor/v1.5/nodes/k8s_autoscale_node/) nodes.
The following service configurations corresponding to these chaining methods can
be found in the configuration file:
* [Swarm](/kapacitor/v1.5/nodes/swarm_autoscale_node/)
**Example: The Docker Swarm configuration**
```toml
...
[[swarm]]
# Enable/Disable the Docker Swarm service.
# Needed by the swarmAutoscale TICKscript node.
enabled = false
# Unique ID for this Swarm cluster
# NOTE: This is not the ID generated by Swarm, but rather a user-defined
# ID for this cluster, since Kapacitor can communicate with multiple clusters.
id = ""
# List of URLs for Docker Swarm servers.
servers = ["http://localhost:2376"]
# TLS/SSL Configuration for connecting to secured Docker daemons
ssl-ca = ""
ssl-cert = ""
ssl-key = ""
insecure-skip-verify = false
...
```
* [Kubernetes](/kapacitor/v1.5/nodes/k8s_autoscale_node/)
**Example: The Kubernetes configuration**
```toml
...
[kubernetes]
# Enable/Disable the kubernetes service.
# Needed by the k8sAutoscale TICKscript node.
enabled = false
# There are several ways to connect to the kubernetes API servers:
#
# Via the proxy, start the proxy via the `kubectl proxy` command:
# api-servers = ["http://localhost:8001"]
#
# From within the cluster itself, in which case
# kubernetes secrets and DNS services are used
# to determine the needed configuration.
# in-cluster = true
#
# Direct connection, in which case you need to know
# the URL of the API servers, the authentication token and
# the path to the ca cert bundle.
# These values can be found using the `kubectl config view` command.
# api-servers = ["http://192.168.99.100:8443"]
# token = "..."
# ca-path = "/path/to/kubernetes/ca.crt"
#
# Kubernetes can also serve as a discoverer for scrape targets.
# In that case the type of resource to discover must be specified.
# Valid values are: "node", "pod", "service", and "endpoint".
# resource = "pod"
...
```
##### User defined functions (UDFs)
Kapacitor can be used to plug in a user defined function
([UDF](/kapacitor/v1.5/nodes/u_d_f_node/)), which can then be leveraged as
chaining methods in a TICKscript.
A user defined function is indicated by the declaration of a new grouping table with the following identifier: `[udf.functions.<UDF_NAME>]`.
A UDF configuration requires a path to an executable, identified by the following properties:
* `prog`: A string indicating the path to the executable.
* `args`: An array of string arguments to be passed to the executable.
* `timeout`: A timeout monitored when waiting for communications from the executable.
The UDF can also include a group of environment variables declared in a table
identified by the string `udf.functions.<UDF_NAME>.env`.
**Example: Configuring a User Defined Function**
```toml
...
[udf]
# Configuration for UDFs (User Defined Functions)
[udf.functions]
...
# Example python UDF.
# Use in TICKscript like:
# stream.pyavg()
# .field('value')
# .size(10)
# .as('m_average')
#
[udf.functions.pyavg]
prog = "/usr/bin/python2"
args = ["-u", "./udf/agent/examples/moving_avg.py"]
timeout = "10s"
[udf.functions.pyavg.env]
PYTHONPATH = "./udf/agent/py"
...
```
Additional examples can be found directly in the default configuration file.
##### Input methods
Kapacitor can receive and process data from sources other than InfluxDB, and the results of this processing can then be written to an InfluxDB database.
Currently, the following two sources external to InfluxDB are supported:
* **Collectd**: The POSIX daemon `collectd` for collecting system, network, and service performance data.
* **OpenTSDB**: The Open Time Series Database (OpenTSDB) and its daemon `tsd`.
Configuration of connections to third party input sources requires properties such as:
* `bind-address`: Address at which Kapacitor will receive data.
* `database`: Database to which Kapacitor will write data.
* `retention-policy`: Retention policy for that database.
* `batch-size`: Number of datapoints to buffer before writing.
* `batch-pending`: Number of batches that may be pending in memory.
* `batch-timeout`: Length of time to wait before writing the batch. If
the batch size has not been reached, then a short batch will be written.
Each input source has additional properties specific to its configuration. They
follow the same configurations for these services used in
[InfluxDB](https://github.com/influxdata/influxdb/blob/master/etc/config.sample.toml).
**Example: Collectd configuration**
```toml
...
[collectd]
enabled = false
bind-address = ":25826"
database = "collectd"
retention-policy = ""
batch-size = 1000
batch-pending = 5
batch-timeout = "10s"
typesdb = "/usr/share/collectd/types.db"
...
```
**Example: OpenTSDB configuration**
```toml
...
[opentsdb]
enabled = false
bind-address = ":4242"
database = "opentsdb"
retention-policy = ""
consistency-level = "one"
tls-enabled = false
certificate = "/etc/ssl/influxdb.pem"
batch-size = 1000
batch-pending = 5
batch-timeout = "1s"
...
```
**User Datagram Protocol (UDP)**
As demonstrated in the [Live Leaderboard](/kapacitor/v1.5/guides/live_leaderboard/)
guide and the [Scores](https://github.com/influxdb/kapacitor/tree/master/examples/scores)
example, Kapacitor can be configured to accept raw data from a UDP connection.
This is configured much like other input services.
**Example: UDP configuration**
```toml
...
[[udp]]
enabled = true
bind-address = ":9100"
database = "game"
retention-policy = "autogen"
...
```
#### Service discovery and metric scraping
When the number and addresses of the hosts and services for which Kapacitor
should collect information are not known at the time of configuring or booting
the Kapacitor service, they can be determined, and the data collected, at runtime
with the help of discovery services.
This process is known as metric _scraping and discovery_.
For more information, see [Scraping and Discovery](/kapacitor/v1.5/pull_metrics/scraping-and-discovery/).
For scraping and discovery to work, one or more scrapers must be configured.
One scraper can be bound to one discovery service.
**Example: Scraper configuration**
```toml
...
[[scraper]]
enabled = false
name = "myscraper"
# Specify the id of a discoverer service specified below
discoverer-id = "goethe-ec2"
# Specify the type of discoverer service being used.
discoverer-service = "ec2"
db = "prometheus_raw"
rp = "autogen"
type = "prometheus"
scheme = "http"
metrics-path = "/metrics"
scrape-interval = "1m0s"
scrape-timeout = "10s"
username = "schwartz.pudel"
password = "f4usT!1808"
bearer-token = ""
ssl-ca = ""
ssl-cert = ""
ssl-key = ""
ssl-server-name = ""
insecure-skip-verify = false
...
```
The example above is illustrative only.
###### Discovery services
Kapacitor currently supports 12 discovery services.
Each of these has an `id` property by which it will be bound to a scraper.
Configuration entries are prepared by default for the following discovery
services:
* Azure
* Consul
* DNS
* EC2
* File Discovery
* GCE
* Marathon
* Nerve
* ServerSet
* Static Discovery
* Triton
* UDP
**Example: EC2 Discovery Service configuration**
```toml
...
[[ec2]]
enabled = false
id = "goethe-ec2"
region = "us-east-1"
access-key = "ABCD1234EFGH5678IJKL"
secret-key = "1nP00dl3N01rM4Su1v1Ju5qU3ch3ZM01"
profile = "mph"
refresh-interval = "1m0s"
port = 80
...
```
The above example is illustrative.
## Kapacitor environment variables
Kapacitor can use environment variables for high-level properties or to
override properties in the configuration file.
### Environment variables not in configuration file
These variables are not found in the configuration file.
* `KAPACITOR_OPTS`: Found in the `systemd` startup script and used to pass
command line options to `kapacitord` started by `systemd`.
* `KAPACITOR_CONFIG_PATH`: Sets the path to the configuration file.
* `KAPACITOR_URL`: Used by the client application `kapacitor` to locate
the `kapacitord` service.
* `KAPACITOR_UNSAFE_SSL`: A Boolean used by the client application `kapacitor`
to skip verification of the `kapacitord` certificate when connecting over SSL.
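For example, the client-side variables might be exported as follows before invoking the `kapacitor` client (the URL and the use of a self-signed certificate are illustrative assumptions):

```shell
# Point the kapacitor client at a kapacitord instance and, for
# self-signed certificates only, skip certificate verification.
export KAPACITOR_URL="https://localhost:9092"
export KAPACITOR_UNSAFE_SSL=true

# The client then picks these up automatically, e.g.:
#   kapacitor list tasks
```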
### Mapping properties to environment variables
Kapacitor-specific environment variables begin with the token `KAPACITOR`
followed by an underscore (`_`).
Properties then follow their path through the configuration file tree with each node in the tree separated by an underscore.
Dashes in configuration file identifiers are replaced with underscores.
Table groupings in table arrays are identified by integer tokens.
Examples:
* `KAPACITOR_SKIP_CONFIG_OVERRIDES`: Could be used to set the value for
`skip-config-overrides`.
* `KAPACITOR_INFLUXDB_0_URLS_0`: Could be used to set the value of the
first URL item in the `urls` array in the first InfluxDB property grouping table,
i.e. `[influxdb][0].[urls][0]`.
* `KAPACITOR_STORAGE_BOLTDB`: Could be used to set the path to the boltdb
database file used for storage, i.e. `[storage].boltdb`.
* `KAPACITOR_HTTPPOST_0_HEADERS_Authorization`: Could be used to set the
value of the `authorization` header for the first HTTPPost configuration (`[httppost][0].headers.{authorization:"some_value"}`).
* `KAPACITOR_KUBERNETES_ENABLED`: Could be used to enable the Kubernetes
configuration service (`[kubernetes].enabled`).
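As a rough sketch of the mapping rule, the translation from a configuration path to a variable name can be expressed as follows (a hypothetical helper for illustration only; note that case-sensitive parts such as HTTP header names keep their original case):

```python
def to_env_var(*path):
    """Map a configuration path to a KAPACITOR_* environment variable name.

    Each element of path is either a table/key name (dashes become
    underscores, names are upper-cased) or an integer index into an
    array of tables or values. Case-sensitive parts such as HTTP header
    names are a known exception to the upper-casing rule.
    """
    parts = ["KAPACITOR"]
    for p in path:
        if isinstance(p, int):
            parts.append(str(p))
        else:
            parts.append(p.replace("-", "_").upper())
    return "_".join(parts)

print(to_env_var("skip-config-overrides"))   # KAPACITOR_SKIP_CONFIG_OVERRIDES
print(to_env_var("influxdb", 0, "urls", 0))  # KAPACITOR_INFLUXDB_0_URLS_0
print(to_env_var("storage", "boltdb"))       # KAPACITOR_STORAGE_BOLTDB
```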
## Configuring with the HTTP API
The Kapacitor [HTTP API](/kapacitor/v1.5/working/api/) can also be used to override
certain parts of the configuration.
This can be useful when a property may contain security sensitive information that should not be left in plain view in the file system, or when you need to reconfigure a service without restarting Kapacitor.
To view which parts of the configuration are available,
pull the JSON file at the `/kapacitor/v1/config` endpoint.
(e.g., `http://localhost:9092/kapacitor/v1/config`).
Working with the HTTP API to override configuration properties is presented in
detail in the [Configuration](/kapacitor/v1.5/working/api/#overriding-configurations) section
of the HTTP API document.
In order for overrides over the HTTP API to work,
the `[config-override].enabled` property must be set to `true`.
Generally, specific sections of the configuration can be viewed as JSON files by
GETting them from the context path built by their identifier from the `config`
endpoint.
For example, to get the table groupings of InfluxDB properties,
use the context `/kapacitor/v1/config/influxdb`.
Security-sensitive fields such as passwords, keys, and security tokens are redacted when using GET.
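For example, assuming a Kapacitor instance listening on `localhost:9092`, the InfluxDB section could be fetched with a command like:

```
curl -s http://localhost:9092/kapacitor/v1/config/influxdb
```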
Properties can be altered by POSTing a JSON document to the endpoint.
The JSON document must contain a `set` field with a map of the properties to override and
their new values.
**Example: JSON file for enabling the SMTP configuration**
```json
{
"set":{
"enabled": true
}
}
```
By POSTing this document to the `/kapacitor/v1/config/smtp/` endpoint, the SMTP
service can be enabled.
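Assuming a local Kapacitor instance on the default port, the POST might look like the following (illustrative only):

```
curl -d '{ "set": { "enabled": true } }' http://localhost:9092/kapacitor/v1/config/smtp/
```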
Property overrides can be removed with the `delete` field in the JSON document.
**Example: JSON file for removing an SMTP override**
```json
{
"delete":[
"enabled"
]
}
```
By POSTing this document to the `/kapacitor/v1/config/smtp/` endpoint the SMTP
override is removed and Kapacitor reverts to the behavior defined in the
configuration file.
---
title: Security
menu:
kapacitor_1_5:
weight: 12
parent: Administration
---
# Contents
* [Overview](#overview)
* [Secure InfluxDB and Kapacitor](#secure-influxdb-and-kapacitor)
* [Kapacitor Security](#kapacitor-security)
* [Secure Kapacitor and Chronograf](#secure-kapacitor-and-chronograf)
# Overview
This document covers the basics of securing the open-source distribution of
Kapacitor. For information about security with Enterprise Kapacitor see the
[Enterprise Kapacitor](/enterprise_kapacitor/v1.5/) documentation.
When seeking to secure Kapacitor it is assumed that the Kapacitor server will be
communicating with an already secured InfluxDB server. It will also make its
tasks and alerts available to a Chronograf installation.
The following discussion will cover configuring Kapacitor to communicate with a
[secure InfluxDB server](#secure-influxdb-and-kapacitor), enabling
[TLS in Kapacitor](#kapacitor-security) and connecting a TLS enabled
Kapacitor server to [Chronograf](#secure-kapacitor-and-chronograf).
Authentication and Authorization are not fully implemented in the open-source
Kapacitor distribution, but are available as a feature of Enterprise Kapacitor.
## Secure InfluxDB and Kapacitor
InfluxDB can secure its communications with TLS on the transport layer and
with authentication into the database. How to enable TLS and authentication
and authorization in InfluxDB is covered in the InfluxDB documentation, in the
sections [HTTPS Setup](/influxdb/v1.4/administration/https_setup/) and
[Authentication and Authorization](/influxdb/v1.4/query_language/authentication_and_authorization)
respectively.
Kapacitor configuration supports both HTTPS communications and Authentication
with InfluxDB. Parameters can be set directly in the configuration file, as
environment variables or over Kapacitor's HTTP API.
An overview of Kapacitor configuration is provided in the
[Configuration](/kapacitor/v1.5/administration/configuration/) document.
### Kapacitor and InfluxDB HTTPS
To activate a TLS connection the `urls` strings in the `influxdb` servers
configuration will need to contain the `https` protocol. Furthermore either a
PEM encoded public key and certificate pair or a PEM encoded CA file will need
to be specified.
When testing with a **self-signed certificate** it is also important to switch off
certificate verification with the property `insecure-skip-verify`. Failure to do
so will result in x509 certificate errors as follows:
```
ts=2018-02-19T13:26:11.437+01:00 lvl=error msg="failed to connect to InfluxDB, retrying..." service=influxdb cluster=localhost err="Get https://localhost:8086/ping: x509: certificate is valid for lenovo-TP02, not localhost"
```
<a id="example-1" ></a>
> **Important** &ndash; Please note that in a production environment with a standard CA certificate, `insecure-skip-verify` needs to be switched off.
In the configuration file these values are set according to the following example.
**Example 1 &ndash; TLS Configuration Properties for InfluxDB &ndash; kapacitor.conf**
```toml
[[influxdb]]
# Connect to an InfluxDB cluster
# Kapacitor can subscribe, query and write to this cluster.
# Using InfluxDB is not required and can be disabled.
enabled = true
default = true
name = "localhost"
urls = ["https://localhost:8086"]
timeout = 0
# Absolute path to pem encoded CA file.
# A CA can be provided without a key/cert pair
# ssl-ca = "/etc/ssl/influxdata-selfsigned-incl-pub-key.pem"
# Absolute paths to pem encoded key and cert files.
ssl-cert = "/etc/ssl/influxdb-selfsigned.crt"
ssl-key = "/etc/ssl/influxdb-selfsigned.key"
...
insecure-skip-verify = false
...
subscription-protocol = "https"
...
```
The relevant properties in Example 1 are:
* `urls` &ndash; note the protocol is `https` and _not_ `http`.
* `ssl-cert` and `ssl-key` &ndash; to indicate the location of the certificate and key files.
* `insecure-skip-verify` &ndash; for testing with a self-signed certificate set this to `true` otherwise it should be `false`, especially in production environments.
* `subscription-protocol` &ndash; to declare the correct protocol for subscription communications. For example, if Kapacitor is running on HTTP this should be set to `"http"`; if Kapacitor is running on HTTPS it should be set to `"https"`.
Note that when a CA file contains the certificate and key together the property
`ssl-ca` can be used in place of `ssl-cert` and `ssl-key`.
As environment variables these properties can be set as follows:
**Example 2 &ndash; TLS Configuration Properties for InfluxDB &ndash; ENVARS**
```
KAPACITOR_INFLUXDB_0_URLS_0="https://localhost:8086"
KAPACITOR_INFLUXDB_0_SSL_CERT="/etc/ssl/influxdb-selfsigned.crt"
KAPACITOR_INFLUXDB_0_SSL_KEY="/etc/ssl/influxdb-selfsigned.key"
KAPACITOR_INFLUXDB_0_INSECURE_SKIP_VERIFY=true
KAPACITOR_INFLUXDB_0_SUBSCRIPTION_PROTOCOL="https"
```
When using Systemd to manage the Kapacitor daemon the above parameters can be
stored in the file `/etc/default/kapacitor`.
#### Kapacitor to InfluxDB TLS configuration over HTTP API
These properties can also be set using the HTTP API. To get the current
`InfluxDB` part of the Kapacitor configuration, use the following `curl` command:
```
curl -ks http://localhost:9092/kapacitor/v1/config/influxdb | python -m json.tool > kapacitor-influxdb.conf
```
This results in the following file:
**Example 3 &ndash; The InfluxDB part of the Kapacitor configuration**
```json
{
"elements": [
{
"link": {
"href": "/kapacitor/v1/config/influxdb/localhost",
"rel": "self"
},
"options": {
"default": true,
"disable-subscriptions": false,
"enabled": true,
"excluded-subscriptions": {
"_kapacitor": [
"autogen"
]
},
"http-port": 0,
"insecure-skip-verify": false,
"kapacitor-hostname": "",
"name": "localhost",
"password": true,
"ssl-ca": "",
"ssl-cert": "/etc/ssl/influxdb-selfsigned.crt",
"ssl-key": "/etc/ssl/influxdb-selfsigned.key",
"startup-timeout": "5m0s",
"subscription-mode": "cluster",
"subscription-protocol": "https",
"subscriptions": {},
"subscriptions-sync-interval": "1m0s",
"timeout": "0s",
"udp-bind": "",
"udp-buffer": 1000,
"udp-read-buffer": 0,
"urls": [
"https://localhost:8086"
],
"username": "admin"
},
"redacted": [
"password"
]
}
],
"link": {
"href": "/kapacitor/v1/config/influxdb",
"rel": "self"
}
}
```
Properties can be updated by _POSTing_ a JSON document containing the field `"set"`
followed by the properties to be modified.
For example, the following command switches off the `insecure-skip-verify` property.
```
curl -kv -d '{ "set": { "insecure-skip-verify": false } }' http://localhost:9092/kapacitor/v1/config/influxdb/
...
upload completely sent off: 43 out of 43 bytes
< HTTP/1.1 204 No Content
< Content-Type: application/json; charset=utf-8
< Request-Id: 189e9abb-157b-11e8-866a-000000000000
< X-Kapacitor-Version: 1.5.1~n201802140813
< Date: Mon, 19 Feb 2018 13:45:07 GMT
<
* Connection #0 to host localhost left intact
```
Similar commands:
* To change the URLs:
`curl -kv -d '{ "set": { "urls": [ "https://lenovo-TP02:8086" ]} }' https://localhost:9092/kapacitor/v1/config/influxdb/`
* To set the `subscription-protocol`:
`curl -kv -d '{ "set": { "subscription-protocol": "https" } }' https://localhost:9092/kapacitor/v1/config/influxdb/`
* To set the path to the CA Certificate:
`curl -kv -d '{ "set": { "ssl-ca": "/etc/ssl/influxdata-selfsigned-incl-pub-key.pem" } }' https://localhost:9092/kapacitor/v1/config/influxdb/`
Other properties can be set in a similar fashion.
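Because these update calls all share the same shape, a small helper can cut down the repetition. The sketch below only builds the JSON payload; the `curl` invocation is commented out because it assumes a running Kapacitor at the endpoint used in the examples above.

```shell
#!/bin/sh
# Build the {"set": {...}} payload used by the Kapacitor config API.
# $1 = property name, $2 = JSON-encoded value (quote string values yourself).
kapacitor_set_payload() {
  printf '{ "set": { "%s": %s } }' "$1" "$2"
}

payload=$(kapacitor_set_payload "subscription-protocol" '"https"')
echo "$payload"
# → { "set": { "subscription-protocol": "https" } }
# curl -kv -d "$payload" https://localhost:9092/kapacitor/v1/config/influxdb/
```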
### Kapacitor and InfluxDB Authentication
An additional security mechanism available in InfluxDB is Authentication and
Authorization. Kapacitor can be configured to communicate with InfluxDB using
a username:password pair. These properties can be set in the configuration
file, as environment variables or over the HTTP API.
**Example 4 &ndash; InfluxDB Authentication Parameters &ndash; kapacitor.conf**
```toml
[[influxdb]]
# Connect to an InfluxDB cluster
# Kapacitor can subscribe, query and write to this cluster.
# Using InfluxDB is not required and can be disabled.
enabled = true
default = true
name = "localhost"
urls = ["https://localhost:8086"]
username = "admin"
password = "changeit"
timeout = 0
...
```
The relevant parameters in Example 4 are `username` and `password`.
These can also be set as environment variables.
**Example 5 &ndash; InfluxDB Authentication Parameters &ndash; ENVARS**
```
KAPACITOR_INFLUXDB_0_USERNAME="admin"
KAPACITOR_INFLUXDB_0_PASSWORD="changeit"
```
When using Systemd to manage the Kapacitor daemon the above parameters can be
stored in the file `/etc/default/kapacitor`.
Alternatively, they can be set or updated over the HTTP API.
```
$ curl -kv -d '{ "set": { "username": "foo", "password": "bar" } }' https://localhost:9092/kapacitor/v1/config/influxdb/
```
## Kapacitor Security
Open-source Kapacitor offers TLS for encrypting communications to the HTTP API.
### Kapacitor over TLS
This feature can be enabled in the `[http]` group of the configuration.
Activation requires simply setting the property `https-enabled` to `true` and
then providing a path to a certificate with the property, `https-certificate`.
If your certificate's private key is separate, specify the path to the private key
using the `https-private-key` property.
The following example shows how this is done in the `kapacitor.conf` file.
**Example 6 &ndash; Enabling TLS in kapacitor.conf**
```toml
[http]
# HTTP API Server for Kapacitor
# This server is always on,
# it serves both as a write endpoint
# and as the API endpoint for all other
# Kapacitor calls.
bind-address = ":9092"
log-enabled = true
write-tracing = false
pprof-enabled = false
https-enabled = true
https-certificate = "/etc/ssl/influxdata-selfsigned.crt"
https-private-key = "/etc/ssl/influxdata-selfsigned.key"
```
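If you do not already have a certificate, the pair referenced in Example 6 can be produced with `openssl`. This is only a sketch for a self-signed setup: by default it writes to a scratch directory, so set `SSL_DIR=/etc/ssl` (as root) and adjust `-subj` and `-days` to produce the files used above.

```shell
#!/bin/sh
# Generate a self-signed certificate/key pair for Kapacitor's HTTPS endpoint.
# Writes to a scratch directory unless SSL_DIR is set (e.g. SSL_DIR=/etc/ssl).
SSL_DIR=${SSL_DIR:-$(mktemp -d)}
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -subj "/CN=localhost" \
  -keyout "$SSL_DIR/influxdata-selfsigned.key" \
  -out "$SSL_DIR/influxdata-selfsigned.crt"
echo "certificate and key written to $SSL_DIR"
```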
These values can also be set as environment variables as shown in the next example.
**Example 7 &ndash; Enabling TLS as ENVARS**
```
KAPACITOR_HTTP_HTTPS_ENABLED=true
KAPACITOR_HTTP_HTTPS_CERTIFICATE="/etc/ssl/influxdata-selfsigned.crt"
KAPACITOR_HTTP_HTTPS_PRIVATE_KEY="/etc/ssl/influxdata-selfsigned.key"
```
However, they _cannot_ be set over the HTTP API.
Please remember that when Kapacitor is running on HTTPS, the `subscription-protocol`
property in the `[[influxdb]]` group of the Kapacitor configuration must be set
to `https` (see [Example 1](#example-1) above). Failure to do so will result in
a TLS handshake error with the message `oversized record received with length
21536` in the Kapacitor log, as shown here:
```
ts=2018-02-19T13:23:49.684+01:00 lvl=error msg="2018/02/19 13:23:49 http: TLS handshake error from 127.0.0.1:49946: tls: oversized record received with length 21536\n" service=http service=httpd_server_errors
```
If for any reason TLS is switched off, this property needs to be reset to `http`.
Failure to do so will prevent InfluxDB from pushing subscribed data to Kapacitor,
with a message in the InfluxDB log like the following:
```
mar 05 17:02:40 algonquin influxd[32520]: [I] 2018-03-05T16:02:40Z Post https://localhost:9092/write?consistency=&db=telegraf&precision=ns&rp=autogen: http: server gave HTTP response to HTTPS client service=subscriber
```
#### Kapacitor command-line client with HTTPS
Once HTTPS has been enabled, the Kapacitor command line client will need to be
supplied the `-url` argument in order to connect. If a self-signed or other
certificate is used that has not been added to the system certificate store,
the additional argument `-skipVerify` will also need to be provided.
```
$ kapacitor -url https://localhost:9092 -skipVerify list tasks
ID Type Status Executing Databases and Retention Policies
chronograf-v1-3586109e-8b7d-437a-80eb-a9c50d00ad53 stream enabled true ["telegraf"."autogen"]
```
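To avoid repeating `-url` on every invocation, the URL can also be supplied through the environment. This sketch assumes the `KAPACITOR_URL` environment variable, which the client reads when `-url` is not given; verify this against your client version.

```shell
#!/bin/sh
# Assumption: the kapacitor client honors KAPACITOR_URL when -url is absent.
export KAPACITOR_URL=https://localhost:9092
# kapacitor -skipVerify list tasks   # requires a running Kapacitor instance
```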
### Kapacitor Authentication and Authorization
The following applies to the open-source distribution of Kapacitor. While it is
possible to add parameters such as `username`, `password` and `auth-enabled` to
the section `[http]` of the configuration file, `kapacitor.conf`, and while the
Kapacitor server will then expect a username and password to be supplied when
connecting, the authorization and authentication handler in the open-source
distribution does not enforce checks against a user-store, nor does it verify
access permissions to resources using an Access Control List (ACL).
A true authentication and authorization handler is available only in the
Enterprise Kapacitor distribution.
### Note on HTTP API Configuration and Restarting Kapacitor
Please be aware that when configuration values are set using the HTTP API,
these values will persist in the Kapacitor database even after restart. To
switch off these overrides on restart, set the property `skip-config-overrides`
to `true` either in the configuration file (`kapacitor.conf`) or as an
environment variable (`KAPACITOR_SKIP_CONFIG_OVERRIDES`).
When troubleshooting connection issues after restart, check the HTTP API, for example
at <span>http</span><span>://</span><span>localhost:9092/kapacitor/v1/config</span>.
This can be especially useful if Kapacitor to InfluxDB communications do not
seem to be respecting values seen in the file `kapacitor.conf` or in environment
variables.
## Secure Kapacitor and Chronograf
With Kapacitor configured with HTTPS/TLS enabled, many users will want to add
Kapacitor to their connection configuration in Chronograf. The primary
requirement for this to work is to have the base signing certificate installed
on the host where the Chronograf service is running. With most operating systems
this should already be the case.
When working with a **self-signed** certificate, this means installing the
self-signed certificate into the system.
### Install a Self-Signed Certificate on Debian
As an example of installing a self-signed certificate to the system, in
Debian/Ubuntu any certificate can be copied to the directory
`/usr/local/share/ca-certificates/` and then the certificate store can be rebuilt.
```
$ sudo cp /etc/ssl/influxdb-selfsigned.crt /usr/local/share/ca-certificates/
$ sudo update-ca-certificates
Updating certificates in /etc/ssl/certs...
1 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
Replacing debian:influxdb-selfsigned.pem
done.
done.
```
If a self-signed or other certificate has been added to the system the
Chronograf service needs to be restarted to gather the new certificate
information.
```
$ sudo systemctl restart chronograf.service
```
### Adding a Kapacitor Connection in Chronograf
The following instructions apply to the Chronograf UI. If Chronograf has been
installed it can be found by default at port 8888 (e.g. <span>http</span>://<span>localhost:</span>8888).
1) In the left side navigation bar open the **Configuration** page.
This will show all available InfluxDB connections. In the row containing the
InfluxDB connection for which a Kapacitor connection is to be added, click the
link **Add Kapacitor Connection**. This will load the Add a New Kapacitor
Connection page.
**Image 1 &ndash; Adding a Kapacitor Connection**
<img src="/img/kapacitor/chrono/Add_Kapacitor_Connection01.png" alt="add kapacitor 01" style="max-width: 926px;" />
2) In the **Connection Details** group fill in such details as a name for the
connection and click the **Connect** button.
**Image 2 &ndash; Kapacitor Connection Details**
<img src="/img/kapacitor/chrono/Add_Kapacitor_Connection02.png" alt="add kapacitor 02" style="max-width: 926px;" />
3) If the certificate is installed on the system a success notification will
appear.
**Image 3 &ndash; Kapacitor Connection Success**
<img src="/img/kapacitor/chrono/Add_Kapacitor_Connection03.png" alt="add kapacitor 03" style="max-width: 926px;" />
If an error notification is returned check the Chronograf log for proxy errors.
For example:
```
mar 06 13:53:07 lenovo-tp02 chronograf[12079]: 2018/03/06 13:53:07 http: proxy error: x509: certificate is valid for locahlost, not localhost
```
4) Tabbed forms for editing and adding Kapacitor handler endpoints will also
appear. In wider screens they will be to the right of the Connection Details
group. In narrower screens they will be below the Connection Details group.
**Image 4 &ndash; Configure Kapacitor Handler Endpoints**
<img src="/img/kapacitor/chrono/Add_Kapacitor_Connection04b.png" alt="add kapacitor 04" style="max-width: 926px;" />
At this point Kapacitor can be used to generate alerts and TICKscripts through
Chronograf. These features are available through the **Alerting** item in the
left navigation bar.
---
title: Manage Kapacitor subscriptions
description: Kapacitor subscribes to InfluxDB and receives all data as it is written to InfluxDB. This article walks through how Kapacitor subscriptions work, how to configure them, and how to manage them.
menu:
kapacitor_1_5:
name: Manage subscriptions
parent: Administration
weight: 100
---
Kapacitor is tightly integrated with InfluxDB through the use of [InfluxDB subscriptions](/influxdb/latest/administration/subscription-management/),
local or remote endpoints to which all data written to InfluxDB is copied.
Kapacitor subscribes to InfluxDB allowing it to capture, manipulate, and act on your data.
## How Kapacitor subscriptions work
Kapacitor allows you to manipulate and act on data as it is written into InfluxDB.
Rather than querying InfluxDB for data *(except when using the [BatchNode](/kapacitor/v1.5/nodes/batch_node/))*,
all data is copied to your Kapacitor server or cluster through an InfluxDB subscription.
This reduces the query load on InfluxDB and isolates overhead associated with data
manipulation to your Kapacitor server or cluster.
On startup, Kapacitor will check for a subscription in InfluxDB with a name matching the Kapacitor server or cluster ID.
This ID is stored inside of `/var/lib/kapacitor/`.
If the ID file doesn't exist on startup, Kapacitor will create one.
If a subscription matching the Kapacitor ID doesn't exist in InfluxDB, Kapacitor
will create a new subscription in InfluxDB.
This process ensures that when Kapacitor stops, it will reconnect to the same subscription
on restart as long as the contents of `/var/lib/kapacitor/` remain intact.
_The directory in which Kapacitor stores its ID can be configured with the
[`data-dir` root configuration option](/kapacitor/v1.5/administration/configuration/#organization)
in the `kapacitor.conf`._
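The ID lookup described above can be mimicked with a few lines of shell. This is only a sketch, not part of Kapacitor itself; the default path matches the default `data-dir`.

```shell
#!/bin/sh
# Print the Kapacitor server/cluster IDs stored in a data directory.
kapacitor_ids() {
  dir=${1:-/var/lib/kapacitor}
  for f in server.id cluster.id; do
    if [ -f "$dir/$f" ]; then
      printf '%s: %s\n' "$f" "$(cat "$dir/$f")"
    fi
  done
}

kapacitor_ids   # inspect the default data-dir
```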
> #### Kapacitor IDs in containerized or ephemeral filesystems
> In containerized environments, filesystems are considered ephemeral and typically
> do not persist between container stops and restarts.
> If `/var/lib/kapacitor/` is not persisted, Kapacitor will create a new InfluxDB subscription
> on startup, resulting in unnecessary "duplicate" subscriptions.
> You will then need to manually [drop the unnecessary subscriptions](/influxdb/latest/administration/subscription-management/#remove-subscriptions).
>
> To avoid this, InfluxData recommends that you persist the `/var/lib/kapacitor` directory.
> Many persistence strategies are available and which to use depends on your
> specific architecture and containerization technology.
## Configure Kapacitor subscriptions
Kapacitor subscription configuration options are available under the `[[influxdb]]` section in the [`kapacitor.conf`](/kapacitor/v1.5/administration/configuration/).
Below is an example of subscription-specific configuration options followed by a description of each.
_**Example Kapacitor subscription configuration**_
```toml
[[influxdb]]
# ...
disable-subscriptions = false
subscription-mode = "server"
subscription-protocol = "http"
subscriptions-sync-interval = "1m0s"
# ...
[influxdb.subscriptions]
my_database1 = [ "default", "longterm" ]
[influxdb.excluded-subscriptions]
my_database2 = [ "default", "shortterm" ]
```
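The same options can be supplied as environment variables, following the naming pattern used for the other `[[influxdb]]` settings (TOML keys uppercased, dashes replaced by underscores, with `0` indexing the first `[[influxdb]]` block). The variable names below are derived from that pattern rather than copied from a reference, so verify them against your deployment:

```
KAPACITOR_INFLUXDB_0_DISABLE_SUBSCRIPTIONS=false
KAPACITOR_INFLUXDB_0_SUBSCRIPTION_MODE="server"
KAPACITOR_INFLUXDB_0_SUBSCRIPTION_PROTOCOL="http"
KAPACITOR_INFLUXDB_0_SUBSCRIPTIONS_SYNC_INTERVAL="1m0s"
```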
### `disable-subscriptions`
Set to `true` to disable all subscriptions.
### `subscription-mode`
Defines the subscription mode of Kapacitor.
Available options:
- `"server"`
- `"cluster"` _(See warning below)_
{{% warn %}}
The default setting for `subscription-mode` is `cluster`, however this should
not be used with [Kapacitor Enterprise](/enterprise_kapacitor/).
Multi-node Kapacitor Enterprise clusters should only use the `server` subscription-mode,
otherwise subscription data will not be received.
{{% /warn %}}
### `subscription-protocol`
Defines which protocol to use for subscriptions.
Available options:
- `"udp"`
- `"http"`
- `"https"`
### `[influxdb.subscriptions]`
Defines a set of databases and retention policies to subscribe to.
If empty, Kapacitor will subscribe to all databases and retention policies except for those listed in
[`[influxdb.excluded-subscriptions]`](#influxdb-excluded-subscriptions).
```toml
[influxdb.subscriptions]
# Pattern:
db_name = <list of retention policies>
# Example:
my_database = [ "default", "longterm" ]
```
### `[influxdb.excluded-subscriptions]`
Defines a set of databases and retention policies to exclude from subscriptions.
```toml
[influxdb.excluded-subscriptions]
# Pattern:
db_name = <list of retention policies>
# Example:
my_database = [ "default", "longterm" ]
```
> Only one of `[influxdb.subscriptions]` or `[influxdb.excluded-subscriptions]`
> need be defined. They essentially fulfill the same purpose in different ways,
> but specific use cases do lend themselves to one or the other.
## Troubleshooting
### View the Kapacitor server or cluster ID
There are two ways to view your Kapacitor server or cluster ID:
1. View the contents of `/var/lib/kapacitor/server.id` or `/var/lib/kapacitor/cluster.id`.
_The location of ID files depends on your operating system and the
[`data-dir`](/kapacitor/v1.5/administration/configuration/#organization)
setting in your `kapacitor.conf`._
2. Run the following command:
```bash
kapacitor stats general
```
The server and cluster IDs are included in the output.
### Duplicate Kapacitor subscriptions
Duplicate Kapacitor subscriptions are often caused by the contents of `/var/lib/kapacitor`
not persisting between restarts as described [above](#kapacitor-ids-in-containerized-or-ephemeral-filesystems).
The solution is to ensure the contents of this directory are persisted.
Any duplicate Kapacitor subscriptions already created will need to be [manually removed](/influxdb/latest/administration/subscription-management/#remove-subscriptions).
---
title: Upgrading to Kapacitor v1.5
aliases:
- kapacitor/v1.5/introduction/upgrading/
menu:
kapacitor_1_5:
weight: 30
parent: Administration
---
## Contents
1. [Overview](#overview)
2. [Stopping the Kapacitor service](#stopping-the-kapacitor-service)
3. [Backup configuration and data](#backup-configuration-and-data)
4. [Debian package upgrade](#debian-package-upgrade)
5. [RPM package upgrade](#rpm-package-upgrade)
6. [Upgrade with .zip or .tar.gz](#upgrade-with-zip-or-tar-gz)
7. [Verifying the restart](#verifying-the-restart)
## Overview
How Kapacitor was installed will determine how it should be upgraded.
The application may have been installed directly using the package management mechanisms of the OS, or it may have been installed by unpacking the `.zip` or `.tar.gz` distributions. This document covers upgrading Kapacitor from release 1.3.1 to release 1.5 on Linux (Ubuntu 16.04 and CentOS 7.3). It presents some specifics of upgrading using the `.deb` package, some similar specifics of upgrading using the `.rpm` package, and then a more general approach to upgrading using the `.tar.gz` binary distribution. The binary package upgrade should serve as an example offering hints on how to upgrade using the binary distributions on other operating systems, for example on Windows using the `.zip` file. On other operating systems the general steps presented here will be roughly the same.
Before proceeding with the Kapacitor upgrade please ensure that InfluxDB and Telegraf (if used) have been upgraded to a release compatible with the latest release of Kapacitor. In this example we will use:
* InfluxDB 1.5.2
* Telegraf 1.6
* Kapacitor 1.5
For instructions on upgrading InfluxDB, please see the [InfluxDB upgrade](/influxdb/latest/administration/upgrading/) documentation. For instructions on upgrading Telegraf, please see the [Telegraf upgrade](/telegraf/latest/administration/upgrading/#main-nav) documentation.
For information about what is new in the latest Kapacitor release, view the [Changelog](/kapacitor/v1.5/about_the_project/releasenotes-changelog/).
In general the steps for upgrading Kapacitor are as follows:
1. Download a copy of the latest Kapacitor install package or binary distribution from the [InfluxData download site](https://portal.influxdata.com/downloads).
**Important note** - When upgrading Kapacitor, simply download the package using `wget`. Do not proceed directly with the installation/upgrade until the following instructions and recommendations have been understood and put to use.
1. Stop the running Kapacitor service.
1. Backup the configuration file (e.g. `/etc/kapacitor/kapacitor.conf` - n.b. the default location).
1. (Optional) Back up a copy of the contents of the Kapacitor data directory (e.g `/var/lib/kapacitor/*` - n.b. the default location).
1. Perform the upgrade.
1. If during the upgrade the current configuration was not preserved, manually migrate the values in the backup configuration file to the new one.
1. Restart the Kapacitor service.
1. Verify the restart in the log files and by testing existing tasks.
## Stopping the Kapacitor service
No matter how Kapacitor was installed, it is assumed that Kapacitor is configured to run as a service using `systemd`.
Through `systemctl`, check whether the Kapacitor service is running.
```bash
$ sudo systemctl status kapacitor.service
● kapacitor.service - Time series data processing engine.
Loaded: loaded (/lib/systemd/system/kapacitor.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Po 2017-08-21 14:06:18 CEST; 2s ago
Docs: https://github.com/influxdb/kapacitor
Process: 27741 ExecStart=/usr/bin/kapacitord -config /etc/kapacitor/kapacitor.conf $KAPACITOR_OPTS (code=exited, status=0/SUCCESS)
Main PID: 27741 (code=exited, status=0/SUCCESS)
```
The value of the `Active` field shown above should be `inactive`.
If instead this value happens to be `active (running)`, the service can be stopped using `systemctl`.
*Example - Stopping the service*
```bash
sudo systemctl stop kapacitor.service
```
## Backup configuration and data
Whenever upgrading, no matter the upgrade approach, it pays to be cautious and back up essential files and data. The Kapacitor configuration file, located at `/etc/kapacitor/kapacitor.conf` by default, is the most important file when upgrading Kapacitor. In addition, you may want to back up your Kapacitor database, replays, and ID files in `/var/lib/kapacitor`.
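The backup step can be scripted. The sketch below parameterizes the paths so it can be tried anywhere; with the defaults described above you would pass `/etc/kapacitor/kapacitor.conf` and `/var/lib/kapacitor`.

```shell
#!/bin/sh
# Copy the Kapacitor config file and data directory into a backup location.
backup_kapacitor() {
  conf=$1   # e.g. /etc/kapacitor/kapacitor.conf
  data=$2   # e.g. /var/lib/kapacitor
  dest=$3   # backup directory (created if missing)
  mkdir -p "$dest"
  cp "$conf" "$dest/"
  cp -r "$data" "$dest/kapacitor-data"
}

# backup_kapacitor /etc/kapacitor/kapacitor.conf /var/lib/kapacitor ~/kapacitor-backup
```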
## Debian package upgrade
Check to see if Kapacitor was installed as a Debian package.
```bash
$ dpkg --list | grep "kapacitor"
ii kapacitor 1.3.1-1 amd64 Time series data processing engine
```
If the line `ii kapacitor...` is returned, it is safe to continue the upgrade using the Debian package and the instructions in this section. If nothing is returned, please consult the [Upgrade with .zip or .tar.gz section below](#upgrade-with-zip-or-tar-gz) for a general example of how to proceed.
### Package upgrade
Kapacitor can now be upgraded using the Debian package manager:
*Example - upgrade with dpkg*
```
$ sudo dpkg -i kapacitor_1.5.1_amd64.deb
(Reading database ... 283418 files and directories currently installed.)
Preparing to unpack kapacitor_1.5.1_amd64.deb ...
Unpacking kapacitor (1.5.1-1) over (1.3.1-1) ...
Removed symlink /etc/systemd/system/kapacitor.service.
Removed symlink /etc/systemd/system/multi-user.target.wants/kapacitor.service.
Setting up kapacitor (1.5.1-1) ...
```
During the upgrade the package manager will detect any differences between the current configuration file and the new configuration file included in the installation package. The package manager prompts the user to choose how to deal with this conflict. The default behavior is to preserve the existing configuration file. This is generally the safest choice, but it can mean losing visibility of new features provided in the more recent release.
*Example - Prompt on configuration file conflict*
```
Configuration file '/etc/kapacitor/kapacitor.conf'
==> Modified (by you or by a script) since installation.
==> Package distributor has shipped an updated version.
What would you like to do about it ? Your options are:
Y or I : install the package maintainer's version
N or O : keep your currently-installed version
D : show the differences between the versions
Z : start a shell to examine the situation
The default action is to keep your current version.
*** kapacitor.conf (Y/I/N/O/D/Z) [default=N] ?
```
### Migrate configuration file values
If during the upgrade the configuration file was overwritten, open the new configuration file in an editor such as `nano` or `vim` and from the backup copy of the old configuration file update the values of all changed keys - for example the InfluxDB fields for `username`, `password`, `urls` and the paths to `ssl-cert` and `ssl-key`. Depending on the installation, there will most likely be more than just these.
### Restart Kapacitor
Restart is best handled through `systemctl`.
```bash
sudo systemctl restart kapacitor.service
```
Note that `restart` is used here instead of `start`, in the event that Kapacitor was not shut down properly.
For tips on verifying the restart, see the [Verifying the Restart](#verifying-the-restart) section below.
## RPM package upgrade
Check to see if Kapacitor was installed as an RPM package.
*Example - checking for Kapacitor installation*
```
# yum list installed kapacitor
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: ftp.sh.cvut.cz
* extras: ftp.fi.muni.cz
* updates: ftp.sh.cvut.cz
Installed Packages
kapacitor.x86_64 1.5.1-1 installed
```
If the line `kapacitor.x86_64...1.5.1-1...installed` is returned, it is safe to continue the upgrade using the RPM package and the instructions in this section. If instead the message `Error: No matching Packages to list` is returned, please consult the [Upgrade with .zip or .tar.gz section below](#upgrade-with-zip-or-tar-gz) for a general example of how to proceed.
### Package upgrade
Please note that the following example commands are run as user `root`. To use them directly, log in as the `root` user or prefix them with `sudo`.
Kapacitor can now be upgraded using `yum localupdate` from the directory into which the installation packages were downloaded:
*Example - yum localupdate*
```
# yum -y localupdate kapacitor-1.5.1.x86_64.rpm
Loaded plugins: fastestmirror
Examining kapacitor-1.5.1.x86_64.rpm: kapacitor-1.3.1-1.x86_64
Marking kapacitor-1.5.1.x86_64.rpm as an update to kapacitor-1.3.1-1.x86_64
Resolving Dependencies
--> Running transaction check
---> Package kapacitor.x86_64 0:1.3.1-1 will be updated
---> Package kapacitor.x86_64 0:1.5.1-1 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
=============================================================================================================================================================
Package Arch Version Repository Size
=============================================================================================================================================================
Updating:
kapacitor x86_64 1.5.1-1 /kapacitor-1.5.1.x86_64 90 M
Transaction Summary
=============================================================================================================================================================
Upgrade 1 Package
Total size: 90 M
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Updating : kapacitor-1.5.1-1.x86_64 1/2
warning: /etc/kapacitor/kapacitor.conf created as /etc/kapacitor/kapacitor.conf.rpmnew
Failed to execute operation: Too many levels of symbolic links
warning: %post(kapacitor-1.5.1-1.x86_64) scriptlet failed, exit status 1
Non-fatal POSTIN scriptlet failure in rpm package kapacitor-1.5.1-1.x86_64
Cleanup : kapacitor-1.3.1-1.x86_64 2/2
Removed symlink /etc/systemd/system/multi-user.target.wants/kapacitor.service.
Removed symlink /etc/systemd/system/kapacitor.service.
Created symlink from /etc/systemd/system/kapacitor.service to /usr/lib/systemd/system/kapacitor.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kapacitor.service to /usr/lib/systemd/system/kapacitor.service.
Verifying : kapacitor-1.5.1-1.x86_64 1/2
Verifying : kapacitor-1.3.1-1.x86_64 2/2
Updated:
kapacitor.x86_64 0:1.5.1-1
Complete!
```
If after running `yum localupdate` the console messages are the same as above, it is safe to continue with managing the configuration files.
### Migrate configuration file values
In the example from the previous section a warning concerning the `kapacitor.conf` file may have been observed. The original configuration file has been preserved, and the new configuration file has been created with the extension `.rpmnew`. To use the new configuration file, rename the current configuration file to a backup name (for example, `kapacitor.conf.131`) and rename `kapacitor.conf.rpmnew` to `kapacitor.conf`. Using `vim` or `nano`, manually migrate the old values from the backup into the new copy of `kapacitor.conf`.
### Restart Kapacitor
Restart is best handled through `systemctl`.
```bash
systemctl restart kapacitor.service
```
Note that `restart` is used here instead of `start`, in the event that Kapacitor was not shut down properly.
For tips on verifying the restart see the [Verifying the Restart](#verifying-the-restart) section below.
## Upgrade with .zip or .tar.gz
How Kapacitor has been installed from the binary distribution (`.zip`, `.tar.gz`) can vary depending on the specific OS, organizational preferences, and other factors. The package contents may have been simply unpacked in a `/home/<user>` directory, they may have been copied into the system directories suggested by the package file structure, or they may have been deployed using another file system strategy. The following discussion presents one hypothetical installation. The steps are presentational and should, with a little creative thinking, be adaptable to other types of installation.
### A hypothetical installation
The following presentation will use a hypothetical installation, where all InfluxData products have been unpacked and are running from the directory `/opt/influxdata`. Please note that it is recommended to install InfluxData products using the system-specific install packages (e.g. `.deb`, `.rpm`) whenever possible; however, on other systems, for which there is no current installation package, the binary distribution (`.zip`, `.tar.gz`) can be used.
*Example - the Influxdata directory*
```
$ ls -l /opt/influxdata/
total 20
lrwxrwxrwx 1 influxdb influxdb 33 srp 22 12:51 influxdb -> /opt/influxdata/influxdb-1.3.1-1/
drwxr-xr-x 5 influxdb influxdb 4096 kvě 8 22:16 influxdb-1.3.1-1
lrwxrwxrwx 1 kapacitor kapacitor 34 srp 22 12:52 kapacitor -> /opt/influxdata/kapacitor-1.5.1-1/
drwxr-xr-x 6 kapacitor kapacitor 4096 srp 22 10:56 kapacitor-1.5.1-1
drwxr-xr-x 2 influxdb influxdb 4096 srp 22 13:52 ssl
drwxrwxr-x 5 telegraf telegraf 4096 úno 1 2017 telegraf
```
In the above example, a generic directory for the InfluxDB server and the Kapacitor application has been created as a symbolic link to the directory of the specific product release.
Elsewhere in the file system, configuration and lib directories have been pointed into these locations using additional symbolic links.
*Example - symbolic links from /etc*
```
...
$ ls -l `find /etc -maxdepth 1 -type l -print`
lrwxrwxrwx 1 root root 38 srp 22 12:56 /etc/influxdb -> /opt/influxdata/influxdb/etc/influxdb/
lrwxrwxrwx 1 root root 40 srp 22 12:57 /etc/kapacitor -> /opt/influxdata/kapacitor/etc/kapacitor/
lrwxrwxrwx 1 root root 38 srp 22 12:57 /etc/telegraf -> /opt/influxdata/telegraf/etc/telegraf/
...
```
*Example - symbolic links from /usr/lib*
```
$ ls -l `find /usr/lib -maxdepth 1 -type l -print`
lrwxrwxrwx 1 root root 42 srp 22 13:31 /usr/lib/influxdb -> /opt/influxdata/influxdb/usr/lib/influxdb/
lrwxrwxrwx 1 root root 44 srp 22 13:33 /usr/lib/kapacitor -> /opt/influxdata/kapacitor/usr/lib/kapacitor/
...
lrwxrwxrwx 1 root root 42 srp 22 13:32 /usr/lib/telegraf -> /opt/influxdata/telegraf/usr/lib/telegraf/
```
*Example - symbolic links from /usr/bin*
```
ls -l `find /usr/bin -maxdepth 1 -type l -print`
...
lrwxrwxrwx 1 root root 39 srp 22 14:40 /usr/bin/influx -> /opt/influxdata/influxdb/usr/bin/influx
lrwxrwxrwx 1 root root 40 srp 22 14:40 /usr/bin/influxd -> /opt/influxdata/influxdb/usr/bin/influxd
...
lrwxrwxrwx 1 root root 43 srp 22 14:04 /usr/bin/kapacitor -> /opt/influxdata/kapacitor/usr/bin/kapacitor
lrwxrwxrwx 1 root root 44 srp 22 14:04 /usr/bin/kapacitord -> /opt/influxdata/kapacitor/usr/bin/kapacitord
...
lrwxrwxrwx 1 root root 41 srp 22 13:57 /usr/bin/telegraf -> /opt/influxdata/telegraf/usr/bin/telegraf
...
```
Data file directories have been set up by hand.
*Example - /var/lib directory*
```
$ ls -l /var/lib/ | sort -k3,3
total 284
...
drwxr-xr-x 5 influxdb influxdb 4096 srp 22 14:12 influxdb
drwxr-xr-x 3 kapacitor kapacitor 4096 srp 22 14:16 kapacitor
...
```
InfluxDB is configured to use HTTPS and authentication. InfluxDB, Telegraf and Kapacitor have been configured to start and stop with Systemd.
*Example - symbolic links in the systemd directory*
```
$ ls -l `find /etc/systemd/system -maxdepth 1 -type l -print`
...
lrwxrwxrwx 1 root root 42 srp 22 13:39 /etc/systemd/system/influxdb.service -> /usr/lib/influxdb/scripts/influxdb.service
lrwxrwxrwx 1 root root 44 srp 22 13:40 /etc/systemd/system/kapacitor.service -> /usr/lib/kapacitor/scripts/kapacitor.service
lrwxrwxrwx 1 root root 42 srp 22 13:39 /etc/systemd/system/telegraf.service -> /usr/lib/telegraf/scripts/telegraf.service
...
```
### Manual upgrade
Ensure that InfluxDB and Telegraf (if installed) have been upgraded, that the Kapacitor service has been stopped and that a backup copy of `kapacitor.conf` has been saved.
Here the latest InfluxDB distribution has been unpacked alongside the previous distribution and the general symbolic link has been updated. The Telegraf distribution has been unpacked on top of the previous one.
*Example - the InfluxData directory after the InfluxDB and Telegraf upgrades*
```
$ ls -l /opt/influxdata/
total 24
drwxr-xr-x 2 root root 4096 srp 22 15:21 bak
lrwxrwxrwx 1 root root 17 srp 22 15:15 influxdb -> influxdb-1.5.2-1/
drwxr-xr-x 5 influxdb influxdb 4096 kvě 8 22:16 influxdb-1.2.4-1
drwxr-xr-x 5 influxdb influxdb 4096 srp 5 01:33 influxdb-1.5.2-1
lrwxrwxrwx 1 kapacitor kapacitor 34 srp 22 12:52 kapacitor -> /opt/influxdata/kapacitor-1.5.1-1/
drwxr-xr-x 6 kapacitor kapacitor 4096 srp 22 10:56 kapacitor-1.5.1-1
drwxr-xr-x 2 influxdb influxdb 4096 srp 22 13:52 ssl
drwxr-xr-x 5 telegraf telegraf 4096 čec 27 01:26 telegraf
```
Kapacitor is upgraded using the same approach as InfluxDB: the new distribution package is unpacked alongside the current one.
*Example - unpacking the latest Kapacitor distribution*
```
$ cd /opt/influxdata
$ sudo tar -xvzf /home/karl/Downloads/install/kapacitor-1.5.1_linux_amd64.tar.gz
./kapacitor-1.5.1-1/
./kapacitor-1.5.1-1/usr/
./kapacitor-1.5.1-1/usr/bin/
./kapacitor-1.5.1-1/usr/bin/kapacitord
./kapacitor-1.5.1-1/usr/bin/kapacitor
./kapacitor-1.5.1-1/usr/bin/tickfmt
./kapacitor-1.5.1-1/usr/lib/
./kapacitor-1.5.1-1/usr/lib/kapacitor/
./kapacitor-1.5.1-1/usr/lib/kapacitor/scripts/
./kapacitor-1.5.1-1/usr/lib/kapacitor/scripts/init.sh
./kapacitor-1.5.1-1/usr/lib/kapacitor/scripts/kapacitor.service
./kapacitor-1.5.1-1/usr/share/
./kapacitor-1.5.1-1/usr/share/bash-completion/
./kapacitor-1.5.1-1/usr/share/bash-completion/completions/
./kapacitor-1.5.1-1/usr/share/bash-completion/completions/kapacitor
./kapacitor-1.5.1-1/var/
./kapacitor-1.5.1-1/var/log/
./kapacitor-1.5.1-1/var/log/kapacitor/
./kapacitor-1.5.1-1/var/lib/
./kapacitor-1.5.1-1/var/lib/kapacitor/
./kapacitor-1.5.1-1/etc/
./kapacitor-1.5.1-1/etc/kapacitor/
./kapacitor-1.5.1-1/etc/kapacitor/kapacitor.conf
./kapacitor-1.5.1-1/etc/logrotate.d/
./kapacitor-1.5.1-1/etc/logrotate.d/kapacitor
```
Following extraction, the old symbolic link is removed and a new one is created pointing to the new distribution. This is similar to simply unpacking or copying the distribution contents over the existing directories, which is also a feasible approach. Unpacking in parallel and recreating the link has the advantage of preserving the previous installation, albeit in a now inactive location, which makes it easy to revert to the previous installation should that ever be desired.
*Example - Post extraction commands*
```bash
$ sudo chown -R kapacitor:kapacitor kapacitor-1.5.1-1/
$ sudo rm kapacitor
$ sudo ln -s ./kapacitor-1.5.1-1/ ./kapacitor
$ sudo chown kapacitor:kapacitor kapacitor
$ ls -l
total 28
drwxr-xr-x 2 root root 4096 srp 22 15:21 bak
lrwxrwxrwx 1 root root 17 srp 22 15:15 influxdb -> influxdb-1.5.2-1/
drwxr-xr-x 5 influxdb influxdb 4096 kvě 8 22:16 influxdb-1.2.4-1
drwxr-xr-x 5 influxdb influxdb 4096 srp 5 01:33 influxdb-1.5.2-1
lrwxrwxrwx 1 kapacitor kapacitor 20 srp 22 15:35 kapacitor -> ./kapacitor-1.5.1-1/
drwxr-xr-x 6 kapacitor kapacitor 4096 srp 22 10:56 kapacitor-1.5.1-1
drwxr-xr-x 5 kapacitor kapacitor 4096 čen 2 20:22 kapacitor-1.5.1-1
drwxr-xr-x 2 influxdb influxdb 4096 srp 22 13:52 ssl
drwxr-xr-x 5 telegraf telegraf 4096 čec 27 01:26 telegraf
```
### Migrate configuration file values
Using `vim`, the values from the backup of the previous configuration file are manually migrated to the new one.
```bash
$ sudo -u kapacitor vim kapacitor/etc/kapacitor/kapacitor.conf
```
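As an optional first step, the settings that need migrating can be identified by diffing the saved backup against the newly installed default file. The backup path below is hypothetical; adjust it to wherever `kapacitor.conf` was backed up (the example above keeps a `bak` directory under `/opt/influxdata`).

```bash
# Hypothetical backup location; adjust to wherever kapacitor.conf was saved
diff /opt/influxdata/bak/kapacitor.conf kapacitor/etc/kapacitor/kapacitor.conf
```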
### Restart Kapacitor
Restart is handled through `systemctl`.
```bash
sudo systemctl restart kapacitor.service
```
Note that `restart` is used here instead of `start`, in case Kapacitor was not shut down properly.
## Verifying the restart
First check the service status in `systemctl`.
*Example - service status check*
```bash
$ sudo systemctl status kapacitor.service
● kapacitor.service - Time series data processing engine.
Loaded: loaded (/lib/systemd/system/kapacitor.service; enabled; vendor preset: enabled)
Active: active (running) since Po 2017-08-21 14:22:18 CEST; 16min ago
Docs: https://github.com/influxdb/kapacitor
Main PID: 29452 (kapacitord)
Tasks: 13
Memory: 11.6M
CPU: 726ms
CGroup: /system.slice/kapacitor.service
└─29452 /usr/bin/kapacitord -config /etc/kapacitor/kapacitor.conf
```
Check the log in `journalctl`.
*Example - journalctl check*
```
srp 21 14:22:18 algonquin systemd[1]: Started Time series data processing engine..
srp 21 14:22:18 algonquin kapacitord[29452]: '##:::'##::::'###::::'########:::::'###:::::'######::'####:'########::'#######::'########::
srp 21 14:22:18 algonquin kapacitord[29452]: ##::'##::::'## ##::: ##.... ##:::'## ##:::'##... ##:. ##::... ##..::'##.... ##: ##.... ##:
srp 21 14:22:18 algonquin kapacitord[29452]: ##:'##::::'##:. ##:: ##:::: ##::'##:. ##:: ##:::..::: ##::::: ##:::: ##:::: ##: ##:::: ##:
srp 21 14:22:18 algonquin kapacitord[29452]: #####::::'##:::. ##: ########::'##:::. ##: ##:::::::: ##::::: ##:::: ##:::: ##: ########::
srp 21 14:22:18 algonquin kapacitord[29452]: ##. ##::: #########: ##.....::: #########: ##:::::::: ##::::: ##:::: ##:::: ##: ##.. ##:::
srp 21 14:22:18 algonquin kapacitord[29452]: ##:. ##:: ##.... ##: ##:::::::: ##.... ##: ##::: ##:: ##::::: ##:::: ##:::: ##: ##::. ##::
srp 21 14:22:18 algonquin kapacitord[29452]: ##::. ##: ##:::: ##: ##:::::::: ##:::: ##:. ######::'####:::: ##::::. #######:: ##:::. ##:
srp 21 14:22:18 algonquin kapacitord[29452]: ..::::..::..:::::..::..:::::::::..:::::..:::......:::....:::::..::::::.......:::..:::::..::
srp 21 14:22:18 algonquin kapacitord[29452]: 2017/08/21 14:22:18 Using configuration at: /etc/kapacitor/kapacitor.conf
```
Also check the log in the `/var/log/kapacitor` directory.
*Example - kapacitor.log check*
```bash
$ sudo tail -f /var/log/kapacitor/kapacitor.log
[httpd] 127.0.0.1 - - [21/Aug/2017:14:41:50 +0200] "POST /write?consistency=&db=_internal&precision=ns&rp=monitor HTTP/1.1" 204 0 "-" "InfluxDBClient" 1a122e03-866e-11e7-80f1-000000000000 375
[httpd] 127.0.0.1 - - [21/Aug/2017:14:41:50 +0200] "POST /write?consistency=&db=telegraf&precision=ns&rp=autogen HTTP/1.1" 204 0 "-" "InfluxDBClient" 1a401bb1-866e-11e7-80f2-000000000000 303
[httpd] 127.0.0.1 - - [21/Aug/2017:14:42:00 +0200] "POST /write?consistency=&db=_internal&precision=ns&rp=monitor HTTP/1.1" 204 0 "-" "InfluxDBClient" 200818be-866e-11e7-80f3-000000000000 398
[httpd] 127.0.0.1 - - [21/Aug/2017:14:42:00 +0200] "POST /write?consistency=&db=telegraf&precision=ns&rp=autogen HTTP/1.1" 204 0 "-" "InfluxDBClient" 20360382-866e-11e7-80f4-000000000000 304
[httpd] 127.0.0.1 - - [21/Aug/2017:14:42:10 +0200] "POST /write?consistency=&db=_internal&precision=ns&rp=monitor HTTP/1.1" 204 0 "-" "InfluxDBClient" 25fded1a-866e-11e7-80f5-000000000000 550
[httpd] 127.0.0.1 - - [21/Aug/2017:14:42:10 +0200] "POST /write?consistency=&db=telegraf&precision=ns&rp=autogen HTTP/1.1" 204 0 "-" "InfluxDBClient" 262be594-866e-11e7-80f6-000000000000 295
[httpd] 127.0.0.1 - - [21/Aug/2017:14:42:20 +0200] "POST /write?consistency=&db=_internal&precision=ns&rp=monitor HTTP/1.1" 204 0 "-" "InfluxDBClient" 2bf3d170-866e-11e7-80f7-000000000000 473
[httpd] 127.0.0.1 - - [21/Aug/2017:14:42:20 +0200] "POST /write?consistency=&db=telegraf&precision=ns&rp=autogen HTTP/1.1" 204 0 "-" "InfluxDBClient" 2c21ddde-866e-11e7-80f8-000000000000 615
[httpd] 127.0.0.1 - - [21/Aug/2017:14:42:30 +0200] "POST /write?consistency=&db=_internal&precision=ns&rp=monitor HTTP/1.1" 204 0 "-" "InfluxDBClient" 31e9b251-866e-11e7-80f9-000000000000 424
[httpd] 127.0.0.1 - - [21/Aug/2017:14:42:30 +0200] "POST /write?consistency=&db=telegraf&precision=ns&rp=autogen HTTP/1.1" 204 0 "-" "InfluxDBClient" 3217a267-866e-11e7-80fa-000000000000 288
```
Check for Kapacitor client activity in InfluxDB.
*Example - InfluxDB check*
```bash
sudo journalctl --unit influxdb.service | grep "Kapacitor"
srp 21 14:45:18 algonquin influxd[27308]: [httpd] 127.0.0.1 - admin [21/Aug/2017:14:45:18 +0200] "GET /ping HTTP/1.1" 204 0 "-" "KapacitorInfluxDBClient" 965e7c0b-866e-11e7-81c7-000000000000 21
srp 21 14:45:18 algonquin influxd[27308]: [httpd] 127.0.0.1 - admin [21/Aug/2017:14:45:18 +0200] "POST /query?db=&q=SHOW+DATABASES HTTP/1.1" 200 123 "-" "KapacitorInfluxDBClient" 965e89e5-866e-11e7-81c8-000000000000 570
srp 21 14:45:18 algonquin influxd[27308]: [httpd] 127.0.0.1 - admin [21/Aug/2017:14:45:18 +0200] "POST /query?db=&q=SHOW+RETENTION+POLICIES+ON+_internal HTTP/1.1" 200 158 "-" "KapacitorInfluxDBClient" 965fcf0f-866e-11e7-81c9-000000000000 308
srp 21 14:45:18 algonquin influxd[27308]: [httpd] 127.0.0.1 - admin [21/Aug/2017:14:45:18 +0200] "POST /query?db=&q=SHOW+RETENTION+POLICIES+ON+telegraf HTTP/1.1" 200 154 "-" "KapacitorInfluxDBClient" 96608b2b-866e-11e7-81ca-000000000000 1812
srp 21 14:45:18 algonquin influxd[27308]: [httpd] 127.0.0.1 - admin [21/Aug/2017:14:45:18 +0200] "POST /query?db=&q=SHOW+SUBSCRIPTIONS HTTP/1.1" 200 228 "-" "KapacitorInfluxDBClient" 96618c32-866e-11e7-81cb-000000000000 380
```
Verify that old tasks are once again visible and enabled.
*Example - tasks check*
```bash
$ kapacitor list tasks
ID Type Status Executing Databases and Retention Policies
cpu_alert_batch batch disabled false ["telegraf"."autogen"]
cpu_alert_stream stream enabled true ["telegraf"."autogen"]
```
Recording existing tasks and replaying the results is also recommended to check the status of the newly upgraded Kapacitor service. Which tasks to record depends on the specifics of the installation. See the [Kapacitor API documentation](/kapacitor/v1.5/working/api#recordings) for more details.
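For example, recording and replaying the `cpu_alert_stream` task shown above might look like the following sketch, where `<recording-id>` is a placeholder for the ID printed by the `record` command:

```bash
# Record 60 seconds of live data flowing into the stream task
kapacitor record stream -task cpu_alert_stream -duration 60s
# Replay the recording against the task using the recorded times
kapacitor replay -recording <recording-id> -task cpu_alert_stream -rec-time
```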
If these checks look correct, then the upgrade can be considered complete.
---
title: Kapacitor event handlers
description: Kapacitor event handlers provide ways to integrate Kapacitor alert messages with logging, specific URLs, and many third-party applications.
aliases:
- /kapacitor/v1.5/working/event-handler-setup/
menu:
kapacitor_1_5_ref:
name: Event handlers
weight: 50
---
Kapacitor can be integrated into a monitoring system by sending
[alert messages](/kapacitor/v1.5/nodes/alert_node/#message) to supported event
handlers. Currently, Kapacitor can send alert messages to specific log files and
specific URLs, as well as to many third-party applications.
These documents outline configuration options, setup instructions,
[handler file](#handler-file) and [TICKscript](/kapacitor/v1.5/tick/introduction/)
syntax for officially supported Kapacitor event handlers.
[Aggregate](/kapacitor/v1.5/event_handlers/aggregate/)
[Alerta](/kapacitor/v1.5/event_handlers/alerta/)
[Discord](/kapacitor/v1.5/event_handlers/discord/)
[Email](/kapacitor/v1.5/event_handlers/email/)
[Exec](/kapacitor/v1.5/event_handlers/exec/)
[HipChat](/kapacitor/v1.5/event_handlers/hipchat/)
[Kafka](/kapacitor/v1.5/event_handlers/kafka/)
[Log](/kapacitor/v1.5/event_handlers/log/)
[Microsoft Teams](/kapacitor/v1.5/event_handlers/microsoftteams/)
[MQTT](/kapacitor/v1.5/event_handlers/mqtt/)
[OpsGenie](/kapacitor/v1.5/event_handlers/opsgenie/)
[PagerDuty](/kapacitor/v1.5/event_handlers/pagerduty/)
[Post](/kapacitor/v1.5/event_handlers/post/)
[Publish](/kapacitor/v1.5/event_handlers/publish/)
[Pushover](/kapacitor/v1.5/event_handlers/pushover/)
[Sensu](/kapacitor/v1.5/event_handlers/sensu/)
[Slack](/kapacitor/v1.5/event_handlers/slack/)
[Snmptrap](/kapacitor/v1.5/event_handlers/snmptrap/)
[Talk](/kapacitor/v1.5/event_handlers/talk/)
[TCP](/kapacitor/v1.5/event_handlers/tcp/)
[Telegram](/kapacitor/v1.5/event_handlers/telegram/)
[VictorOps](/kapacitor/v1.5/event_handlers/victorops/)
> **Note:** Setup instructions are not currently available for all supported
> event handlers, but additional information will be added over time. If
> you are familiar with the setup process for a specific event handler, please
> feel free to [contribute](https://github.com/influxdata/docs.influxdata.com/blob/master/CONTRIBUTING.md).
## Configure event handlers
Required and default configuration options for most event handlers are
configured in your Kapacitor configuration file, `kapacitor.conf`.
_The default location for this is `/etc/kapacitor/kapacitor.conf`, but may be
different depending on your Kapacitor setup._
Many event handlers provide options that can be defined in a TICKscript or in a
handler file while some can only be configured in a handler file.
These configurable options are outlined in the documentation for each handler.
## Add and use event handlers
Enable the event handler in your `kapacitor.conf` if applicable. Once
enabled, do one of the following:
- [Create a topic handler with a handler file](#create-a-topic-handler-with-a-handler-file), and then [add the handler](#add-the-handler).
- [Use a handler in a TICKscript](#use-a-handler-in-a-tickscript).
> **Note:** Not all event handlers can be used in TICKscripts.
### Create a topic handler with a handler file
An event handler file is a simple YAML or JSON file that contains information
about the handler.
Although many handlers can be added in a TICKscript, managing multiple handlers in TICKscripts can be cumbersome.
Handler files let you add and use handlers outside of TICKscripts.
For some handler types, using handler files is the only option.
The handler file contains the following:
<span style="color: #ff9e46; font-style: italic; font-size: .8rem;">* Required</span>
- **ID**<span style="color: #ff9e46; font-style: italic;">\*</span>: The unique ID
of the handler.
- **Topic**<span style="color: #ff9e46; font-style: italic;">\*</span>: The topic
to which the handler subscribes.
- **Match**: A lambda expression to filter matching alerts. By default, all alerts
match. Learn more about [match expressions](/kapacitor/v1.5/working/alerts/#match-expressions).
- **Kind**<span style="color: #ff9e46; font-style: italic;">\*</span>: The kind of
handler.
- **Options**: Configurable options determined by the handler kind. If none are
provided, default values defined for the handler in the `kapacitor.conf` are used.
```yaml
id: handler-id
topic: topic-name
match: changed()
kind: slack
options:
channel: '#oh-nos'
```
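Handler files may also be written in JSON. The same handler expressed in JSON would look like this:

```json
{
  "id": "handler-id",
  "topic": "topic-name",
  "match": "changed()",
  "kind": "slack",
  "options": {
    "channel": "#oh-nos"
  }
}
```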
#### Add the handler
Use the Kapacitor CLI to define a new handler with a handler file:
```bash
# Pattern
kapacitor define-topic-handler <handler-file-name>
# Example
kapacitor define-topic-handler slack_cpu_handler.yaml
```
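To verify the handler was added, list the handlers subscribed to the topic or show a single handler's definition. The topic and handler names below are placeholders:

```bash
# List all handlers subscribed to a topic
kapacitor list topic-handlers <topic-name>
# Show the full definition of a single handler
kapacitor show-topic-handler <topic-name> <handler-id>
```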
### Use a handler in a TICKscript
Many event handlers can be used directly in TICKscripts to send events.
This is generally done with handlers that send messages to third parties. Below
is an example TICKscript that publishes CPU alerts to Slack using the `.slack()`
event handler:
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "idle_usage" < 10)
.message('You better check your CPU usage.')
.slack()
```
> Events are sent to handlers if the alert is in a state other than OK or the
> alert just changed to the OK state from a non-OK state (the alert
> recovered). Use the [AlertNode.StateChangesOnly](/kapacitor/v1.5/nodes/alert_node/#statechangesonly)
> property to send events to handlers only if the alert state changes.
---
title: Aggregate event handler
description: The aggregate event handler allows you to aggregate alerts messages over a specified interval. This page includes aggregate options and usage examples.
menu:
kapacitor_1_5_ref:
name: Aggregate
weight: 100
parent: Event handlers
---
The aggregate event handler aggregates multiple events into a single event.
It subscribes to a topic and aggregates published messages within a defined
interval into an aggregated topic.
## Options
The following aggregate event handler options can be set in a
[handler file](/kapacitor/v1.5/event_handlers/#handler-file).
| Name | Type | Description |
| ---- | ---- | ----------- |
| interval | duration | How often to aggregate events. Interval must be specified in nanoseconds. |
| topic | string | A topic into which to publish the aggregate events. |
| message | string | A template string where `{{.Interval}}` and `{{.Count}}` are available for constructing a meaningful message. |
### Example: handler file
```yaml
id: handler-id
topic: topic-name
kind: aggregate
options:
interval: 300000000000
topic: agg_5m
message: '{{.Count}} new events in the last {{.Interval}}'
```
## Using the aggregate event handler
The aggregate event handler subscribes to a topic and aggregates messages
published to that topic at specified intervals.
The TICKscript below, `cpu_alert.tick`, publishes alerts to the `cpu` topic if
CPU idle usage is less than 10% (or CPU usage is greater than 90%).
#### cpu\_alert.tick
```js
stream
|from()
.measurement('cpu')
.groupBy(*)
|alert()
.crit(lambda: "usage_idle" < 10)
.topic('cpu')
```
Add and enable this TICKscript with the following:
```bash
kapacitor define cpu_alert -tick cpu_alert.tick
kapacitor enable cpu_alert
```
Create a new handler file, `aggr_cpu_alerts_10m.yaml`, that uses the `aggregate`
event handler to subscribe to the `cpu` topic, aggregate alerts from the
last 10 minutes, and publish aggregated messages to a new `aggr_cpu` topic.
_Handler files can be YAML or JSON._
#### aggr_cpu_alerts_10m.yaml
```yaml
id: aggr_cpu_alerts_10m
topic: cpu
kind: aggregate
options:
interval: 600000000000
topic: aggr_cpu
message: '{{.Count}} CPU alerts in the last {{.Interval}}'
```
Add the handler file:
```bash
kapacitor define-topic-handler aggr_cpu_alerts_10m.yaml
```
Aggregated CPU alert messages will be published to the `aggr_cpu` topic every
10 minutes. Further handling of the aggregated events can be configured on the
`aggr_cpu` topic.
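As a sketch of such further handling, a second handler file (the ID and log path here are hypothetical) could subscribe to the `aggr_cpu` topic and write the aggregated events to a file using the log event handler:

```yaml
id: aggr_cpu_log
topic: aggr_cpu
kind: log
options:
  path: '/var/log/kapacitor/aggr_cpu_alerts.log'
```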
---
title: Alerta event handler
description: The Alerta event handler allows you to send Kapacitor alerts to Alerta. This page includes configuration options and usage examples.
menu:
kapacitor_1_5_ref:
name: Alerta
weight: 200
parent: Event handlers
---
[Alerta](http://alerta.io/) is a monitoring tool used to consolidate and
deduplicate alerts from multiple sources for quick at-a-glance visualization.
Kapacitor can be configured to send alert messages to Alerta.
## Configuration
Configuration as well as default [option](#options) values for the Alerta event
handler are set in your `kapacitor.conf`.
Below is an example configuration:
```toml
[alerta]
enabled = true
url = "http://127.0.0.1"
token = "mysupersecretauthtoken"
environment = "production"
origin = "kapacitor"
```
#### `enabled`
Set to `true` to enable the Alerta event handler.
#### `url`
The Alerta URL.
#### `token`
Default Alerta authentication token.
#### `token-prefix`
Default token prefix.
_If you receive invalid token errors, you may need to change this to "Key"._
#### `environment`
Default Alerta environment.
#### `origin`
Default origin of alert.
## Options
The following Alerta event handler options can be set in a
[handler file](/kapacitor/v1.5/event_handlers/#handler-file) or when using
`.alerta()` in a TICKscript.
<span style="color: #ff9e46; font-style: italic; font-size: .8rem;">* Required</span>
| Name | Type | Description |
| ---- | ---- | ----------- |
| token | string | Alerta authentication token. If empty uses the token from the configuration. |
| token-prefix | string | Alerta authentication token prefix. If empty, uses "Bearer". |
| resource<span style="color: #ff9e46; font-style: italic;">\*</span> | string | Alerta resource. Can be a template and has access to the same data as the AlertNode.Details property. Default: `{{ .Name }}`. |
| event<span style="color: #ff9e46; font-style: italic;">\*</span> | string | Alerta event. Can be a template and has access to the same data as the idInfo property. Default: `{{ .ID }}`. |
| environment | string | Alerta environment. Can be a template and has access to the same data as the AlertNode.Details property. Default is set from the configuration. |
| group | string | Alerta group. Can be a template and has access to the same data as the AlertNode.Details property. Default: `{{ .Group }}`. |
| value | string | Alerta value. Can be a template and has access to the same data as the AlertNode.Details property. Default is an empty string. |
| origin | string | Alerta origin. If empty uses the origin from the configuration. |
| service | list of strings | List of affected services. |
| timeout | duration string | Alerta timeout. Default is 24 hours. |
> **Note:** The `resource` and `event` properties are required.
> Alerta cannot be configured globally because of these required properties.
### Example: handler file
```yaml
topic: topic-name
id: handler-id
kind: alerta
options:
token: 'mysupersecretauthtoken'
token-prefix: 'Bearer'
resource: '{{ .Name }}'
event: '{{ .ID }}'
environment: 'Production'
group: '{{ .Group }}'
value: 'some-value'
origin: 'kapacitor'
service: ['service1', 'service2']
timeout: 24h
```
### Example: TICKscript
```js
|alert()
// ...
.stateChangesOnly()
.alerta()
.token('mysupersecretauthtoken')
.tokenPrefix('Bearer')
.resource('{{ .Name }}')
.event('{{ .ID }}')
.environment('Production')
.group('{{ .Group }}')
.value('some-value')
.origin('kapacitor')
.service('service1', 'service2')
.timeout(24h)
```
## Using the Alerta event handler
With the Alerta event handler enabled and configured in your `kapacitor.conf`,
use the `.alerta()` attribute in your TICKscripts to send alerts to Alerta or
define an Alerta handler that subscribes to a topic and sends published alerts
to Alerta.
> To avoid posting a message every alert interval, use
> [AlertNode.StateChangesOnly](/kapacitor/v1.5/nodes/alert_node/#statechangesonly)
> so only events where the alert changed state are sent to Alerta.
The examples below use the following Alerta configuration defined in the `kapacitor.conf`:
_**Alerta settings in kapacitor.conf**_
```toml
[alerta]
enabled = true
url = "http://127.0.0.1"
token = "mysupersecretauthtoken"
environment = "production"
origin = "kapacitor"
```
### Send alerts to Alerta from a TICKscript
The following TICKscript sends the message "Hey, check your CPU" to Alerta
whenever idle CPU usage drops below 10%, using the `.alerta()` event handler and
default Alerta settings defined in the `kapacitor.conf`.
_**alerta-cpu-alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.stateChangesOnly()
.message('Hey, check your CPU')
.alerta()
.resource('{{ .Name }}')
.event('{{ .ID }}')
```
### Send alerts to Alerta from a defined handler
The following setup sends an alert to the `cpu` topic with the message, "Hey,
check your CPU". An Alerta handler is added that subscribes to the `cpu` topic
and publishes all alert messages to Alerta using default settings defined in the
`kapacitor.conf`.
Create a TICKscript that publishes alert messages to a topic.
The TICKscript below sends an alert message to the `cpu` topic any time idle CPU
usage drops below 10%.
_**cpu\_alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.stateChangesOnly()
.message('Hey, check your CPU')
.topic('cpu')
```
Add and enable the TICKscript:
```bash
kapacitor define cpu_alert -tick cpu_alert.tick
kapacitor enable cpu_alert
```
Create a handler file that subscribes to the `cpu` topic and uses the Alerta
event handler to send alerts to Alerta.
_**alerta\_cpu\_handler.yaml**_
```yaml
id: alerta-cpu-alert
topic: cpu
kind: alerta
options:
resource: '{{ .Name }}'
event: '{{ .ID }}'
origin: 'kapacitor'
```
Add the handler:
```bash
kapacitor define-topic-handler alerta_cpu_handler.yaml
```
---
title: Discord event handler
description: The Discord event handler lets you send Kapacitor alerts to Discord. This page includes configuration options and usage examples.
menu:
kapacitor_1_5_ref:
name: Discord
weight: 250
parent: Event handlers
---
[Discord](https://discordapp.com) is a popular chat service targeted primarily at gamers, but also used by teams outside of gaming looking for a free solution.
To configure Kapacitor to send alert messages to Discord, set the applicable configuration options.
## Configuration
Configuration as well as default [option](#options) values for the Discord event
handler are set in your `kapacitor.conf`.
Below is an example configuration:
```toml
[[discord]]
enabled = false
default = true
url = "https://discordapp.com/api/webhooks/xxxxxxxxxxxxxxxxxx/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
workspace = "guild-channel"
timestamp = true
username = "Kapacitor"
avatar-url = "https://influxdata.github.io/branding/img/downloads/influxdata-logo--symbol--pool-alpha.png"
embed-title = "Kapacitor Alert"
global = false
state-changes-only = false
ssl-ca = "/path/to/ca.crt"
ssl-cert = "/path/to/cert.crt"
ssl-key = "/path/to/private-key.key"
insecure-skip-verify = false
```
> Multiple Discord clients may be configured by repeating `[[discord]]` sections.
> The `workspace` acts as a unique identifier for each configured Discord client.
#### `enabled`
Set to `true` to enable the Discord event handler.
#### `default`
If multiple Discord client configurations are specified, identify one configuration as the default.
#### `workspace`
The Discord workspace ID.
Set this string to identify this particular Discord configuration.
For example, the name of the Discord channel and the guild it's a part
of, such as `<guild>-<channel>`.
#### `timestamp`
Boolean signifying whether the timestamp should be shown in the embed.
#### `url`
The Discord webhook URL. This can be obtained by adding a webhook in the channel settings - see [Intro to Webhooks](https://support.discordapp.com/hc/en-us/articles/228383668) for a full guide.
Discord will provide you with the webhook URL.
#### `username`
Set the Discord bot username to override the username set when generating the webhook.
#### `avatar-url`
Set a URL to a specified avatar to override the avatar set when generating the webhook.
#### `embed-title`
Set the title to display in the alert embed. If blank, no title is set.
#### `global`
Set to `true` to send all alerts to Discord without explicitly specifying Discord in the TICKscript.
#### `state-changes-only`
Sets all alerts in state-changes-only mode, meaning alerts will only be sent if
the alert state changes.
_Only applies if `global` is `true`._
#### `ssl-ca`
Set path to certificate authority file.
#### `ssl-cert`
Set path to host certificate file.
#### `ssl-key`
Set path to certificate private key file.
#### `insecure-skip-verify`
Set to `true` to use SSL but skip chain and host verification.
_This is necessary if using a self-signed certificate._
## Options
Set the following Discord event handler options in a
[handler file](/kapacitor/v1.5/event_handlers/#handler-file) or when using
`.discord()` in a TICKscript.
| Name | Type | Description |
| ---- | ---- | ----------- |
| workspace | string | Specifies which Discord configuration to use when there are multiple. |
| timestamp | bool | Specifies whether to show the timestamp in the embed footer. If blank uses the choice from the configuration. |
| username | string | Username of the Discord bot. If empty uses the username from the configuration. |
| avatar-url | string | URL of image to use as the webhook's avatar. If empty uses the url from the configuration. |
| embed-title | string | Title of alert embed posted to the webhook. If empty uses the title set in the configuration. |
### Example: handler file
```yaml
id: handler-id
topic: topic-name
kind: discord
options:
workspace: 'guild-channel'
username: 'Kapacitor'
avatar-url: 'https://influxdata.github.io/branding/img/downloads/influxdata-logo--symbol--pool-alpha.png'
timestamp: true
embed-title: 'Kapacitor Alert'
```
### Example: TICKscript
```js
|alert()
// ...
.discord()
.workspace('guild-channel')
.username('Kapacitor')
.avatarUrl('https://influxdata.github.io/branding/img/downloads/influxdata-logo--symbol--pool-alpha.png')
.timestamp(true)
.embedTitle('Kapacitor Alert')
```
## Set up a Discord webhook
To allow Kapacitor to send alerts to Discord, obtain a webhook URL from Discord (see [Intro to Webhooks](https://support.discordapp.com/hc/en-us/articles/228383668) for a full guide).
Then add the generated webhook URL as the `url` in the `[[discord]]` configuration section of
your `kapacitor.conf`.
## Using the Discord event handler
With one or more Discord event handlers enabled and configured in your
`kapacitor.conf`, use the `.discord()` attribute in your TICKscripts to send
alerts to Discord or define a Discord handler that subscribes to a topic and sends
published alerts to Discord.
> To avoid posting a message every alert interval, use
> [AlertNode.StateChangesOnly](/kapacitor/v1.5/nodes/alert_node/#statechangesonly)
> so only events where the alert changed state are sent to Discord.
See the examples below for sample Discord configurations defined in the `kapacitor.conf`:
_**Discord settings in kapacitor.conf**_
```toml
[[discord]]
enabled = true
default = true
url = "https://discordapp.com/api/webhooks/xxxxxxxxxxxxxxxxxx/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
workspace = "guild-alerts"
timestamp = true
username = "AlertBot"
avatar-url = "https://influxdata.github.io/branding/img/downloads/influxdata-logo--symbol--pool-alpha.png"
embed-title = "Alert"
global = false
state-changes-only = false
[[discord]]
enabled = true
default = false
url = "https://discordapp.com/api/webhooks/xxxxxxxxxxxxxxxxxx/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
workspace = "guild-errors"
timestamp = true
username = "StatsBot"
avatar-url = "https://influxdata.github.io/branding/img/downloads/influxdata-logo--symbol--pool-alpha.png"
embed-title = "Errors"
global = false
state-changes-only = false
```
### Send alerts to Discord from a TICKscript
Use the `.discord()` event handler in your TICKscript to send an alert.
For example, the following TICKscript sends an alert with the message
"Hey, check your CPU" to Discord whenever idle CPU usage
drops below 20%.
_**discord-cpu-alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.warn(lambda: "usage_idle" < 20)
.stateChangesOnly()
.message('Hey, check your CPU')
.discord()
.embedTitle('Uh Oh!')
```
### Send alerts to Discord from a defined handler
The following setup sends an alert with the message "Hey, check your CPU" to the `cpu` topic.
A Discord handler is added that subscribes to the `cpu` topic and publishes all
alert messages to Discord.
Create a TICKscript that publishes alert messages to a topic.
The TICKscript below sends a critical alert message to the `cpu` topic any time
idle CPU usage drops below 5%.
_**cpu\_alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 5)
.stateChangesOnly()
.message('Hey, check your CPU')
.topic('cpu')
```
Add and enable the TICKscript:
```bash
kapacitor define cpu_alert -tick cpu_alert.tick
kapacitor enable cpu_alert
```
Create a handler file that subscribes to the `cpu` topic and uses the Discord
event handler to send alerts to Discord. This handler uses the `guild-alerts`
Discord configuration defined in the `kapacitor.conf` above.
_**discord\_cpu\_handler.yaml**_
```yaml
id: discord-cpu-alert
topic: cpu
kind: discord
options:
workspace: 'guild-alerts'
embed-title: 'Hey, Listen!'
```
Add the handler:
```bash
kapacitor define-topic-handler discord_cpu_handler.yaml
```
### Using multiple Discord configurations
Kapacitor can use multiple Discord integrations, each identified by the value of
the [`workspace`](#workspace) config. The TICKscript below illustrates how
multiple Discord integrations can be used.
In the `kapacitor.conf` [above](#using-the-discord-event-handler), there are two
Discord configurations: one for alerts and the other for error reports. The
`workspace` value acts as a unique identifier for each Discord configuration.
The following TICKscript sends alerts to the `guild-alerts` Discord workspace.
_**discord-cpu-alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 5)
.stateChangesOnly()
.message('Hey, I think the machine is on fire.')
.discord()
.workspace('guild-alerts')
.embedTitle('AAAAAAAAAAAAAAAAAAAAAA')
```
Error rates are also stored in the same InfluxDB instance and we want to
send daily reports of `500` errors to the `guild-errors` Discord workspace.
The following TICKscript collects `500` error occurrences and publishes them to
the `500-errors` topic.
_**500_errors.tick**_
```js
stream
|from()
.measurement('errors')
.groupBy('500')
|alert()
.info(lambda: "count" > 0)
.noRecoveries()
.topic('500-errors')
```
Below is an [aggregate](/kapacitor/v1.5/event_handlers/aggregate/) handler that
subscribes to the `500-errors` topic, aggregates the number of 500 errors over a
24 hour period, then publishes an aggregate message to the `500-errors-24h` topic.
_**500\_errors\_24h.yaml**_
```yaml
id: 500-errors-24h
topic: 500-errors
kind: aggregate
options:
interval: 24h
topic: 500-errors-24h
message: '{{ .Count }} 500 errors last 24 hours.'
```
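Conceptually, the aggregate handler buffers every event published to `500-errors` during the 24-hour interval, then emits a single message with the count substituted into the template. Kapacitor uses Go templates for the `message` option; the Python stand-in below only handles the `{{ .Count }}` placeholder and is purely illustrative:

```python
def render_aggregate_message(template, events):
    # Minimal stand-in: only the {{ .Count }} placeholder is supported here.
    return template.replace("{{ .Count }}", str(len(events)))

# Three hypothetical 500-error events collected during the interval:
events = ["500 on /api/a", "500 on /api/b", "500 on /api/c"]
print(render_aggregate_message("{{ .Count }} 500 errors last 24 hours.", events))
# → 3 500 errors last 24 hours.
```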
Last, but not least, a Discord handler that subscribes to the `500-errors-24h`
topic and publishes aggregated count messages to the `error-reports` Discord workspace:
_**discord\_500\_errors\_daily.yaml**_
```yaml
id: discord-500-errors-daily
topic: 500-errors-24h
kind: discord
options:
workspace: guild-errors
```
---
title: Email event handler
description: The "email" event handler allows you to send Kapacitor alerts via email. This page includes configuration options and usage examples.
menu:
kapacitor_1_5_ref:
name: Email
weight: 300
parent: Event handlers
---
The Email event handler sends alert messages via SMTP/email.
## Configuration
Configuration as well as default [option](#options) values for the Email event
handler are set in the `[smtp]` section of your `kapacitor.conf`.
Below is an example configuration:
```toml
[smtp]
enabled = true
host = "localhost"
port = 25
username = "username"
password = "passw0rd"
from = "me@example.com"
to = ["me@example.com", "you@example.com"]
no-verify = false
idle-timeout = "30s"
global = false
state-changes-only = false
```
#### `enabled`
Set to `true` to enable the SMTP event handler.
#### `host`
The SMTP host.
#### `port`
The SMTP port.
#### `username`
Your SMTP username.
#### `password`
Your SMTP password.
#### `from`
The "From" address for outgoing mail.
#### `to`
List of default "To" addresses.
#### `no-verify`
Skip TLS certificate verification when connecting to the SMTP server.
#### `idle-timeout`
The time after which idle connections are closed.
#### `global`
If `true`, all alerts will be sent via Email without explicitly specifying the
SMTP handler in the TICKscript.
#### `state-changes-only`
Sets all alerts in state-changes-only mode, meaning alerts will only be sent if
the alert state changes.
Only applies if `global` is `true`.
## Options
The following Email event handler options can be set in a
[handler file](/kapacitor/v1.5/event_handlers/#handler-file) or when using
`.email()` in a TICKscript.
| Name | Type | Description |
| ---- | ---- | ----------- |
| to | list of strings | List of email addresses. |
### Example: handler file
```yaml
id: handler-id
topic: topic-name
kind: smtp
options:
to:
- oncall1@example.com
- oncall2@example.com
```
### Example: TICKscript
```js
|alert()
// ...
.email()
.to('oncall1@example.com')
.to('oncall2@example.com')
// OR
.email('oncall1@example.com')
.to('oncall2@example.com')
```
### Using the SMTP/Email event handler
The Email event handler can be used in both TICKscripts and handler files to email alerts.
The email subject is the [AlertNode.Message](/kapacitor/v1.5/nodes/alert_node/#message) property.
The email body is the [AlertNode.Details](/kapacitor/v1.5/nodes/alert_node/#details) property.
Emails are sent as HTML emails, so the body can contain HTML markup.
_**SMTP settings in kapacitor.conf**_
```toml
[smtp]
enabled = true
host = "smtp.myserver.com"
port = 25
username = "username"
password = "passw0rd"
from = "me@myserver.com"
to = ["oncall0@mydomain.com"]
no-verify = false
idle-timeout = "30s"
global = false
state-changes-only = false
```
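As noted above, the alert message becomes the email subject and the (HTML) details become the body. A rough sketch of the kind of MIME message this produces — the helper below is illustrative, not Kapacitor code:

```python
from email.mime.text import MIMEText

def build_alert_email(subject, html_details, sender, recipients):
    """Build an HTML alert email. Field mapping (message -> subject,
    details -> body) follows the docs above; the helper is illustrative."""
    msg = MIMEText(html_details, "html")  # body is sent as text/html
    msg["Subject"] = subject
    msg["From"] = sender
    msg["To"] = ", ".join(recipients)
    return msg

msg = build_alert_email(
    "Hey, check your CPU",
    "<b>usage_idle</b> dropped below 10%",
    "me@myserver.com",
    ["oncall0@mydomain.com"],
)
print(msg["Content-Type"])  # → text/html; charset="us-ascii"
```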
### Email alerts from a TICKscript
The following TICKscript uses the `.email()` event handler to send out emails
whenever idle CPU usage drops below 10%.
_**email-cpu-alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.message('Hey, check your CPU')
.email()
.to('oncall1@mydomain.com')
.to('oncall2@mydomain.com')
```
### Email alerts from a defined handler
The following setup sends an alert to the `cpu` topic with the message, "Hey,
check your CPU". An email handler is added that subscribes to the `cpu` topic
and emails all alerts.
Create a TICKscript that publishes alert messages to a topic.
The TICKscript below sends an alert message to the `cpu` topic any time idle
CPU usage drops below 10%.
_**cpu\_alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.message('Hey, check your CPU')
.topic('cpu')
```
Add and enable the TICKscript:
```bash
kapacitor define cpu_alert -tick cpu_alert.tick
kapacitor enable cpu_alert
```
Create a handler file that subscribes to the `cpu` topic and uses the `email` or `smtp`
event handler to email alerts.
_**email\_cpu\_handler.yaml**_
```yaml
id: email-cpu-alert
topic: cpu
kind: smtp
options:
to:
- oncall1@mydomain.com
- oncall2@mydomain.com
```
Add the handler:
```bash
kapacitor define-topic-handler email_cpu_handler.yaml
```
---
title: Exec event handler
description: The "exec" event handler allows you to execute external programs when Kapacitor alert messages are triggered. This page includes configuration options and usage examples.
menu:
kapacitor_1_5_ref:
name: Exec
weight: 400
parent: Event handlers
---
The exec event handler executes an external program.
Event data is passed over STDIN to the process.
## Options
The following exec event handler options can be set in a
[handler file](/kapacitor/v1.5/event_handlers/#handler-file) or when using
`.exec()` in a TICKscript.
| Name | Type | Description |
| ---- | ---- | ----------- |
| prog | string | Path to program to execute. |
| args | list of string | List of arguments to the program. |
### Example: handler file
```yaml
id: handler-id
topic: topic-name
kind: exec
options:
prog: /path/to/executable
  args: ['executable arguments']
```
### Example: TICKscript
```js
|alert()
// ...
.exec('/path/to/executable', 'executable arguments')
```
## Using the exec event handler
The exec event handler can be used in both TICKscripts and handler files to
execute an external program based on alert logic.
> **Note:** Exec programs are run as the `kapacitor` user which typically only
> has access to the default system `$PATH`.
> If using an executable not in the `$PATH`, pass the executable's absolute path.
### Execute an external program from a TICKscript
The following TICKscript executes the `sound-the-alarm.py` Python script whenever
idle CPU usage drops below 10% using the `.exec()` event handler.
_**exec-cpu-alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.exec('/usr/bin/python', 'sound-the-alarm.py')
```
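Since the event data arrives over STDIN, a handler script typically starts by reading and parsing it. Below is a hypothetical sketch of what `sound-the-alarm.py` might look like; the JSON field names (`id`, `level`, `message`) are assumptions about the event format, not taken verbatim from the docs:

```python
import json

def handle(raw_event):
    """Parse an alert event and build a one-line summary.
    Field names here are assumptions about the JSON Kapacitor sends."""
    event = json.loads(raw_event)
    return "[{level}] {id}: {message}".format(
        level=event.get("level", "UNKNOWN"),
        id=event.get("id", ""),
        message=event.get("message", ""),
    )

# A real handler would call handle(sys.stdin.read());
# demonstrated here on a sample event instead:
sample = '{"id": "cpu_alert", "level": "CRITICAL", "message": "Hey, check your CPU"}'
print(handle(sample))  # → [CRITICAL] cpu_alert: Hey, check your CPU
```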
### Execute an external program from a defined handler
The following setup sends an alert to the `cpu` topic with the message, "Hey,
check your CPU". An exec handler is added that subscribes to the `cpu` topic and
executes the `sound-the-alarm.py` Python script whenever an alert message is published.
Create a TICKscript that publishes alert messages to a topic.
The TICKscript below sends an alert message to the `cpu` topic any time idle CPU
usage drops below 10%.
_**cpu\_alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.message('Hey, check your CPU')
.topic('cpu')
```
Add and enable the TICKscript:
```bash
kapacitor define cpu_alert -tick cpu_alert.tick
kapacitor enable cpu_alert
```
Create a handler file that subscribes to the `cpu` topic and uses the exec event
handler to execute the `sound-the-alarm.py` Python script.
_**exec\_cpu\_handler.yaml**_
```yaml
id: exec-cpu-alert
topic: cpu
kind: exec
options:
prog: '/usr/bin/python'
  args: ['sound-the-alarm.py']
```
Add the handler:
```bash
kapacitor define-topic-handler exec_cpu_handler.yaml
```
---
title: HipChat event handler
description: The HipChat event handler allows you to send Kapacitor alerts to HipChat. This page includes configuration options and usage examples.
menu:
kapacitor_1_5_ref:
name: HipChat
weight: 500
parent: Event handlers
---
[HipChat](https://www.hipchat.com/) is Atlassian's web service for group chat,
video chat, and screen sharing.
Kapacitor can be configured to send alert messages to a HipChat room.
## Configuration
Configuration as well as default [option](#options) values for the HipChat event
handler are set in your `kapacitor.conf`.
Below is an example configuration:
```toml
[hipchat]
enabled = true
url = "https://subdomain.hipchat.com/v2/room"
room = "xxxx"
token = "xxxx"
global = false
state-changes-only = false
```
#### `enabled`
Set to `true` to enable HipChat event handler.
#### `url`
The HipChat API URL. Replace `subdomain` with your HipChat subdomain.
#### `room`
Default room for messages.
This serves as the default room ID if the TICKscript does not specify a room ID.
_Visit the [HipChat API documentation](https://www.hipchat.com/docs/apiv2) for
information on obtaining your room ID._
#### `token`
Default authentication token.
This serves as the default token if the TICKscript does not specify an API
access token.
_Visit the [HipChat API documentation](https://www.hipchat.com/docs/apiv2) for
information on obtaining your authentication token._
#### `global`
If `true`, all alerts are sent to HipChat without explicitly specifying HipChat
in the TICKscript.
#### `state-changes-only`
If `true`, alerts will only be sent to HipChat if the alert state changes.
This only applies if `global` is also set to `true`.
## Options
The following HipChat event handler options can be set in a
[handler file](/kapacitor/v1.5/event_handlers/#handler-file) or when using
`.hipchat()` in a TICKscript.
| Name | Type | Description |
| ---- | ---- | ----------- |
| room | string | HipChat room in which to post messages. If empty, uses the room from the configuration. |
| token | string | HipChat authentication token. If empty, uses the token from the configuration. |
### Example: handler file
```yaml
topic: topic-name
id: handler-id
kind: hipchat
options:
room: 'alerts'
token: 'mysupersecretauthtoken'
```
### Example: TICKscript
```js
|alert()
// ...
.hipChat()
.room('alerts')
.token('mysupersecretauthtoken')
```
## HipChat Setup
### Requirements
To configure Kapacitor with HipChat, the following is needed:
* A HipChat subdomain name
* A HipChat room ID
* A HipChat API access token for sending notifications
### Get your HipChat API access token
1. Log into your HipChat account dashboard.
2. Select "API access" in the left menu.
3. Under "Create new token", enter a label for the token.
The label is arbitrary and is meant only to help identify the token.
4. Under "Create new token", select "Send Notification" as the Scope.
5. Click "Create".
Your token appears in the table just above the `Create new token` section:
![HipChat token](/img/kapacitor/hipchat-token.png)
## Using the HipChat Event Handler
With the HipChat event handler enabled in your `kapacitor.conf`, use the
`.hipchat()` attribute in your TICKscripts to send alerts to HipChat or define a
HipChat handler that subscribes to a topic and sends published alerts to HipChat.
> To avoid posting a message every alert interval, use
> [AlertNode.StateChangesOnly](/kapacitor/v1.5/nodes/alert_node/#statechangesonly)
> so only events where the alert changed state are sent to HipChat.
The examples below use the following HipChat configuration defined in the `kapacitor.conf`:
_**HipChat settings in kapacitor.conf**_
```toml
[hipchat]
enabled = true
url = "https://testtest.hipchat.com/v2/room"
room = "malerts"
token = "tokentokentokentokentoken"
global = false
state-changes-only = true
```
### Send alerts to a HipChat room from a TICKscript
The following TICKscript uses the `.hipchat()` event handler to send the message,
"Hey, check your CPU", whenever idle CPU usage drops below 10%.
It publishes the messages to the `alerts` room associated with the HipChat
subdomain defined in the `kapacitor.conf`.
_**hipchat-cpu-alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.stateChangesOnly()
.message('Hey, check your CPU')
.hipchat()
.room('alerts')
```
### Send alerts to the HipChat room from a defined handler
The following setup sends an alert to the `cpu` topic with the message, "Hey,
check your CPU".
A HipChat handler is added that subscribes to the `cpu` topic and publishes all
alert messages to the `alerts` room associated with the `testtest` HipChat
subdomain defined in the `kapacitor.conf`.
Create a TICKscript that publishes alert messages to a topic.
The TICKscript below sends an alert message to the `cpu` topic any time CPU
idle usage drops below 10% _(or CPU usage is above 90%)_.
_**cpu\_alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.stateChangesOnly()
.message('Hey, check your CPU')
.topic('cpu')
```
Add and enable the TICKscript:
```bash
kapacitor define cpu_alert -tick cpu_alert.tick
kapacitor enable cpu_alert
```
Create a handler file that subscribes to the `cpu` topic and uses the HipChat
event handler to send alerts to the `alerts` channel in HipChat.
_**hipchat\_cpu\_handler.yaml**_
```yaml
id: hipchat-cpu-alert
topic: cpu
kind: hipchat
options:
room: 'alerts'
```
Add the handler:
```bash
kapacitor define-topic-handler hipchat_cpu_handler.yaml
```
---
title: Kafka event handler
description: The Kafka event handler allows you to send Kapacitor alerts to an Apache Kafka cluster. This page includes configuration options and usage examples.
menu:
kapacitor_1_5_ref:
name: Kafka
weight: 600
parent: Event handlers
---
[Apache Kafka](https://kafka.apache.org/) is a distributed streaming platform
designed for building real-time data pipelines and streaming apps.
Kapacitor can be configured to send alert messages to a Kafka cluster.
## Configuration
Configuration as well as default [option](#options) values for the Kafka event
handler are set in your `kapacitor.conf`.
Below is an example configuration:
```toml
[[kafka]]
enabled = true
id = "localhost"
brokers = []
timeout = "10s"
batch-size = 100
batch-timeout = "1s"
use-ssl = false
ssl-ca = ""
ssl-cert = ""
ssl-key = ""
insecure-skip-verify = false
```
> Multiple Kafka clients may be configured by repeating `[[kafka]]` sections.
> The `id` acts as a unique identifier for each configured Kafka client.
#### `enabled`
Set to `true` to enable the Kafka event handler.
#### `id`
A unique identifier for the Kafka cluster.
#### `brokers`
List of Kafka broker addresses using the `host:port` format.
#### `timeout`
Timeout on network operations with the Kafka brokers.
If 0 a default of 10s is used.
#### `batch-size`
The number of messages batched before being sent to Kafka.
If 0 a default of 100 is used.
#### `batch-timeout`
The maximum amount of time to wait before flushing an incomplete batch.
If 0 a default of 1s is used.
#### `use-ssl`
Enable SSL communication.
Must be `true` for other SSL options to take effect.
#### `ssl-ca`
Path to certificate authority file.
#### `ssl-cert`
Path to host certificate file.
#### `ssl-key`
Path to certificate private key file.
#### `insecure-skip-verify`
Use SSL but skip chain and host verification.
_This is necessary if using a self-signed certificate._
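The `batch-size` and `batch-timeout` options above interact: a batch is flushed when it reaches the configured size *or* when the timeout elapses since the first buffered message, whichever comes first. A toy model of that behavior (an illustration only, not Kapacitor's actual Kafka writer):

```python
import time

class Batcher:
    """Toy model of batch-size / batch-timeout semantics: flush when the
    buffer reaches `size` messages, or when `timeout` seconds have passed
    since the first buffered message."""

    def __init__(self, size=100, timeout=1.0):
        self.size, self.timeout = size, timeout
        self.buf, self.first_at, self.flushed = [], None, []

    def add(self, msg, now=None):
        now = time.monotonic() if now is None else now
        if not self.buf:
            self.first_at = now  # clock starts at the first buffered message
        self.buf.append(msg)
        if len(self.buf) >= self.size or now - self.first_at >= self.timeout:
            self.flush()

    def flush(self):
        if self.buf:
            self.flushed.append(list(self.buf))  # stand-in for a Kafka write
            self.buf.clear()

b = Batcher(size=3, timeout=1.0)
for msg in ["a", "b", "c"]:  # flushed immediately on reaching size 3
    b.add(msg, now=0.0)
b.add("d", now=0.0)
b.add("e", now=2.0)          # flushed because the batch is 2s old
print(b.flushed)             # → [['a', 'b', 'c'], ['d', 'e']]
```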
## Options
The following Kafka event handler options can be set in a
[handler file](/kapacitor/v1.5/event_handlers/#handler-file) or when using
`.kafka()` in a TICKscript.
| Name | Type | Description |
| ---- | ---- | ----------- |
| cluster | string | Name of the Kafka cluster. |
| topic | string | Kafka topic. _In TICKscripts, this is set using `.kafkaTopic()`._ |
| template | string | Message template. |
### Example: handler file
```yaml
id: kafka-event-handler
topic: kapacitor-topic-name
kind: kafka
options:
cluster: 'kafka-cluster'
topic: 'kafka-topic-name'
template: 'kafka-template-name'
```
### Example: TICKscript
```js
|alert()
// ...
.kafka()
.cluster('kafka-cluster')
.kafkaTopic('kafka-topic-name')
.template('kafka-template-name')
```
## Using the Kafka Event Handler
With the Kafka event handler enabled in your `kapacitor.conf`, use the `.kafka()`
attribute in your TICKscripts to send alerts to a Kafka cluster or define a
Kafka handler that subscribes to a topic and sends published alerts to Kafka.
The examples below use the following Kafka configuration defined in the `kapacitor.conf`:
_**Kafka settings in kapacitor.conf**_
```toml
[[kafka]]
enabled = true
id = "infra-monitoring"
brokers = ["123.45.67.89:9092", "123.45.67.90:9092"]
timeout = "10s"
batch-size = 100
batch-timeout = "1s"
use-ssl = true
ssl-ca = "/etc/ssl/certs/ca.crt"
ssl-cert = "/etc/ssl/certs/cert.crt"
ssl-key = "/etc/ssl/certs/cert-key.key"
insecure-skip-verify = true
```
### Send alerts to a Kafka cluster from a TICKscript
The following TICKscript uses the `.kafka()` event handler to send the message,
"Hey, check your CPU", whenever idle CPU usage drops below 10%.
It publishes the messages to the `cpu-alerts` topic in the `infra-monitoring`
Kafka cluster defined in the `kapacitor.conf`.
_**kafka-cpu-alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.message('Hey, check your CPU')
.kafka()
.kafkaTopic('cpu-alerts')
```
### Send alerts to a Kafka cluster from a defined handler
The following setup sends an alert to the `cpu` topic with the message, "Hey,
check your CPU". A Kafka handler is added that subscribes to the `cpu` topic and
publishes all alert messages to the `cpu-alerts` topic associated with the
`infra-monitoring` Kafka cluster defined in the `kapacitor.conf`.
Create a TICKscript that publishes alert messages to a topic.
The TICKscript below sends an alert message to the `cpu` topic any time CPU
idle usage drops below 10% _(or CPU usage is above 90%)_.
_**cpu\_alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.message('Hey, check your CPU')
.topic('cpu')
```
Add and enable the TICKscript:
```bash
kapacitor define cpu_alert -tick cpu_alert.tick
kapacitor enable cpu_alert
```
Create a handler file that subscribes to the `cpu` topic and uses the Kafka
event handler to send alerts to the `cpu-alerts` topic in Kafka.
_**kafka\_cpu\_handler.yaml**_
```yaml
id: kafka-cpu-alert
topic: cpu
kind: kafka
options:
topic: 'cpu-alerts'
```
Add the handler:
```bash
kapacitor define-topic-handler kafka_cpu_handler.yaml
```
---
title: Log event handler
description: The "log" event handler allows you to send Kapacitor alert messages to a log file. This page includes options and usage examples.
menu:
kapacitor_1_5_ref:
name: Log
weight: 700
parent: Event handlers
---
The log event handler writes to a specified log file with one alert event per line.
If the specified log file does not exist, it will be created.
## Options
The following log event handler options can be set in a
[handler file](/kapacitor/v1.5/event_handlers/#handler-file) or when using
`.log()` in a TICKscript.
| Name | Type | Description |
| ---- | ---- | ----------- |
| path | string | Absolute path to the log file. |
| mode | int | File mode and permissions to use when creating the file. Default is `0600`. _**The leading 0 is required to interpret the value as an octal integer.**_ |
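The leading-zero requirement matters because `0644` is an octal literal; the same digits without the leading zero are the decimal number 644 and yield entirely different permission bits. A quick check of the arithmetic (plain Python, used here only for illustration):

```python
import stat

# 0644 read as octal equals 420 in decimal; its permission bits are rw-r--r--.
octal_mode = 0o644
print(octal_mode)                           # → 420
# Combined with the regular-file type bit, it renders as -rw-r--r--:
print(stat.filemode(stat.S_IFREG | 0o644))  # → -rw-r--r--
# The decimal number 644 corresponds to completely different bits:
print(oct(644))                             # → 0o1204
```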
### Example: handler file
```yaml
id: handler-id
topic: topic-name
kind: log
options:
path: '/tmp/alerts.log'
mode: 0644
```
### Example: TICKscript
```js
|alert()
// ...
.log('/tmp/alerts.log')
.mode(0644)
```
## Using the log event handler
The log event handler can be used in both TICKscripts and handler files to log
messages to a log file.
### Log messages from a TICKscript
The following TICKscript uses the `.log()` event handler to log a message to the
`/tmp/alerts.log` log file whenever idle CPU usage drops below 10%.
_**log-cpu-alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.message('{{ .Time }}: CPU usage over 90%')
.log('/tmp/alerts.log')
```
### Log messages from a defined handler
The following setup sends an alert to the `cpu` topic with the message,
"{{ .Time }}: CPU usage over 90%".
A log handler is added that subscribes to the `cpu` topic and logs messages to
`/tmp/alerts.log` whenever a new message is published.
Create a TICKscript that publishes alert messages to a topic.
The TICKscript below sends an alert message to the `cpu` topic any time idle CPU
usage drops below 10%.
_**cpu\_alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.message('{{ .Time }}: CPU usage over 90%')
.topic('cpu')
```
Add and enable the TICKscript:
```bash
kapacitor define cpu_alert -tick cpu_alert.tick
kapacitor enable cpu_alert
```
Create a handler file that subscribes to the `cpu` topic and uses the log event
handler to log messages to the `/tmp/alerts.log` log file.
_**log\_cpu\_handler.yaml**_
```yaml
id: log-cpu-alert
topic: cpu
kind: log
options:
path: '/tmp/alerts.log'
```
Add the handler:
```bash
kapacitor define-topic-handler log_cpu_handler.yaml
```
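Because the log handler writes one alert event per line, the resulting file is easy to post-process. A sketch, assuming events are JSON-encoded with a `level` field (an assumption about the on-disk format):

```python
import json

def count_critical(lines):
    """Count CRITICAL events in an alert log, one JSON event per line."""
    return sum(
        1 for line in lines
        if line.strip() and json.loads(line).get("level") == "CRITICAL"
    )

# Stand-in for open('/tmp/alerts.log') so the example is self-contained:
sample_log = [
    '{"id": "cpu_alert", "level": "CRITICAL", "message": "check your CPU"}',
    '{"id": "cpu_alert", "level": "OK", "message": "recovered"}',
]
print(count_critical(sample_log))  # → 1
```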
---
title: Microsoft Teams event handler
description: The Microsoft Teams event handler lets you send Kapacitor alerts to a Microsoft Teams channel. This page includes configuration options and usage examples.
menu:
kapacitor_1_5_ref:
name: Microsoft Teams
weight: 750
parent: Event handlers
---
[Microsoft Teams](https://www.microsoft.com/en-us/microsoft-365/microsoft-teams/group-chat-software) is a widely used "digital workspace" that facilitates communication among team members. To configure Kapacitor to send alerts to one or more Microsoft Teams channels, do the following:
- [Set up Teams](#set-up-teams)
- [Configuration](#configuration)
- [Handler file options](#handler-file-options)
- [Example Teams handler file](#example-teams-handler-file)
- [Example alerts](#example-alerts)
- [Send an alert to Teams](#send-an-alert-to-teams)
## Set up Teams
1. Log in to Teams, and then [create a new incoming webhook](https://docs.microsoft.com/en-us/microsoftteams/platform/concepts/connectors#setting-up-a-custom-incoming-webhook) for a Teams channel.
2. In your `kapacitor.conf` file, add a `[teams]` section with [configuration options](#configuration) for the Microsoft Teams event
handler, including the incoming webhook URL as the `channel-url`. For example:
```toml
[teams]
enabled = true
default = true
channel-url = "https://outlook.office.com/webhook/..."
global = true
state-changes-only = true
```
3. To add multiple Microsoft Teams clients, repeat steps 1-2 to obtain a new webhook and add another `[teams]` section in `kapacitor.conf`.
The `channel-url` acts as a unique identifier for each configured Teams client.
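Under the hood, an incoming webhook is just an HTTP POST of a JSON body to the webhook URL. A rough sketch of such a post — the payload shape is an assumption (Teams incoming webhooks accept a simple `text` field; the fields Kapacitor itself sends may differ), and the URL is the placeholder from the config above:

```python
import json
from urllib import request

WEBHOOK_URL = "https://outlook.office.com/webhook/..."  # placeholder from the config above

def build_alert_payload(message):
    # A bare "text" field is the simplest valid incoming-webhook payload.
    return json.dumps({"text": message}).encode("utf-8")

req = request.Request(
    WEBHOOK_URL,
    data=build_alert_payload("Hey, check your CPU"),
    headers={"Content-Type": "application/json"},
)
# request.urlopen(req)  # not executed here; requires a real webhook URL
print(req.data.decode())  # → {"text": "Hey, check your CPU"}
```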
### Configuration
#### `enabled`
Set to `true` to enable the Microsoft Teams event handler.
#### `default`
If there are multiple `teams` configurations, identify one as the default.
#### `channel-url`
The Microsoft Teams incoming webhook URL used to send messages and alerts.
#### `global`
Set to `true` to send all alerts to Teams without explicitly specifying Microsoft Teams in the TICKscript.
#### `state-changes-only`
Set to `true` to only send alerts when the alert state changes.
_Only applies if `global` is `true`._
### Handler file options
The following options can be set in a Microsoft Teams event [handler file](/kapacitor/v1.5/event_handlers/#handler-file) or when using
`.teams()` in a TICKscript.
| Name | Type | Description |
| ---- | ---- | ----------- |
| team | string | Specifies which Team configuration to use when there are multiple configurations. |
| channel | string | Teams channel to post messages to. If empty uses the channel from the configuration. |
### Example handler file
```yaml
id: handler-id
topic: topic-name
kind: teams
options:
team: 'teams.microsoft.com/team/'
channel: '#alerts'
```
For information about using handler files, see [Add and use event handlers](/kapacitor/v1.5/event_handlers/#create-a-topic-handler-with-a-handler-file).
## Example alerts
#### Send alert to Teams channel in configuration file
```js
stream
|alert()
.teams()
```
#### Send alert to Teams channel with webhook (overrides configuration file)
```js
stream
|alert()
.teams()
.channelURL('https://outlook.office.com/webhook/...')
```
#### Send alerts to Teams from a TICKscript
Use the `.teams()` attribute in your TICKscripts to:
- Send alerts to Teams
- Define a Teams handler that subscribes to a topic and sends published alerts to Teams
> To avoid posting a message every alert interval, use
> [AlertNode.StateChangesOnly](/kapacitor/v1.5/nodes/alert_node/#statechangesonly)
> so only events where the alert changed state are sent to Teams.
The following TICKscript uses the `.teams()` event handler to send the message,
"Hey, check your CPU", to the `#alerts` Teams channel when idle CPU usage drops below 20%.
_**teams-cpu-alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.warn(lambda: "usage_idle" < 20)
.stateChangesOnly()
.message('Hey, check your CPU')
.teams()
```
#### Send alerts to Teams from a defined handler
The following example sends an alert to the `cpu` topic with the message,
"Hey, check your CPU".
A Teams handler is added that subscribes to the `cpu` topic and publishes all
alert messages to Teams.
Create a TICKscript that publishes alert messages to a topic.
The TICKscript below sends a critical alert message to the `cpu` topic any time
idle CPU usage drops below 5%.
_**cpu\_alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 5)
.stateChangesOnly()
.message('Hey, check your CPU')
.topic('cpu')
```
Add and enable the TICKscript:
```bash
kapacitor define cpu_alert -tick cpu_alert.tick
kapacitor enable cpu_alert
```
Create a handler file that subscribes to the `cpu` topic and uses the Teams
event handler to send alerts to Teams.
_**teams\_cpu\_handler.yaml**_
```yaml
id: teams-cpu-alert
topic: cpu
kind: teams
channelurl: 'alerts'
```
Add the handler:
```bash
kapacitor define-topic-handler teams_cpu_handler.yaml
```
---
title: MQTT event handler
description: The MQTT event handler allows you to send Kapacitor alert messages to an MQTT handler. This page includes configuration options and usage examples.
menu:
kapacitor_1_5_ref:
name: MQTT
weight: 800
parent: Event handlers
---
[MQTT](http://mqtt.org/) is a lightweight messaging protocol for small sensors and mobile devices.
Kapacitor can be configured to send alert messages to an MQTT broker.
## Configuration
Configuration as well as default [option](#options) values for the MQTT
event handler are set in your `kapacitor.conf`.
Below is an example configuration:
```toml
[[mqtt]]
enabled = true
name = "localhost"
default = true
url = "tcp://localhost:1883"
ssl-ca = "/etc/kapacitor/ca.pem"
ssl-cert = "/etc/kapacitor/cert.pem"
ssl-key = "/etc/kapacitor/key.pem"
client-id = "xxxx"
username = "xxxx"
password = "xxxx"
```
> Multiple MQTT brokers may be configured by repeating `[[mqtt]]` sections.
> The `name` acts as a unique identifier for each configured MQTT client.
#### `enabled`
Set to `true` to enable the MQTT event handler.
#### `name`
Unique name for this broker configuration.
#### `default`
When using multiple MQTT configurations, sets the current configuration as
the default.
#### `url`
URL of the MQTT broker.
Possible protocols include:

* **tcp** - Raw TCP network connection
* **ssl** - TLS protected TCP network connection
* **ws** - Websocket network connection
#### `ssl-ca`
Absolute path to certificate authority (CA) file.
_A CA can be provided without a key/certificate pair._
#### `ssl-cert`
Absolute path to pem encoded certificate file.
#### `ssl-key`
Absolute path to pem encoded key file.
#### `client-id`
Unique ID for this MQTT client.
If empty, the value of `name` is used.
#### `username`
MQTT username.
#### `password`
MQTT password.
## Options
The following MQTT event handler options can be set in a
[handler file](/kapacitor/v1.5/event_handlers/#handler-file) or when using
`.mqtt()` in a TICKscript.
| Name | Type | Description |
| ---- | ---- | ----------- |
| broker-name | string | The name of the configured MQTT broker to use when publishing the alert. If empty defaults to the configured default broker. |
| topic | string | The MQTT topic to which alerts will be dispatched. |
| qos | int64 | The [QoS](http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html#_Toc398718099) that will be used to deliver the alerts. Valid values include: <br><br><code>0</code> : At most once delivery<br><code>1</code> : At least once delivery<br><code>2</code> : Exactly once delivery |
| retained | bool | Indicates whether this alert should be delivered to clients that were not connected to the broker at the time of the alert. |
### Example: handler file
```yaml
id: handler-id
topic: topic-name
kind: mqtt
options:
broker-name: 'name'
topic: 'topic-name'
qos: 1
retained: true
```
### Example: TICKscript
```js
|alert()
// ...
.mqtt('topic-name')
.brokerName('name')
.qos(1)
.retained()
```
## Using the MQTT event handler
The MQTT event handler can be used in both TICKscripts and handler files to send
alerts to an MQTT broker.
The examples below use the following MQTT broker configurations defined in the
`kapacitor.conf`:
_**MQTT settings in kapacitor.conf**_
```toml
[[mqtt]]
enabled = true
name = "localhost"
default = true
url = "tcp://localhost:1883"
[[mqtt]]
enabled = true
name = "alerts-broker"
default = false
url = "ssl://123.45.67.89:1883"
ssl-ca = "/etc/kapacitor/ca.pem"
ssl-cert = "/etc/kapacitor/cert.pem"
ssl-key = "/etc/kapacitor/key.pem"
client-id = "alerts-broker"
username = "myuser"
password = "mysupersecretpassw0rd"
```
### Send alerts to an MQTT broker from a TICKscript
The following TICKscript uses the `.mqtt()` event handler to send alerts to the
`alerts` MQTT topic of the default MQTT broker defined in the `kapacitor.conf`
whenever idle CPU usage drops below 10%.
_**mqtt-cpu-alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.message('{{ .Time }}: CPU usage over 90%')
.mqtt('alerts')
.qos(2)
```
### Send alerts to an MQTT broker from a defined handler
The following setup sends an alert to the `cpu` topic.
An MQTT handler is added that subscribes to the `cpu` topic and sends messages
to `alerts` MQTT topic of the `alerts-broker` whenever a new message is published.
Create a TICKscript that publishes alert messages to a topic.
The TICKscript below sends an alert message to the `cpu` topic any time idle CPU
usage drops below 10%.
_**cpu\_alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.message('{{ .Time }}: CPU usage over 90%')
.topic('cpu')
```
Add and enable the TICKscript:
```bash
kapacitor define cpu_alert -tick cpu_alert.tick
kapacitor enable cpu_alert
```
Create a handler file that subscribes to the `cpu` topic and uses the MQTT event
handler to send alerts to the `alerts-broker`.
_**log\_cpu\_handler.yaml**_
```yaml
id: log-cpu-alert
topic: cpu
kind: mqtt
options:
broker-name: 'alerts-broker'
topic: 'alerts'
qos: 2
```
Add the handler:
```bash
kapacitor define-topic-handler log_cpu_handler.yaml
```

---
title: OpsGenie v1 event handler
description: The OpsGenie v1 event handler allows you to send Kapacitor alerts to OpsGenie. This page includes configuration options and usage examples.
---
[OpsGenie](https://www.opsgenie.com/) is an incident response orchestration platform for DevOps & ITOps teams.
Kapacitor can be configured to send alert messages to OpsGenie.
{{% warn %}}
<em>
This page is specific to OpsGenie's v1 API which has been deprecated.
OpsGenie recommends migrating to their v2 API. View the
<a href="https://docs.opsgenie.com/docs/migration-guide-for-alert-rest-api" target="\_blank">OpsGenie API migration guide</a>
for more information about upgrading.
If using the v2 API, view the <a href="/kapacitor/v1.5/event_handlers/opsgenie/v2">OpsGenie v2 event handler</a> documentation.
</em>
{{% /warn %}}
## Configuration
Configuration as well as default [option](#options) values for the OpsGenie v1
event handler are set in your `kapacitor.conf`.
Below is an example configuration:
```toml
[opsgenie]
enabled = true
api-key = "mysupersecretapikey"
teams = ["team1", "team2"]
recipients = ["recipient1", "recipient2"]
url = "https://api.opsgenie.com/v1/json/alert"
recovery_url = "https://api.opsgenie.com/v1/json/alert/note"
global = false
```
#### `enabled`
Set to `true` to enable the OpsGenie v1 event handler.
#### `api-key`
Your OpsGenie API Key.
#### `teams`
Default OpsGenie teams. _Can be overridden per alert._
#### `recipients`
Default OpsGenie recipients. _Can be overridden per alert._
#### `url`
The OpsGenie API URL. _**This should not need to be changed.**_
#### `recovery_url`
The OpsGenie Recovery URL. Change this based on which behavior you want a
recovery to trigger (add notes, close alert, etc.)
#### `global`
If `true`, all alerts are sent to OpsGenie without specifying `opsgenie` in the
TICKscript.
The team and recipients can still be overridden.
## Options
The following OpsGenie v1 event handler options can be set in a
[handler file](/kapacitor/v1.5/event_handlers/#handler-file) or when using
`.opsGenie()` in a TICKscript.
| Name | Type | Description |
| ---- | ---- | ----------- |
| teams-list | list of strings | List of teams. |
| recipients-list | list of strings | List of recipients. |
### Example: handler file
```yaml
id: handler-id
topic: topic-name
kind: opsgenie
options:
teams-list:
- 'team1'
- 'team2'
recipients-list:
- 'recipient1'
- 'recipient2'
```
### Example: TICKscript
```js
|alert()
// ...
.opsGenie()
.teams('team1', 'team2')
.recipients('recipient1', 'recipient2')
```
## OpsGenie Setup
To allow Kapacitor to send alerts to OpsGenie,
[create an OpsGenie API integration](https://docs.opsgenie.com/docs/api-integration#section-using-api-integration).
Use the generated API key as the `api-key` in the `[opsgenie]` section of your
`kapacitor.conf`.
## Using the OpsGenie event handler
With the OpsGenie v1 event handler enabled and configured in your
`kapacitor.conf`, use the `.opsGenie()` attribute in your TICKscripts to send
alerts to OpsGenie or define an OpsGenie v1 handler that subscribes to a topic
and sends published alerts to OpsGenie.
The examples below use the following OpsGenie configuration defined in the `kapacitor.conf`:
_**OpsGenie v1 settings in kapacitor.conf**_
```toml
[opsgenie]
enabled = true
api-key = "mysupersecretapikey"
teams = ["engineering"]
recipients = ["supervisor1", "supervisor2"]
url = "https://api.opsgenie.com/v1/json/alert"
recovery_url = "https://api.opsgenie.com/v1/json/alert/note"
global = false
```
### Send alerts to OpsGenie from a TICKscript
The following TICKscript uses the `.opsGenie()` event handler to send the message,
"Hey, check your CPU", to OpsGenie whenever idle CPU usage drops below 10%.
_**opsgenie-cpu-alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.message('Hey, check your CPU')
.opsGenie()
.teams('engineering', 'support')
```
### Send alerts to OpsGenie from a defined handler
The following setup sends an alert to the `cpu` topic with the message, "Hey,
check your CPU". An OpsGenie v1 handler is added that subscribes to the `cpu`
topic and publishes all alert messages to OpsGenie.
Create a TICKscript that publishes alert messages to a topic.
The TICKscript below sends an alert message to the `cpu` topic any time idle CPU
usage drops below 10%.
_**cpu\_alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.message('Hey, check your CPU')
.topic('cpu')
```
Add and enable the TICKscript:
```bash
kapacitor define cpu_alert -tick cpu_alert.tick
kapacitor enable cpu_alert
```
Create a handler file that subscribes to the `cpu` topic and uses the OpsGenie v1
event handler to send alerts to OpsGenie.
_**opsgenie\_cpu\_handler.yaml**_
```yaml
id: opsgenie-cpu-alert
topic: cpu
kind: opsgenie
options:
teams-list:
- 'engineering'
- 'support'
```
Add the handler:
```bash
kapacitor define-topic-handler opsgenie_cpu_handler.yaml
```

---
title: OpsGenie v2 event handler
description: The OpsGenie v2 event handler allows you to send Kapacitor alerts to OpsGenie. This page includes configuration options and usage examples.
aliases:
- kapacitor/v1.5/event_handlers/opsgenie
menu:
kapacitor_1_5_ref:
name: OpsGenie
weight: 900
parent: Event handlers
---
[OpsGenie](https://www.opsgenie.com/) is an incident response orchestration
platform for DevOps & ITOps teams.
Kapacitor can be configured to send alert messages to OpsGenie.
> This page is specific to OpsGenie's v2 API. If still using their v1 API, view
> the [OpsGenie v1 event handler](/kapacitor/v1.5/event_handlers/opsgenie/v1/) documentation.
## Configuration
Configuration as well as default [option](#options) values for the OpsGenie v2
event handler are set in your `kapacitor.conf`.
Below is an example configuration:
```toml
[opsgenie2]
enabled = true
api-key = "mysupersecretapikey"
teams = ["team1", "team2"]
recipients = ["recipient1", "recipient2"]
url = "https://api.opsgenie.com/v2/alerts"
recovery_action = "notes"
global = false
```
#### `enabled`
Set to `true` to enable the OpsGenie v2 event handler.
#### `api-key`
Your OpsGenie API Key.
#### `teams`
Default OpsGenie teams. _Can be overridden per alert._
#### `recipients`
Default OpsGenie recipients. _Can be overridden per alert._
#### `url`
The OpsGenie API URL. _**This should not need to be changed.**_
#### `recovery_action`
The Recovery Action specifies which action to take when alerts recover.
Valid values include:
* `notes` - Add a note to the alert.
* `close` - Close the alert.
#### `global`
If `true`, all alerts are sent to OpsGenie without specifying `opsgenie2` in the TICKscript.
The team and recipients can still be overridden.
## Options
The following OpsGenie v2 event handler options can be set in a
[handler file](/kapacitor/v1.5/event_handlers/#handler-file) or when using
`.opsGenie2()` in a TICKscript.
| Name | Type | Description |
| ---- | ---- | ----------- |
| teams-list | list of strings | List of teams. |
| recipients-list | list of strings | List of recipients. |
### Example: handler file
```yaml
id: handler-id
topic: topic-name
kind: opsgenie2
options:
teams-list:
- 'team1'
- 'team2'
recipients-list:
- 'recipient1'
- 'recipient2'
```
### Example: TICKscript
```js
|alert()
// ...
.opsGenie2()
.teams('team1', 'team2')
.recipients('recipient1', 'recipient2')
```
## OpsGenie Setup
To allow Kapacitor to send alerts to OpsGenie,
[create an OpsGenie API integration](https://docs.opsgenie.com/docs/api-integration#section-using-api-integration).
Use the generated API key as the `api-key` in the `[opsgenie2]` section of your
`kapacitor.conf`.
## Using the OpsGenie event handler
With the OpsGenie v2 event handler enabled and configured in your
`kapacitor.conf`, use the `.opsGenie2()` attribute in your TICKscripts to send
alerts to OpsGenie or define an OpsGenie v2 handler that subscribes to a topic
and sends published alerts to OpsGenie.
The examples below use the following OpsGenie configuration defined in the `kapacitor.conf`:
_**OpsGenie v2 settings in kapacitor.conf**_
```toml
[opsgenie2]
enabled = true
api-key = "mysupersecretapikey"
teams = ["engineering"]
recipients = ["supervisor1", "supervisor2"]
url = "https://api.opsgenie.com/v2/alerts"
recovery_action = "close"
global = false
```
### Send alerts to OpsGenie from a TICKscript
The following TICKscript uses the `.opsGenie2()` event handler to send the
message, "Hey, check your CPU", to OpsGenie whenever idle CPU usage drops below 10%.
_**opsgenie2-cpu-alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.message('Hey, check your CPU')
.opsGenie2()
.teams('engineering', 'support')
```
### Send alerts to OpsGenie from a defined handler
The following setup sends an alert to the `cpu` topic with the message, "Hey,
check your CPU". An OpsGenie v2 handler is added that subscribes to the `cpu`
topic and publishes all alert messages to OpsGenie.
Create a TICKscript that publishes alert messages to a topic.
The TICKscript below sends an alert message to the `cpu` topic any time idle
CPU usage drops below 10%.
_**cpu\_alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.message('Hey, check your CPU')
.topic('cpu')
```
Add and enable the TICKscript:
```bash
kapacitor define cpu_alert -tick cpu_alert.tick
kapacitor enable cpu_alert
```
Create a handler file that subscribes to the `cpu` topic and uses the OpsGenie v2
event handler to send alerts to OpsGenie.
_**opsgenie2\_cpu\_handler.yaml**_
```yaml
id: opsgenie-cpu-alert
topic: cpu
kind: opsgenie2
options:
teams-list:
- 'engineering'
- 'support'
```
Add the handler:
```bash
kapacitor define-topic-handler opsgenie2_cpu_handler.yaml
```

---
title: PagerDuty v1 event handler
description: The PagerDuty v1 event handler allows you to send Kapacitor alerts to PagerDuty. This page includes configuration options and usage examples.
---
[PagerDuty](https://www.pagerduty.com/) is an incident management platform that
helps teams detect and fix infrastructure problems quickly.
Kapacitor can be configured to send alert messages to PagerDuty.
{{% warn %}}
<em>
This page is specific to PagerDuty's v1 API which has been deprecated.
PagerDuty recommends migrating to their v2 API. View the
<a href="https://v2.developer.pagerduty.com/docs/migrating-to-api-v2" target="\_blank">PagerDuty API migration guide</a>
for more information about upgrading. If using the v2 API, view the
<a href="/kapacitor/v1.5/event_handlers/pagerduty/v2">PagerDuty v2 event handler</a> documentation.
</em>
{{% /warn %}}
## Configuration
Configuration as well as default [option](#options) values for the PagerDuty v1
event handler are set in your `kapacitor.conf`.
Below is an example configuration:
```toml
[pagerduty]
enabled = true
service-key = ""
url = "https://events.pagerduty.com/generic/2010-04-15/create_event.json"
global = false
```
#### `enabled`
Set to `true` to enable the PagerDuty v1 event handler.
#### `service-key`
Your [PagerDuty Service Key](https://support.pagerduty.com/docs/services-and-integrations).
#### `url`
The PagerDuty API v1 URL. _**This should not need to be changed.**_
#### `global`
If `true`, all alerts will be sent to PagerDuty without explicitly specifying
PagerDuty in TICKscripts.
## Options
The following PagerDuty v1 event handler options can be set in a
[handler file](/kapacitor/v1.5/event_handlers/#handler-file) or when using
`.pagerDuty()` in a TICKscript.
| Name | Type | Description |
| ---- | ---- | ----------- |
| service-key | string | The PagerDuty service key to use for the alert. |
### Example: handler file
```yaml
id: handler-id
topic: topic-name
kind: pagerduty
options:
service-key: 'myservicekey'
```
### Example: TICKscript
```js
|alert()
// ...
.pagerDuty()
.serviceKey('myservicekey')
```
## PagerDuty Setup
To allow Kapacitor to send alerts to PagerDuty,
[enable a new "Generic API" integration](https://support.pagerduty.com/docs/services-and-integrations#section-create-a-generic-events-api-integration).
Use the generated "Integration Key" as the `service-key` under the `[pagerduty]`
section of your `kapacitor.conf`.
## Using the PagerDuty v1 Event Handler
With the PagerDuty v1 event handler enabled in your `kapacitor.conf`, use the
`.pagerDuty()` attribute in your TICKscripts to send alerts to PagerDuty or
define a PagerDuty v1 handler that subscribes to a topic and sends published
alerts to PagerDuty.
The examples below use the following PagerDuty v1 configuration defined in the `kapacitor.conf`:
_**PagerDuty v1 settings in kapacitor.conf**_
```toml
[pagerduty]
enabled = true
service-key = "myservicekey"
url = "https://events.pagerduty.com/generic/2010-04-15/create_event.json"
global = false
```
### Send alerts to PagerDuty from a TICKscript
The following TICKscript uses the `.pagerDuty()` event handler to send the
message, "Hey, check your CPU", whenever idle CPU usage drops below 10%.
_**pagerduty-cpu-alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.message('Hey, check your CPU')
.pagerDuty()
```
### Send alerts to PagerDuty from a defined handler
The following setup sends an alert to the `cpu` topic with the message,
"Hey, check your CPU".
A PagerDuty v1 handler is added that subscribes to the `cpu` topic and publishes
all alert messages to PagerDuty.
Create a TICKscript that publishes alert messages to a topic.
The TICKscript below sends an alert message to the `cpu` topic any time idle
CPU usage drops below 10% _(or CPU usage is above 90%)_.
_**cpu\_alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.message('Hey, check your CPU')
.topic('cpu')
```
Add and enable the TICKscript:
```bash
kapacitor define cpu_alert -tick cpu_alert.tick
kapacitor enable cpu_alert
```
Create a handler file that subscribes to the `cpu` topic and uses the PagerDuty
v1 event handler to send alerts to PagerDuty.
_**pagerduty\_cpu\_handler.yaml**_
```yaml
topic: cpu
id: pagerduty-cpu-alert
kind: pagerduty
options:
service-key: 'myservicekey'
```
Add the handler:
```bash
kapacitor define-topic-handler pagerduty_cpu_handler.yaml
```

---
title: PagerDuty v2 event handler
description: The PagerDuty v2 event handler allows you to send Kapacitor alerts to PagerDuty. This page includes configuration options and usage examples.
aliases:
- /kapacitor/v1.5/event_handlers/pagerduty/
menu:
kapacitor_1_5_ref:
name: PagerDuty
weight: 1000
parent: Event handlers
---
[PagerDuty](https://www.pagerduty.com/) is an incident management platform that
helps teams detect and fix infrastructure problems quickly.
Kapacitor can be configured to send alert messages to PagerDuty.
> This page is specific to PagerDuty's v2 API. If still using their v1 API, view
> the [PagerDuty v1 event handler](/kapacitor/v1.5/event_handlers/pagerduty/v1/) documentation.
## Configuration
Configuration as well as default [option](#options) values for the PagerDuty v2
event handler are set in your `kapacitor.conf`.
Below is an example configuration:
```toml
[pagerduty2]
enabled = true
routing-key = ""
url = "https://events.pagerduty.com/v2/enqueue"
global = false
```
#### `enabled`
Set to `true` to enable the PagerDuty v2 event handler.
#### `routing-key`
Your [PagerDuty Routing Key](https://support.pagerduty.com/docs/services-and-integrations).
#### `url`
The PagerDuty API v2 URL. _**This should not need to be changed.**_
#### `global`
If `true`, all alerts will be sent to PagerDuty without explicitly specifying
PagerDuty in TICKscripts.
## Options
The following PagerDuty v2 event handler options can be set in a
[handler file](/kapacitor/v1.5/event_handlers/#handler-file) or when using
`.pagerDuty2()` in a TICKscript.
| Name | Type | Description |
| ---- | ---- | ----------- |
| routing-key | string | The PagerDuty routing key to use for the alert. |
| link | strings | A custom link to include in the `links` field of the body sent to the PagerDuty API. May be specified multiple times. |
### Example: handler file
```yaml
id: handler-id
topic: topic-name
kind: pagerduty2
options:
routing-key: 'myroutingkey'
links:
- href: 'https://chronograf.example.com/sources/1/dashboards/2'
text: 'Overview Dashboard'
- href: 'https://chronograf.example.com/'
```
### Example: TICKscript
```js
|alert()
// ...
.pagerDuty2()
.routingKey('myroutingkey')
.link('https://chronograf.example.com/sources/1/dashboards/2', 'Overview Dashboard')
.link('https://chronograf.example.com/')
```
## PagerDuty Setup
To allow Kapacitor to send alerts to PagerDuty,
[enable a new "Generic API" integration](https://support.pagerduty.com/docs/services-and-integrations#section-create-a-generic-events-api-integration).
Use the generated "Integration Key" as the `routing-key` under the `[pagerduty2]`
section of your `kapacitor.conf`.
## Using the PagerDuty v2 Event Handler
With the PagerDuty v2 event handler enabled in your `kapacitor.conf`, use the
`.pagerDuty2()` attribute in your TICKscripts to send alerts to PagerDuty or
define a PagerDuty v2 handler that subscribes to a topic and sends published
alerts to PagerDuty.
The examples below use the following PagerDuty v2 configuration defined in the `kapacitor.conf`:
_**PagerDuty v2 settings in kapacitor.conf**_
```toml
[pagerduty2]
enabled = true
routing-key = "myroutingkey"
url = "https://events.pagerduty.com/v2/enqueue"
global = false
```
### Send alerts to PagerDuty from a TICKscript
The following TICKscript uses the `.pagerDuty2()` event handler to send the
message, "Hey, check your CPU", whenever idle CPU usage drops below 10%.
_**pagerduty2-cpu-alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.message('Hey, check your CPU')
.pagerDuty2()
```
### Send alerts to PagerDuty from a defined handler
The following setup sends an alert to the `cpu` topic with the message, "Hey,
check your CPU". A PagerDuty v2 handler is added that subscribes to the `cpu`
topic and publishes all alert messages to PagerDuty.
Create a TICKscript that publishes alert messages to a topic.
The TICKscript below sends an alert message to the `cpu` topic any time idle
CPU usage drops below 10% _(or CPU usage is above 90%)_.
_**cpu\_alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.message('Hey, check your CPU')
.topic('cpu')
```
Add and enable the TICKscript:
```bash
kapacitor define cpu_alert -tick cpu_alert.tick
kapacitor enable cpu_alert
```
Create a handler file that subscribes to the `cpu` topic and uses the PagerDuty v2
event handler to send alerts to PagerDuty.
_**pagerduty2\_cpu\_handler.yaml**_
```yaml
topic: cpu
id: pagerduty2-cpu-alert
kind: pagerduty2
options:
routing-key: 'myroutingkey'
```
Add the handler:
```bash
kapacitor define-topic-handler pagerduty2_cpu_handler.yaml
```

---
title: Post event handler
description: The "post" event handler allows you to POST Kapacitor alert data to an HTTP endpoint. This page includes configuration options and usage examples.
menu:
kapacitor_1_5_ref:
name: Post
weight: 1100
parent: Event handlers
---
The post event handler posts JSON-encoded data to an HTTP endpoint.
## Configuration
Configuration as well as default [option](#options) values for the post event
handler are set in your `kapacitor.conf`.
Below is an example configuration:
### Post Settings in kapacitor.conf
```toml
[[httppost]]
endpoint = "example"
url = "http://example.com/path"
headers = { Example = "your-key" }
basic-auth = { username = "my-user", password = "my-pass" }
alert-template = "{{.Message}}:{{range .Data.Series}}{{.Tags}},{{range .Values}}{{.}}{{end}}{{end}}"
alert-template-file = "/path/to/template/file"
row-template = '{{.Name}} host={{index .Tags "host"}}{{range .Values}} {{index . "time"}} {{index . "value"}}{{end}}'
row-template-file = "/path/to/template/file"
```
#### `endpoint`
Name of a configured HTTP POST endpoint that acts as an identifier for `[[httppost]]`
configurations when multiple are present.
_Endpoints are identifiers only. They are not appended to HTTP POST URLs._
#### `url`
The URL to which the alert data will be posted.
#### `headers`
Set of extra header values to set on the POST request.
#### `basic-auth`
Set of authentication credentials to set on the POST request.
#### `alert-template`
Alert template for constructing a custom HTTP body.
Alert templates are only used with post [alert](/kapacitor/v1.5/nodes/alert_node/)
handlers as they consume alert data.
_Skip to [alert templating](#alert-templates)._
#### `alert-template-file`
Absolute path to an alert template file.
_Skip to [alert templating](#alert-templates)._
#### `row-template`
Row template for constructing a custom HTTP body.
Row templates are only used with the [httpPost node](/kapacitor/v1.5/nodes/http_post_node/)
pipeline nodes as they consume a row at a time.
_Skip to [row templating](#row-templates)._
#### `row-template-file`
Absolute path to a row template file.
_Skip to [row templating](#row-templates)._
### Defining configuration options with environment variables
The `endpoint`, `url`, and `headers` configuration options can be defined with
environment variables:
```bash
KAPACITOR_HTTPPOST_0_ENDPOINT="example"
KAPACITOR_HTTPPOST_0_URL="http://example.com/path"
KAPACITOR_HTTPPOST_0_HEADERS_Example1="header1"
KAPACITOR_HTTPPOST_0_HEADERS_Example2="header2"
```
### Configuring and using multiple HTTP POST endpoints
The `kapacitor.conf` supports multiple `[[httppost]]` sections.
The [`endpoint`](#endpoint) configuration option of each acts as a unique identifier for that specific configuration.
To use a specific `[[httppost]]` configuration with the Post alert handler,
specify the endpoint in your [post alert handler file](#example-handler-file-using-a-pre-configured-endpoint),
or [your TICKscript](#example-tickscript-using-a-pre-configured-endpoint).
_**kapacitor.conf**_
```toml
[[httppost]]
endpoint = "endpoint1"
url = "http://example-1.com/path"
# ...
[[httppost]]
endpoint = "endpoint2"
url = "http://example-2.com/path"
# ...
```
Multiple HTTP POST endpoint configurations can also be added using environment variables.
Variables values are grouped together using the number in each variable key.
```bash
KAPACITOR_HTTPPOST_0_ENDPOINT="example0"
KAPACITOR_HTTPPOST_0_URL="http://example-0.com/path"
KAPACITOR_HTTPPOST_0_HEADERS_Example1="header1"
KAPACITOR_HTTPPOST_0_HEADERS_Example2="header2"
KAPACITOR_HTTPPOST_1_ENDPOINT="example1"
KAPACITOR_HTTPPOST_1_URL="http://example-1.com/path"
KAPACITOR_HTTPPOST_1_HEADERS_Example1="header1"
KAPACITOR_HTTPPOST_1_HEADERS_Example2="header2"
```
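The grouping rule above can be sketched in a few lines: variables sharing the same index number collapse into one configuration. This is an illustrative Python sketch of the documented grouping, not Kapacitor's own parser.

```python
# Sketch of how numbered KAPACITOR_HTTPPOST_N_* environment variables
# group into separate [[httppost]] configurations.
import re
from collections import defaultdict

def group_httppost_env(env):
    configs = defaultdict(dict)
    pattern = re.compile(r"^KAPACITOR_HTTPPOST_(\d+)_(ENDPOINT|URL|HEADERS_(.+))$")
    for key, value in env.items():
        m = pattern.match(key)
        if not m:
            continue
        idx = int(m.group(1))
        if m.group(3):  # a HEADERS_<Name> variable
            configs[idx].setdefault("headers", {})[m.group(3)] = value
        else:
            configs[idx][m.group(2).lower()] = value
    return [configs[i] for i in sorted(configs)]

env = {
    "KAPACITOR_HTTPPOST_0_ENDPOINT": "example0",
    "KAPACITOR_HTTPPOST_0_URL": "http://example-0.com/path",
    "KAPACITOR_HTTPPOST_0_HEADERS_Example1": "header1",
    "KAPACITOR_HTTPPOST_1_ENDPOINT": "example1",
    "KAPACITOR_HTTPPOST_1_URL": "http://example-1.com/path",
}

grouped = group_httppost_env(env)
assert grouped[0]["endpoint"] == "example0"
assert grouped[1]["url"] == "http://example-1.com/path"
assert grouped[0]["headers"]["Example1"] == "header1"
```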
## Options
The following post event handler options can be set in a
[handler file](/kapacitor/v1.5/event_handlers/#handler-file) or when using
`.post()` in a TICKscript.
| Name | Type | Description |
| ---- | ---- | ----------- |
| url | string | The URL to which the alert data will be posted. |
| endpoint | string | Name of a HTTP POST endpoint (configured in the `kapacitor.conf`) to use. _Cannot be specified in place of the URL._ |
| headers | map of string to string | Set of extra header values to set on the POST request. |
| capture-response | bool | If the HTTP status code is not a `2xx` code, read and log the HTTP response. |
| timeout | duration | Timeout for the HTTP POST. |
| skipSSLVerification | bool | Disables SSL verification for the POST request. |
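The options above map onto an ordinary HTTP POST. The sketch below builds such a request with Python's standard library, using the example URL and header names from this page; the request is constructed but not sent, and the body shown is an illustrative alert payload, not Kapacitor's exact JSON schema.

```python
# Minimal sketch of the POST the handler issues (request built, not sent).
import json
import urllib.request

options = {
    "url": "http://example.com/path",
    "headers": {"Example1": "example1", "Example2": "example2"},
    "timeout": 10,  # seconds, as in `timeout: 10s`
}

# Illustrative alert payload; the real body is the JSON-encoded alert data.
body = json.dumps({"message": "Hey, check your CPU", "level": "CRITICAL"}).encode()

req = urllib.request.Request(options["url"], data=body, method="POST")
req.add_header("Content-Type", "application/json")
for name, value in options["headers"].items():
    req.add_header(name, value)

# Sending would be: urllib.request.urlopen(req, timeout=options["timeout"])
assert req.get_method() == "POST"
assert req.get_header("Example1") == "example1"
```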
### Example: Handler file - Using a pre-configured endpoint
```yaml
id: handler-id
topic: topic-name
kind: post
options:
# Using the 'example' endpoint configured in the kapacitor.conf
endpoint: example
```
### Example: Handler file - Defining post options "inline"
```yaml
id: handler-id
topic: topic-name
kind: post
options:
# Defining post options "inline"
url: http://example.com/path
headers:
'Example1': 'example1'
'Example2': 'example2'
capture-response: true
timeout: 10s
skipSSLVerification: true
```
### Example: TICKscript - Using a pre-configured endpoint
```js
|alert()
// ...
// Using the 'example' endpoint configured in the kapacitor.conf
.post()
.endpoint('example')
```
### Example: TICKscript - Defining post options "inline"
```js
|alert()
// ...
// Defining post options "inline"
.post('https://example.com/path')
.header('Example1', 'example1')
.header('Example2', 'example2')
.captureResponse()
.timeout(10s)
.skipSSLVerification()
```
## Using the Post event handler
The post event handler can be used in both TICKscripts and handler files to post
alert and HTTP POST data to an HTTP endpoint.
The examples below deal with alerts and use the same `[[httppost]]` configuration
defined in the `kapacitor.conf`:
_**HTTP POST settings in kapacitor.conf**_
```toml
[[httppost]]
endpoint = "api-alert"
url = "http://mydomain.com/api/alerts"
headers = { From = "alerts@mydomain.com" }
alert-template = "{{.Message}}:{{range .Data.Series}}{{.Tags}},{{range .Values}}{{.}}{{end}}{{end}}"
```
### Post alerts from a TICKscript
The following TICKscripts use the `.post()` event handler to post the message,
"Hey, check your CPU", whenever idle CPU usage drops below 10%.
_**post-cpu-alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.message('Hey, check your CPU')
.post()
.endpoint('api-alert')
```
If you don't want to use the `[[httppost]]` settings defined in the `kapacitor.conf`,
you can specify your post options inline.
_**post-cpu-alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.message('Hey, check your CPU')
.post('https://example.com/path')
.header('Example1', 'example1')
.header('Example2', 'example2')
.captureResponse()
.timeout(10s)
.skipSSLVerification()
```
### Post alerts from a defined handler
The following setup sends an alert to the `cpu` topic with the message, "Hey,
check your CPU".
A post handler is added that subscribes to the `cpu` topic and posts all alert
messages to the url and endpoint defined in the `kapacitor.conf`.
Create a TICKscript that publishes alert messages to a topic.
The TICKscript below sends an alert message to the `cpu` topic any time idle CPU
usage drops below 10%.
_**cpu\_alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.message('Hey, check your CPU')
.topic('cpu')
```
Add and enable the TICKscript:
```bash
kapacitor define cpu_alert -tick cpu_alert.tick
kapacitor enable cpu_alert
```
Create a handler file that subscribes to the `cpu` topic and uses the post event
handler to post alerts to an HTTP endpoint.
_**post\_cpu\_handler.yaml**_
```yaml
id: post-cpu-alert
topic: cpu
kind: post
options:
url: 'http://example.com/path'
headers:
'From': 'alert@mydomain.com'
```
Add the handler:
```bash
kapacitor define-topic-handler post_cpu_handler.yaml
```
## Post templating
The post event handler allows you to customize the content and structure of
POSTs with alert and row templates.
### Alert templates
Alert templates are used to construct a custom HTTP body.
They are only used with post [alert](/kapacitor/v1.5/nodes/alert_node/) handlers
as they consume alert data.
Templates are defined either inline in the `kapacitor.conf` using the
[`alert-template`](#alert-template) configuration or in a separate file and referenced
using the [`alert-template-file`](#alert-template-file) config.
Alert templates use [Golang Template](https://golang.org/pkg/text/template/) and
have access to the following fields:
| Field | Description |
| ----- | ----------- |
| .ID | The unique ID for the alert. |
| .Message | The message of the alert. |
| .Details | The details of the alert. |
| .Time | The time the alert event occurred. |
| .Duration | The duration of the alert event. |
| .Level | The level of the alert, i.e., INFO, WARN, or CRITICAL. |
| .Data | The data that triggered the alert. |
| .PreviousLevel | The previous level of the alert, i.e., INFO, WARN, or CRITICAL. |
| .Recoverable | Indicates whether or not the alert is auto-recoverable. |
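To make the template fields concrete, the sketch below emulates the example alert template `{{.Message}}:{{range .Data.Series}}{{.Tags}},{{range .Values}}{{.}}{{end}}{{end}}` in Python for one sample alert. Go's text/template formats maps and values differently, so this only shows the shape of the resulting body; the sample data is invented.

```python
# Rough Python emulation of the example alert template for one sample alert.
alert = {
    "Message": "Hey, check your CPU",
    "Data": {
        "Series": [
            {"Tags": {"host": "server1"},
             "Values": [["2020-07-30T10:00:00Z", 9.5]]},
        ]
    },
}

# {{.Message}}:{{range .Data.Series}}{{.Tags}},{{range .Values}}{{.}}{{end}}{{end}}
parts = [alert["Message"], ":"]
for series in alert["Data"]["Series"]:
    parts.append(str(series["Tags"]))
    parts.append(",")
    for value in series["Values"]:
        parts.append(str(value))

body = "".join(parts)
assert body.startswith("Hey, check your CPU:")
assert "server1" in body
```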
#### Inline alert template
_**kapacitor.conf**_
```toml
[[httppost]]
endpoint = "example"
url = "http://example.com/path"
alert-template = "{{.Message}}:{{range .Data.Series}}{{.Tags}},{{range .Values}}{{.}}{{end}}{{end}}"
```
#### Alert template file
_**kapacitor.conf**_
```toml
[[httppost]]
endpoint = "example"
url = "http://example.com/path"
alert-template-file = "/etc/templates/alert.html"
```
_**/etc/templates/alert.html**_
```html
{{.Message}}:{{range .Data.Series}}{{.Tags}},{{range .Values}}{{.}}{{end}}{{end}}
```
### Row templates
Row templates are used to construct a custom HTTP body.
They are only used with [httpPost](/kapacitor/v1.5/nodes/http_post_node/)
handlers as they consume a row at a time.
Templates are defined either inline in the `kapacitor.conf` using the
[`row-template`](#row-template) configuration or in a separate file and referenced
using the [`row-template-file`](#row-template-file) config.
Row templates use [Golang Template](https://golang.org/pkg/text/template/) and
have access to the following fields:
| Field | Description |
| ----- | ----------- |
| .Name | The measurement name of the data stream. |
| .Tags | A map of tags on the data. |
| .Values | A list of values; each a map containing a "time" key for the time of the point and keys for all other fields on the point. |
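The row fields above feed the example row template `{{.Name}} host={{index .Tags "host"}}{{range .Values}} {{index . "time"}} {{index . "value"}}{{end}}`. The Python sketch below emulates it for one invented sample row to show the line it produces; Go's value formatting may differ slightly.

```python
# Rough Python emulation of the example row template for one sample row.
row = {
    "Name": "cpu",
    "Tags": {"host": "server1"},
    "Values": [
        {"time": "2020-07-30T10:00:00Z", "value": 9.5},
        {"time": "2020-07-30T10:00:10Z", "value": 8.2},
    ],
}

line = row["Name"] + " host=" + row["Tags"]["host"]
for v in row["Values"]:
    line += f' {v["time"]} {v["value"]}'

assert line.startswith("cpu host=server1")
assert "8.2" in line
```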
#### Inline row template
_**kapacitor.conf**_
```toml
[[httppost]]
endpoint = "example"
url = "http://example.com/path"
row-template = '{{.Name}} host={{index .Tags "host"}}{{range .Values}} {{index . "time"}} {{index . "value"}}{{end}}'
```
#### Row template file
_**kapacitor.conf**_
```toml
[[httppost]]
endpoint = "example"
url = "http://example.com/path"
row-template-file = "/etc/templates/row.html"
```
_**/etc/templates/row.html**_
```html
{{.Name}} host={{index .Tags "host"}}{{range .Values}} {{index . "time"}} {{index . "value"}}{{end}}
```

---
title: Publish event handler
description: The "publish" event handler allows you to publish Kapacitor alert messages to multiple Kapacitor topics. This page includes options and usage examples.
menu:
kapacitor_1_5_ref:
name: Publish
weight: 1200
parent: Event handlers
---
The publish event handler publishes events to one or more other Kapacitor topics.
## Options
The following publish event handler options can be set in a
[handler file](/kapacitor/v1.5/event_handlers/#handler-file).
| Name | Type | Description |
| ---- | ---- | ----------- |
| topics | list of string | List of topic names to which events are published. |
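The fan-out behavior is simple: every event arriving on the handler's topic is republished to each topic in `topics`. The sketch below models it with an in-memory topic bus (illustrative Python only; Kapacitor's topic system is server-side).

```python
# Minimal in-memory model of the publish handler's fan-out.
from collections import defaultdict

subscriptions = defaultdict(list)  # topic -> list of handler callables

def publish(topic, event):
    for handler in subscriptions[topic]:
        handler(event)

def make_publish_handler(topics):
    """A handler that republishes each event to every listed topic."""
    def handler(event):
        for t in topics:
            publish(t, event)
    return handler

received = defaultdict(list)
subscriptions["system"].append(lambda e: received["system"].append(e))
subscriptions["ops_team"].append(lambda e: received["ops_team"].append(e))

# As in the handler file below: subscribe to 'cpu', publish to two topics.
subscriptions["cpu"].append(make_publish_handler(["system", "ops_team"]))

publish("cpu", {"message": "Hey, check your CPU"})
assert received["system"][0]["message"] == "Hey, check your CPU"
assert received["ops_team"][0]["message"] == "Hey, check your CPU"
```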
### Example: handler file
```yaml
id: handler-id
topic: topic-name
kind: publish
options:
topics:
- system
- ops_team
```
## Using the publish event handler
The following setup sends an alert to the `cpu` topic with the message,
"Hey, check your CPU".
A publish handler is added that subscribes to the `cpu` topic and publishes new
alerts to other topics.
Create a TICKscript that publishes alert messages to a topic.
The TICKscript below sends an alert message to the `cpu` topic any time idle CPU
usage drops below 10%.
_**cpu\_alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.message('Hey, check your CPU')
.topic('cpu')
```
Add and enable the TICKscript:
```bash
kapacitor define cpu_alert -tick cpu_alert.tick
kapacitor enable cpu_alert
```
Create a handler file that subscribes to the `cpu` topic and uses the publish
event handler to publish alerts to other topics.
_**publish\_cpu\_alerts\_handler.yaml**_
```yaml
id: publish-cpu-alert
topic: cpu
kind: publish
options:
topics:
- system
- ops_team
```
Add the handler:
```bash
kapacitor define-topic-handler publish_cpu_alerts_handler.yaml
```

---
title: Pushover event handler
description: The Pushover event handler allows you to send Kapacitor alerts to Pushover. This page includes configuration options and usage examples.
menu:
kapacitor_1_5_ref:
name: Pushover
weight: 1300
parent: Event handlers
---
[Pushover](https://pushover.net/) is a service that sends instant push
notifications to phones and tablets.
Kapacitor can be configured to send alert messages to Pushover.
## Configuration
Configuration as well as default [option](#options) values for the Pushover
event handler are set in your `kapacitor.conf`.
Below is an example configuration:
```toml
[pushover]
enabled = true
token = "mysupersecrettoken"
user-key = "myuserkey"
url = "https://api.pushover.net/1/messages.json"
```
#### `enabled`
Set to `true` to enable the Pushover event handler.
#### `token`
Your Pushover API token.
#### `user-key`
Your Pushover user key.
#### `url`
The URL for the Pushover API. _**This should not need to be changed.**_
## Options
The following Pushover event handler options can be set in a
[handler file](/kapacitor/v1.5/event_handlers/#handler-file) or when using
`.pushover()` in a TICKscript.
| Name | Type | Description |
| ---- | ---- | ----------- |
| device | string | Specific list of a user's devices to send the message to rather than all of that user's devices. Multiple device names may be separated by a comma. |
| title | string | The message title. By default, the app's name is used. |
| url | string | A supplementary URL to show with the message. |
| url-title | string | A title for a supplementary URL, otherwise just the URL is shown. |
| sound | string | The name of one of the sounds supported by the device clients to override the user's default sound choice. |
### Example: handler file
```yaml
id: handler-id
topic: topic-name
kind: pushover
options:
device: device1, device2, device3
title: Alert from Kapacitor
url: http://example.com
url-title: This is an example title
sound: siren
```
### Example: TICKscript
```js
|alert()
// ...
.pushover()
.device('device1, device2, device3')
.title('Alert from Kapacitor')
.URL('http://example.com')
.URLTitle('This is an example title')
.sound('siren')
```
### Pushover Priority Levels
Pushover expects priority levels with each alert.
Kapacitor alert levels are mapped to the following priority levels:
| Alert Level | Priority Level |
| ----------- | -------------- |
| **OK** | -2 priority level. |
| **Info** | -1 priority level. |
| **Warning** | 0 priority level. |
| **Critical** | 1 priority level. |
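The mapping above can be expressed as a simple lookup table, with Kapacitor alert levels on the left and the Pushover priority integers on the right:

```python
# Kapacitor alert level -> Pushover priority, per the table above.
PUSHOVER_PRIORITY = {
    "OK": -2,        # Pushover's lowest priority
    "Info": -1,
    "Warning": 0,    # normal priority
    "Critical": 1,   # high priority
}

print(PUSHOVER_PRIORITY["Critical"])  # 1
```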
## Pushover Setup
[Register your application with Pushover](https://pushover.net/apps/build) to
get a Pushover token.
Include the token in the `[pushover]` configuration section of your `kapacitor.conf`.
## Using the Pushover event handler
With the Pushover event handler enabled and configured in your `kapacitor.conf`,
use the `.pushover()` attribute in your TICKscripts to send alerts to Pushover
or define a Pushover handler that subscribes to a topic and sends published
alerts to Pushover.
### Send alerts to Pushover from a TICKscript
The following TICKscript sends the message, "Hey, check your CPU", to Pushover
whenever idle CPU usage drops below 10% using the `.pushover()` event handler.
_**pushover-cpu-alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.message('Hey, check your CPU')
.pushover()
.title('Alert from Kapacitor')
.sound('siren')
```
### Send alerts to Pushover from a defined handler
The following setup sends an alert to the `cpu` topic with the message, "Hey,
check your CPU".
A Pushover handler is added that subscribes to the `cpu` topic and publishes all
alert messages to Pushover.
Create a TICKscript that publishes alert messages to a topic.
The TICKscript below sends an alert message to the `cpu` topic any time idle CPU
usage drops below 10%.
_**cpu\_alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.message('Hey, check your CPU')
.topic('cpu')
```
Add and enable the TICKscript:
```bash
kapacitor define cpu_alert -tick cpu_alert.tick
kapacitor enable cpu_alert
```
Create a handler file that subscribes to the `cpu` topic and uses the Pushover
event handler to send alerts to Pushover.
_**pushover\_cpu\_handler.yaml**_
```yaml
id: pushover-cpu-alert
topic: cpu
kind: pushover
options:
title: Alert from Kapacitor
sound: siren
```
Add the handler:
```bash
kapacitor define-topic-handler pushover_cpu_handler.yaml
```

---
title: Sensu event handler
description: The Sensu event handler allows you to send Kapacitor alerts to Sensu. This page includes configuration options and usage examples.
menu:
kapacitor_1_5_ref:
name: Sensu
weight: 1400
parent: Event handlers
---
[Sensu](https://sensu.io/) is a service that provides infrastructure, service,
and application monitoring as well as other metrics.
Kapacitor can be configured to send alert messages to Sensu.
## Configuration
Configuration as well as default [option](#options) values for the Sensu event
handler are set in your `kapacitor.conf`.
Below is an example configuration:
```toml
[sensu]
enabled = true
addr = "sensu-client:3030"
source = "Kapacitor"
  handlers = ["handler1-name", "handler2-name"]
```
#### `enabled`
Set to `true` to enable the Sensu event handler.
#### `addr`
The Sensu Client `host:port` address.
#### `source`
Default "Just-in-Time" (JIT) source.
#### `handlers`
List of [Sensu handlers](https://docs.sensu.io/sensu-core/1.3/guides/intro-to-handlers/) to use.
## Options
The following Sensu event handler options can be set in a
[handler file](/kapacitor/v1.5/event_handlers/#handler-file) or when using
`.sensu()` in a TICKscript.
| Name | Type | Description |
| ---- | ---- | ----------- |
| source | string | Sensu source under which to post messages. |
| handlers | list of strings | Sensu handler list. If empty, uses the handler list from the configuration. |
| metadata | map of key-value pairs | Adds key-value pairs to the Sensu API request. |
### Example: handler file
```yaml
id: handler-id
topic: topic-name
kind: sensu
options:
source: Kapacitor
handlers:
- handler1-name
- handler2-name
metadata:
key1: value1
key2: 5
key3: 5.0
```
### Example: TICKscript
```js
|alert()
// ...
.sensu()
.source('Kapacitor')
.handlers('handler1-name', 'handler2-name')
.metadata('key1', 'value1')
.metadata('key2', 5)
.metadata('key3', 5.0)
```
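As a rough sketch, the `source`, `handlers`, and `metadata` options combine into the JSON check result sent to the Sensu client socket. The exact field names Kapacitor emits are an assumption here; this only illustrates how the handler options map onto a request body:

```python
import json

def build_sensu_payload(source, message, status, handlers, metadata=None):
    """Hypothetical check-result body; field names are illustrative."""
    payload = {
        "name": source,      # the Sensu "source" option
        "output": message,   # the Kapacitor alert message
        "status": status,    # 0=OK, 1=warning, 2=critical in Sensu terms
        "handlers": handlers,
    }
    payload.update(metadata or {})  # metadata keys are added to the request
    return json.dumps(payload, sort_keys=True)

print(build_sensu_payload("Kapacitor", "Hey, check your CPU", 2,
                          ["handler1-name"], {"key1": "value1", "key2": 5}))
```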
## Using the Sensu event handler
With the Sensu event handler enabled and configured in your `kapacitor.conf`,
use the `.sensu()` attribute in your TICKscripts to send alerts to Sensu or
define a Sensu handler that subscribes to a topic and sends published alerts
to Sensu.
_**Sensu settings in kapacitor.conf**_
```toml
[sensu]
enabled = true
addr = "123.45.67.89:3030"
source = "Kapacitor"
handlers = ["tcp", "transport"]
```
### Send alerts to Sensu from a TICKscript
The following TICKscript uses the `.sensu()` event handler to send the message,
"Hey, check your CPU", to Sensu whenever idle CPU usage drops below 10%.
_**sensu-cpu-alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.message('Hey, check your CPU')
.sensu()
```
### Send alerts to Sensu from a defined handler
The following setup sends an alert to the `cpu` topic with the message,
"Hey, check your CPU".
A Sensu handler is added that subscribes to the `cpu` topic and publishes all
alert messages to Sensu.
Create a TICKscript that publishes alert messages to a topic.
The TICKscript below sends an alert message to the `cpu` topic any time idle CPU
usage drops below 10%.
_**cpu\_alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.message('Hey, check your CPU')
.topic('cpu')
```
Add and enable the TICKscript:
```bash
kapacitor define cpu_alert -tick cpu_alert.tick
kapacitor enable cpu_alert
```
Create a handler file that subscribes to the `cpu` topic and uses the Sensu
event handler to send alerts to Sensu.
_**sensu\_cpu\_handler.yaml**_
```yaml
id: sensu-cpu-alert
topic: cpu
kind: sensu
options:
source: Kapacitor
handlers:
- tcp
- transport
```
Add the handler:
```bash
kapacitor define-topic-handler sensu_cpu_handler.yaml
```

---
title: Slack event handler
description: The Slack event handler allows you to send Kapacitor alerts to Slack. This page includes configuration options and usage examples.
menu:
kapacitor_1_5_ref:
name: Slack
weight: 1500
parent: Event handlers
---
[Slack](https://slack.com) is a widely used "digital workspace" that facilitates
communication among team members.
Kapacitor can be configured to send alert messages to Slack.
## Configuration
Configuration as well as default [option](#options) values for the Slack event
handler are set in your `kapacitor.conf`.
Below is an example configuration:
```toml
[[slack]]
enabled = true
default = true
workspace = "example.slack.com"
url = "https://hooks.slack.com/xxxx/xxxx/xxxx"
channel = "#alerts"
username = "kapacitor"
global = false
state-changes-only = false
ssl-ca = "/path/to/ca.crt"
ssl-cert = "/path/to/cert.crt"
ssl-key = "/path/to/private-key.key"
insecure-skip-verify = false
```
> Multiple Slack clients may be configured by repeating `[[slack]]` sections.
> The `workspace` acts as a unique identifier for each configured Slack client.
#### `enabled`
Set to `true` to enable the Slack event handler.
#### `default`
Identify one of the Slack configurations as the default if there are multiple
Slack configurations.
#### `workspace`
The Slack workspace ID.
This can be any string that identifies this particular Slack configuration.
A logical choice is the name of the Slack workspace, e.g. `<workspace>.slack.com`.
#### `url`
The Slack webhook URL. This can be obtained by adding an Incoming Webhook integration.
Login to your Slack workspace in your browser and
[add a new webhook](https://slack.com/services/new/incoming-webhook) for Kapacitor.
Slack will provide you the webhook URL.
#### `channel`
Default channel for messages.
#### `username`
The username of the Slack bot.
#### `global`
If `true`, all alerts are sent to Slack without explicitly specifying Slack
in the TICKscript.
#### `state-changes-only`
Sets all alerts in state-changes-only mode, meaning alerts will only be sent if
the alert state changes.
_Only applies if `global` is `true`._
#### `ssl-ca`
Path to certificate authority file.
#### `ssl-cert`
Path to host certificate file.
#### `ssl-key`
Path to certificate private key file.
#### `insecure-skip-verify`
Use SSL but skip chain and host verification.
_This is necessary if using a self-signed certificate._
## Options
The following Slack event handler options can be set in a
[handler file](/kapacitor/v1.5/event_handlers/#handler-file) or when using
`.slack()` in a TICKscript.
| Name | Type | Description |
| ---- | ---- | ----------- |
| workspace | string | Specifies which Slack configuration to use when there are multiple. |
| channel | string | Slack channel in which to post messages. If empty uses the channel from the configuration. |
| username | string | Username of the Slack bot. If empty uses the username from the configuration. |
| icon-emoji | string | An emoji name surrounded in ':' characters. The emoji image replaces the normal user icon for the Slack bot. |
### Example: handler file
```yaml
id: handler-id
topic: topic-name
kind: slack
options:
workspace: 'workspace.slack.com'
channel: '#alerts'
username: 'kapacitor'
icon-emoji: ':smile:'
```
### Example: TICKscript
```js
|alert()
// ...
.slack()
.workspace('workspace.slack.com')
.channel('#alerts')
.username('kapacitor')
.iconEmoji(':smile:')
```
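For reference, `channel`, `username`, `text`, and `icon_emoji` are standard Slack incoming-webhook payload fields, and the handler options above map onto them roughly as sketched below. Treating this as exactly what Kapacitor posts is an assumption:

```python
import json

def slack_payload(text, channel=None, username=None, icon_emoji=None):
    """Build a Slack incoming-webhook body from handler-style options."""
    body = {"text": text}
    if channel:
        body["channel"] = channel        # overrides the webhook's default channel
    if username:
        body["username"] = username
    if icon_emoji:
        body["icon_emoji"] = icon_emoji  # e.g. ':smile:'
    return json.dumps(body, sort_keys=True)

print(slack_payload("Hey, check your CPU", "#alerts", "kapacitor", ":smile:"))
```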
## Slack Setup
To allow Kapacitor to send alerts to Slack, login to your Slack workspace and
[create a new incoming webhook](https://slack.com/services/new/incoming-webhook )
for Kapacitor. Add the generated webhook URL as the `url` in the `[[slack]]`
configuration section of your `kapacitor.conf`.
## Using the Slack event handler
With one or more Slack event handlers enabled and configured in your
`kapacitor.conf`, use the `.slack()` attribute in your TICKscripts to send
alerts to Slack or define a Slack handler that subscribes to a topic and sends
published alerts to Slack.
> To avoid posting a message every alert interval, use
> [AlertNode.StateChangesOnly](/kapacitor/v1.5/nodes/alert_node/#statechangesonly)
> so only events where the alert changed state are sent to Slack.
The examples below use the following Slack configurations defined in the `kapacitor.conf`:
_**Slack settings in kapacitor.conf**_
```toml
[[slack]]
enabled = true
default = true
workspace = "alerts"
url = "https://hooks.slack.com/xxxx/xxxx/example1"
channel = "#alerts"
username = "AlertBot"
global = false
state-changes-only = false
[[slack]]
enabled = true
default = false
workspace = "error-reports"
url = "https://hooks.slack.com/xxxx/xxxx/example2"
channel = "#error-reports"
username = "StatsBot"
global = false
state-changes-only = false
```
### Send alerts to Slack from a TICKscript
The following TICKscript uses the `.slack()` event handler to send the message,
"Hey, check your CPU", to the `#alerts` Slack channel whenever idle CPU usage
drops below 20%.
_**slack-cpu-alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.warn(lambda: "usage_idle" < 20)
.stateChangesOnly()
.message('Hey, check your CPU')
.slack()
.iconEmoji(':exclamation:')
```
### Send alerts to Slack from a defined handler
The following setup sends an alert to the `cpu` topic with the message,
"Hey, check your CPU".
A Slack handler is added that subscribes to the `cpu` topic and publishes all
alert messages to Slack.
Create a TICKscript that publishes alert messages to a topic.
The TICKscript below sends a critical alert message to the `cpu` topic any time
idle CPU usage drops below 5%.
_**cpu\_alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 5)
.stateChangesOnly()
.message('Hey, check your CPU')
.topic('cpu')
```
Add and enable the TICKscript:
```bash
kapacitor define cpu_alert -tick cpu_alert.tick
kapacitor enable cpu_alert
```
Create a handler file that subscribes to the `cpu` topic and uses the Slack
event handler to send alerts to Slack. This handler uses the `alerts` Slack
configuration, which sends messages to the #alerts channel in Slack.
_**slack\_cpu\_handler.yaml**_
```yaml
id: slack-cpu-alert
topic: cpu
kind: slack
options:
workspace: 'alerts'
icon-emoji: ':fire:'
```
Add the handler:
```bash
kapacitor define-topic-handler slack_cpu_handler.yaml
```
### Using multiple Slack configurations
Kapacitor can use multiple Slack integrations, each identified by the value of
the [`workspace`](#workspace) config. The TICKscript below illustrates how
multiple Slack integrations can be used.
In the `kapacitor.conf` [above](#using-the-slack-event-handler), there are two
Slack configurations: one for alerts and one for daily error reports. The
`workspace` setting in each Slack configuration acts as a unique identifier.
The following TICKscript sends alerts to the `alerts` Slack workspace.
_**slack-cpu-alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 5)
.stateChangesOnly()
.message('Hey, I think the machine is on fire.')
.slack()
.workspace('alerts')
.iconEmoji(':fire:')
```
Error rates are also being stored in the same InfluxDB instance and we want to
send daily reports of `500` errors to the `error-reports` Slack workspace.
The following TICKscript collects `500` error occurrences and publishes them to
the `500-errors` topic.
_**500_errors.tick**_
```js
stream
|from()
.measurement('errors')
.groupBy('500')
|alert()
    .info(lambda: "count" > 0)
.noRecoveries()
.topic('500-errors')
```
Below is an [aggregate](/kapacitor/v1.5/event_handlers/aggregate/) handler that
subscribes to the `500-errors` topic, aggregates the number of 500 errors over a
24 hour period, then publishes an aggregate message to the `500-errors-24h` topic.
_**500\_errors\_24h.yaml**_
```yaml
id: 500-errors-24h
topic: 500-errors
kind: aggregate
options:
interval: 24h
topic: 500-errors-24h
message: '{{ .Count }} 500 errors last 24 hours.'
```
Last, but not least, a Slack handler that subscribes to the `500-errors-24h`
topic and publishes aggregated count messages to the `error-reports` Slack workspace:
_**slack\_500\_errors\_daily.yaml**_
```yaml
id: slack-500-errors-daily
topic: 500-errors-24h
kind: slack
options:
workspace: error-reports
```

---
title: SNMP trap event handler
description: The "snmptrap" event handler allows you to send Kapacitor alerts as SNMP traps. This page includes configuration options and usage examples.
menu:
kapacitor_1_5_ref:
name: SNMP Trap
weight: 1600
parent: Event handlers
---
The SNMP trap event handler sends alert messages as SNMP traps.
## Configuration
Configuration as well as default [option](#options) values for the SNMP trap
event handler are set in your `kapacitor.conf`.
Below is an example configuration:
```toml
[snmptrap]
enabled = true
addr = "localhost:162"
community = "kapacitor"
retries = 1
```
#### `enabled`
Set to `true` to enable the SNMP trap event handler.
#### `addr`
The `host:port` address of the SNMP trap server.
#### `community`
The community to use for traps.
#### `retries`
Number of retries when sending traps.
## Options
The following SNMP trap event handler options can be set in a
[handler file](/kapacitor/v1.5/event_handlers/#handler-file) or when using
`.snmpTrap()` in a TICKscript.
| Name | Type | Description |
| ---- | ---- | ----------- |
| trap-oid | string | OID of the trap. |
| data-list | object | Each data object has `oid`, `type`, and `value` fields. Each field is a string. |
### SNMP Trap Data Types
The SNMP trap event handler supports the following data types:
| Abbreviation | Datatype |
| ------------ | -------- |
| c | Counter |
| i | Integer |
| n | Null |
| s | String |
| t | Time ticks |
### Example: handler file
```yaml
id: handler-id
topic: topic-name
kind: snmptrap
options:
trap-oid: 1.3.6.1.4.1.1
data-list:
- oid: 1.3.6.1.4.1.1.5
type: s
value: '{{ .Level }}'
- oid: 1.3.6.1.4.1.1.6
type: i
value: 50
- oid: 1.3.6.1.4.1.1.7
type: c
value: '{{ index .Fields "num_requests" }}'
- oid: 1.3.6.1.4.1.1.8
type: s
value: '{{ .Message }}'
```
### Example: TICKscript
```js
|alert()
// ...
.snmpTrap('1.3.6.1.4.1.1')
.data('1.3.6.1.4.1.1.5', 's', '{{ .Level }}')
.data('1.3.6.1.4.1.1.6', 'i', '50')
.data('1.3.6.1.4.1.1.7', 'c', '{{ index .Fields "num_requests" }}')
.data('1.3.6.1.4.1.1.8', 's', '{{ .Message }}')
```
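At alert time, each `data-list` entry resolves into an `(oid, type, value)` varbind, with the templated values filled in from the alert. The sketch below uses a simplified string substitution as a stand-in for Kapacitor's Go template rendering, and the alert values are hypothetical:

```python
# Resolve data-list entries into (oid, type, value) varbinds.
# The replace() calls are a simplified stand-in for Go template rendering.

def resolve_varbinds(data_list, alert):
    varbinds = []
    for entry in data_list:
        value = entry["value"]
        value = value.replace("{{ .Level }}", alert["level"])
        value = value.replace("{{ .Message }}", alert["message"])
        varbinds.append((entry["oid"], entry["type"], value))
    return varbinds

alert = {"level": "CRITICAL", "message": "Hey, check your CPU"}  # sample alert
data_list = [
    {"oid": "1.3.6.1.4.1.1.5", "type": "s", "value": "{{ .Level }}"},
    {"oid": "1.3.6.1.4.1.1.8", "type": "s", "value": "{{ .Message }}"},
]
print(resolve_varbinds(data_list, alert))
```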
## Using the SNMP trap event handler
The SNMP trap event handler can be used in both TICKscripts and handler files
to send alerts as SNMP traps.
### Sending SNMP traps from a TICKscript
The following TICKscript uses the `.snmpTrap()` event handler to send alerts as
SNMP traps whenever idle CPU usage drops below 10%.
_**snmptrap-cpu-alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.message('Hey, check your CPU')
.snmpTrap('1.3.6.1.2.1.1')
    .data('1.3.6.1.2.1.1.7', 'i', '{{ index .Fields "value" }}')
```
### Send SNMP traps from a defined handler
The following setup sends an alert to the `cpu` topic with the message,
"Hey, check your CPU".
An SNMP trap handler is added that subscribes to the `cpu` topic and sends new
alerts as SNMP traps.
Create a TICKscript that publishes alert messages to a topic.
The TICKscript below sends an alert message to the `cpu` topic any time idle CPU
usage drops below 10%.
_**cpu\_alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.message('Hey, check your CPU')
.topic('cpu')
```
Add and enable the TICKscript:
```bash
kapacitor define cpu_alert -tick cpu_alert.tick
kapacitor enable cpu_alert
```
Create a handler file that subscribes to the `cpu` topic and uses the SNMP trap
event handler to send alerts as SNMP traps.
_**snmptrap\_cpu\_handler.yaml**_
```yaml
id: snmptrap-cpu-alert
topic: cpu
kind: snmptrap
options:
trap-oid: '1.3.6.1.2.1.1'
data-list:
    - oid: '1.3.6.1.2.1.1.7'
      type: i
      value: '{{ index .Fields "value" }}'
```
Add the handler:
```bash
kapacitor define-topic-handler snmptrap_cpu_handler.yaml
```

---
title: Talk event handler
description: The Talk event handler allows you to send Kapacitor alerts to Talk. This page includes configuration options and usage examples.
menu:
kapacitor_1_5_ref:
name: Talk
weight: 1700
parent: Event handlers
---
[Talk](https://jianliao.com/site) is a service that aggregates information into
a centralized hub.
Kapacitor can be configured to send alert messages to Talk.
## Configuration
Configuration as well as default [option](#options) values for the Talk event
handler are set in your `kapacitor.conf`.
Below is an example configuration:
```toml
[talk]
enabled = true
url = "https://jianliao.com/v2/services/webhook/uuid"
author_name = "Kapacitor"
```
#### `enabled`
Set to `true` to enable the Talk event handler.
#### `url`
The Talk webhook URL.
#### `author_name`
The default authorName.
## Options
The following Talk event handler options can be set in a
[handler file](/kapacitor/v1.5/event_handlers/#handler-file) or when using
`.talk()` in a TICKscript.
| Name | Type | Description |
| ---- | ---- | ----------- |
| title | string | Message title. |
| text | string | Message text. |
### Example: handler file
```yaml
id: handler-id
topic: topic-name
kind: talk
options:
title: 'Message Title'
text: 'This is the text included in the message.'
```
### Example: TICKscript
```js
|alert()
// ...
.talk()
.title('Message Title')
.text('This is the text included in the message.')
```
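As a rough sketch, the `title` and `text` options could combine with the configured `author_name` into a Talk webhook body like the one below. The exact field names Kapacitor sends are an assumption for illustration:

```python
import json

def talk_payload(title, text, author_name="Kapacitor"):
    """Hypothetical Talk webhook body; field names are illustrative."""
    return json.dumps({
        "title": title,
        "text": text,
        "authorName": author_name,  # from the [talk] config section
    }, sort_keys=True)

print(talk_payload("Message Title", "This is the text included in the message."))
```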
## Talk Setup
Create a new incoming webhook to allow Kapacitor to send alerts to Talk.
1. [Sign into your Talk account](https://account.jianliao.com/signin).
2. Under the "Team" tab, click “Integrations”.
3. Select “Customize service” and click the Incoming Webhook “Add” button.
4. Choose the topic to connect with and click “Confirm Add” button.
5. Once the service is created, you'll see the “Generate Webhook url”.
6. Place the generated Webhook URL as the `url` in the `[talk]` section of your
`kapacitor.conf`.
## Using the Talk event handler
With the Talk event handler enabled and configured in your `kapacitor.conf`,
use the `.talk()` attribute in your TICKscripts to send alerts to Talk or define
a Talk handler that subscribes to a topic and sends published alerts to Talk.
### Send alerts to Talk from a TICKscript
The following TICKscript sends the message, "Hey, check your CPU", to Talk
whenever idle CPU usage drops below 10% using the `.talk()` event handler.
_**talk-cpu-alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.message('Hey, check your CPU')
.talk()
.title('Alert from Kapacitor')
```
### Send alerts to Talk from a defined handler
The following setup sends an alert to the `cpu` topic with the message,
"Hey, check your CPU".
A Talk handler is added that subscribes to the `cpu` topic and publishes all
alert messages to Talk.
Create a TICKscript that publishes alert messages to a topic.
The TICKscript below sends an alert message to the `cpu` topic any time idle CPU
usage drops below 10%.
_**cpu\_alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.message('Hey, check your CPU')
.topic('cpu')
```
Add and enable the TICKscript:
```bash
kapacitor define cpu_alert -tick cpu_alert.tick
kapacitor enable cpu_alert
```
Create a handler file that subscribes to the `cpu` topic and uses the Talk event
handler to send alerts to Talk.
_**talk\_cpu\_handler.yaml**_
```yaml
id: talk-cpu-alert
topic: cpu
kind: talk
options:
title: Alert from Kapacitor
```
Add the handler:
```bash
kapacitor define-topic-handler talk_cpu_handler.yaml
```

---
title: TCP event handler
description: The "tcp" event handler allows you to send Kapacitor alert data to a TCP endpoint. This page includes options and usage examples.
menu:
kapacitor_1_5_ref:
name: TCP
weight: 1800
parent: Event handlers
---
The TCP event handler sends JSON encoded alert data to a TCP endpoint.
## Options
The following TCP event handler options can be set in a [handler file](/kapacitor/v1.5/event_handlers/#handler-file) or when using `.tcp()` in a TICKscript.
| Name | Type | Description |
| ---- | ---- | ----------- |
| address | string | Address of TCP endpoint. |
### Example: handler file
```yaml
id: handler-id
topic: topic-name
kind: tcp
options:
address: 127.0.0.1:7777
```
### Example: TICKscript
```js
|alert()
// ...
.tcp('127.0.0.1:7777')
```
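To see what the handler sends, you can point it at a minimal listener like the one below. The listener itself is generic Python; the exact fields in the alert JSON (such as `id`, `message`, and `level`) come from Kapacitor's alert data and are not reproduced here:

```python
import json
import socket

def serve_once(host="127.0.0.1", port=7777):
    """Accept one connection, read until EOF, and return the decoded JSON."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            chunks = []
            while True:
                data = conn.recv(4096)
                if not data:
                    break
                chunks.append(data)
    return json.loads(b"".join(chunks))

# serve_once() blocks until a client connects, e.g.: print(serve_once())
```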
## Using the TCP event handler
The TCP event handler can be used in both TICKscripts and handler files to send
alert data to a TCP endpoint.
### Send alert data to a TCP endpoint from a TICKscript
The following TICKscript uses the `.tcp()` event handler to send alert data
whenever idle CPU usage drops below 10%.
_**tcp-cpu-alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.message('Hey, check your CPU')
.tcp('127.0.0.1:7777')
```
### Send alert data to a TCP endpoint from a defined handler
The following setup sends an alert to the `cpu` topic with the message,
"Hey, check your CPU". A TCP handler is added that subscribes to the `cpu` topic
and sends all alert messages to a TCP endpoint.
Create a TICKscript that publishes alert messages to a topic.
The TICKscript below sends an alert message to the `cpu` topic any time idle CPU
usage drops below 10%.
_**cpu\_alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.message('Hey, check your CPU')
.topic('cpu')
```
Add and enable the TICKscript:
```bash
kapacitor define cpu_alert -tick cpu_alert.tick
kapacitor enable cpu_alert
```
Create a handler file that subscribes to the `cpu` topic and uses the TCP event
handler to send alert data to a TCP endpoint.
_**tcp\_cpu\_handler.yaml**_
```yaml
id: tcp-cpu-alert
topic: cpu
kind: tcp
options:
address: 127.0.0.1:7777
```
Add the handler:
```bash
kapacitor define-topic-handler tcp_cpu_handler.yaml
```

---
title: Telegram event handler
description: The Telegram event handler allows you to send Kapacitor alerts to Telegram. This page includes configuration options and usage examples.
menu:
kapacitor_1_5_ref:
name: Telegram
weight: 1900
parent: Event handlers
---
[Telegram](https://telegram.org/) is a messaging app built with a focus on
security and speed.
Kapacitor can be configured to send alert messages to a Telegram bot.
## Configuration
Configuration as well as default [option](#options) values for the Telegram
alert handler are set in your `kapacitor.conf`.
Below is an example configuration:
```toml
[telegram]
enabled = false
url = "https://api.telegram.org/bot"
token = ""
chat-id = ""
parse-mode = "Markdown"
disable-web-page-preview = false
disable-notification = false
global = false
state-changes-only = false
```
#### `enabled`
Set to `true` to enable the Telegram event handler.
#### `url`
The Telegram Bot URL.
_**This should not need to be changed.**_
#### `token`
Telegram bot token.
_[Contact @BotFather](https://telegram.me/botfather) to obtain a bot token._
#### `chat-id`
Default recipient for messages.
_[Contact @myidbot](https://telegram.me/myidbot) on Telegram to get an ID._
#### `parse-mode`
Specifies the syntax used to format messages. Options are `Markdown` or `HTML`,
which allow Telegram apps to show bold, italic, fixed-width text, or inline
URLs in alert messages.
#### `disable-web-page-preview`
Disable link previews for links in this message.
#### `disable-notification`
Sends the message silently. iOS users will not receive a notification.
Android users will receive a notification with no sound.
#### `global`
If `true`, all alerts will be sent to Telegram without explicitly specifying
Telegram in the TICKscript.
#### `state-changes-only`
If `true`, alerts will only be sent to Telegram if the alert state changes.
This only applies if `global` is also set to `true`.
## Options
The following Telegram event handler options can be set in a
[handler file](/kapacitor/v1.5/event_handlers/#handler-file) or when using
`.telegram()` in a TICKscript.
| Name | Type | Description |
| ---- | ---- | ----------- |
| chat-id | string | Telegram user/group ID to post messages to. If empty uses the chat-id from the configuration. |
| parse-mode | string | Parse mode; defaults to Markdown. If empty uses the parse-mode from the configuration. |
| disable-web-page-preview | bool | Disables web page previews. If empty uses the disable-web-page-preview setting from the configuration. |
| disable-notification | bool | Disables notifications. If empty uses the disable-notification setting from the configuration. |
### Example: handler file
```yaml
topic: topic-name
id: handler-id
kind: telegram
options:
chat-id: '123456789'
parse-mode: 'Markdown'
disable-web-page-preview: false
disable-notification: false
```
### Example: TICKscript
```js
|alert()
// ...
.telegram()
.chatId('123456789')
.disableNotification()
.disableWebPagePreview()
.parseMode('Markdown')
```
## Telegram Setup
### Requirements
To configure Kapacitor with Telegram, the following is needed:
* a Telegram bot
* a Telegram API access token
* a Telegram chat ID
### Create a Telegram bot
1. Search for the `@BotFather` username in your Telegram application
2. Click `Start` to begin a conversation with `@BotFather`
3. Send `/newbot` to `@BotFather`. `@BotFather` will respond:
---
_Alright, a new bot. How are we going to call it? Please choose a name for your bot._
---
`@BotFather` will prompt you through the rest of the bot-creation process;
feel free to follow his directions or continue with our version of the steps
below. Both setups result in success!
4. Send your bot's name to `@BotFather`. Your bot's name can be anything.
> Note that this is not your bot's Telegram `@username`. You will create the
> username in step 5.
`@BotFather` will respond:
---
_Good. Now let's choose a username for your bot. It must end in `bot`.
Like this, for example: TetrisBot or tetris\_bot._
---
5. Send your bot's username to `@BotFather`. `BotFather` will respond:
---
_Done! Congratulations on your new bot.
You will find it at t.me/<bot-username>.
You can now add a description, about section and profile picture for your
bot, see /help for a list of commands. By the way, when you've finished creating
your cool bot, ping our Bot Support if you want a better username for it.
Just make sure the bot is fully operational before you do this._
_Use this token to access the HTTP API:
\<API-access-token\>_
_For a description of the Bot API, see this page:
[https://core.telegram.org/bots/api](https://core.telegram.org/bots/api)_
---
6. Begin a conversation with your bot.
Click on the `t.me/<bot-username>` link in `@BotFather`'s response and click
`Start` at the bottom of your Telegram application.
Your newly-created bot will appear in the chat list on the left side of the application.
### Get a Telegram API access token
Telegram's `@BotFather` bot sent you an API access token when you created your bot.
See the `@BotFather` response in step 5 of the previous section for where to find your token.
If you can't find the API access token, create a new token using the steps below.
1. Send `/token` to `@BotFather`
2. Select the relevant bot at the bottom of your Telegram application.
`@BotFather` responds with a new API access token:
---
_You can use this token to access HTTP API:
\<API-access-token\>_
_For a description of the Bot API, see this page:
[https://core.telegram.org/bots/api](https://core.telegram.org/bots/api)_
---
### Get your Telegram chat ID
1. Paste the following link in your browser. Replace `<API-access-token>` with
the API access token that you identified or created in the previous section:
```
https://api.telegram.org/bot<API-access-token>/getUpdates?offset=0
```
2. Send a message to your bot in the Telegram application.
The message text can be anything.
Your chat history must include at least one message to get your chat ID.
3. Refresh your browser.
4. Identify the numerical chat ID by finding the `id` inside the `chat` JSON object.
In the example below, the chat ID is `123456789`.
```json
{
"ok":true,
"result":[
{
"update_id":XXXXXXXXX,
"message":{
"message_id":2,
"from":{
"id":123456789,
"first_name":"Mushroom",
"last_name":"Kap"
},
"chat":{
"id":123456789,
"first_name":"Mushroom",
"last_name":"Kap",
"type":"private"
},
"date":1487183963,
"text":"hi"
}
}
]
}
```
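If you prefer to extract the chat ID programmatically, a minimal Python sketch might look like the following. It parses a payload shaped like the example response above; all IDs and names are placeholders, not real Telegram values.

```python
import json

# Example getUpdates payload shaped like the response above.
# The IDs here are placeholders, not real Telegram values.
payload = '''
{"ok": true,
 "result": [{"update_id": 1,
             "message": {"message_id": 2,
                         "from": {"id": 123456789, "first_name": "Mushroom", "last_name": "Kap"},
                         "chat": {"id": 123456789, "first_name": "Mushroom", "last_name": "Kap", "type": "private"},
                         "date": 1487183963,
                         "text": "hi"}}]}
'''

updates = json.loads(payload)
# The chat ID lives inside the "chat" object of each message.
chat_id = updates["result"][0]["message"]["chat"]["id"]
print(chat_id)  # 123456789
```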
## Using the Telegram event handler
With the Telegram event handler enabled and configured in your `kapacitor.conf`,
use the `.telegram()` attribute in your TICKscripts to send alerts to your
Telegram bot or define a Telegram handler that subscribes to a topic and sends
published alerts to your Telegram bot.
> To avoid posting a message every alert interval, use
> [AlertNode.StateChangesOnly](/kapacitor/v1.5/nodes/alert_node/#statechangesonly)
> so only events where the alert changed state are sent to Telegram.
The examples below use the following Telegram configuration defined in the `kapacitor.conf`:
_**Telegram settings in kapacitor.conf**_
```toml
[telegram]
enabled = true
url = "https://api.telegram.org/bot"
token = "mysupersecretauthtoken"
chat-id = ""
parse-mode = "Markdown"
disable-web-page-preview = false
disable-notification = false
global = false
state-changes-only = false
```
### Send alerts to a Telegram bot from a TICKscript
The following TICKscript uses the `.telegram()` event handler to send the message,
"Hey, check your CPU" to a Telegram bot whenever idle CPU usage drops below 10%.
It uses the default Telegram settings defined in the `kapacitor.conf`.
_**telegram-cpu-alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.stateChangesOnly()
.message('Hey, check your CPU')
.telegram()
```
### Send alerts to the Telegram bot from a defined handler
The following setup sends the message, "Hey, check your CPU" to a Telegram bot
with the `123456789` chat-ID.
Create a TICKscript that publishes alert messages to a topic.
The TICKscript below sends an alert message to the `cpu` topic any time CPU
idle usage drops below 10% _(or CPU usage is above 90%)_.
_**cpu\_alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.stateChangesOnly()
.message('Hey, check your CPU')
.topic('cpu')
```
Add and enable the TICKscript:
```bash
kapacitor define cpu_alert -tick cpu_alert.tick
kapacitor enable cpu_alert
```
Create a handler file that subscribes to the `cpu` topic and uses the Telegram
event handler to send alerts to the `123456789` chat-ID in Telegram.
_**telegram\_cpu\_handler.yaml**_
```yaml
id: telegram-cpu-alert
topic: cpu
kind: telegram
options:
chat-id: '123456789'
```
Add the handler:
```bash
kapacitor define-topic-handler telegram_cpu_handler.yaml
```

---
title: VictorOps event handler
description: The VictorOps event handler allows you to send Kapacitor alerts to VictorOps. This page includes configuration options and usage examples.
menu:
kapacitor_1_5_ref:
name: VictorOps
weight: 2000
parent: Event handlers
---
[VictorOps](https://victorops.com/) is an incident management platform that
provides observability, collaboration, & real-time alerting.
Kapacitor can be configured to send alert messages to VictorOps.
## Configuration
Configuration as well as default [option](#options) values for the VictorOps
event handler are set in your `kapacitor.conf`.
Below is an example configuration:
```toml
[victorops]
enabled = true
api-key = "xxxx"
routing-key = "xxxx"
url = "https://alert.victorops.com/integrations/generic/20131114/alert"
json-data = false
global = false
```
#### `enabled`
Set to `true` to enable the VictorOps event handler.
#### `api-key`
Your VictorOps API Key.
#### `routing-key`
The default VictorOps routing key. It can be overridden per alert.
#### `url`
The VictorOps API URL. _**This should not need to be changed.**_
#### `json-data`
Use JSON for the "data" field.
> New VictorOps installations will want to set this to `true` as it makes
the data that triggered the alert available within VictorOps.
The default is `false` for backwards compatibility.
#### `global`
If `true`, all alerts are sent to VictorOps without explicitly specifying
VictorOps in the TICKscript.
_The routing key can still be overridden._
## Options
The following VictorOps event handler options can be set in a
[handler file](/kapacitor/v1.5/event_handlers/#handler-file) or when using
`.victorOps()` in a TICKscript.
| Name | Type | Description |
| ---- | ---- | ----------- |
| routing-key | string | The routing key of the alert event. |
### Example: handler file
```yaml
id: handler-id
topic: topic-name
kind: victorops
options:
routing-key: ops_team
```
### Example: TICKscript
```js
|alert()
// ...
.victorOps()
.routingKey('team_rocket')
```
## VictorOps Setup
To allow Kapacitor to send alerts to VictorOps, do the following:
1. Enable the "Alert Ingestion API" in the "Integrations" section of your
VictorOps dashboard.
2. Use provided API key as the `api-key` in the `[victorops]` section of your
`kapacitor.conf`.
## Using the VictorOps event handler
With the VictorOps event handler enabled and configured in your `kapacitor.conf`,
use the `.victorOps()` attribute in your TICKscripts to send alerts to VictorOps
or define a VictorOps handler that subscribes to a topic and sends published
alerts to VictorOps.
The examples below use the following VictorOps configuration defined in the `kapacitor.conf`:
_**VictorOps settings in kapacitor.conf**_
```toml
[victorops]
enabled = true
api-key = "mysupersecretapikey"
routing-key = "team_rocket"
url = "https://alert.victorops.com/integrations/generic/20131114/alert"
json-data = true
global = false
```
### Send alerts to VictorOps from a TICKscript
The following TICKscript uses the `.victorOps()` event handler to send the
message, "Hey, check your CPU", to VictorOps whenever idle CPU usage drops
below 10%.
_**victorops-cpu-alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.message('Hey, check your CPU')
.victorOps()
.routingKey('team_rocket')
```
### Send alerts to VictorOps from a defined handler
The following setup sends an alert to the `cpu` topic with the message,
"Hey, check your CPU".
A VictorOps handler is added that subscribes to the `cpu` topic and publishes
all alert messages to VictorOps using default settings defined in the `kapacitor.conf`.
Create a TICKscript that publishes alert messages to a topic.
The TICKscript below sends an alert message to the `cpu` topic any time idle
CPU usage drops below 10%.
_**cpu\_alert.tick**_
```js
stream
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 10)
.message('Hey, check your CPU')
.topic('cpu')
```
Add and enable the TICKscript:
```bash
kapacitor define cpu_alert -tick cpu_alert.tick
kapacitor enable cpu_alert
```
Create a handler file that subscribes to the `cpu` topic and uses the VictorOps
event handler to send alerts to VictorOps.
_**victorops\_cpu\_handler.yaml**_
```yaml
topic: cpu
id: victorops-cpu-alert
kind: victorops
options:
routing-key: 'team_rocket'
```
Add the handler:
```bash
kapacitor define-topic-handler victorops_cpu_handler.yaml
```

---
title: Guides
aliases:
- kapacitor/v1.5/examples/
menu:
kapacitor_1_5:
name: Guides
identifier: guides
weight: 35
---
The following is a list of examples in no particular order that demonstrate some of the features of Kapacitor.
These guides assume you're familiar with the basics of defining, recording, replaying and enabling tasks within Kapacitor.
See the [getting started](/kapacitor/v1.5/introduction/getting-started/) guide if you need a refresher.
### [Calculating rates across joined series + backfill](/kapacitor/v1.5/guides/join_backfill/)
Learn how to join two series and calculate a combined result, plus how to perform that operation on historical data.
### [Live leaderboard of game scores](/kapacitor/v1.5/guides/live_leaderboard/)
See how you can use Kapacitor to create a live updating leaderboard for a game.
### [Load directory](/kapacitor/v1.5/guides/load_directory/)
Put TICKscripts, TICKscript templates, and handler definitions in a directory
from which they are loaded when the Kapacitor daemon boots.
### [Custom anomaly detection](/kapacitor/v1.5/guides/anomaly_detection/)
Integrate your custom anomaly detection algorithm with Kapacitor.
### [Continuous Queries](/kapacitor/v1.5/guides/continuous_queries/)
See how to use Kapacitor as a continuous query engine.
### [Socket-based UDF](/kapacitor/v1.5/guides/socket_udf/)
Learn how to write a simple socket-based user-defined function (UDF).
### [Template tasks](/kapacitor/v1.5/guides/template_tasks/)
Use task templates to reduce the number of TICKscripts you need to write.
### [Reference TICKscripts](/kapacitor/v1.5/guides/reference_scripts/)
Some examples of TICKscripts built against common Telegraf plugin data.

---
title: Custom anomaly detection using Kapacitor
aliases:
- kapacitor/v1.5/examples/anomaly_detection/
menu:
kapacitor_1_5:
name: Custom anomaly detection
identifier: anomaly_detection
weight: 20
parent: guides
---
Everyone has their own anomaly detection algorithm, so we have built
Kapacitor to integrate easily with whichever algorithm fits your
domain. Kapacitor calls these custom algorithms UDFs, short for
user-defined functions. This guide will walk through the necessary
steps for writing and using your own UDFs within Kapacitor.
If you haven't already, we recommend following the [getting started
guide](/kapacitor/v1.5/introduction/getting-started/) for Kapacitor
prior to continuing.
## 3D printing
If you own or have recently purchased a 3D printer, you may know that
3D printing requires the environment to be at certain temperatures in
order to ensure quality prints. Prints can also take a long time
(some can take more than 24 hours), so you can't just watch the
temperature graphs the whole time to make sure the print is going
well. Also, if a print goes bad early, you want to stop it so that
you can restart it rather than waste materials on continuing a
bad print.
Due to the physical limitations of 3D printing, the printer software
is typically designed to keep the temperatures within certain
tolerances. For the sake of argument, let's say that you don't trust
the software to do its job (or want to create your own), and want to
be alerted when the temperature reaches an abnormal level.
There are three temperatures when it comes to 3D printing:
1. The temperature of the hot end (where the plastic is melted before being printed).
2. The temperature of the bed (where the part is being printed).
3. The temperature of the ambient air (the air around the printer).
All three of these temperatures affect the quality of the print (some
being more important than others), but we want to track all of them.
To keep our anomaly detection algorithm simple, let's compute a
`p-value` for each window of data we receive, and then emit a single
data point with that `p-value`. To compute the `p-value`, we will use
[Welch's t-test](https://en.wikipedia.org/wiki/Welch%27s_t_test). For
a null hypothesis, we will state that a new window is from the same
population as the historical windows. If the `p-value` drops low
enough, we can reject the null hypothesis and conclude that the window
must be from something different than the historical data population, or
_an anomaly_. This is an oversimplified approach, but we are learning
how to write UDFs, not statistics.
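As a rough sketch of the statistic involved, here is a pure-Python computation of Welch's t statistic and its approximate (Welch–Satterthwaite) degrees of freedom. The UDF below will instead call `scipy.stats`, which also returns the p-value; the temperature samples here are made up for illustration.

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances (n-1)
    se2 = va / na + vb / nb
    t = (mean(sample_a) - mean(sample_b)) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Made-up hotend temperatures: a stable history vs. a drifting new window.
historical = [219.8, 220.1, 220.0, 219.9, 220.2, 220.0]
window = [223.1, 222.8, 223.4, 223.0]
t, df = welch_t(historical, window)
# A large |t| (and hence a tiny p-value) lets us reject the null hypothesis.
```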
## Writing a user-defined function (UDF)
Now that we have an idea of what we want to do, let's understand how
Kapacitor wants to communicate with our process. From the [UDF
README](https://github.com/influxdata/kapacitor/tree/master/udf/agent)
we learn that Kapacitor will spawn a process called an `agent`. The
`agent` is responsible for describing what options it has, and then
initializing itself with a set of options. As data is received by the
UDF, the `agent` performs its computation and then returns the
resulting data to Kapacitor. All of this communication occurs over
STDIN and STDOUT using protocol buffers. As of this writing, Kapacitor
has agents implemented in Go and Python that take care of the
communication details and expose an interface for doing the actual
work. For this guide, we will be using the Python agent.
### The Handler interface
Here is the Python handler interface for the agent:
```python
# The Agent calls the appropriate methods on the Handler as requests are read off STDIN.
#
# Throwing an exception will cause the Agent to stop and an ErrorResponse to be sent.
# Some *Response objects (like SnapshotResponse) allow for returning their own error within the object itself.
# These types of errors will not stop the Agent and Kapacitor will deal with them appropriately.
#
# The Handler is called from a single thread, meaning methods will not be called concurrently.
#
# To write Points/Batches back to the Agent/Kapacitor use the Agent.write_response method, which is thread safe.
class Handler(object):
def info(self):
pass
def init(self, init_req):
pass
def snapshot(self):
pass
def restore(self, restore_req):
pass
def begin_batch(self):
pass
def point(self):
pass
def end_batch(self, end_req):
pass
```
### The Info method
Let's start with the `info` method. When Kapacitor starts up it will
call `info` and expect in return some information about how this UDF
behaves. Specifically, Kapacitor needs to know what kind of edge the UDF
wants and provides.
> **Remember**: within Kapacitor, data is transported in streams or
batches, so the UDF must declare what it expects.
In addition, UDFs can accept certain options so that they are
individually configurable. The `info` response can contain a list of
options, their names, and expected arguments.
For our example UDF, we need to know three things:
1. The field to operate on.
2. The size of the historical window to keep.
3. The significance level or `alpha` being used.
Below we have the implementation of the `info` method for our handler that defines the edge types and options available:
```python
...
def info(self):
"""
Respond with which type of edges we want/provide and any options we have.
"""
response = udf_pb2.Response()
# We will consume batch edges aka windows of data.
response.info.wants = udf_pb2.BATCH
# We will produce single points of data aka stream.
response.info.provides = udf_pb2.STREAM
# Here we can define options for the UDF.
# Define which field we should process.
response.info.options['field'].valueTypes.append(udf_pb2.STRING)
# Since we will be computing a moving average let's make the size configurable.
# Define an option 'size' that takes one integer argument.
response.info.options['size'].valueTypes.append(udf_pb2.INT)
# We need to know the alpha level so that we can ignore bad windows.
# Define an option 'alpha' that takes one double valued argument.
response.info.options['alpha'].valueTypes.append(udf_pb2.DOUBLE)
return response
...
```
When Kapacitor starts, it will spawn our UDF process and request
the `info` data and then shutdown the process. Kapacitor will
remember this information for each UDF. This way, Kapacitor can
understand the available options for a given UDF before it's executed
inside a task.
### The Init method
Next let's implement the `init` method, which is called once the task
starts executing. The `init` method receives a list of chosen
options, which are then used to configure the handler appropriately.
In response, we indicate whether the `init` request was successful,
and, if not, any error messages if the options were invalid.
```python
...
def init(self, init_req):
"""
Given a list of options initialize this instance of the handler
"""
success = True
msg = ''
size = 0
for opt in init_req.options:
if opt.name == 'field':
self._field = opt.values[0].stringValue
elif opt.name == 'size':
size = opt.values[0].intValue
elif opt.name == 'alpha':
self._alpha = opt.values[0].doubleValue
if size <= 1:
success = False
msg += ' must supply window size > 1'
if self._field == '':
success = False
msg += ' must supply a field name'
if self._alpha == 0:
success = False
msg += ' must supply an alpha value'
# Initialize our historical window
# We will define MovingStats in the next step
self._history = MovingStats(size)
response = udf_pb2.Response()
response.init.success = success
response.init.error = msg[1:]
return response
...
```
When a task starts, Kapacitor spawns a new process for the UDF and
calls `init`, passing any specified options from the TICKscript. Once
initialized, the process will remain running and Kapacitor will begin
sending data as it arrives.
### The Batch and Point methods
Our task wants a `batch` edge, meaning it expects to get data in
batches or windows. To send a batch of data to the UDF process,
Kapacitor first calls the `begin_batch` method, which indicates that all
subsequent points belong to a batch. Once the batch is complete, the
`end_batch` method is called with some metadata about the batch.
At a high level, this is what our UDF code will do for each of the
`begin_batch`, `point`, and `end_batch` calls:
* `begin_batch`: mark the start of a new batch and initialize a structure for it
* `point`: store the point
* `end_batch`: perform the `t-test` and then update the historical data
### The Complete UDF script
What follows is the complete UDF implementation with our `info`,
`init`, and batching methods (as well as everything else we need).
```python
from kapacitor.udf.agent import Agent, Handler
from scipy import stats
import math
from kapacitor.udf import udf_pb2
import sys
class TTestHandler(Handler):
"""
Keep a rolling window of historically normal data
When a new window arrives use a two-sided t-test to determine
if the new window is statistically significantly different.
"""
def __init__(self, agent):
self._agent = agent
self._field = ''
self._history = None
self._batch = None
self._alpha = 0.0
def info(self):
"""
Respond with which type of edges we want/provide and any options we have.
"""
response = udf_pb2.Response()
# We will consume batch edges aka windows of data.
response.info.wants = udf_pb2.BATCH
# We will produce single points of data aka stream.
response.info.provides = udf_pb2.STREAM
# Here we can define options for the UDF.
# Define which field we should process
response.info.options['field'].valueTypes.append(udf_pb2.STRING)
# Since we will be computing a moving average let's make the size configurable.
# Define an option 'size' that takes one integer argument.
response.info.options['size'].valueTypes.append(udf_pb2.INT)
# We need to know the alpha level so that we can ignore bad windows
# Define an option 'alpha' that takes one double argument.
response.info.options['alpha'].valueTypes.append(udf_pb2.DOUBLE)
return response
def init(self, init_req):
"""
Given a list of options initialize this instance of the handler
"""
success = True
msg = ''
size = 0
for opt in init_req.options:
if opt.name == 'field':
self._field = opt.values[0].stringValue
elif opt.name == 'size':
size = opt.values[0].intValue
elif opt.name == 'alpha':
self._alpha = opt.values[0].doubleValue
if size <= 1:
success = False
msg += ' must supply window size > 1'
if self._field == '':
success = False
msg += ' must supply a field name'
if self._alpha == 0:
success = False
msg += ' must supply an alpha value'
# Initialize our historical window
self._history = MovingStats(size)
response = udf_pb2.Response()
response.init.success = success
response.init.error = msg[1:]
return response
def begin_batch(self, begin_req):
# create new window for batch
self._batch = MovingStats(-1)
def point(self, point):
self._batch.update(point.fieldsDouble[self._field])
def end_batch(self, batch_meta):
        t = 0.0  # default when there is no history yet
        pvalue = 1.0
if self._history.n != 0:
# Perform Welch's t test
t, pvalue = stats.ttest_ind_from_stats(
self._history.mean, self._history.stddev(), self._history.n,
self._batch.mean, self._batch.stddev(), self._batch.n,
equal_var=False)
# Send pvalue point back to Kapacitor
response = udf_pb2.Response()
response.point.time = batch_meta.tmax
response.point.name = batch_meta.name
response.point.group = batch_meta.group
response.point.tags.update(batch_meta.tags)
response.point.fieldsDouble["t"] = t
response.point.fieldsDouble["pvalue"] = pvalue
self._agent.write_response(response)
# Update historical stats with batch, but only if it was normal.
if pvalue > self._alpha:
for value in self._batch._window:
self._history.update(value)
class MovingStats(object):
"""
Calculate the moving mean and variance of a window.
Uses Welford's Algorithm.
"""
def __init__(self, size):
"""
Create new MovingStats object.
Size can be -1, infinite size or > 1 meaning static size
"""
self.size = size
if not (self.size == -1 or self.size > 1):
raise Exception("size must be -1 or > 1")
self._window = []
self.n = 0.0
self.mean = 0.0
self._s = 0.0
def stddev(self):
"""
Return the standard deviation
"""
if self.n == 1:
return 0.0
return math.sqrt(self._s / (self.n - 1))
def update(self, value):
# update stats for new value
self.n += 1.0
diff = (value - self.mean)
self.mean += diff / self.n
self._s += diff * (value - self.mean)
if self.n == self.size + 1:
# update stats for removing old value
old = self._window.pop(0)
oldM = (self.n * self.mean - old)/(self.n - 1)
self._s -= (old - self.mean) * (old - oldM)
self.mean = oldM
self.n -= 1
self._window.append(value)
if __name__ == '__main__':
# Create an agent
agent = Agent()
# Create a handler and pass it an agent so it can write points
h = TTestHandler(agent)
# Set the handler on the agent
agent.handler = h
# Anything printed to STDERR from a UDF process gets captured into the Kapacitor logs.
print >> sys.stderr, "Starting agent for TTestHandler"
agent.start()
agent.wait()
print >> sys.stderr, "Agent finished"
```
That was a lot, but now we are ready to configure Kapacitor to run our
code. Create a scratch dir for working through the rest of this
guide:
```bash
mkdir /tmp/kapacitor_udf
cd /tmp/kapacitor_udf
```
Save the above UDF python script into `/tmp/kapacitor_udf/ttest.py`.
### Configuring Kapacitor for our UDF
Add this snippet to your Kapacitor configuration file (typically located at `/etc/kapacitor/kapacitor.conf`):
```
[udf]
[udf.functions]
[udf.functions.tTest]
# Run python
prog = "/usr/bin/python2"
# Pass args to python
# -u for unbuffered STDIN and STDOUT
# and the path to the script
args = ["-u", "/tmp/kapacitor_udf/ttest.py"]
# If the python process is unresponsive for 10s kill it
timeout = "10s"
# Define env vars for the process, in this case the PYTHONPATH
[udf.functions.tTest.env]
PYTHONPATH = "/tmp/kapacitor_udf/kapacitor/udf/agent/py"
```
In the configuration we called the function `tTest`. That is also how
we will reference it in the TICKscript.
Notice that our Python script imported the `Agent` object, and we set
the `PYTHONPATH` in the configuration. Clone the Kapacitor source
into the tmp dir so we can point the `PYTHONPATH` at the necessary
python code. This is typically overkill since it's just two Python
files, but it makes it easy to follow:
```
git clone https://github.com/influxdata/kapacitor.git /tmp/kapacitor_udf/kapacitor
```
### Running Kapacitor with the UDF
Restart the Kapacitor daemon to make sure everything is configured
correctly:
```bash
service kapacitor restart
```
Check the logs (`/var/log/kapacitor/`) to make sure you see a
*Listening for signals* line and that no errors occurred. If you
don't see the line, it's because the UDF process is hung and not
responding. It should be killed after a timeout, so give it a moment
to stop properly. Once stopped, you can fix any errors and try again.
### The TICKscript
If everything was started correctly, then it's time to write our
TICKscript to use the `tTest` UDF method:
```js
dbrp "printer"."autogen"
// This TICKscript monitors the three temperatures for a 3d printing job,
// and triggers alerts if the temperatures start to experience abnormal behavior.
// Define our desired significance level.
var alpha = 0.001
// Select the temperatures measurements
var data = stream
|from()
.measurement('temperatures')
|window()
.period(5m)
.every(5m)
data
//Run our tTest UDF on the hotend temperature
@tTest()
// specify the hotend field
.field('hotend')
// Keep a 1h rolling window
.size(3600)
// pass in the alpha value
.alpha(alpha)
|alert()
.id('hotend')
.crit(lambda: "pvalue" < alpha)
.log('/tmp/kapacitor_udf/hotend_failure.log')
// Do the same for the bed and air temperature.
data
@tTest()
.field('bed')
.size(3600)
.alpha(alpha)
|alert()
.id('bed')
.crit(lambda: "pvalue" < alpha)
.log('/tmp/kapacitor_udf/bed_failure.log')
data
@tTest()
.field('air')
.size(3600)
.alpha(alpha)
|alert()
.id('air')
.crit(lambda: "pvalue" < alpha)
.log('/tmp/kapacitor_udf/air_failure.log')
```
Notice that we have called `tTest` three times. This means that
Kapacitor will spawn three different Python processes and pass the
respective init option to each one.
Save this script as `/tmp/kapacitor_udf/print_temps.tick` and define
the Kapacitor task:
```bash
kapacitor define print_temps -tick print_temps.tick
```
### Generating test data
To simulate our printer for testing, we will write a simple Python
script to generate temperatures. This script generates random
temperatures that are normally distributed around a target
temperature. At specified times, the variation and offset of the
temperatures changes, creating an anomaly.
> Don't worry too much about the details here. It would be much better
to use real data for testing our TICKscript and UDF, but this is
faster (and much cheaper than a 3D printer).
```python
#!/usr/bin/python2
from numpy import random
from datetime import timedelta, datetime
import sys
import time
import requests
# Target temperatures in C
hotend_t = 220
bed_t = 90
air_t = 70
# Connection info
write_url = 'http://localhost:9092/write?db=printer&rp=autogen&precision=s'
measurement = 'temperatures'
def temp(target, sigma):
"""
Pick a random temperature from a normal distribution
centered on target temperature.
"""
return random.normal(target, sigma)
def main():
hotend_sigma = 0
bed_sigma = 0
air_sigma = 0
hotend_offset = 0
bed_offset = 0
air_offset = 0
# Define some anomalies by changing sigma at certain times
# list of sigma values to start at a specified iteration
hotend_anomalies =[
(0, 0.5, 0), # normal sigma
(3600, 3.0, -1.5), # at one hour the hotend goes bad
(3900, 0.5, 0), # 5 minutes later recovers
]
bed_anomalies =[
(0, 1.0, 0), # normal sigma
(28800, 5.0, 2.0), # at 8 hours the bed goes bad
(29700, 1.0, 0), # 15 minutes later recovers
]
air_anomalies = [
(0, 3.0, 0), # normal sigma
(10800, 5.0, 0), # at 3 hours air starts to fluctuate more
(43200, 15.0, -5.0), # at 12 hours air goes really bad
(45000, 5.0, 0), # 30 minutes later recovers
(72000, 3.0, 0), # at 20 hours goes back to normal
]
# Start from 2016-01-01 00:00:00 UTC
# This makes it easy to reason about the data later
now = datetime(2016, 1, 1)
second = timedelta(seconds=1)
epoch = datetime(1970,1,1)
# 24 hours of temperatures once per second
points = []
for i in range(60*60*24+2):
# update sigma values
if len(hotend_anomalies) > 0 and i == hotend_anomalies[0][0]:
hotend_sigma = hotend_anomalies[0][1]
hotend_offset = hotend_anomalies[0][2]
hotend_anomalies = hotend_anomalies[1:]
if len(bed_anomalies) > 0 and i == bed_anomalies[0][0]:
bed_sigma = bed_anomalies[0][1]
bed_offset = bed_anomalies[0][2]
bed_anomalies = bed_anomalies[1:]
if len(air_anomalies) > 0 and i == air_anomalies[0][0]:
air_sigma = air_anomalies[0][1]
air_offset = air_anomalies[0][2]
air_anomalies = air_anomalies[1:]
# generate temps
hotend = temp(hotend_t+hotend_offset, hotend_sigma)
bed = temp(bed_t+bed_offset, bed_sigma)
air = temp(air_t+air_offset, air_sigma)
points.append("%s hotend=%f,bed=%f,air=%f %d" % (
measurement,
hotend,
bed,
air,
(now - epoch).total_seconds(),
))
now += second
# Write data to Kapacitor
r = requests.post(write_url, data='\n'.join(points))
if r.status_code != 204:
print >> sys.stderr, r.text
return 1
return 0
if __name__ == '__main__':
exit(main())
```
Save the above script as `/tmp/kapacitor_udf/printer_data.py`.
> This Python script has two Python dependencies: `requests` and `numpy`.
They can easily be installed via `pip` or your package manager.
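For reference, each entry in `points` is an InfluxDB line-protocol string with a second-precision timestamp. A quick sketch of what one generated line looks like (the temperature values here are illustrative):

```python
measurement = "temperatures"
epoch_seconds = 1451606400  # 2016-01-01 00:00:00 UTC, the script's start time
# Same format string the generator uses: fields, then the timestamp in seconds.
line = "%s hotend=%f,bed=%f,air=%f %d" % (measurement, 220.1, 90.4, 69.8, epoch_seconds)
print(line)
```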
At this point we have a task ready to go, and a script to generate
some fake data with anomalies. Now we can create a recording of our
fake data so that we can easily iterate on the task:
```sh
# Start the recording in the background
kapacitor record stream -task print_temps -duration 24h -no-wait
# Grab the ID from the output and store it in a var
rid=7bd3ced5-5e95-4a67-a0e1-f00860b1af47
# Run our python script to generate data
chmod +x ./printer_data.py
./printer_data.py
```
We can verify it worked by listing information about the recording.
Our recording came out to `1.6MB`, so yours should come out somewhere
close to that:
```
$ kapacitor list recordings $rid
ID Type Status Size Date
7bd3ced5-5e95-4a67-a0e1-f00860b1af47 stream finished 1.6 MB 04 May 16 11:44 MDT
```
### Detecting anomalies
Finally, let's replay the recording against our task and see how it works:
```
kapacitor replay -task print_temps -recording $rid -rec-time
```
Check the various log files to see if the algorithm caught the
anomalies:
```
cat /tmp/kapacitor_udf/{hotend,bed,air}_failure.log
```
Based on the `printer_data.py` script above, there should be anomalies at:
* 1hr: hotend
* 8hr: bed
* 12hr: air
There may be some false positives as well, but, since we want this to
work with real data (not our nice clean fake data), it doesn't help
much to tweak it at this point.
Well, there we have it. We can now get alerts when the temperatures
for our prints deviate from the norm. Hopefully you now have a
better understanding of how Kapacitor UDFs work, and have a good
working example as a launching point into further work with UDFs.
The framework is in place, now go plug in a real anomaly detection
algorithm that works for your domain!
## Extending the example
There are a few things that we have left as exercises to the reader:
1. Snapshot/Restore: Kapacitor will regularly snapshot the state of
your UDF process so that it can be restored if the process is
restarted. The examples
[here](https://github.com/influxdata/kapacitor/tree/master/udf/agent/examples/)
have implementations for the `snapshot` and `restore` methods.
Implement them for the `TTestHandler` handler as an exercise.
2. Change the algorithm from a t-test to something more fitting for
your domain. Both `numpy` and `scipy` have a wealth of algorithms.
3. The options returned by the `info` request can contain multiple
arguments. Modify the `field` option to accept three field names
and change the `TTestHandler` to maintain historical data and
batches for each field instead of just the one. That way only one
ttest.py process needs to be running.
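Exercise 3 boils down to keying the handler's state by field name rather than keeping a single window. Here is a minimal sketch of that bookkeeping in Python; the class and method names are made up for illustration, and a real handler must still implement the agent's `info`/`init`/`point`/`snapshot` protocol:

```python
class MultiFieldState:
    """Toy per-field window keeper, illustrating how a multi-field
    TTestHandler might track history. Not part of the Kapacitor UDF
    API; names here are illustrative only."""

    def __init__(self, fields, size):
        self._size = size
        self._history = {f: [] for f in fields}

    def update(self, field, value):
        """Append a value to the named field's window, trimming it
        to the most recent `size` values."""
        window = self._history[field]
        window.append(value)
        if len(window) > self._size:
            window.pop(0)

    def snapshot(self):
        """Return a copy of all windows (handy for the
        snapshot/restore exercise as well)."""
        return {f: list(v) for f, v in self._history.items()}


state = MultiFieldState(['hotend', 'bed', 'air'], size=3)
for v in [200.0, 201.5, 199.8, 202.1]:
    state.update('hotend', v)
print(state.snapshot()['hotend'])  # [201.5, 199.8, 202.1]
```

With state shaped like this, a single `ttest.py` process can run the test per field as points arrive, instead of one process per temperature series.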
---
title: Kapacitor as a Continuous Query engine
aliases:
- kapacitor/v1.5/examples/continuous_queries/
menu:
kapacitor_1_5:
name: Kapacitor as a Continuous Query engine
identifier: continuous_queries
weight: 30
parent: guides
---
Kapacitor can be used to do the same work as Continuous Queries (CQ) in InfluxDB.
Today we are going to explore reasons to use one over the other and the basics of using Kapacitor for CQ-type workloads.
## An Example
First, let's take a simple CQ and rewrite it as a Kapacitor TICKscript.
Here is a CQ that computes the mean of the `cpu.usage_idle` every 5 minutes and stores it in the new measurement `mean_cpu_idle`.
```
CREATE CONTINUOUS QUERY cpu_idle_mean ON telegraf BEGIN SELECT mean("usage_idle") as usage_idle INTO mean_cpu_idle FROM cpu GROUP BY time(5m),* END
```
To do the same with Kapacitor here is a streaming TICKscript.
```js
dbrp "telegraf"."autogen"

stream
    |from()
        .database('telegraf')
        .measurement('cpu')
        .groupBy(*)
    |window()
        .period(5m)
        .every(5m)
        .align()
    |mean('usage_idle')
        .as('usage_idle')
    |influxDBOut()
        .database('telegraf')
        .retentionPolicy('autogen')
        .measurement('mean_cpu_idle')
        .precision('s')
```
The same thing can also be done as a batch task in Kapacitor.
```js
dbrp "telegraf"."autogen"

batch
    |query('SELECT mean(usage_idle) as usage_idle FROM "telegraf"."autogen".cpu')
        .period(5m)
        .every(5m)
        .groupBy(*)
    |influxDBOut()
        .database('telegraf')
        .retentionPolicy('autogen')
        .measurement('mean_cpu_idle')
        .precision('s')
```
All three of these methods will produce the same results.
## Questions
At this point there are a few questions we should answer:
1. When should we use Kapacitor instead of CQs?
2. When should we use stream tasks vs batch tasks in Kapacitor?
### When should we use Kapacitor instead of CQs?
There are a few reasons to use Kapacitor instead of CQs.
* You are performing a significant number of CQs and want to isolate the workload.
By using Kapacitor to perform the aggregations, InfluxDB's performance profile can remain more stable and isolated from Kapacitor's.
* You need to do more than just perform a query. For example, you may want to store only the outliers from an aggregation instead of all of the results.
Kapacitor can do significantly more with the data than CQs can, so you have more flexibility in transforming your data.
There are a few use cases where using CQs almost always makes sense.
* Performing downsampling for retention policies.
This is what CQs are designed for and do well.
No need to add another moving piece (i.e. Kapacitor) to your infrastructure if you do not need it.
Keep it simple.
* You only have a handful of CQs. Again, keep it simple; do not add more moving parts to your setup unless you need them.
### When should we use stream tasks vs batch tasks in Kapacitor?
The answer boils down to two things: the available RAM and the time period being used.
A stream task has to keep all data in RAM for the specified period.
If this period is too long for the available RAM, you will first need to store the data in InfluxDB and then query it using a batch task.
A stream task does have one slight advantage: since it's watching the stream of data, it understands time by the timestamps on the data.
As such, there are no race conditions for whether a given point will make it into a window or not.
If you are using a batch task, it is still possible for a point to arrive late and be missed in a window.
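A back-of-the-envelope estimate can help with that decision. The sketch below is just rough arithmetic, not a Kapacitor internal, and the bytes-per-point figure is an assumption you should replace with a measurement of your own data:

```python
def stream_window_bytes(points_per_sec, period_sec, bytes_per_point=200):
    """Rough RAM needed for a stream task to buffer one window.

    bytes_per_point (~200 B) is a guessed average covering timestamp,
    tags, and fields -- an assumption, not a Kapacitor constant.
    """
    return points_per_sec * period_sec * bytes_per_point


# 10,000 points/s over a 5m window is modest:
print(stream_window_bytes(10_000, 300) / 1e9)     # 0.6 (GB)

# The same rate over a 24h window likely calls for a batch task:
print(stream_window_bytes(10_000, 86_400) / 1e9)  # 172.8 (GB)
```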
## Another Example
Create a continuous query to downsample across retention policies.
```
CREATE CONTINUOUS QUERY cpu_idle_median ON telegraf BEGIN SELECT median("usage_idle") as usage_idle INTO "telegraf"."sampled_5m"."median_cpu_idle" FROM "telegraf"."autogen"."cpu" GROUP BY time(5m),* END
```
The stream TICKscript:
```js
dbrp "telegraf"."autogen"

stream
    |from()
        .database('telegraf')
        .retentionPolicy('autogen')
        .measurement('cpu')
        .groupBy(*)
    |window()
        .period(5m)
        .every(5m)
        .align()
    |median('usage_idle')
        .as('usage_idle')
    |influxDBOut()
        .database('telegraf')
        .retentionPolicy('sampled_5m')
        .measurement('median_cpu_idle')
        .precision('s')
```
And the batch TICKscript:
```js
dbrp "telegraf"."autogen"

batch
    |query('SELECT median(usage_idle) as usage_idle FROM "telegraf"."autogen"."cpu"')
        .period(5m)
        .every(5m)
        .groupBy(*)
    |influxDBOut()
        .database('telegraf')
        .retentionPolicy('sampled_5m')
        .measurement('median_cpu_idle')
        .precision('s')
```
## Summary
Kapacitor is a powerful tool.
If you need more flexibility than CQs offer, use it.
For more information and help writing TICKscripts from InfluxQL queries take a look at these [docs](https://docs.influxdata.com/kapacitor/latest/nodes/influx_q_l_node/) on the InfluxQL node in Kapacitor.
Every function available in the InfluxDB query language is available in Kapacitor, so you can convert any query into a Kapacitor TICKscript.
## Important to Know
### Continuous queries and Kapacitor tasks may produce different results
For some types of queries, CQs (InfluxDB) and TICKscripts (Kapacitor) may return different results due to how each selects time boundaries.
Kapacitor chooses the maximum timestamp (tMax) while InfluxDB chooses the minimum timestamp (tMin).
The choice between using tMax or tMin is somewhat arbitrary for InfluxDB, however the same cannot be said for Kapacitor.
Kapacitor has the ability to do complex joining operations on overlapping time windows.
For example, if you were to join the mean over the last month with the mean over the last day,
you would need their resulting values to occur at the same time, using the most recent time, tMax.
However, if Kapacitor used tMin, the resulting values would not occur at the same time.
One would be at the beginning of the last month, while the other would be at the beginning of the last day.
Consider the following query run as both an InfluxQL query and as a TICKscript:
#### InfluxQL
```sql
SELECT mean(*) FROM ... WHERE time >= '2017-03-13T17:50:00Z' AND time < '2017-03-13T17:51:00Z'
```
#### TICKscript
``` js
batch
    |query('SELECT queryDurationNs FROM "_internal".monitor.queryExecutor')
        .period(1m)
        .every(1m)
        .align()
    |mean('queryDurationNs')
```
#### Query Results
| Query Method | Time | Mean |
|:------------ |:---- |:---- |
| Continuous Query | 2017-03-13T17:50:00Z | 8.083532716666666e+08 |
| TICKscript | 2017-03-13T17:51:00Z | 8.083532716666666e+08 |
> Note the difference between the returned timestamps.
This is a known issue discussed in [Issue #1258](https://github.com/influxdata/kapacitor/issues/1258) on GitHub.
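The aggregate value itself is identical either way; only the timestamp that labels the window differs. The following toy function (plain Python, not Kapacitor code) makes the boundary choice concrete for the 1m window above:

```python
def window_label(window_start, period, use_tmax):
    """Timestamp (in seconds) that labels a window's aggregate.

    InfluxDB CQs label results with the window start (tMin);
    Kapacitor labels them with the window end (tMax).
    """
    return window_start + period if use_tmax else window_start


start, period = 0, 60  # stand-ins for 17:50:00Z and a 1m window
print(window_label(start, period, use_tmax=False))  # 0  -> 17:50:00Z (CQ)
print(window_label(start, period, use_tmax=True))   # 60 -> 17:51:00Z (Kapacitor)
```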
---
title: Event handler setup
menu:
kapacitor_1_5:
weight: 70
parent: guides
---
Integrate Kapacitor into your monitoring system by sending [alert messages](/kapacitor/latest/nodes/alert_node/#message)
to supported event handlers.
Currently, Kapacitor can send alert messages to specific log files and specific URLs,
as well as to applications such as [Slack](https://slack.com/) and [HipChat](https://www.hipchat.com/).
This document offers step-by-step instructions for setting up event handlers with Kapacitor,
including relevant configuration options and [TICKscript](/kapacitor/latest/tick/) syntax.
Currently, this document doesn't cover every supported event handler, but we will
continue to add content to this page over time.
For a complete list of the supported event handlers and for additional information,
please see the [event handler reference documentation](/kapacitor/latest/nodes/alert_node/).
[HipChat Setup](#hipchat-setup)
[Telegram Setup](#telegram-setup)
## HipChat setup
[HipChat](https://www.hipchat.com/) is Atlassian's web service for group chat,
video chat, and screen sharing.
Configure Kapacitor to send alert messages to a HipChat room.
### Requirements
To configure Kapacitor with HipChat, you need:
- Your HipChat subdomain name
- Your HipChat room name
- A HipChat API access token for sending notifications
#### HipChat API access token
The following steps describe how to create the API access token.
1. From the HipChat home page, access **Account settings** by clicking on the
person icon in the top right corner.
2. Select **API access** from the items in the left menu sidebar.
3. Under **Create new token**, enter a label for your token (it can be anything).
4. Under **Create new token**, select **Send Notification** as the Scope.
5. Click **Create**.
Your token appears in the table just above the **Create new token** section:
![HipChat token](/img/kapacitor/hipchat-token.png)
### Configuration
In the `[hipchat]` section of Kapacitor's configuration file, set:
- `enabled` to `true`
- `subdomain` in the `url` setting to your HipChat subdomain
The optional configuration settings are:
`room`
Set to your HipChat room.
This serves as the default chat ID if the TICKscript doesn't specify a chat ID.
`token`
Set to your HipChat [API access token](#hipchat-api-access-token).
This serves as the default token if the TICKscript doesn't specify an API access token.
`global`
Set to `true` to send all alerts to HipChat without needing to specify HipChat in TICKscripts.
`state-changes-only`
Set to `true` to only send an alert to HipChat if the alert state changes.
This setting only applies if the `global` setting is also set to `true`.
#### Sample configuration
```toml
[hipchat]
enabled = true
url = "https://my-subdomain.hipchat.com/v2/room"
room = "my-room"
token = "mytokentokentokentoken"
global = false
state-changes-only = false
```
#### TICKscript syntax
```js
|alert()
    .hipChat()
    .room('<HipChat-room>')
    .token('<HipChat-API-token>')
```
The `.room()` and `.token()` specifications are optional.
If they aren't set in the TICKscript, they default to the `room` and
`token` settings in the `[hipchat]` section of the `kapacitor.conf`.
> If `global` is set to `true` in the configuration file, there's no
> need to specify `.hipChat()` in the TICKscript.
> Kapacitor sends all alerts to HipChat by default.
`.room('<HipChat-room>')`
Sets the HipChat room.
`.token('<HipChat-API-token>')`
Sets the HipChat [API access token](#hipchat-api-access-token).
### Examples
#### Send alerts to the HipChat room set in the configuration file
_**Configuration file**_
```toml
[hipchat]
enabled = true
url = "https://testtest.hipchat.com/v2/room"
room = "my-alerts"
token = "tokentokentokentokentoken"
global = false
state-changes-only = true
```
_**TICKscript**_
```js
stream
    |from()
        .measurement('cpu')
    |alert()
        .crit(lambda: "usage_idle" < 97)
        .message('Hey, check your CPU')
        .hipChat()
```
The setup sends `Hey, check your CPU` to the **my-alerts** room associated with
the `testtest` subdomain.
#### Send alerts to the HipChat room set in the TICKscript
_**Configuration file**_
```toml
[hipchat]
enabled = true
url = "https://testtest.hipchat.com/v2/room"
room = "my-alerts"
token = "tokentokentokentokentoken"
global = false
state-changes-only = true
```
_**TICKscript**_
```js
stream
    |from()
        .measurement('cpu')
    |alert()
        .crit(lambda: "usage_idle" < 97)
        .message('Hey, check your CPU')
        .hipChat()
        .room('random')
```
The setup sends `Hey, check your CPU` to the **random** room associated with
the `testtest` subdomain.
Notice that `.room()` in the TICKscript overrides the `room` setting in the
configuration file.
## Telegram setup
[Telegram](https://telegram.org/) is a messaging app.
Configure Kapacitor to send alert messages to a Telegram bot.
### Requirements
To configure Kapacitor with Telegram, you need:
- A Telegram bot
- A Telegram API access token
- Your Telegram chat ID
#### Telegram bot
The following steps describe how to create a new Telegram bot.
1. Search for the **@BotFather** username in your Telegram application.
2. Click **Start** to begin a conversation with **@BotFather**.
3. Send `/newbot` to **@BotFather**.
**@BotFather** responds:
_Alright, a new bot. How are we going to call it? Please choose a name for your bot._
**@BotFather** will prompt you through the rest of the bot-creation process; feel
free to follow his directions or continue with our version of the steps below.
Both setups result in success!
4. Send your bot's name to **@BotFather**.
Your bot's name can be anything.
Note that this is not your bot's Telegram `@username`; you'll create the username
in step 5.
**@BotFather** responds:
_Good. Now let's choose a username for your bot. It must end in `bot`. Like this, for example: TetrisBot or tetris\_bot._
5. Send your bot's username to **@BotFather**.
Your bot's username must end in `bot`.
For example: `mushroomKap_bot`.
**@BotFather** responds:
_Done! Congratulations on your new bot. You will find it at t.me/<bot-username>. You can now add a description, about section and profile picture for your bot, see /help for a list of commands. By the way, when you've finished creating your cool bot, ping our Bot Support if you want a better username for it. Just make sure the bot is fully operational before you do this._
Use this token to access the HTTP API:
<API-access-token>
For a description of the Bot API, see this page: https://core.telegram.org/bots/api
6. Begin a conversation with your bot.
Click on the `t.me/<bot-username>` link in **@BotFather**'s response
and click **Start** at the bottom of your Telegram application.
Your newly created bot will appear in the chat list on the left side of the application.
#### Telegram API access token
The following section describes how to identify or create the API access token.
Telegram's **@BotFather** bot sent you an API access token when you created your bot.
See the **@BotFather** response in step 5 of the previous section for where to find your token.
If you can't find the API access token, create a new token with the steps below.
1. Send `/token` to **@BotFather**
2. Select the relevant bot at the bottom of your Telegram application.
**@BotFather** responds with a new API access token:
You can use this token to access HTTP API:
<API-access-token>
For a description of the Bot API, see this page: https://core.telegram.org/bots/api
#### Telegram chat ID
The following steps describe how to identify your chat ID.
1. Paste the following link in your browser.
Replace `<API-access-token>` with the API access token that you identified
or created in the previous section:
`https://api.telegram.org/bot<API-access-token>/getUpdates?offset=0`
2. Send a message to your bot in the Telegram application.
The message text can be anything; your chat history must include at least
one message to get your chat ID.
3. Refresh your browser.
4. Identify the numerical chat ID in the JSON provided in the browser.
In the formatted example below, the chat ID is `123456789`.
```json
{
    "ok": true,
    "result": [
        {
            "update_id": 101010101,
            "message": {
                "message_id": 2,
                "from": {
                    "id": 123456789,
                    "first_name": "Mushroom",
                    "last_name": "Kap"
                },
                "chat": {
                    "id": 123456789,
                    "first_name": "Mushroom",
                    "last_name": "Kap",
                    "type": "private"
                },
                "date": 1487183963,
                "text": "hi"
            }
        }
    ]
}
```
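If you'd rather not scan the JSON by eye, a few lines of Python can pull the chat ID out of the `getUpdates` response. The payload below is the sample from above; in practice you would fetch it from the API URL:

```python
import json

# Sample getUpdates response (the formatted example above).
response = '''
{
    "ok": true,
    "result": [
        {
            "update_id": 101010101,
            "message": {
                "message_id": 2,
                "from": {"id": 123456789, "first_name": "Mushroom", "last_name": "Kap"},
                "chat": {"id": 123456789, "first_name": "Mushroom",
                         "last_name": "Kap", "type": "private"},
                "date": 1487183963,
                "text": "hi"
            }
        }
    ]
}
'''

data = json.loads(response)
# Each update that carries a message holds the chat ID at
# result[i].message.chat.id.
chat_ids = {u["message"]["chat"]["id"]
            for u in data["result"] if "message" in u}
print(chat_ids)  # {123456789}
```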
### Configuration
In the `[telegram]` section of Kapacitor's configuration file set:
- `enabled` to `true`
- `token` to your [API access token](#telegram-api-access-token)
The default `url` setting (`https://api.telegram.org/bot`) requires no additional configuration.
The optional configuration settings are:
`chat_id`
Set to your Telegram [chat ID](#telegram-chat-id). This serves as the default chat ID if the TICKscript doesn't specify a chat ID.
`parse-mode`
Set to `Markdown` or `HTML` for Markdown-formatted or HTML-formatted alert messages.
The default `parse-mode` is `Markdown`.
`disable-web-page-preview`
Set to `true` to disable [link previews](https://telegram.org/blog/link-preview) in alert messages.
`disable-notification`
Set to `true` to disable notifications on iOS devices and disable sounds on Android devices.
When set to `true`, Android users continue to receive notifications.
`global`
Set to `true` to send all alerts to Telegram without needing to specify Telegram in TICKscripts.
`state-changes-only`
Set to `true` to only send an alert to Telegram if the alert state changes.
This setting only applies if the `global` setting is also set to `true`.
#### Sample configuration
```toml
[telegram]
enabled = true
url = "https://api.telegram.org/bot"
token = "abcdefghi:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
chat-id = "123456789"
parse-mode = "Markdown"
disable-web-page-preview = true
disable-notification = false
global = true
state-changes-only = true
```
#### TICKscript syntax
```js
|alert()
    .telegram()
    .chatId('<chat_id>')
    .disableNotification()
    .disableWebPagePreview()
    .parseMode(['Markdown' | 'HTML'])
```
The `.chatId()`, `.disableNotification()`, `.disableWebPagePreview()`, and `.parseMode()` specifications are optional.
If they aren't set in the TICKscript, they default to the `chat-id`, `disable-notification`,
`disable-web-page-preview`, and `parse-mode` settings in the `[telegram]` section of the configuration file.
Note that if `global` is set to `true` in the configuration file, there's no need to specify
`.telegram()` in the TICKscript; Kapacitor sends all alerts to Telegram by default.
`.chatId('<chat_id>')`
Sets the Telegram [chat ID](#telegram-chat-id).
`.disableNotification()`
Disables notifications on iOS devices and disables sounds on Android devices.
Android users continue to receive notifications.
`.disableWebPagePreview()`
Disables [link previews](https://telegram.org/blog/link-preview) in alert messages.
`.parseMode(['Markdown' | 'HTML'])`
Sets `Markdown` or `HTML` as the format for alert messages.
### Examples
#### Send alerts to the Telegram chat ID set in the configuration file
_**Configuration file**_
```toml
[telegram]
enabled = true
url = "https://api.telegram.org/bot"
token = "abcdefghi:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
chat-id = "123456789"
parse-mode = "Markdown"
disable-web-page-preview = false
disable-notification = false
global = false
state-changes-only = false
```
_**TICKscript**_
```js
stream
    |from()
        .measurement('cpu')
    |alert()
        .crit(lambda: "usage_idle" < 97)
        .message('Might want to check your CPU')
        .telegram()
```
The setup sends `Might want to check your CPU` to the Telegram bot associated
with the chat ID `123456789` and API access token `abcdefghi:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`.
#### Send alerts to the Telegram chat ID set in the TICKscript
_**Configuration file**_
```toml
[telegram]
enabled = true
url = "https://api.telegram.org/bot"
token = "abcdefghi:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
chat-id = ""
parse-mode = "Markdown"
disable-web-page-preview = false
disable-notification = false
global = false
state-changes-only = false
```
_**TICKscript**_
```js
stream
    |from()
        .measurement('cpu')
    |alert()
        .crit(lambda: "usage_idle" < 97)
        .message('Might want to check your CPU')
        .telegram()
        .chatId('123456789')
```
The setup sends `Might want to check your CPU` to the Telegram bot associated with the chat ID `123456789` and API access token `abcdefghi:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`.
---
title: Suppressing Kapacitor alerts based on hierarchy
description: Kapacitor's '.inhibit()' allows you to create hierarchical alerting architectures by suppressing alerts with matching tags in a specified alert category.
menu:
kapacitor_1_5:
name: Hierarchical alert suppression
identifier: hierarchical_alert_suppression
weight: 30
parent: guides
---
Kapacitor allows you to build out a robust monitoring and alerting solution with
multiple "levels" or "tiers" of alerts.
However, an issue arises when an event triggers both high-level and low-level alerts
and you end up getting multiple alerts from different contexts.
The [AlertNode's `.inhibit()`](/kapacitor/v1.5/nodes/alert_node/#inhibit) method
allows you to suppress other alerts when an alert is triggered.
For example, let's say you are monitoring a cluster of servers.
As part of your alerting architecture, you have host-level alerts such as CPU usage
alerts, RAM usage alerts, disk I/O, etc.
You also have cluster-level alerts that monitor network health, host uptime, etc.
If a CPU spike on a host in your cluster takes the machine offline, rather than
getting a host-level alert for the CPU spike _**and**_ a cluster-level alert for
the offline node, you'd get a single alert: the alert that the node is offline.
The cluster-level alert would suppress the host-level alert.
## Using the `.inhibit()` method to suppress alerts
The `.inhibit()` method uses alert categories and tags to inhibit or suppress other alerts.
```js
// ...
|alert()
    .inhibit('<category>', '<tags>')
```
`category`
The category for which this alert inhibits or suppresses alerts.
`tags`
A comma-delimited list of tags that must be matched in order for alerts to be
inhibited or suppressed.
### Example hierarchical alert suppression
The following TICKscripts represent two alerts in a layered alert architecture.
The first is a host specific CPU alert that triggers an alert to the `system_alerts`
category whenever idle CPU usage is less than 10%.
Streamed data points are grouped by the `host` tag, which identifies the host the
data point is coming from.
_**cpu\_alert.tick**_
```js
stream
    |from()
        .measurement('cpu')
        .groupBy('host')
    |alert()
        .category('system_alerts')
        .crit(lambda: "usage_idle" < 10.0)
```
The following TICKscript is a cluster-level alert that monitors the uptime of hosts in the cluster.
It uses the [`deadman()`](/kapacitor/v1.5/nodes/alert_node/#deadman) function to
create an alert when a host is unresponsive or offline.
The `.inhibit()` method in the deadman alert suppresses all alerts to the `system_alerts`
category that include a matching `host` tag, meaning they are from the same host.
_**host\_alert.tick**_
```js
stream
    |from()
        .measurement('uptime')
        .groupBy('host')
    |deadman(0.0, 1m)
        .inhibit('system_alerts', 'host')
```
With this alert architecture, a host may be unresponsive due to a CPU bottleneck,
but because the deadman alert inhibits system alerts from the same host, you won't
get alert notifications for both the deadman and the high CPU usage; just the
deadman alert for that specific host.
---
title: Calculating rates across joined series + backfill
aliases:
- kapacitor/v1.5/examples/join_backfill/
menu:
kapacitor_1_5:
name: Calculating rates across series
weight: 10
parent: guides
---
Collecting a set of time series data where each time series is counting a particular event is a common scenario.
Using Kapacitor, multiple time series in a set can be joined and used to calculate a combined value, which can then be stored as a new time series.
This guide shows how to use a prepared data generator in python to combine two generated
time series into a new calculated measurement, then
store that measurement back into InfluxDB using Kapacitor.
It uses as its example a hypothetical high-volume website for which two measurements
are taken:
* `errors` -- the number of page views that had an error.
* `views` -- the number of page views that had no error.
### The Data generator
Data for such a website can be primed and generated to InfluxDB using the Python
3 script rolled into [pages.zip](/downloads/pages.zip) ([sha256](/downloads/pages.zip.sha256)) created for this purpose.
It leverages the [InfluxDB-Python](https://github.com/influxdata/influxdb-python) library.
See that Github project for instructions on how to install the library in Python.
Once unzipped, this script can be used to create a database called `pages`, which
uses the default retention policy `autogen`. It can be used to create a backlog
of data and then to set the generator running, walking along randomly generated
`view` and `error` counts.
It can be started with a backlog of two days worth of random data as follows:
```
$ ./pages_db.py --silent true pnr --start 2d
Created database pages
priming and running
data primed
generator now running. CTRL+C to stop
..........................................
```
Priming two days worth of data can take about a minute.
### Joining with batch data
Having simple counts may not be sufficient for a site administrator. More
important would be to know the percent of page views that are resulting in error.
The process is to select both existing measurements, join them and calculate an
error percentage. The error percentage can then be stored in
InfluxDB as a new measurement.
The two measurements, `errors` and `views`, need to be queried.
```javascript
// Get errors batch data
var errors = batch
    |query('SELECT sum(value) FROM "pages"."autogen".errors')
        .period(1h)
        .every(1h)
        .groupBy(time(1m), *)
        .fill(0)

// Get views batch data
var views = batch
    |query('SELECT sum(value) FROM "pages"."autogen".views')
        .period(1h)
        .every(1h)
        .groupBy(time(1m), *)
        .fill(0)
```
The join process skips points that do not have a matching point in time from the other source.
As a result it is important to both `groupBy` and `fill` the data while joining batch data.
Grouping the data by time ensures that each source has data points at consistent time periods.
Filling the data ensures every point will have a match with a sane default.
In this example the `groupBy` method uses the wildcard `*` to group results by all tags.
This can be made more specific by declaring individual tags, and since the generated
demo data contains only one tag, `page`, the `groupBy` statement could be written
as follows: `.groupBy(time(1m), 'page')`.
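The effect of grouping by time and filling is easy to see outside Kapacitor. In the plain-Python sketch below (for intuition only, not TICKscript semantics), `views` is missing a point at `t=120`; without a fill the join would drop that minute entirely, while filling with 0 keeps it:

```python
# Per-minute sums keyed by timestamp; views has no point at t=120.
errors = {0: 2, 60: 0, 120: 1}
views = {0: 98, 60: 100}

# The union of timestamps plays the role of groupBy(time(1m));
# .get(t, 0) plays the role of fill(0).
timestamps = sorted(set(errors) | set(views))
joined = {t: (errors.get(t, 0), views.get(t, 0)) for t in timestamps}

error_percent = {
    t: 100.0 * e / (e + v) if (e + v) else 0.0
    for t, (e, v) in joined.items()
}
print(error_percent)  # {0: 2.0, 60: 0.0, 120: 100.0}
```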
With a batch source for each measurement, the two need to be joined like so.
```javascript
// Join errors and views
errors
    |join(views)
        .as('errors', 'views')
```
The data is joined by time, meaning that as pairs of batches arrive from each source
they are combined into a single batch. As a result the fields from each source
need to be renamed to properly namespace the fields. This is done via the
`.as('errors', 'views')` line. In this example each measurement has only one field
named `sum`. The joined fields are called `errors.sum` and `views.sum` respectively.
Now that the data is joined the percentage can be calculated.
Using the new names for the fields, the following expression can be used to calculate
the desired percentage.
```javascript
// Calculate percentage
|eval(lambda: ("errors.sum" / ("views.sum" + "errors.sum")) * 100.0)
    // Give the resulting field a name
    .as('value')
```
Finally, this data is stored back into InfluxDB.
```javascript
|influxDBOut()
    .database('pages')
    .measurement('error_percent')
```
Here is the complete TICKscript for the batch task:
```javascript
dbrp "pages"."autogen"

// Get errors batch data
var errors = batch
    |query('SELECT sum(value) FROM "pages"."autogen".errors')
        .period(1h)
        .every(1h)
        .groupBy(time(1m), *)
        .fill(0)

// Get views batch data
var views = batch
    |query('SELECT sum(value) FROM "pages"."autogen".views')
        .period(1h)
        .every(1h)
        .groupBy(time(1m), *)
        .fill(0)

// Join errors and views
errors
    |join(views)
        .as('errors', 'views')
    // Calculate percentage
    |eval(lambda: ("errors.sum" / ("views.sum" + "errors.sum")) * 100)
        // Give the resulting field a name
        .as('value')
    |influxDBOut()
        .database('pages')
        .measurement('error_percent')
```
### Backfill
Now for a fun little trick.
Using Kapacitor's record/replay actions, this TICKscript can be run on historical data.
First, save the above script as `error_percent.tick` and define it.
Then, create a recording for the past time frame we want to fill.
```bash
$ kapacitor define error_percent -tick error_percent.tick
$ kapacitor record batch -task error_percent -past 1d
```
Grab the recording ID and replay the historical data against the task.
Here specify the `-rec-time` flag to instruct Kapacitor to use the actual
time stored in the recording when processing the data instead of adjusting to the present time.
```bash
$ kapacitor replay -task error_percent -recording RECORDING_ID -rec-time
```
If the data set is too large to keep in one recording, define a specific range of time to record
and then replay each range individually.
```bash
rid=$(kapacitor record batch -task error_percent -start 2015-10-01 -stop 2015-10-02)
echo $rid
kapacitor replay -task error_percent -recording $rid -rec-time
kapacitor delete recordings $rid
```
Just loop through the above script for each time window and reconstruct all the historical data needed.
With that the `error_percent` for every minute will be backfilled for the historical data.
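That loop can be generated rather than typed by hand. The sketch below only prints the per-day record/replay/delete commands for review (the task name and dates are the ones used above); pipe the output to a shell once you're happy with it:

```python
from datetime import date, timedelta


def backfill_commands(task, start, stop):
    """Yield record/replay/delete shell commands, one window per day."""
    day = start
    while day < stop:
        nxt = day + timedelta(days=1)
        yield (
            f'rid=$(kapacitor record batch -task {task} '
            f'-start {day} -stop {nxt})\n'
            f'kapacitor replay -task {task} -recording $rid -rec-time\n'
            f'kapacitor delete recordings $rid'
        )
        day = nxt


for cmd in backfill_commands('error_percent',
                             date(2015, 10, 1), date(2015, 10, 3)):
    print(cmd)
```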
### Stream method
With the streaming case, something similar can be done. Note that the command
`kapacitor record stream` does not include a historical option like `-past`,
so backfilling using a _stream_ task directly in Kapacitor is not possible. If
backfilling is required, the command [`kapacitor record query`](#record-query-and-backfill-with-stream),
presented below, can be used instead.
Nevertheless, the same TICKscript semantics can be used with a _stream_ task
to calculate and store a new calculated value, such as `error_percent`, in real time.
The following is just such a TICKscript.
```javascript
dbrp "pages"."autogen"

// Get errors stream data
var errors = stream
    |from()
        .measurement('errors')
        .groupBy(*)
    |window()
        .period(1m)
        .every(1m)
    |sum('value')

// Get views stream data
var views = stream
    |from()
        .measurement('views')
        .groupBy(*)
    |window()
        .period(1m)
        .every(1m)
    |sum('value')

// Join errors and views
errors
    |join(views)
        .as('errors', 'views')
    // Calculate percentage
    |eval(lambda: "errors.sum" / ("views.sum" + "errors.sum") * 100.0)
        // Give the resulting field a name
        .as('value')
    |influxDBOut()
        .database('pages')
        .measurement('error_percent')
```
### Record Query and backfill with stream
To provide historical data to stream tasks that process multiple measurements,
use [multiple statements](/influxdb/latest/query_language/data_exploration/#multiple-statements)
when recording the data.
First use `record query` following the pattern of this generic command:
```
kapacitor record query -query $'select field1,field2,field3 from "database_name"."autogen"."one" where time > \'YYYY-mm-ddTHH:MM:SSZ\' and time < \'YYYY-mm-ddTHH:MM:SSZ\' GROUP BY *; select field1,field2,field3 from "database_name"."autogen"."two" where time > \'YYYY-mm-ddTHH:MM:SSZ\' and time < \'YYYY-mm-ddTHH:MM:SSZ\' GROUP BY *' -type stream
```
For example:
```bash
$ kapacitor record query -query $'select value from "pages"."autogen"."errors" where time > \'2018-05-30T12:00:00Z\' and time < \'2018-05-31T12:00:00Z\' GROUP BY *; select value from "pages"."autogen"."views" where time > \'2018-05-30T12:00:00Z\' and time < \'2018-12-21T12:00:00Z\' GROUP BY *' -type stream
578bf299-3566-4813-b07b-744da6ab081a
```
The returned recording ID can then be used in a Kapacitor `replay` command using
the recorded time.
```bash
$ kapacitor replay -task error_percent_s -recording 578bf299-3566-4813-b07b-744da6ab081a -rec-time
c623f73c-cf2a-4fce-be4c-9ab89f0c6045
```
---
title: Live leaderboard of game scores
description: Tutorial on using Kapacitor stream processing and Chronograf to build a leaderboard for gamers to be able to see player scores in realtime. Historical data is also available for post-game analysis.
aliases:
- kapacitor/v1.5/examples/live_leaderboard/
menu:
kapacitor_1_5:
name: Live leaderboard
identifier: live_leaderboard
weight: 10
parent: guides
---
**If you do not have a running Kapacitor instance check out the [getting started guide](/kapacitor/v1.5/introduction/getting-started/)
to get Kapacitor up and running on localhost.**
Today we are game developers.
We host several game servers, each running an instance of the game code with about a hundred players per game.
We need to build a leaderboard so spectators can see the player's scores in real time.
We would also like to have historical data on leaders in order to do post-game
analysis on who was leading and for how long.
We will use Kapacitor's stream processing to do the heavy lifting for us.
The game servers can send a UDP packet anytime a player's score changes
or at least every 10 seconds if the score hasn't changed.
### Setup
**All snippets below can be found [here](https://github.com/influxdata/kapacitor/tree/master/examples/scores)**
Our first order of business is to configure Kapacitor to receive the stream of scores.
In this case the scores update too often to store all of them in InfluxDB so we will send them directly to Kapacitor.
As with InfluxDB, you can configure a UDP listener.
Add this configuration section to the end of your Kapacitor configuration.
```
[[udp]]
enabled = true
bind-address = ":9100"
database = "game"
retention-policy = "autogen"
```
This configuration tells Kapacitor to listen on port `9100` for UDP packets in the line protocol format.
It will scope incoming data into the `game.autogen` database and retention policy.
Start Kapacitor running with that added to the configuration.
Here is a simple bash script to generate random score data so we can test it without
messing with the real game servers.
```bash
#!/bin/bash
# default options: can be overridden with corresponding arguments.
host=${1-localhost}
port=${2-9100}
games=${3-10}
players=${4-100}
games=$(seq $games)
players=$(seq $players)
# Spam score updates over UDP
while true
do
for game in $games
do
game="g$game"
for player in $players
do
player="p$player"
score=$(($RANDOM % 1000))
echo "scores,player=$player,game=$game value=$score" > /dev/udp/$host/$port
done
done
sleep 0.1
done
```
Place the above script into a file `scores.sh` and run it:
```bash
chmod +x ./scores.sh
./scores.sh
```
Now we are spamming Kapacitor with our fake score data.
We can just leave that running since Kapacitor will drop
the incoming data until it has a task that wants it.
### Defining the Kapacitor task
What does a leaderboard need to do?
1. Get the most recent score per player per game.
1. Calculate the top X player scores per game.
1. Publish the results.
1. Store the results.
To complete step one we need to buffer the incoming stream and return the most recent score update per player per game.
Our [TICKscript](/kapacitor/v1.5/tick/) will look like this:
```js
var topPlayerScores = stream
|from()
.measurement('scores')
// Get the most recent score for each player per game.
// Not likely that a player is playing two games but just in case.
.groupBy('game', 'player')
|window()
// keep a buffer of the last 11s of scores
// just in case a player score hasn't updated in a while
.period(11s)
// Emit the current score per player every second.
.every(1s)
// Align the window boundaries to be on the second.
.align()
|last('value')
```
Place this script in a file called `top_scores.tick`.
Now our `topPlayerScores` variable contains each player's most recent score.
Next, to calculate the top scores per game, we just need to group by game and compute the top scores.
Let's keep the top 15 scores per game.
Add these lines to the `top_scores.tick` file.
```js
// Calculate the top 15 scores per game
var topScores = topPlayerScores
|groupBy('game')
|top(15, 'last', 'player')
```
The `topScores` variable now contains the top 15 players' scores per game.
That's all we need to build our leaderboard.
Kapacitor can expose the scores over HTTP via the [HTTPOutNode](/kapacitor/v1.5/nodes/http_out_node/).
We will call our task `top_scores`; with the following addition the most recent scores will be available at
`http://localhost:9092/kapacitor/v1/tasks/top_scores/top_scores`.
```js
// Expose top scores over the HTTP API at the 'top_scores' endpoint.
// Now your app can just request the top scores from Kapacitor
// and always get the most recent result.
//
// http://localhost:9092/kapacitor/v1/tasks/top_scores/top_scores
topScores
|httpOut('top_scores')
```
Finally we want to store the top scores over time so we can do in-depth analysis to ensure the best game play.
But we do not want to store the scores every second as that is still too much data.
First we will sample the data and store scores only every 10 seconds.
Also let's do some basic analysis ahead of time since we already have a stream of all the data.
For now we will just do basic gap analysis where we will store the gap between the top player and the 15th player.
Add these lines to `top_scores.tick` to complete our task.
```js
// Sample the top scores and keep a score once every 10s
var topScoresSampled = topScores
|sample(10s)
// Store top fifteen player scores in InfluxDB.
topScoresSampled
|influxDBOut()
.database('game')
.measurement('top_scores')
// Calculate the max and min of the top scores.
var max = topScoresSampled
|max('top')
var min = topScoresSampled
|min('top')
// Join the max and min streams back together and calculate the gap.
max
|join(min)
.as('max', 'min')
// Calculate the difference between the max and min scores.
// Rename the max and min fields to more friendly names 'topFirst', 'topLast'.
|eval(lambda: "max.max" - "min.min", lambda: "max.max", lambda: "min.min")
.as('gap', 'topFirst', 'topLast')
// Store the fields: gap, topFirst and topLast in InfluxDB.
|influxDBOut()
.database('game')
.measurement('top_scores_gap')
```
Since we are writing data back to InfluxDB, create a database named `game` for our results.
```
curl -G 'http://localhost:8086/query?' --data-urlencode 'q=CREATE DATABASE game'
```
Here is the complete task TICKscript if you don't want to copy paste as much :)
```js
dbrp "game"."autogen"
// Define a result that contains the most recent score per player.
var topPlayerScores = stream
|from()
.measurement('scores')
// Get the most recent score for each player per game.
// Not likely that a player is playing two games but just in case.
.groupBy('game', 'player')
|window()
// keep a buffer of the last 11s of scores
// just in case a player score hasn't updated in a while
.period(11s)
// Emit the current score per player every second.
.every(1s)
// Align the window boundaries to be on the second.
.align()
|last('value')
// Calculate the top 15 scores per game
var topScores = topPlayerScores
|groupBy('game')
|top(15, 'last', 'player')
// Expose top scores over the HTTP API at the 'top_scores' endpoint.
// Now your app can just request the top scores from Kapacitor
// and always get the most recent result.
//
// http://localhost:9092/kapacitor/v1/tasks/top_scores/top_scores
topScores
|httpOut('top_scores')
// Sample the top scores and keep a score once every 10s
var topScoresSampled = topScores
|sample(10s)
// Store top fifteen player scores in InfluxDB.
topScoresSampled
|influxDBOut()
.database('game')
.measurement('top_scores')
// Calculate the max and min of the top scores.
var max = topScoresSampled
|max('top')
var min = topScoresSampled
|min('top')
// Join the max and min streams back together and calculate the gap.
max
|join(min)
.as('max', 'min')
// calculate the difference between the max and min scores.
|eval(lambda: "max.max" - "min.min", lambda: "max.max", lambda: "min.min")
.as('gap', 'topFirst', 'topLast')
// store the fields: gap, topFirst, and topLast in InfluxDB.
|influxDBOut()
.database('game')
.measurement('top_scores_gap')
```
Define and enable our task to see it in action:
```bash
kapacitor define top_scores -tick top_scores.tick
kapacitor enable top_scores
```
First let's check that the HTTP output is working.
```bash
curl 'http://localhost:9092/kapacitor/v1/tasks/top_scores/top_scores'
```
You should have a JSON result of the top 15 players and their scores per game.
Hit the endpoint several times to see that the scores are updating once a second.
Now, let's check InfluxDB to see our historical data.
```bash
curl \
-G 'http://localhost:8086/query?db=game' \
--data-urlencode 'q=SELECT * FROM top_scores WHERE time > now() - 5m GROUP BY game'
curl \
-G 'http://localhost:8086/query?db=game' \
--data-urlencode 'q=SELECT * FROM top_scores_gap WHERE time > now() - 5m GROUP BY game'
```
Great!
The hard work is done.
All that is left is to configure the game server to send score updates to Kapacitor and update the spectator dashboard to pull scores from Kapacitor.
@ -0,0 +1,181 @@
---
title: Load directory service
aliases:
- kapacitor/v1.5/examples/load_directory/
menu:
kapacitor_1_5:
name: Load directory service
identifier: load_directory
weight: 15
parent: guides
---
# File-based definitions of tasks, templates, and load handlers
The load directory service enables file-based definitions of Kapacitor tasks, templates, and topic handlers that are loaded on startup or when a SIGHUP signal is sent to the process.
## Configuration
The load directory service configuration is specified in the `[load]` section of the Kapacitor configuration file.
```
[load]
enabled = true
dir="/path/to/directory"
```
`dir` specifies the directory where the definition files are located.
The service will attempt to load the definitions from three subdirectories.
The `tasks` directory should contain task TICKscripts and the associated templated task definition files (either YAML or JSON).
The `templates` directory should contain templated TICKscripts.
The `handlers` directory will contain topic handler definitions in YAML or JSON.
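Putting the three subdirectories together, a load directory for the examples on this page would look roughly like this (file names are illustrative; for handlers the `id` comes from the file contents, so that file name is arbitrary):

```
/path/to/directory/
├── tasks/
│   ├── my_task.tick
│   └── my_templated_task.yaml
├── templates/
│   └── my_template.tick
└── handlers/
    └── cpu_slack.yaml
```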
## Tasks
Task files must be placed in the `tasks` subdirectory of the load service
directory. Task TICKscripts are specified based on the following scheme:
* `id` - the file name without the `.tick` extension
* `type` - determined by introspection of the task (stream or batch)
* `dbrp` - defined using the `dbrp` keyword followed by a specified database and retention policy
In the following example, the TICKscript will create a `stream` task named `my_task` for the dbrp `telegraf.autogen`.
```
// /path/to/directory/tasks/my_task.tick
dbrp "telegraf"."autogen"
stream
|from()
.measurement('cpu')
.groupBy(*)
|alert()
.warn(lambda: "usage_idle" < 20)
.crit(lambda: "usage_idle" < 10)
// Send alerts to the `cpu` topic
.topic('cpu')
```
## Task templates
Template files must be placed in the `templates` subdirectory of the load service directory.
Task templates are defined according to the following scheme:
* `id` - the file name without the `.tick` extension
* `type` - determined by introspection of the task (stream or batch)
* `dbrp` - defined using the `dbrp` keyword followed by a specified database and retention policy
The following TICKscript example will create a `stream` template named `my_template` for the dbrp `telegraf.autogen`.
```
// /path/to/directory/templates/my_template.tick
dbrp "telegraf"."autogen"
var measurement string
var where_filter = lambda: TRUE
var groups = [*]
var field string
var warn lambda
var crit lambda
var window = 5m
var slack_channel = '#alerts'
stream
|from()
.measurement(measurement)
.where(where_filter)
.groupBy(groups)
|window()
.period(window)
.every(window)
|mean(field)
|alert()
.warn(warn)
.crit(crit)
.slack()
.channel(slack_channel)
```
### Templated tasks
Templated task files must be placed in the `tasks` subdirectory of the load service directory.
Templated tasks are defined according to the following scheme:
* `id` - filename without the `yaml`, `yml`, or `json` extension
* `dbrps` - required if not specified in template
* `template-id` - required
* `vars` - list of template vars
In this example, the templated task YAML file creates a `stream` task, named `my_templated_task`, for the dbrp `telegraf.autogen`.
```yaml
# /path/to/directory/tasks/my_templated_task.yaml
dbrps:
- { db: "telegraf", rp: "autogen"}
template-id: my_template
vars:
measurement:
type: string
value: cpu
where_filter:
type: lambda
value: "\"cpu\" == 'cpu-total'"
groups:
type: list
value:
- type: string
value: host
- type: string
value: dc
field:
type: string
value : usage_idle
warn:
type: lambda
value: "\"mean\" < 30.0"
crit:
type: lambda
value: "\"mean\" < 10.0"
window:
type: duration
value : 1m
slack_channel:
type: string
value: "#alerts_testing"
```
The same task can also be created using JSON, as in this example:
```json
{
"dbrps": [{"db": "telegraf", "rp": "autogen"}],
"template-id": "my_template",
"vars": {
"measurement": {"type" : "string", "value" : "cpu" },
"where_filter": {"type": "lambda", "value": "\"cpu\" == 'cpu-total'"},
"groups": {"type": "list", "value": [{"type":"string", "value":"host"},{"type":"string", "value":"dc"}]},
"field": {"type" : "string", "value" : "usage_idle" },
"warn": {"type" : "lambda", "value" : "\"mean\" < 30.0" },
"crit": {"type" : "lambda", "value" : "\"mean\" < 10.0" },
"window": {"type" : "duration", "value" : "1m" },
"slack_channel": {"type" : "string", "value" : "#alerts_testing" }
}
}
```
## Topic handlers
Topic handler files must be placed in the `handlers` subdirectory of the load service directory.
The following example defines a handler that sends alerts from the `cpu` topic to Slack:
```
id: handler-id
topic: cpu
kind: slack
match: changed() == TRUE
options:
channel: '#alerts'
```
@ -0,0 +1,17 @@
---
title: Reference TICKscripts
aliases:
- kapacitor/v1.5/examples/reference_scripts/
menu:
kapacitor_1_5:
name: Reference TICKscripts
identifier: reference_scripts
weight: 20
parent: guides
---
The Kapacitor repository has a number of [example TICKscripts](https://github.com/influxdata/kapacitor/tree/master/examples/telegraf).
These scripts use common [Telegraf plugins](https://github.com/influxdata/telegraf/tree/master/plugins/inputs)
as the data source and show how to build common alerts.
Telegraf plugins with example scripts include "cpu", "disk", "mem", and
"netstat" metrics from the [`system` plugin](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/system).
@ -0,0 +1,207 @@
---
title: Handling Kapacitor alerts during scheduled downtime
description: This guide walks through building Kapacitor TICKscripts that gracefully handle scheduled downtime without triggering unnecessary alerts.
menu:
kapacitor_1_5:
name: Handling scheduled downtime
parent: guides
weight: 100
---
In many cases, infrastructure downtime is necessary to perform system maintenance.
This type of downtime is typically scheduled beforehand, but can trigger unnecessary
alerts if the affected hosts are monitored by Kapacitor.
This guide walks through creating TICKscripts that gracefully handle scheduled downtime
without triggering alerts.
## Sideload
Avoid unnecessary alerts during scheduled downtime by using the
[`sideload`](/kapacitor/v1.5/nodes/sideload_node) node to load information from
files in the filesystem and set fields and tags on data points which can then be used in alert logic.
The `sideload` node adds fields and tags to points based on hierarchical data
from various file-based sources.
Kapacitor searches the specified files for a given field or tag key.
If it finds the field or tag key in the loaded files, it uses the value in the files to
set the field or tag on data points.
If it doesn't find the field or tag key, it sets them to the default value defined
in the [`field` or `tag` properties](#field).
### Relevant sideload properties
The following properties of `sideload` are relevant to gracefully handling scheduled downtime:
#### source
`source` specifies a directory in which source files live.
#### order
`order` specifies both files that are loaded and searched and the order
in which they are loaded and searched.
_Filepaths are relative to the `source` directory.
Files should be either JSON or YAML._
#### field
`field` defines a field key that Kapacitor should search for and the default value
it should use if it doesn't find a matching field key in the loaded files.
#### tag
`tag` defines a tag key that Kapacitor should search for and the default value
it should use if it doesn't find a matching tag key in the loaded files.
## Setup
With the `sideload` function, you can create what is essentially a white- or
black-list of hosts to ignore during scheduled downtime.
For this example, assume that maintenance will happen on both individual hosts
and hostgroups, both of which are included as tags on each point in the data set.
_In most cases, this can be done simply by host, but to illustrate how the `order`
property works, we'll use both host and hostgroup._
### Sideload source files
On the host on which Kapacitor is running, create a source directory that will
house the JSON or YAML files.
For example, `/usr/kapacitor/scheduled-maintenance`
_(It can be whatever you want as long as the `kapacitord` process can access it.)_
Inside this directory, create a file for each host or host group that will be
offline during the scheduled downtime.
For the sake of organization, create `hosts` and `hostgroups` directories
and store the YAML or JSON files in each.
The names of each file should match a value of a `host` or `hostgroup` tag
for hosts that will be taken offline.
For this example, assume the **host1**, **host2**, **host3** hosts and the
**cluster7** and **cluster8** hostgroups will be taken offline.
Create a file for each of these hosts and host groups in their respective directories:
```
/usr/
└── kapacitor/
└── scheduled-maintenance/
├── hosts/
│ ├── host1.yml
│ ├── host2.yml
│ └── host3.yml
└── hostgroups/
├── cluster7.yml
└── cluster8.yml
```
> You only need to create files for hosts or hostgroups that will be offline.
The contents of the file should contain one or more key-value pairs.
The key is the field or tag key that will be set on each matching point.
The value is the field or tag value that will be set on matching points.
For this example, set the `maintenance` field to `true`.
Each of the source files will look like the following:
###### host1.yml
```yaml
maintenance: true
```
## TICKscript
Create a TICKscript that uses the `sideload` node to load in the maintenance state wherever it is needed.
### Define the sideload source
The `source` should use the `file://` URL protocol to reference the absolute path
of the directory containing the files that should be loaded.
```js
|sideload()
.source('file:///usr/kapacitor/scheduled-maintenance')
```
### Define the sideload order
The `order` property has access to template data which should be used to populate
the filepaths for loaded files (relative to the [`source`](#define-the-sideload-source)).
This allows Kapacitor to dynamically search for files based on the tag name used in the template.
In this case, use the `host` and `hostgroup` tags.
Kapacitor will iterate through the different values for each tag and search for
matching files in the source directory.
```js
|sideload()
.source('file:///usr/kapacitor/scheduled-maintenance')
.order('hosts/{{.host}}.yml' , 'hostgroups/{{.hostgroup}}.yml')
```
The order of file path templates in the `order` property define
the precedence in which file paths are checked.
Those listed first, from left to right, are checked first.
### Define the sideload field
The `field` property requires two arguments:
```js
|sideload()
// ...
.field('<key>', <default-value>)
```
###### key
The key that Kapacitor looks for in the source files and the field for which it
defines a value on each data point.
###### default-value
The default value used if no matching file and key are found in the source files.
In this example, use the `maintenance` field and set the default value to `FALSE`.
This assumes hosts are not undergoing maintenance by default.
```js
|sideload()
.source('file:///usr/kapacitor/scheduled-maintenance')
.order('hosts/{{.host}}.yml' , 'hostgroups/{{.hostgroup}}.yml')
.field('maintenance', FALSE)
```
> You can use the `tag` property instead of `field` if you prefer to set a tag
> on each data point rather than a field.
### Update alert logic
The `sideload` node will now set the `maintenance` field on every data point processed by the TICKscript.
For those that have `host` or `hostgroup` tags matching the filenames of the source files,
the `maintenance` field will be set to the value defined in the source file.
Update the alert logic in your TICKscript to ensure `maintenance` is **not** `true`
before sending an alert:
```js
stream
// ...
|alert()
.crit(lambda: !"maintenance" AND "usage_idle" < 30)
.warn(lambda: !"maintenance" AND "usage_idle" < 50)
.info(lambda: !"maintenance" AND "usage_idle" < 70)
```
### Full TICKscript example
```js
stream
|from()
.measurement('cpu')
.groupBy(*)
// Use sideload to maintain the host maintenance state.
// By default we assume a host is not undergoing maintenance.
|sideload()
.source('file:///usr/kapacitor/scheduled-maintenance')
.order('hosts/{{.host}}.yml' , 'hostgroups/{{.hostgroup}}.yml')
.field('maintenance', FALSE)
|alert()
// Add the `!"maintenance"` condition to the alert.
.crit(lambda: !"maintenance" AND "usage_idle" < 30)
.warn(lambda: !"maintenance" AND "usage_idle" < 50)
.info(lambda: !"maintenance" AND "usage_idle" < 70)
```
## Prepare for scheduled downtime
[Define a new Kapacitor task](/kapacitor/v1.5/working/cli_client/#tasks-and-task-templates) using your updated TICKscript.
As your scheduled downtime begins, update the `maintenance` value in the appropriate
host and host group source files and reload sideload to avoid triggering alerts
for those specific hosts and host groups.
@ -0,0 +1,655 @@
---
title: Writing socket-based user-defined functions (UDFs)
aliases:
- kapacitor/v1.5/examples/socket_udf/
menu:
kapacitor_1_5:
name: Writing socket-based UDFs
identifier: socket_udf
weight: 40
parent: guides
---
In [another example](/kapacitor/v1.5/guides/anomaly_detection/) we saw how to write a process-based UDF for custom anomaly detection workloads.
In this example we are going to learn how to write a simple socket-based UDF.
## What is a user-defined function (UDF)?
A UDF is a user defined function that can communicate with Kapacitor to process data.
Kapacitor will send it data and the UDF can respond with new or modified data.
A UDF can be written in any language that has [protocol buffer](https://developers.google.com/protocol-buffers/) support.
## What is the difference between a socket UDF and a process UDF?
* A process UDF is a child process of Kapacitor that communicates with Kapacitor over STDIN/STDOUT and is completely managed by Kapacitor.
* A socket UDF is a process external to Kapacitor that communicates over a configured Unix domain socket. The process itself is not managed by Kapacitor.
Using a process UDF can be simpler than a socket UDF because Kapacitor spawns the process and manages everything for you.
On the other hand, you may want more control over the UDF process itself and prefer to expose only a socket to Kapacitor.
One common use case is running Kapacitor in a Docker container and the UDF in another container that exposes the socket via a Docker volume.
In both cases the protocol is the same; the only difference is the transport mechanism.
Also note that since multiple Kapacitor tasks can use the same UDF, a process-based UDF spawns a new child process for each use of the UDF.
In contrast, a socket-based UDF gets a new connection to the socket for each use of the UDF.
If you have many uses of the same UDF, a socket UDF may be preferable to keep the number of running processes low.
## Writing a UDF
A UDF communicates with Kapacitor via a protocol buffer request/response system.
We provide implementations of that communication layer in both Go and Python.
Since the other example used Python we will use the Go version here.
Our example is going to implement a `mirror` UDF which simply reflects all data it receives back to the Kapacitor server.
This example is actually part of the test suite and a Python and Go implementation can be found [here](https://github.com/influxdata/kapacitor/tree/master/udf/agent/examples/mirror).
### Lifecycle
Before we write any code, let's look at the lifecycle of a socket UDF:
1. The UDF process is started, independently from Kapacitor.
2. The process listens on a unix domain socket.
3. Kapacitor connects to the socket and queries basic information about the UDFs options.
4. A Kapacitor task is enabled that uses the UDF and Kapacitor makes a new connection to the socket.
5. The task reads and writes data over the socket connection.
6. If the task is stopped for any reason the socket connection is closed.
### The Main method
We need to write a program that starts up and listens on a socket.
The following code is a main function that listens on a socket at
a default path, or on a custom path specified as the `-socket` flag.
```go
package main
import (
"flag"
"log"
"net"
)
var socketPath = flag.String("socket", "/tmp/mirror.sock", "Where to create the unix socket")
func main() {
flag.Parse()
// Create unix socket
addr, err := net.ResolveUnixAddr("unix", *socketPath)
if err != nil {
log.Fatal(err)
}
l, err := net.ListenUnix("unix", addr)
if err != nil {
log.Fatal(err)
}
// More to come here...
}
```
Place the above code in a scratch directory called `main.go`.
The above code can be run via `go run main.go`, but at this point it will exit immediately after listening on the socket.
### The Agent
As mentioned earlier, Kapacitor provides an implementation of the communication layer for UDFs called the `agent`.
Our code need only implement an interface in order to take advantage of the `agent` logic.
The interface we need to implement is as follows:
```go
// The Agent calls the appropriate methods on the Handler as it receives requests over a socket.
//
// Returning an error from any method will cause the Agent to stop and an ErrorResponse to be sent.
// Some *Response objects (like SnapshotResponse) allow for returning their own error within the object itself.
// These types of errors will not stop the Agent and Kapacitor will deal with them appropriately.
//
// The Handler is called from a single goroutine, meaning methods will not be called concurrently.
//
// To write Points/Batches back to the Agent/Kapacitor use the Agent.Responses channel.
type Handler interface {
// Return the InfoResponse. Describing the properties of this Handler
Info() (*agent.InfoResponse, error)
// Initialize the Handler with the provided options.
Init(*agent.InitRequest) (*agent.InitResponse, error)
// Create a snapshot of the running state of the handler.
Snapshot() (*agent.SnapshotResponse, error)
// Restore a previous snapshot.
Restore(*agent.RestoreRequest) (*agent.RestoreResponse, error)
// A batch has begun.
BeginBatch(*agent.BeginBatch) error
// A point has arrived.
Point(*agent.Point) error
// The batch is complete.
EndBatch(*agent.EndBatch) error
// Gracefully stop the Handler.
// No other methods will be called.
Stop()
}
```
### The Handler
Let's define our own type so we can start implementing the `Handler` interface.
Update the `main.go` file as follows:
```go
package main
import (
"flag"
"log"
"net"
"github.com/influxdata/kapacitor/udf/agent"
)
// Mirrors all points it receives back to Kapacitor
type mirrorHandler struct {
// We need a reference to the agent so we can write data
// back to Kapacitor.
agent *agent.Agent
}
func newMirrorHandler(agent *agent.Agent) *mirrorHandler {
return &mirrorHandler{agent: agent}
}
var socketPath = flag.String("socket", "/tmp/mirror.sock", "Where to create the unix socket")
func main() {
flag.Parse()
// Create unix socket
addr, err := net.ResolveUnixAddr("unix", *socketPath)
if err != nil {
log.Fatal(err)
}
l, err := net.ListenUnix("unix", addr)
if err != nil {
log.Fatal(err)
}
// More to come here...
}
```
Now let's add in each of the methods needed to initialize the UDF.
These next methods implement the behavior described in Step 3 of the UDF Lifecycle above,
where Kapacitor connects to the socket in order to query basic information about the UDF.
Add these methods to the `main.go` file:
```go
// Return the InfoResponse. Describing the properties of this UDF agent.
func (*mirrorHandler) Info() (*agent.InfoResponse, error) {
info := &agent.InfoResponse{
// We want a stream edge
Wants: agent.EdgeType_STREAM,
// We provide a stream edge
Provides: agent.EdgeType_STREAM,
// We expect no options.
Options: map[string]*agent.OptionInfo{},
}
return info, nil
}
// Initialize the handler based on the provided options.
func (*mirrorHandler) Init(r *agent.InitRequest) (*agent.InitResponse, error) {
// Since we expected no options this method is trivial
// and we return success.
init := &agent.InitResponse{
Success: true,
Error: "",
}
return init, nil
}
```
For now, our simple mirroring UDF doesn't need any options, so these methods are trivial.
At the end of this example we will modify the code to accept a custom option.
Now that Kapacitor knows which edge types and options our UDF uses, we need to implement the methods
for handling data.
Add this method to the `main.go` file which sends back every point it receives to Kapacitor via the agent:
```go
func (h *mirrorHandler) Point(p *agent.Point) error {
// Send back the point we just received
h.agent.Responses <- &agent.Response{
Message: &agent.Response_Point{
Point: p,
},
}
return nil
}
```
Notice that the `agent` has a channel for responses.
This is because your UDF can send data to Kapacitor at any time, so a response does not need to be tied to receiving a point.
As a result, we need to close the channel to let the `agent` know
that we will not be sending any more data, which can be done via the `Stop` method.
Once the `agent` calls `Stop` on the `handler`, no other methods will be called and the `agent` won't stop until
the channel is closed.
This gives the UDF the chance to flush out any remaining data before it is shut down:
```go
// Stop the handler gracefully.
func (h *mirrorHandler) Stop() {
// Close the channel since we won't be sending any more data to Kapacitor
close(h.agent.Responses)
}
```
Even though we have implemented the majority of the `Handler` interface, a few methods are still missing.
Specifically, the methods around batching and snapshot/restore are missing, but since we don't need them, we will just give them trivial implementations:
```go
// Create a snapshot of the running state of the process.
func (*mirrorHandler) Snapshot() (*agent.SnapshotResponse, error) {
return &agent.SnapshotResponse{}, nil
}
// Restore a previous snapshot.
func (*mirrorHandler) Restore(req *agent.RestoreRequest) (*agent.RestoreResponse, error) {
return &agent.RestoreResponse{
Success: true,
}, nil
}
// Start working with the next batch
func (*mirrorHandler) BeginBatch(begin *agent.BeginBatch) error {
return errors.New("batching not supported")
}
func (*mirrorHandler) EndBatch(end *agent.EndBatch) error {
return nil
}
```
### The Server
At this point we have a complete implementation of the `Handler` interface.
In step #4 of the Lifecycle above, Kapacitor makes a new connection to the UDF for each use in a task. Since it's possible that our UDF process can handle multiple connections simultaneously, we need a mechanism for creating a new `agent` and `handler` per connection.
A `server` is provided for this purpose, which expects an implementation of the `Accepter` interface:
```go
type Accepter interface {
// Accept new connections from the listener and handle them accordingly.
// The typical action is to create a new Agent with the connection as both its in and out objects.
Accept(net.Conn)
}
```
Here is a simple `accepter` that creates a new `agent` and `mirrorHandler`
for each new connection. Add this to the `main.go` file:
```go
type accepter struct {
count int64
}
// Create a new agent/handler for each new connection.
// Count and log each new connection and termination.
func (acc *accepter) Accept(conn net.Conn) {
count := acc.count
acc.count++
a := agent.New(conn, conn)
h := newMirrorHandler(a)
a.Handler = h
log.Println("Starting agent for connection", count)
a.Start()
go func() {
err := a.Wait()
if err != nil {
log.Fatal(err)
}
log.Printf("Agent for connection %d finished", count)
}()
}
```
Now with all the pieces in place, we can update our `main` function to
start up the `server`. Replace the previously provided `main` function with:
```go
func main() {
flag.Parse()
// Create unix socket
addr, err := net.ResolveUnixAddr("unix", *socketPath)
if err != nil {
log.Fatal(err)
}
l, err := net.ListenUnix("unix", addr)
if err != nil {
log.Fatal(err)
}
// Create server that listens on the socket
s := agent.NewServer(l, &accepter{})
// Setup signal handler to stop Server on various signals
s.StopOnSignals(os.Interrupt, syscall.SIGTERM)
log.Println("Server listening on", addr.String())
err = s.Serve()
if err != nil {
log.Fatal(err)
}
log.Println("Server stopped")
}
```
## Start the UDF
At this point we are ready to start the UDF.
Here is the complete `main.go` file for reference:
```go
package main
import (
"errors"
"flag"
"log"
"net"
"os"
"syscall"
"github.com/influxdata/kapacitor/udf/agent"
)
// Mirrors all points it receives back to Kapacitor
type mirrorHandler struct {
agent *agent.Agent
}
func newMirrorHandler(agent *agent.Agent) *mirrorHandler {
return &mirrorHandler{agent: agent}
}
// Return the InfoResponse, describing the properties of this UDF agent.
func (*mirrorHandler) Info() (*agent.InfoResponse, error) {
info := &agent.InfoResponse{
Wants: agent.EdgeType_STREAM,
Provides: agent.EdgeType_STREAM,
Options: map[string]*agent.OptionInfo{},
}
return info, nil
}
// Initialize the handler based on the provided options.
func (*mirrorHandler) Init(r *agent.InitRequest) (*agent.InitResponse, error) {
init := &agent.InitResponse{
Success: true,
Error: "",
}
return init, nil
}
// Create a snapshot of the running state of the process.
func (*mirrorHandler) Snapshot() (*agent.SnapshotResponse, error) {
return &agent.SnapshotResponse{}, nil
}
// Restore a previous snapshot.
func (*mirrorHandler) Restore(req *agent.RestoreRequest) (*agent.RestoreResponse, error) {
return &agent.RestoreResponse{
Success: true,
}, nil
}
// Batching is not supported, so BeginBatch returns an error and EndBatch is a no-op.
func (*mirrorHandler) BeginBatch(begin *agent.BeginBatch) error {
return errors.New("batching not supported")
}
func (h *mirrorHandler) Point(p *agent.Point) error {
// Send back the point we just received
h.agent.Responses <- &agent.Response{
Message: &agent.Response_Point{
Point: p,
},
}
return nil
}
func (*mirrorHandler) EndBatch(end *agent.EndBatch) error {
return nil
}
// Stop the handler gracefully.
func (h *mirrorHandler) Stop() {
close(h.agent.Responses)
}
type accepter struct {
count int64
}
// Create a new agent/handler for each new connection.
// Count and log each new connection and termination.
func (acc *accepter) Accept(conn net.Conn) {
count := acc.count
acc.count++
a := agent.New(conn, conn)
h := newMirrorHandler(a)
a.Handler = h
log.Println("Starting agent for connection", count)
a.Start()
go func() {
err := a.Wait()
if err != nil {
log.Fatal(err)
}
log.Printf("Agent for connection %d finished", count)
}()
}
var socketPath = flag.String("socket", "/tmp/mirror.sock", "Where to create the unix socket")
func main() {
flag.Parse()
// Create unix socket
addr, err := net.ResolveUnixAddr("unix", *socketPath)
if err != nil {
log.Fatal(err)
}
l, err := net.ListenUnix("unix", addr)
if err != nil {
log.Fatal(err)
}
// Create server that listens on the socket
s := agent.NewServer(l, &accepter{})
// Setup signal handler to stop Server on various signals
s.StopOnSignals(os.Interrupt, syscall.SIGTERM)
log.Println("Server listening on", addr.String())
err = s.Serve()
if err != nil {
log.Fatal(err)
}
log.Println("Server stopped")
}
```
Run `go run main.go` to start the UDF.
If you get an error about the socket being in use,
just delete the socket file and try running the UDF again.
## Configure Kapacitor to Talk to the UDF
Now that our UDF is ready, we need to tell Kapacitor
where our UDF socket is, and give it a name so that we can use it.
Add this to your Kapacitor configuration file:
```
[udf]
[udf.functions]
[udf.functions.mirror]
socket = "/tmp/mirror.sock"
timeout = "10s"
```
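Kapacitor can also spawn the UDF itself as a child process instead of connecting to an existing socket. In that mode the same function is configured with `prog` and `args`; the binary path below is a placeholder:

```
[udf]
  [udf.functions]
    [udf.functions.mirror]
      prog = "/path/to/mirror/binary"
      args = []
      timeout = "10s"
```

Note that a process-based UDF communicates with Kapacitor over STDIN/STDOUT rather than a socket, so the agent in the UDF would need to be wired to those streams instead of a socket listener.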
## Start Kapacitor
Start up Kapacitor and you should see it connect to your UDF in both the Kapacitor logs and the UDF process logs.
## Try it out
Take an existing task and add `@mirror()` at any point in the TICKscript pipeline to see it in action.
Here is an example TICKscript, which will need to be saved to a file:
```js
dbrp "telegraf"."autogen"
stream
|from()
.measurement('cpu')
@mirror()
|alert()
.crit(lambda: "usage_idle" < 30)
```
Define the above alert from your terminal like so:
```sh
kapacitor define mirror_udf_example -tick path/to/above/script.tick
```
Start the task:
```sh
kapacitor enable mirror_udf_example
```
Check the status of the task:
```sh
kapacitor show mirror_udf_example
```
## Adding a Custom Field
Now let's change the UDF to add a field to the data.
We can use the `Info/Init` methods to define and consume an option on the UDF, so let's specify the name of the field to add.
Update the `mirrorHandler` type and the methods `Info` and `Init` as follows:
```go
// Mirrors all points it receives back to Kapacitor
type mirrorHandler struct {
agent *agent.Agent
name string
value float64
}
// Return the InfoResponse, describing the properties of this UDF agent.
func (*mirrorHandler) Info() (*agent.InfoResponse, error) {
info := &agent.InfoResponse{
Wants: agent.EdgeType_STREAM,
Provides: agent.EdgeType_STREAM,
Options: map[string]*agent.OptionInfo{
"field": {ValueTypes: []agent.ValueType{
agent.ValueType_STRING,
agent.ValueType_DOUBLE,
}},
},
}
return info, nil
}
// Initialize the handler based on the provided options.
func (h *mirrorHandler) Init(r *agent.InitRequest) (*agent.InitResponse, error) {
init := &agent.InitResponse{
Success: true,
Error: "",
}
for _, opt := range r.Options {
switch opt.Name {
case "field":
h.name = opt.Values[0].Value.(*agent.OptionValue_StringValue).StringValue
h.value = opt.Values[1].Value.(*agent.OptionValue_DoubleValue).DoubleValue
}
}
if h.name == "" {
init.Success = false
init.Error = "must supply field"
}
return init, nil
}
```
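The option-parsing pattern in `Init` can be illustrated with a small self-contained sketch. `optionValue` and `parseField` below are simplified stand-ins for the generated `agent` types, not the real API, and they add the length and type checks the guide's version omits for brevity:

```go
package main

import (
	"errors"
	"fmt"
)

// optionValue is a simplified stand-in for the generated agent.OptionValue
// type: it can hold either a string or a double.
type optionValue struct {
	str *string
	dbl *float64
}

// parseField mirrors the Init logic above: the first value is the
// field name (string), the second is the field value (double).
func parseField(values []optionValue) (string, float64, error) {
	if len(values) != 2 || values[0].str == nil || values[1].dbl == nil {
		return "", 0, errors.New("field option requires a string name and a double value")
	}
	if *values[0].str == "" {
		return "", 0, errors.New("must supply field")
	}
	return *values[0].str, *values[1].dbl, nil
}

func main() {
	name, value := "mycustom_field", 42.0
	n, v, err := parseField([]optionValue{{str: &name}, {dbl: &value}})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s=%v\n", n, v)
}
```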
Now we can set the field with its name and value on the points.
Update the `Point` method:
```go
func (h *mirrorHandler) Point(p *agent.Point) error {
// Send back the point we just received
if p.FieldsDouble == nil {
p.FieldsDouble = make(map[string]float64)
}
p.FieldsDouble[h.name] = h.value
h.agent.Responses <- &agent.Response{
Message: &agent.Response_Point{
Point: p,
},
}
return nil
}
```
Restart the UDF process and try it out again.
Specify which field name and value to use with the `.field(name, value)` method.
You can add a `|log()` after the `mirror` UDF to see that the new field has indeed been created.
```js
dbrp "telegraf"."autogen"
stream
|from()
.measurement('cpu')
@mirror()
.field('mycustom_field', 42.0)
|log()
|alert()
.crit(lambda: "usage_idle" < 30)
```
## Summary
At this point, you should be able to write custom UDFs using either the socket or process-based methods.
UDFs have a wide range of uses, from custom downsampling logic as part of a continuous query,
to custom anomaly detection algorithms, to simply a system to "massage" your data a bit.
### Next Steps
If you want to learn more, here are a few places to start:
* Modify the mirror UDF, to function like the [DefaultNode](/kapacitor/v1.5/nodes/default_node/).
Instead of always overwriting a field, only set it if the field is absent.
Also add support for setting tags as well as fields.
* Change the mirror UDF to work on batches instead of streams.
This requires changing the edge type in the `Info` method as well as implementing the `BeginBatch` and `EndBatch` methods.
* Take a look at the other [examples](https://github.com/influxdata/kapacitor/tree/master/udf/agent/examples) and modify one to do something similar to one of your existing requirements.

---
title: Triggering alerts by comparing two measurements
description: Kapacitor allows you to create alerts triggered by comparisons between two or more measurements. This guide walks through how to join the measurements, trigger alerts, and create visualizations for the data comparison.
menu:
kapacitor_1_5:
name: Alerts based on two measurements
identifier: two-measurement-alert
weight: 20
parent: guides
---
Kapacitor allows you to create alerts based on two or more measurements.
In this guide, we are going to compare two measurements, `m1` and `m2`, and create
an alert whenever the two measurements are different.
As an added bonus, we'll also include a query that can be used to graph the percentage
difference between the two measurements.
## Comparing measurements and creating an alert
The following [TICKscript](/kapacitor/latest/tick/) streams the `m1` and `m2` measurements,
joins them, compares them, and triggers an alert if the two measurements are different.
```js
var window_size = 1m
// Stream m1
var m1 = stream
|from()
.measurement('m1')
|window()
.period(window_size)
.every(window_size)
.align()
|count('value')
.as('value')
// Stream m2
var m2 = stream
|from()
.measurement('m2')
|window()
.period(window_size)
.every(window_size)
.align()
|count('value')
.as('value')
// Join m1 and m2
var data = m1
|join(m2)
.as('m1', 'm2')
// Compare the joined stream and alert when m1 and m2 values are different
data
|alert()
.crit(lambda: "m1.value" != "m2.value")
.message('values were not equal m1 value is {{ index .Fields "m1.value" }} m2 value is {{ index .Fields "m2.value" }}')
```
## Graphing the percentage difference between the measurements
Use the `data` stream defined in the TICKscript above to calculate the difference
between `m1` and `m2`, transform it into a float, divide that difference by the
actual values of `m1` and `m2`, then multiply them by 100.
This will give you the percentage difference for each.
Store the difference as new fields in the `diffs` measurement:
```js
data
// Calculate the difference between m1 and m2
|eval(lambda: "m1.value" - "m2.value")
.as('value_diff')
.keep()
// Calculate the % difference of m1 and m2
|eval(lambda: (float("value_diff") / float("m1.value")) * 100.0, lambda: (float("value_diff") / float("m2.value")) * 100.0)
.as('diff_percentage_m1', 'diff_percentage_m2')
// Store the calculated differences in the 'diffs' measurement
|influxDBOut()
.measurement('diffs')
.database('mydb')
.create()
```
This can be used to create visualizations similar to:
<img src='/img/kapacitor/comparing-two-measurements.png' alt='Graphing the percentage difference between two measurements' style='width: 100%; max-width: 800px;'>
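The arithmetic performed by the two `eval` nodes can be sketched in plain Go; `percentDiffs` and the sample counts below are illustrative and not part of Kapacitor:

```go
package main

import "fmt"

// percentDiffs mirrors the two eval nodes: the raw difference between
// the counts, expressed as a percentage of each measurement's value.
func percentDiffs(m1, m2 float64) (pctM1, pctM2 float64) {
	valueDiff := m1 - m2
	return valueDiff / m1 * 100.0, valueDiff / m2 * 100.0
}

func main() {
	// Assume m1 counted 50 points in the window and m2 counted 40,
	// so the difference of 10 is 20% of m1 and 25% of m2.
	d1, d2 := percentDiffs(50, 40)
	fmt.Printf("diff_percentage_m1=%.1f diff_percentage_m2=%.1f\n", d1, d2)
}
```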
## The full TICKscript
Below is the entire, uncommented TICKscript:
```js
var window_size = 1m
var m1 = stream
|from()
.measurement('m1')
|window()
.period(window_size)
.every(window_size)
.align()
|count('value')
.as('value')
var m2 = stream
|from()
.measurement('m2')
|window()
.period(window_size)
.every(window_size)
.align()
|count('value')
.as('value')
var data = m1
|join(m2)
.as('m1', 'm2')
data
|alert()
.crit(lambda: "m1.value" != "m2.value")
.message('values were not equal m1 value is {{ index .Fields "m1.value" }} m2 value is {{ index .Fields "m2.value" }}')
data
|eval(lambda: "m1.value" - "m2.value")
.as('value_diff')
.keep()
|eval(lambda: (float("value_diff") / float("m1.value")) * 100.0, lambda: (float("value_diff") / float("m2.value")) * 100.0)
.as('diff_percentage_m1', 'diff_percentage_m2')
|influxDBOut()
.measurement('diffs')
.database('mydb')
.create()
```

---
title: Introducing Kapacitor
aliases:
- /kapacitor/v1.5/introduction/downloading
menu:
kapacitor_1_5:
name: Introduction
weight: 10
---
To get up and running with Kapacitor, complete the following tasks:
## Download Kapacitor
For information about downloading Kapacitor, visit the [InfluxData downloads page](https://portal.influxdata.com/downloads).
{{< children hlevel="h2">}}

---
title: Getting started with Kapacitor
weight: 20
menu:
kapacitor_1_5:
parent: Introduction
---
Use Kapacitor to import (stream or batch) time series data, and then transform, analyze, and act on the data. To get started using Kapacitor, use Telegraf to collect system metrics on your local machine and store them in InfluxDB. Then, use Kapacitor to process your system data.
- [Overview](#overview)
- [Start InfluxDB and collect Telegraf data](#start-influxdb-and-collect-telegraf-data)
- [Start Kapacitor](#start-kapacitor)
- Kapacitor tasks
- [Execute a task](#execute-a-task)
- [Trigger an alert from stream data](#trigger-alerts-from-stream-data)
- [Example alert on CPU usage](#example-alert-on-cpu-usage)
- [Gotcha - single versus double quotes](#gotcha-single-versus-double-quotes)
- [Extending TICKscripts](#extending-tickscripts)
- [A real world example](#a-real-world-example)
- [Trigger an alert from batch data](#trigger-alerts-from-batch-data)
- [Load tasks](#load-tasks-with-kapacitor)
## Overview
Kapacitor tasks define work to do on a set of data using [TICKscript](/kapacitor/v1.5/tick/) syntax. Kapacitor tasks include:
- `stream` tasks. A stream task replicates data written to InfluxDB in Kapacitor. This offloads query overhead from InfluxDB but requires Kapacitor to store the data on disk.
- `batch` tasks. A batch task queries and processes data for a specified interval.
To get started, do the following:
1. If you haven't already, [download and install the InfluxData TICK stack (OSS)](/platform/install-and-deploy/install/oss-install).
2. [Start InfluxDB and start Telegraf](#start-influxdb-and-collect-telegraf-data). By default, Telegraf starts sending system metrics to InfluxDB and creates a 'telegraf' database.
3. Start Kapacitor.
> **Note:** Example commands in the following procedures are written for Linux.
## Start InfluxDB and collect Telegraf data
1. Start InfluxDB by running the following command:
```bash
$ sudo systemctl start influxdb
```
2. In the Telegraf configuration file (`/etc/telegraf/telegraf.conf`), configure `[[outputs.influxdb]]` to specify how to connect to InfluxDB and the destination database.
```sh
[[outputs.influxdb]]
## InfluxDB url is required and must be in the following form: http/udp "://" host [ ":" port]
## Multiple urls can be specified as part of the same cluster; only ONE url is written to each interval.
## InfluxDB url
urls = ["http://localhost:8086"]
## The target database for metrics is required (Telegraf creates if one doesn't exist).
database = "telegraf"
```
3. Run the following command to start Telegraf:
```
$ sudo systemctl start telegraf
```
InfluxDB and Telegraf are now running on localhost.
4. After a minute, run the following command to use the InfluxDB API to query for the Telegraf data:
```bash
$ curl -G 'http://localhost:8086/query?db=telegraf' --data-urlencode 'q=SELECT mean(usage_idle) FROM cpu'
```
Results similar to the following appear:
```
{"results":[{"statement_id":0,"series":[{"name":"cpu","columns":["time","mean"],"values":[["1970-01-01T00:00:00Z",91.82304336748372]]}]}]}
```
## Start Kapacitor
1. Run the following command to generate a Kapacitor configuration file:
```bash
kapacitord config > kapacitor.conf
```
By default, the Kapacitor configuration file is saved in `/etc/kapacitor/kapacitor.conf`. If you save the file to another location, specify the location when starting Kapacitor.
> The Kapacitor configuration is a [TOML](https://github.com/toml-lang/toml) file. Inputs configured for InfluxDB also work for Kapacitor.
2. Start the Kapacitor service:
```bash
$ sudo systemctl start kapacitor
```
Because InfluxDB is running on `http://localhost:8086`, Kapacitor finds it during start up and creates several [subscriptions](/kapacitor/v1.5/administration/subscription-management/) on InfluxDB.
Subscriptions tell InfluxDB to send data to Kapacitor.
3. (Optional) To view log data, run the following command:
```
$ sudo tail -f -n 128 /var/log/kapacitor/kapacitor.log
```
Kapacitor listens on an HTTP port and posts data to InfluxDB. Now, InfluxDB streams data from Telegraf to Kapacitor.
### Execute a task
- At the beginning of a TICKscript, specify the database and retention policy
that contain data:
```js
dbrp "telegraf"."autogen"
// ...
```
When Kapacitor receives data from a database and retention policy that matches those
specified, Kapacitor executes the TICKscript.
> Kapacitor supports executing tasks based on database and retention policy (no other conditions).
## Trigger alerts from stream data
Triggering an alert is a common Kapacitor use case. The database and retention policy to alert on must be defined.
##### Example alert on CPU usage
1. Copy the following TICKscript into a file called `cpu_alert.tick`:
```js
dbrp "telegraf"."autogen"
stream
// Select the CPU measurement from the `telegraf` database.
|from()
.measurement('cpu')
// Triggers a critical alert when the CPU idle usage drops below 70%
|alert()
.crit(lambda: int("usage_idle") < 70)
// Write each alert to a file.
.log('/tmp/alerts.log')
```
2. In the command line, use the `kapacitor` CLI to define the task using the `cpu_alert.tick` TICKscript:
```bash
kapacitor define cpu_alert -tick cpu_alert.tick
```
> If the database and retention policy aren't included in the TICKscript (for example, `dbrp "telegraf"."autogen"`), use the `kapacitor define` command with the `-dbrp` flag followed by `"<DBNAME>"."<RETENTION_POLICY>"` to specify them when adding the task.
3. (Optional) Use the `list` command to verify the alert has been created:
```
$ kapacitor list tasks
ID Type Status Executing Databases and Retention Policies
cpu_alert stream disabled false ["telegraf"."autogen"]
```
4. (Optional) Use the `show` command to view details about the task:
```
$ kapacitor show cpu_alert
ID: cpu_alert
Error:
Template:
Type: stream
Status: disabled
Executing: false
...
```
5. To ensure log files and communication channels aren't spammed with alerts, [test the task](#test-the-task).
6. Enable the task to start processing the live data stream:
```bash
kapacitor enable cpu_alert
```
Alerts are written to the log in real time.
7. Run the `show` command to verify the task is receiving data and behaving as expected:
```bash
$ kapacitor show cpu_alert
// Information about the state of the task and any error it may have encountered.
ID: cpu_alert
Error:
Type: stream
Status: Enabled
Executing: true
Created: 04 May 16 21:01 MDT
Modified: 04 May 16 21:04 MDT
LastEnabled: 04 May 16 21:03 MDT
Databases Retention Policies: ["telegraf"."autogen"]
// Displays the version of the TICKscript that Kapacitor has stored in its local database.
TICKscript:
stream
    // Select just the cpu measurement
    |from()
        .measurement('cpu')
    |alert()
        .crit(lambda: "usage_idle" < 70)
        // Whenever we get an alert write it to a file.
        .log('/tmp/alerts.log')
DOT:
digraph asdf {
graph [throughput="0.00 points/s"];
stream0 [avg_exec_time_ns="0" ];
stream0 -> from1 [processed="12"];
from1 [avg_exec_time_ns="0" ];
from1 -> alert2 [processed="12"];
alert2 [alerts_triggered="0" avg_exec_time_ns="0" ];
}
```
The `show` command returns a [graphviz dot](http://www.graphviz.org) formatted tree that represents the data processing pipeline defined by the TICKscript. Each node is annotated with a key-value associative array of statistics, and each edge to the next node carries statistics of its own. The *processed* key on an edge indicates the number of data points that have passed along that edge of the graph.
In the example above, the `stream0` node (aka the `stream` var from the TICKscript) has sent 12 points to the `from1` node.
The `from1` node has also sent 12 points on to the `alert2` node. Since Telegraf is configured to send `cpu` data, all 12 points match the database/measurement criteria of the `from1` node and are passed on.
> If necessary, install graphviz on Debian or RedHat using the package provided by the OS provider. The packages offered on the graphviz site are not up-to-date.
Now that the task is running with live data, here is a quick hack to use 100% of one core to generate some artificial cpu activity:
```bash
while true; do i=0; done
```
##### Test the task
Complete the following steps to ensure log files and communication channels aren't spammed with alerts.
1. Record the data stream:
```bash
kapacitor record stream -task cpu_alert -duration 60s
```
If a connection error appears, for example: `getsockopt: connection refused` (Linux) or `connectex: No connection could be made...` (Windows),
verify the Kapacitor service is running (see [Start Kapacitor](#start-kapacitor)).
If Kapacitor is running, check the firewall settings of the host machine and ensure that port `9092` is accessible.
Also, check messages in `/var/log/kapacitor/kapacitor.log`. If there's an issue with the `http` or other configuration in `/etc/kapacitor/kapacitor.conf`, the issue appears in the log.
If the Kapacitor service is running on another host machine, set the `KAPACITOR_URL` environment variable in the local shell to the Kapacitor endpoint on the remote machine.
2. Retrieve the returned ID and assign the ID to a bash variable to use later (the actual UUID returned is different):
```bash
rid=cd158f21-02e6-405c-8527-261ae6f26153
```
3. Confirm the recording captured some data by running:
```bash
kapacitor list recordings $rid
```
The output should appear like:
```
ID Type Status Size Date
cd158f21-02e6-405c-8527-261ae6f26153 stream finished 2.2 kB 04 May 16 11:44 MDT
```
If the size is more than a few bytes, data has been captured.
If Kapacitor isn't receiving data, check each layer: Telegraf → InfluxDB → Kapacitor.
Telegraf logs errors if it cannot communicate to InfluxDB.
InfluxDB logs an error about `connection refused` if it cannot send data to Kapacitor.
Run the query `SHOW SUBSCRIPTIONS` against InfluxDB to find the endpoint that InfluxDB is using to send data to Kapacitor.
In the following example, InfluxDB must be running on localhost:8086:
```
$ curl -G 'http://localhost:8086/query?db=telegraf' --data-urlencode 'q=SHOW SUBSCRIPTIONS'
{"results":[{"statement_id":0,"series":[{"name":"_internal","columns":["retention_policy","name","mode","destinations"],"values":[["monitor","kapacitor-ef3b3f9d-0997-4c0b-b1b6-5d0fb37fe509","ANY",["http://localhost:9092"]]]},{"name":"telegraf","columns":["retention_policy","name","mode","destinations"],"values":[["autogen","kapacitor-ef3b3f9d-0997-4c0b-b1b6-5d0fb37fe509","ANY",["http://localhost:9092"]]]}]}]}
```
4. Use `replay` to test the recorded data for a specific task:
```bash
kapacitor replay -recording $rid -task cpu_alert
```
> Use the flag `-real-clock` to set the replay time by deltas between the timestamps. Time is measured on each node by the data points it receives.
5. Review the log for alerts:
```bash
sudo cat /tmp/alerts.log
```
Each JSON line represents one alert, and includes the alert level and data that triggered the alert.
> If the host machine is busy, it may take a while to log alerts.
6. (Optional) Modify the task to be really sensitive to ensure the alerts are working.
In the TICKscript, change the lambda function `.crit(lambda: "usage_idle" < 70)` to `.crit(lambda: "usage_idle" < 100)`, and run the `define` command with just the `TASK_NAME` and `-tick` arguments:
```bash
kapacitor define cpu_alert -tick cpu_alert.tick
```
Every data point received during the recording triggers an alert.
7. Replay the modified task to verify the results.
```bash
kapacitor replay -recording $rid -task cpu_alert
```
Once the `alerts.log` results verify that the task is working, change the `usage_idle` threshold back to a more reasonable level and redefine the task once more using the `define` command as shown in step 6.
### Gotcha - single versus double quotes
Single quotes and double quotes in TICKscripts do very different things.
Note the following example:
```js
var data = stream
|from()
.database('telegraf')
.retentionPolicy('autogen')
.measurement('cpu')
// NOTE: Double quotes on server1
.where(lambda: "host" == "server1")
```
The result of this search will always be empty, because double quotes were used around "server1". This means that Kapacitor will search for a series where the field "host" is equal to the value held in _the field_ "server1". This is probably not what was intended. More likely, the intention was to search for a series where the tag "host" has _the value_ 'server1', so single quotes should be used. Double quotes denote data fields; single quotes denote string values. To match the _value_, the TICKscript above should look like this:
```js
var data = stream
|from()
.database('telegraf')
.retentionPolicy('autogen')
.measurement('cpu')
// NOTE: Single quotes on server1
.where(lambda: "host" == 'server1')
```
### Extending TICKscripts
The TICKscript below will compute the running mean and compare current values to it.
It will then trigger an alert if the values are more than 3 standard deviations away from the mean.
Replace the `cpu_alert.tick` script with the TICKscript below:
```js
stream
|from()
.measurement('cpu')
|alert()
// Compare values to running mean and standard deviation
.crit(lambda: sigma("usage_idle") > 3)
.log('/tmp/alerts.log')
```
Just like that, a dynamic threshold can be created: if cpu usage drops during the day or spikes at night, an alert will be issued.
Try it out.
Use `define` to update the task TICKscript.
```bash
kapacitor define cpu_alert -tick cpu_alert.tick
```
>**Note:** If a task is already enabled, redefining the task with the `define` command automatically reloads (`reload`) the task.
To define a task without reloading it, use `-no-reload`
Now tail the alert log:
```bash
sudo tail -f /tmp/alerts.log
```
There should not be any alerts triggering just yet.
Next, start a while loop to add some load:
```bash
while true; do i=0; done
```
An alert trigger should be written to the log shortly, once enough artificial load has been created.
Leave the loop running for a few minutes.
After canceling the loop, another alert should be issued indicating that cpu usage has again changed.
Using this technique, alerts can be generated for the rising and falling edges of cpu usage, as well as any outliers.
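Kapacitor's `sigma` function reports how many standard deviations the current point lies from the running mean. Here is a rough, self-contained sketch of that idea (not Kapacitor's actual implementation, which updates its statistics incrementally):

```go
package main

import (
	"fmt"
	"math"
)

// sigma returns how many standard deviations x lies from the
// mean of the previously seen values.
func sigma(history []float64, x float64) float64 {
	var sum, sumSq float64
	for _, v := range history {
		sum += v
		sumSq += v * v
	}
	n := float64(len(history))
	mean := sum / n
	variance := sumSq/n - mean*mean
	if variance <= 0 {
		return 0
	}
	return math.Abs(x-mean) / math.Sqrt(variance)
}

func main() {
	// Steady usage_idle around 90, then a sudden drop to 30:
	// the drop is many standard deviations from the mean and
	// would trip a `sigma("usage_idle") > 3` alert.
	history := []float64{90, 91, 89, 90, 92, 88, 90, 91}
	fmt.Printf("sigma for 90: %.1f\n", sigma(history, 90))
	fmt.Printf("sigma for 30: %.1f\n", sigma(history, 30))
}
```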
### A real world example
Now that the basics have been covered, here is a more real world example.
Once the metrics from several hosts are streaming to Kapacitor, it is possible to do something like aggregate and group
the cpu usage for each service running in each datacenter, and then trigger an alert
based on the 95th percentile.
In addition to just writing the alert to a log, Kapacitor can
integrate with third party utilities: currently Slack, PagerDuty, HipChat, VictorOps and more are supported. The alert can also be sent by email, be posted to a custom endpoint or can trigger the execution of a custom script.
Custom message formats can also be defined so that alerts have the right context and meaning.
The TICKscript for this would look like the following example.
*Example - TICKscript for stream on multiple service cpus and alert on 95th percentile*
```js
stream
|from()
.measurement('cpu')
// create a new field called 'used' which inverts the idle cpu.
|eval(lambda: 100.0 - "usage_idle")
.as('used')
|groupBy('service', 'datacenter')
|window()
.period(1m)
.every(1m)
// calculate the 95th percentile of the used cpu.
|percentile('used', 95.0)
|eval(lambda: sigma("percentile"))
.as('sigma')
.keep('percentile', 'sigma')
|alert()
.id('{{ .Name }}/{{ index .Tags "service" }}/{{ index .Tags "datacenter"}}')
.message('{{ .ID }} is {{ .Level }} cpu-95th:{{ index .Fields "percentile" }}')
// Compare values to running mean and standard deviation
.warn(lambda: "sigma" > 2.5)
.crit(lambda: "sigma" > 3.0)
.log('/tmp/alerts.log')
// Post data to custom endpoint
.post('https://alerthandler.example.com')
// Execute custom alert handler script
.exec('/bin/custom_alert_handler.sh')
// Send alerts to slack
.slack()
.channel('#alerts')
// Sends alerts to PagerDuty
.pagerDuty()
// Send alerts to VictorOps
.victorOps()
.routingKey('team_rocket')
```
Something so simple as defining an alert can quickly be extended to apply to a much larger scope.
With the above script, an alert will be triggered if any service in any datacenter deviates more than 3
standard deviations away from normal behavior as defined by the historical 95th percentile of cpu usage, and will do so within 1 minute!
For more information on how alerting works, see the [AlertNode](/kapacitor/v1.5/nodes/alert_node/) docs.
## Trigger alerts from batch data
In addition to processing data in streams, Kapacitor can also periodically query InfluxDB and process data in batches.
While triggering an alert based on cpu usage is better suited to the streaming case, the basic idea
of how `batch` tasks work is demonstrated here by following the same use case.
##### Example alert on batch data
This TICKscript does roughly the same thing as the earlier stream task, but as a batch task:
```js
dbrp "telegraf"."autogen"
batch
|query('''
SELECT mean(usage_idle)
FROM "telegraf"."autogen"."cpu"
''')
.period(5m)
.every(5m)
.groupBy(time(1m), 'cpu')
|alert()
.crit(lambda: "mean" < 70)
.log('/tmp/batch_alerts.log')
```
1. Copy the script above into the file `batch_cpu_alert.tick`.
2. Define the task:
```bash
kapacitor define batch_cpu_alert -tick batch_cpu_alert.tick
```
3. Verify its creation:
```bash
$ kapacitor list tasks
ID Type Status Executing Databases and Retention Policies
batch_cpu_alert batch disabled false ["telegraf"."autogen"]
cpu_alert stream enabled true ["telegraf"."autogen"]
```
4. Record the result of the query in the task (note, the actual UUID differs):
```bash
kapacitor record batch -task batch_cpu_alert -past 20m
# Save the id again
rid=b82d4034-7d5c-4d59-a252-16604f902832
```
This records the last 20 minutes of batches using the query in the `batch_cpu_alert` task.
In this case, since the `period` is 5 minutes, the last 4 batches are recorded and saved.
5. Replay the batch recording the same way:
```bash
kapacitor replay -recording $rid -task batch_cpu_alert
```
6. Check the alert log to make sure alerts were generated as expected.
The `sigma` based alert above can also be adapted for working with batch data.
Play around and get comfortable with updating, testing, and running tasks in Kapacitor.
## Load tasks with Kapacitor
To load a task with Kapacitor, save the TICKscript in a _load_ directory specified in `kapacitor.conf`. TICKscripts must include the database and retention policy declaration `dbrp`.
TICKscripts in the load directory are automatically loaded when Kapacitor starts and do not need to be added with the `kapacitor define` command.
For more information, see [Load Directory](/kapacitor/v1.5/guides/load_directory/).
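A minimal `[load]` section in `kapacitor.conf` might look like the following (the directory path is illustrative):

```
[load]
  enabled = true
  dir = "/etc/kapacitor/load"
```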

---
title: Docker Install
weight: 70
menu:
kapacitor_1_5:
parent: Introduction
---
## Getting Started with TICK and Docker Compose
This short tutorial demonstrates starting the TICK stack components (InfluxDB, Telegraf, Kapacitor) with Docker Compose, and then using that stack to learn the rudiments of working with Kapacitor and the [TICKscript](/kapacitor/v1.5/tick/) domain specific language (DSL). The following discussion is based on the tutorial project package (named `tik-docker-tutorial.tar.gz`) that can be downloaded from [this location](/downloads/tik-docker-tutorial.tar.gz). It creates a running deployment of these applications that can be used for an initial evaluation and testing of Kapacitor. Chronograf is currently not included in the package.
This tutorial depends on Docker Compose 3.0 to deploy the latest Docker 17.0+ compatible images of InfluxDB, Telegraf and Kapacitor.
To use this package Docker and Docker Compose should be installed on the host machine where it will run.
Docker installation is covered at the [Docker website](https://docs.docker.com/engine/installation/).
Docker Compose installation is also covered at the [Docker website](https://docs.docker.com/compose/install/).
In order to keep an eye on the log files, this document will describe running the reference package in two separate consoles. In the first console Docker Compose will be run. The second will be used to issue commands to demonstrate basic Kapacitor functionality.
As of this writing, the package has only been tested on Linux (Ubuntu 16.04). It contains a `docker-compose.yml` and directories for configuration and test files.
*Demo Package Contents*
```
.
├── docker-compose.yml
├── etc
│   ├── kapacitor
│   │   └── kapacitor.conf
│   └── telegraf
│   └── telegraf.conf
├── home
│   └── kapacitor
│   ├── cpu_alert_batch.tick
│   └── cpu_alert_stream.tick
├── README.md
└── var
└── log
└── kapacitor
└── README.md
```
Please clone or copy the package to the host machine and open two consoles to its install location before continuing.
### Loading the stack with Docker Compose
The core of the package is the `docker-compose.yml` file, which Docker Compose uses to pull the Docker images and then create and run the Docker containers.
Standard Unix-style directories have also been prepared. These are mapped into the Docker containers to make it easy to access scripts and logs in the demonstrations that follow. One important directory is the volume `var/log/kapacitor`, where the `kapacitor.log` and, later, the `alert-*.log` files are made available for inspection.
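For orientation, a pared-down sketch of what such a `docker-compose.yml` might contain is shown below; the exact image tags, network definitions, and volume list in the downloaded package may differ.

```yaml
version: '3'
services:
  influxdb:
    image: influxdb:latest
  telegraf:
    image: telegraf:latest
    volumes:
      # Telegraf configuration mapped in from the package directory tree
      - ./etc/telegraf:/etc/telegraf
  kapacitor:
    image: kapacitor:latest
    volumes:
      # Kapacitor configuration, TICKscripts, and logs mapped to the host
      - ./etc/kapacitor:/etc/kapacitor
      - ./home/kapacitor:/home/kapacitor
      - ./var/log/kapacitor:/var/log/kapacitor
```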
In the first console, from the root directory of the package, run the following to start the stack and leave the logs visible:
```
$ docker-compose up
```
*Logs in standard console streams*
```
Starting tik_influxdb_1 ...
Starting tik_telegraf_1 ...
Starting tik_telegraf_1
Starting tik_influxdb_1
Starting tik_kapacitor_1 ...
Starting tik_influxdb_1 ... done
Attaching to tik_telegraf_1, tik_kapacitor_1, tik_influxdb_1
kapacitor_1 |
kapacitor_1 | '##:::'##::::'###::::'########:::::'###:::::'######::'####:'########::'#######::'########::
kapacitor_1 | ##::'##::::'## ##::: ##.... ##:::'## ##:::'##... ##:. ##::... ##..::'##.... ##: ##.... ##:
kapacitor_1 | ##:'##::::'##:. ##:: ##:::: ##::'##:. ##:: ##:::..::: ##::::: ##:::: ##:::: ##: ##:::: ##:
kapacitor_1 | #####::::'##:::. ##: ########::'##:::. ##: ##:::::::: ##::::: ##:::: ##:::: ##: ########::
kapacitor_1 | ##. ##::: #########: ##.....::: #########: ##:::::::: ##::::: ##:::: ##:::: ##: ##.. ##:::
kapacitor_1 | ##:. ##:: ##.... ##: ##:::::::: ##.... ##: ##::: ##:: ##::::: ##:::: ##:::: ##: ##::. ##::
kapacitor_1 | ##::. ##: ##:::: ##: ##:::::::: ##:::: ##:. ######::'####:::: ##::::. #######:: ##:::. ##:
kapacitor_1 | ..::::..::..:::::..::..:::::::::..:::::..:::......:::....:::::..::::::.......:::..:::::..::
kapacitor_1 |
kapacitor_1 | 2017/08/17 08:46:55 Using configuration at: /etc/kapacitor/kapacitor.conf
influxdb_1 |
influxdb_1 | 8888888 .d888 888 8888888b. 888888b.
influxdb_1 | 888 d88P" 888 888 "Y88b 888 "88b
influxdb_1 | 888 888 888 888 888 888 .88P
influxdb_1 | 888 88888b. 888888 888 888 888 888 888 888 888 8888888K.
influxdb_1 | 888 888 "88b 888 888 888 888 Y8bd8P' 888 888 888 "Y88b
influxdb_1 | 888 888 888 888 888 888 888 X88K 888 888 888 888
influxdb_1 | 888 888 888 888 888 Y88b 888 .d8""8b. 888 .d88P 888 d88P
influxdb_1 | 8888888 888 888 888 888 "Y88888 888 888 8888888P" 8888888P"
influxdb_1 |
influxdb_1 | [I] 2017-08-17T08:46:55Z InfluxDB starting, version 1.3.3, branch HEAD, commit e37afaf09bdd91fab4713536c7bdbdc549ee7dc6
influxdb_1 | [I] 2017-08-17T08:46:55Z Go version go1.8.3, GOMAXPROCS set to 8
influxdb_1 | [I] 2017-08-17T08:46:55Z Using configuration at: /etc/influxdb/influxdb.conf
influxdb_1 | [I] 2017-08-17T08:46:55Z Using data dir: /var/lib/influxdb/data service=store
influxdb_1 | [I] 2017-08-17T08:46:56Z reading file /var/lib/influxdb/wal/_internal/monitor/1/_00001.wal, size 235747 engine=tsm1 service=cacheloader
influxdb_1 | [I] 2017-08-17T08:46:56Z reading file /var/lib/influxdb/wal/telegraf/autogen/2/_00001.wal, size 225647 engine=tsm1 service=cacheloader
telegraf_1 | 2017/08/17 08:46:55 I! Using config file: /etc/telegraf/telegraf.conf
telegraf_1 | 2017-08-17T08:46:56Z I! Starting Telegraf (version 1.3.3)
telegraf_1 | 2017-08-17T08:46:56Z I! Loaded outputs: influxdb
telegraf_1 | 2017-08-17T08:46:56Z I! Loaded inputs: inputs.kernel inputs.mem inputs.processes inputs.swap inputs.system inputs.cpu inputs.disk inputs.diskio
telegraf_1 | 2017-08-17T08:46:56Z I! Tags enabled: host=f1ba76bcbbcc
telegraf_1 | 2017-08-17T08:46:56Z I! Agent Config: Interval:10s, Quiet:false, Hostname:"f1ba76bcbbcc", Flush Interval:10s
influxdb_1 | [I] 2017-08-17T08:46:56Z reading file /var/lib/influxdb/wal/_internal/monitor/1/_00002.wal, size 0 engine=tsm1 service=cacheloader
influxdb_1 | [I] 2017-08-17T08:46:56Z /var/lib/influxdb/data/_internal/monitor/1 opened in 228.044556ms service=store
...
```
### Verifying the stack
The console logs should be similar to the sample above. In the second console, the status can be confirmed by using Docker directly.
```
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f1ba76bcbbcc telegraf:latest "/entrypoint.sh te..." 43 minutes ago Up 2 minutes 8092/udp, 8125/udp, 8094/tcp tik_telegraf_1
432ce34e3b00 kapacitor:latest "/entrypoint.sh ka..." 43 minutes ago Up 2 minutes 9092/tcp tik_kapacitor_1
2060eca01bb7 influxdb:latest "/entrypoint.sh in..." 43 minutes ago Up 2 minutes 8086/tcp tik_influxdb_1
```
Take note of the container names, especially for Kapacitor. If the Kapacitor container name in the current deployment is not the same (i.e., `tik_kapacitor_1`), be sure to replace it in the Docker command-line examples below. This also applies to the InfluxDB container name (`tik_influxdb_1`), which is used in the next example.
### What is running?
At this point InfluxDB, Telegraf, and Kapacitor are running on the host machine. Telegraf is configured using the file `etc/telegraf/telegraf.conf`, and Kapacitor using the file `etc/kapacitor/kapacitor.conf`. A bridge network is defined in the `docker-compose.yml` file. This bridge network provides a simple name-resolution service that allows the container names to be used as the server names in the configuration files just mentioned.
The running configuration can be further inspected by using the `influx` command line client directly from the InfluxDB Container.
```
$ docker exec -it tik_influxdb_1 influx --precision rfc3339
Connected to http://localhost:8086 version 1.3.3
InfluxDB shell version: 1.3.3
> show databases
name: databases
name
----
_internal
telegraf
> use telegraf
Using database telegraf
> show subscriptions
name: telegraf
retention_policy name mode destinations
---------------- ---- ---- ------------
autogen kapacitor-dc455e9d-b306-4687-aa39-f146a250dd76 ANY [http://kapacitor:9092]
name: _internal
retention_policy name mode destinations
---------------- ---- ---- ------------
monitor kapacitor-dc455e9d-b306-4687-aa39-f146a250dd76 ANY [http://kapacitor:9092]
> exit
```
## Kapacitor Alerts and the TICKscript
The top-level nodes of a TICKscript define the mode by which the underlying node chain is executed. They can be set up so that Kapacitor receives processed data in a steady stream, or so that it triggers the processing of a batch of data points, from which it receives the results.
### Setting up a live stream CPU alert
To create an alert stream it is necessary to:
* declare the desired functionality in a TICKscript
* define the actual alert task in Kapacitor
* test the alert task by recording a sample of stream activity and then playing it back
* enable the alert
An initial script has been prepared in the `home/kapacitor` directory, which is mapped as a volume into the Kapacitor container (`home/kapacitor/cpu_alert_stream.tick`).
This simple script touches on just the basics of the rich TICKscript domain-specific language. It is self-descriptive and should be easily understood.
*cpu_alert_stream.tick*
```
stream
// Select just the cpu measurement from our example database.
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 70)
// Whenever we get an alert write it to a file
.log('/var/log/kapacitor/alerts-stream.log')
```
Note that the `alerts-stream.log` file is written to a volume mapped back to the package directory tree `./var/log/kapacitor`. This will simplify log inspection.
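The critical condition `"usage_idle" < 70` can be illustrated outside of Kapacitor. The following Python sketch evaluates the same threshold against a couple of invented sample points, purely to show when the stream alert would fire:

```python
CRIT_THRESHOLD = 70.0

def alert_level(point):
    """Return 'CRITICAL' when usage_idle drops below the threshold, else 'OK'."""
    return "CRITICAL" if point["usage_idle"] < CRIT_THRESHOLD else "OK"

# Invented sample points: an idle CPU, then one under artificial load.
points = [
    {"time": "2017-08-17T09:36:00Z", "usage_idle": 97.5},
    {"time": "2017-08-17T09:36:10Z", "usage_idle": 12.3},
]
levels = [alert_level(p) for p in points]
```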
The TICKscript can then be used over Docker to define a new alert in the Kapacitor container.
```
$ docker exec tik_kapacitor_1 sh -c "cd /home/kapacitor && kapacitor define cpu_alert_stream -type stream -tick ./cpu_alert_stream.tick -dbrp telegraf.autogen"
```
Verify that the alert has been created with the following.
```
$ docker exec tik_kapacitor_1 kapacitor show cpu_alert_stream
ID: cpu_alert_stream
Error:
Template:
Type: stream
Status: disabled
Executing: false
Created: 17 Aug 17 09:30 UTC
Modified: 17 Aug 17 09:30 UTC
LastEnabled: 01 Jan 01 00:00 UTC
Databases Retention Policies: ["telegraf"."autogen"]
TICKscript:
stream
// Select just the cpu measurement from our example database.
|from()
.measurement('cpu')
|alert()
.crit(lambda: "usage_idle" < 70)
// Whenever we get an alert write it to a file.
.log('/var/log/kapacitor/alerts-stream.log')
DOT:
digraph cpu_alert_stream {
stream0 -> from1;
from1 -> alert2;
}
```
#### Test the stream alert using 'record'
Before an alert is enabled, it is prudent to check its behavior. A test run of how the alert stream will behave can be done using the Kapacitor `record` command. This returns a UUID that can then be used as a reference to list and replay what was captured in the test run.
```
$ docker exec tik_kapacitor_1 kapacitor record stream -task cpu_alert_stream -duration 60s
fd7d7081-c985-433e-87df-97ab0c267161
```
During the minute that this test run is being recorded, it is useful to generate some artificial load to force one or more CPUs to a low idle measurement, which will trigger an alert. For example, in a third console, the following might be executed.
```shell
while true; do i=0; done;
```
List the recording with the following command:
```
$ docker exec tik_kapacitor_1 kapacitor list recordings fd7d7081-c985-433e-87df-97ab0c267161
ID Type Status Size Date
fd7d7081-c985-433e-87df-97ab0c267161 stream finished 1.9 kB 17 Aug 17 09:34 UTC
```
#### Rerunning a recording of a stream alert
When a recording is rerun, alerts are written to the `alerts-stream.log` just as they would occur if the alert were enabled. Replay the recording as follows:
```
$ docker exec tik_kapacitor_1 kapacitor replay -recording fd7d7081-c985-433e-87df-97ab0c267161 -task cpu_alert_stream
c8cd033f-a79e-46a6-bb5d-81d2f56722b2
```
Check the contents of the local `var/log/kapacitor` directory.
```
$ ls -1 var/log/kapacitor/
alerts-stream.log
kapacitor.log
README.md
```
Check the contents of the `alerts-stream.log`.
```
$ sudo less -X var/log/kapacitor/alerts-stream.log
{"id":"cpu:nil","message":"cpu:nil is CRITICAL","details":"{...}\n","time":"2017-08-17T09:36:09.693216014Z","duration":0,"level":"CRITICAL","data":{...
```
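Each line of the alert log is a self-contained JSON object, so it is easy to post-process. A minimal Python sketch (the sample line below is abridged from the output above, with the `data` field omitted):

```python
import json

# Abridged sample line from alerts-stream.log (the full entry also
# carries a "details" and a "data" field).
sample = (
    '{"id":"cpu:nil","message":"cpu:nil is CRITICAL",'
    '"time":"2017-08-17T09:36:09.693216014Z","duration":0,"level":"CRITICAL"}'
)

def parse_alert(line):
    """Parse one JSON alert entry and pull out the fields of interest."""
    entry = json.loads(line)
    return entry["time"], entry["level"], entry["message"]

timestamp, level, message = parse_alert(sample)
```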
#### Enable the alert stream
Once it is clear that the new alert will not be generating spam, and that it will actually catch meaningful information, it can be enabled in Kapacitor.
```
$ docker exec tik_kapacitor_1 kapacitor enable cpu_alert_stream
```
Verify that it has been enabled by once again showing the task.
```
$ docker exec tik_kapacitor_1 kapacitor show cpu_alert_stream
ID: cpu_alert_stream
Error:
Template:
Type: stream
Status: enabled
Executing: true
...
```
If the alert stream is no longer needed, it can likewise be disabled.
```
$ docker exec tik_kapacitor_1 kapacitor disable cpu_alert_stream
```
### Setting up a batch CPU alert
The second mode for setting up a TICKscript node chain is batch processing. A batch process can be executed periodically over a window of time series data points.
To create a batch process it is necessary to:
* declare the desired functionality, window or time period to be sampled, and run frequency in a TICKscript
* define the actual alert task in Kapacitor
* test the alert task by recording a data point sample and then playing it back
* enable the alert
It may have already been noted that an example batch TICKscript has been created in the directory `home/kapacitor`.
As with the stream based TICKscript, the contents are self-descriptive and should be easily understood.
*cpu_alert_batch.tick*
```
batch
|query('''
SELECT usage_idle
FROM "telegraf"."autogen"."cpu"
''')
.period(5m)
.every(5m)
|alert()
.crit(lambda: "usage_idle" < 70)
.log('/var/log/kapacitor/alerts-batch.log')
```
Here again the `alerts-batch.log` will be written to a directory mapped as a volume into the Kapacitor container.
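The `.period(5m)` and `.every(5m)` properties mean each query covers a five-minute window and runs every five minutes. A rough Python sketch of that bucketing, with invented timestamps, shows which windows would go critical:

```python
from datetime import datetime, timedelta

PERIOD = timedelta(minutes=5)

def window_index(ts, start):
    """Return which contiguous five-minute window a timestamp falls into."""
    return int((ts - start) // PERIOD)

start = datetime(2017, 8, 17, 13, 0)
# Invented points: idle, then two under artificial load.
points = [
    (datetime(2017, 8, 17, 13, 1), 95.0),
    (datetime(2017, 8, 17, 13, 4), 40.0),
    (datetime(2017, 8, 17, 13, 7), 35.0),
]
windows = {}
for ts, usage_idle in points:
    windows.setdefault(window_index(ts, start), []).append(usage_idle)

# An alert would fire for any window containing usage_idle < 70.
critical = {w for w, vals in windows.items() if any(v < 70 for v in vals)}
```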
The TICKscript can then be used over Docker to define a new alert in the Kapacitor container.
```
$ docker exec tik_kapacitor_1 sh -c "cd /home/kapacitor && kapacitor define cpu_alert_batch -type batch -tick ./cpu_alert_batch.tick -dbrp telegraf.autogen"
```
Verify that the task has been created.
```
$ docker exec tik_kapacitor_1 kapacitor show cpu_alert_batch
ID: cpu_alert_batch
Error:
Template:
Type: batch
Status: disabled
Executing: false
Created: 17 Aug 17 12:41 UTC
Modified: 17 Aug 17 13:06 UTC
LastEnabled: 01 Jan 01 00:00 UTC
Databases Retention Policies: ["telegraf"."autogen"]
TICKscript:
batch
|query('''
SELECT usage_idle
FROM "telegraf"."autogen"."cpu"
''')
.period(5m)
.every(5m)
|alert()
.crit(lambda: "usage_idle" < 70)
.log('/var/log/kapacitor/alerts-batch.log')
DOT:
digraph cpu_alert_batch {
query1 -> alert2;
}
```
#### Test the batch alert using 'record'
As with the stream alert, it is advisable to test the alert task before enabling it.
Prepare some alert triggering data points by creating artificial CPU load. For example in a third console the following might be run for a minute or two.
```shell
while true; do i=0; done;
```
A test run of how the alert batch will behave can be generated using the Kapacitor 'record' command.
```
$ docker exec tik_kapacitor_1 kapacitor record batch -task cpu_alert_batch -past 5m
b2c46972-8d01-4fab-8088-56fd51fa577c
```
List the recording with the following command.
```
$ docker exec tik_kapacitor_1 kapacitor list recordings b2c46972-8d01-4fab-8088-56fd51fa577c
ID Type Status Size Date
b2c46972-8d01-4fab-8088-56fd51fa577c batch finished 2.4 kB 17 Aug 17 13:06 UTC
```
#### Rerunning a recording of a batch alert
When the recording is rerun, alerts are written to the `alerts-batch.log` just as they would occur during live batch processing. Replay the recording as follows:
```
$ docker exec tik_kapacitor_1 kapacitor replay -recording b2c46972-8d01-4fab-8088-56fd51fa577c -task cpu_alert_batch
0cc65a9f-7dba-4a02-a118-e95b4fccf123
```
Check the contents of the local `var/log/kapacitor` directory.
```
$ ls -1 var/log/kapacitor/
alerts-batch.log
alerts-stream.log
kapacitor.log
README.md
```
Check the contents of the `alerts-batch.log`.
```
$ sudo less -X var/log/kapacitor/alerts-batch.log
{"id":"cpu:nil","message":"cpu:nil is CRITICAL","details":"{...}\n","time":"2017-08-17T13:07:00.156730835Z","duration":0,"level":"CRITICAL","data":{...
```
#### Enable the batch alert
Once it is clear that the new alert will not be generating spam, and that it will actually catch meaningful information, it can be enabled in Kapacitor.
```
$ docker exec tik_kapacitor_1 kapacitor enable cpu_alert_batch
```
Verify that it has been enabled by once again showing the task.
```
$ docker exec tik_kapacitor_1 kapacitor show cpu_alert_batch
ID: cpu_alert_batch
Error:
Template:
Type: batch
Status: enabled
Executing: true
Created: 17 Aug 17 12:41 UTC
...
```
If the batch alert is no longer needed, it can likewise be disabled.
```
$ docker exec tik_kapacitor_1 kapacitor disable cpu_alert_batch
```
### Summary
This short tutorial has covered the most basic steps in starting up the TICK stack with Docker and checking the most elementary feature of Kapacitor: configuring and testing alerts triggered by changes in data written to InfluxDB. This installation can be used to further explore Kapacitor and its integration with InfluxDB and Telegraf.
### Shutting down the stack
There are two ways in which the stack can be taken down.
* Either, in the first console, press `CTRL+C`
* Or, in the second console run `$ docker-compose down --volumes`

---
title: Installing Kapacitor
weight: 10
menu:
kapacitor_1_5:
parent: Introduction
---
This page provides directions for installing, starting, and configuring Kapacitor.
## Requirements
Installation of the Kapacitor package may require `root` or administrator privileges in order to complete successfully.
### Networking
Kapacitor listens on TCP port `9092` for all API and write
calls.
Kapacitor may also bind to randomized UDP ports
for handling of InfluxDB data via subscriptions.
## Installation
Kapacitor has two binaries:
* `kapacitor`: a CLI program for calling the Kapacitor API.
* `kapacitord`: the Kapacitor server daemon.
You can download the binaries directly from the
[downloads](https://portal.influxdata.com/downloads) page.
> **Note:** Windows support is experimental.
### Starting the Kapacitor service
For packaged installations, please see the respective sections below
for your operating system. For non-packaged installations (tarballs or
from source), you will need to start the Kapacitor application
manually by running:
```
./kapacitord -config <PATH TO CONFIGURATION>
```
#### macOS (using Homebrew)
To have `launchd` start Kapacitor at login:
```
ln -sfv /usr/local/opt/kapacitor/*.plist ~/Library/LaunchAgents
```
Then to load Kapacitor now:
```
launchctl load ~/Library/LaunchAgents/homebrew.mxcl.kapacitor.plist
```
Or, if you don't want or need `launchctl`, you can just run:
```
kapacitord -config /usr/local/etc/kapacitor.conf
```
#### Linux - SysV or Upstart systems
To start the Kapacitor service, run:
```
sudo service kapacitor start
```
#### Linux - systemd systems
To start the Kapacitor service, run:
```
sudo systemctl start kapacitor
```
## Configuration
An example configuration file can be found [here](https://github.com/influxdata/kapacitor/blob/master/etc/kapacitor/kapacitor.conf).
Kapacitor can also provide an example configuration for you using this command:
```bash
kapacitord config
```
To generate a new configuration file, run:
```
kapacitord config > kapacitor.generated.conf
```
### Shared secret
If using [Kapacitor v1.5.3](/kapacitor/v1.5/about_the_project/releasenotes-changelog/#v1-5-3-2019-06-18)
or newer and InfluxDB with [authentication enabled](/influxdb/v1.7/administration/authentication_and_authorization/),
set the `[http].shared-secret` option in your Kapacitor configuration file to the
shared secret of your InfluxDB instances.
```toml
# ...
[http]
# ...
shared-secret = "youramazingsharedsecret"
```
If this option is not set, is set to an empty string, or does not match InfluxDB's shared secret,
the integration with InfluxDB will fail and Kapacitor will not start.
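For background, the shared secret is used to sign JWT bearer tokens (HS256) for requests between the services. The following stdlib-only Python sketch shows HS256 signing in general terms; the claim names and values are invented for illustration and are not necessarily the claims Kapacitor sends.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> bytes:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign_hs256(claims: dict, secret: str) -> str:
    """Build a minimal HS256 JWT: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = header + b"." + payload
    sig = b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

# Hypothetical claim; real tokens would also carry an expiry, etc.
token = sign_hs256({"username": "kapacitor"}, "youramazingsharedsecret")
```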
### Time zone
To display alerts notifications using a preferred time zone, either change the time zone
of the host on which Kapacitor is running or set the Kapacitor process' `TZ` environment variable.
#### systemd
Add the environment variable using `systemctl edit kapacitor`:
```
[Service]
Environment="TZ=Asia/Shanghai"
```
#### docker
Set the environment variable using the `-e` flag when starting the container (`-e TZ=Asia/Shanghai`)
or in your `docker-compose.yml`.

---
title: TICKscript nodes overview
aliases:
- kapacitor/v1.5/nodes/source_batch_node/
- kapacitor/v1.5/nodes/source_stream_node/
- kapacitor/v1.5/nodes/map_node/
- kapacitor/v1.5/nodes/reduce_node/
menu:
kapacitor_1_5_ref:
name: TICKscript nodes
identifier: nodes
weight: 40
---
> ***Note:*** Before continuing, please make sure you have read the
> [TICKscript Language Specification](/kapacitor/v1.5/tick/).
Nodes represent process invocation units that either take data as a batch or a point-by-point stream, and then alter the data, store the data, or trigger some other activity based on changes in the data (e.g., an alert).
The top-level source nodes, `stream` and `batch`, define the type of task that you are running, either
[stream](/kapacitor/v1.5/introduction/getting-started/#triggering-alerts-from-stream-data)
or
[batch](/kapacitor/v1.5/introduction/getting-started/#triggering-alerts-from-batch-data).
Below is a complete list of the available nodes. For each node, the associated property methods are described.
## Available nodes
* [AlertNode](/kapacitor/v1.5/nodes/alert_node)
* [BarrierNode](/kapacitor/v1.5/nodes/barrier_node)
* [BatchNode](/kapacitor/v1.5/nodes/batch_node)
* [ChangeDetectNode](/kapacitor/v1.5/nodes/change_detect_node)
* [CombineNode](/kapacitor/v1.5/nodes/combine_node)
* [DefaultNode](/kapacitor/v1.5/nodes/default_node)
* [DeleteNode](/kapacitor/v1.5/nodes/delete_node)
* [DerivativeNode](/kapacitor/v1.5/nodes/derivative_node)
* [EC2AutoscaleNode](/kapacitor/v1.5/nodes/ec2_autoscale_node)
* [EvalNode](/kapacitor/v1.5/nodes/eval_node)
* [FlattenNode](/kapacitor/v1.5/nodes/flatten_node)
* [FromNode](/kapacitor/v1.5/nodes/from_node)
* [GroupByNode](/kapacitor/v1.5/nodes/group_by_node)
* [HTTPOutputNode](/kapacitor/v1.5/nodes/http_out_node)
* [HTTPPostNode](/kapacitor/v1.5/nodes/http_post_node)
* [InfluxDBOutputNode](/kapacitor/v1.5/nodes/influx_d_b_out_node)
* [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node)
* [JoinNode](/kapacitor/v1.5/nodes/join_node)
* [K8sAutoscaleNode](/kapacitor/v1.5/nodes/k8s_autoscale_node)
* [Kapacitor LoopbackNode](/kapacitor/v1.5/nodes/kapacitor_loopback_node)
* [LogNode](/kapacitor/v1.5/nodes/log_node)
* [NoOpNode](/kapacitor/v1.5/nodes/no_op_node)
* [QueryNode](/kapacitor/v1.5/nodes/query_node)
* [SampleNode](/kapacitor/v1.5/nodes/sample_node)
* [ShiftNode](/kapacitor/v1.5/nodes/shift_node)
* [SideloadNode](/kapacitor/v1.5/nodes/sideload_node)
* [StateCountNode](/kapacitor/v1.5/nodes/state_count_node)
* [StateDurationNode](/kapacitor/v1.5/nodes/state_duration_node)
* [StatsNode](/kapacitor/v1.5/nodes/stats_node)
* [StreamNode](/kapacitor/v1.5/nodes/stream_node)
* [SwarmAutoscaleNode](/kapacitor/v1.5/nodes/swarm_autoscale_node)
* [UDF (User Defined Function)Node](/kapacitor/v1.5/nodes/u_d_f_node)
* [UnionNode](/kapacitor/v1.5/nodes/union_node)
* [WhereNode](/kapacitor/v1.5/nodes/where_node)
* [WindowNode](/kapacitor/v1.5/nodes/window_node)

@ -0,0 +1,934 @@
---
title: BarrierNode
description: BarrierNode emits a barrier with the current time, according to the system clock, and allows pipelines to be forced in the absence of data traffic. The barrier emitted will be based on either idle time since the last received message or on a periodic timer based on the system clock.
note: Auto generated by tickdoc
menu:
kapacitor_1_5_ref:
name: BarrierNode
identifier: barrier_node
weight: 20
parent: nodes
---
The `barrier` node emits a barrier based on one of the following:
- Idle time since the last data point was received
- Periodic timer based on the system time
Barriers let you execute pipelines even in the absence of data traffic. Data points older than the most recent barrier are dropped.
##### Example barrier based on idle time
```js
stream
|from()
.measurement('cpu')
|barrier()
.idle(5s)
.delete(TRUE)
|window()
.period(10s)
.every(5s)
|top(10, 'value')
//Post the top 10 results over the last 10s updated every 5s.
|httpPost('http://example.com/api/top10')
```
> **Note:** In `.delete(TRUE)`, `TRUE` must be uppercase.
### Constructor
| Chaining Method | Description |
|:---------|:---------|
| **barrier&nbsp;(&nbsp;)** | Create a new Barrier node that emits a BarrierMessage periodically |
### Property Methods
| Setters | Description |
|:---|:---|
| **[idle](#idle)&nbsp;(&nbsp;`value`&nbsp;`time.Duration`)** | Emit barrier based on idle time since the last received message. Must be greater than zero. |
| **[period](#period)&nbsp;(&nbsp;`value`&nbsp;`time.Duration`)** | Emit barrier based on periodic timer. The timer is based on system clock rather than message time. Must be greater than zero. |
| **[delete](#delete)&nbsp;(&nbsp;`value`&nbsp;`Boolean`)** | Delete the group after processing each barrier. |
| **[quiet](#quiet)&nbsp;(&nbsp;)** | Suppress all error logging events from this node. |
### Chaining Methods
[Alert](#alert),
[Barrier](#barrier),
[Bottom](#bottom),
[ChangeDetect](#changedetect),
[Combine](#combine),
[Count](#count),
[CumulativeSum](#cumulativesum),
[Deadman](#deadman),
[Default](#default),
[Delete](#delete),
[Derivative](#derivative),
[Difference](#difference),
[Distinct](#distinct),
[Ec2Autoscale](#ec2autoscale),
[Elapsed](#elapsed),
[Eval](#eval),
[First](#first),
[Flatten](#flatten),
[GroupBy](#groupby),
[HoltWinters](#holtwinters),
[HoltWintersWithFit](#holtwinterswithfit),
[HttpOut](#httpout),
[HttpPost](#httppost),
[InfluxDBOut](#influxdbout),
[Join](#join),
[K8sAutoscale](#k8sautoscale),
[KapacitorLoopback](#kapacitorloopback),
[Last](#last),
[Log](#log),
[Max](#max),
[Mean](#mean),
[Median](#median),
[Min](#min),
[Mode](#mode),
[MovingAverage](#movingaverage),
[Percentile](#percentile),
[Sample](#sample),
[Shift](#shift),
[Sideload](#sideload),
[Spread](#spread),
[StateCount](#statecount),
[StateDuration](#stateduration),
[Stats](#stats),
[Stddev](#stddev),
[Sum](#sum),
[SwarmAutoscale](#swarmautoscale),
[Top](#top),
[Union](#union),
[Where](#where),
[Window](#window)
---
## Properties
Property methods modify state on the calling node.
They do not add another node to the pipeline, and always return a reference to the calling node.
Property methods are marked using the `.` operator.
### Idle
Emit barrier based on idle time since the last received message.
Must be greater than zero.
```js
barrier.idle(value time.Duration)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Period
Emit barrier based on periodic timer. The timer is based on system
clock rather than message time.
Must be greater than zero.
```js
barrier.period(value time.Duration)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Delete
Delete indicates that the group should be deleted after processing each barrier.
This includes the barrier node itself, meaning that if `delete` is `true`, the barrier
is triggered only once for each group and the barrier node forgets about the group.
The group will be created again if a new point is received for the group.
This is useful if you have increasing cardinality over time as once a barrier is
triggered for a group it is then deleted, freeing any resources managing the group.
```js
barrier.delete(value Boolean)
```
{{% note %}}
`delete` will free system resources used for managing groups and can help to maintain
the overall performance of Kapacitor, but these gains are minimal.
For information about optimizing tasks, see [How can I optimize Kapacitor tasks?](/kapacitor/v1.5/troubleshooting/frequently-asked-questions/#how-can-i-optimize-kapacitor-tasks)
{{% /note %}}
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Quiet
Suppress all error logging events from this node.
```js
barrier.quiet()
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
## Chaining Methods
Chaining methods create a new node in the pipeline as a child of the calling node.
They do not modify the calling node.
Chaining methods are marked using the `|` operator.
### Alert
Create an alert node, which can trigger alerts.
```js
barrier|alert()
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Barrier
Create a new Barrier node that emits a BarrierMessage periodically.
One BarrierMessage will be emitted every period duration.
```js
barrier|barrier()
```
Returns: [BarrierNode](/kapacitor/v1.5/nodes/barrier_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Bottom
Select the bottom `num` points for `field` and sort by any extra tags or fields.
```js
barrier|bottom(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### ChangeDetect
Create a new node that only emits new points if different from the previous point.
```js
barrier|changeDetect(field string)
```
Returns: [ChangeDetectNode](/kapacitor/v1.5/nodes/change_detect_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Combine
Combine this node with itself. The data is combined on timestamp.
```js
barrier|combine(expressions ...ast.LambdaNode)
```
Returns: [CombineNode](/kapacitor/v1.5/nodes/combine_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Count
Count the number of points.
```js
barrier|count(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### CumulativeSum
Compute a cumulative sum of each point that is received.
A point is emitted for every point collected.
```js
barrier|cumulativeSum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Deadman
Helper function for creating an alert on low throughput, a.k.a. deadman's switch.
- Threshold: trigger alert if throughput drops below threshold in points/interval.
- Interval: how often to check the throughput.
- Expressions: optional list of expressions to also evaluate. Useful for time of day alerting.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
//Do normal processing of data
data...
```
The above is equivalent to this example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|stats(10s)
.align()
|derivative('emitted')
.unit(10s)
.nonNegative()
|alert()
.id('node \'stream0\' in task \'{{ .TaskName }}\'')
.message('{{ .ID }} is {{ if eq .Level "OK" }}alive{{ else }}dead{{ end }}: {{ index .Fields "emitted" | printf "%0.3f" }} points/10s.')
.crit(lambda: "emitted" <= 100.0)
//Do normal processing of data
data...
```
The `id` and `message` alert properties can be configured globally via the 'deadman' configuration section.
Since the [AlertNode](/kapacitor/v1.5/nodes/alert_node/) is the last piece it can be further modified as usual.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
.slack()
.channel('#dead_tasks')
//Do normal processing of data
data...
```
You can specify additional lambda expressions to further constrain when the deadman's switch is triggered.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
// Only trigger the alert if the time of day is between 8am-5pm.
data
|deadman(100.0, 10s, lambda: hour("time") >= 8 AND hour("time") <= 17)
//Do normal processing of data
data...
```
```js
barrier|deadman(threshold float64, interval time.Duration, expr ...ast.LambdaNode)
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Default
Create a node that can set defaults for missing tags or fields.
```js
barrier|default()
```
Returns: [DefaultNode](/kapacitor/v1.5/nodes/default_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Delete
Create a node that can delete tags or fields.
```js
barrier|delete()
```
Returns: [DeleteNode](/kapacitor/v1.5/nodes/delete_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Derivative
Create a new node that computes the derivative of adjacent points.
```js
barrier|derivative(field string)
```
Returns: [DerivativeNode](/kapacitor/v1.5/nodes/derivative_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Difference
Compute the difference between points independent of elapsed time.
```js
barrier|difference(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Distinct
Produce a batch containing only the distinct points.
```js
barrier|distinct(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Ec2Autoscale
Create a node that can trigger autoscale events for an EC2 autoscale group.
```js
barrier|ec2Autoscale()
```
Returns: [Ec2AutoscaleNode](/kapacitor/v1.5/nodes/ec2_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Elapsed
Compute the elapsed time between points.
```js
barrier|elapsed(field string, unit time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
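For example, a sketch that measures the time between consecutive points in seconds and warns when points arrive too slowly (measurement and threshold are illustrative):

```js
stream
    |from()
        .measurement('requests')
    |elapsed('value', 1s)
    // The computed elapsed time is written to the `elapsed` field.
    |alert()
        .warn(lambda: "elapsed" > 10)
```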
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Eval
Create an eval node that evaluates the given transformation functions for each data point.
A list of expressions may be provided; they are evaluated in the order given,
and the results of earlier expressions are available to later expressions.
```js
barrier|eval(expressions ...ast.LambdaNode)
```
Returns: [EvalNode](/kapacitor/v1.5/nodes/eval_node/)
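For example, a sketch in which the result of the first expression is referenced by the second (field names are illustrative):

```js
stream
    |from()
        .measurement('requests')
    |eval(lambda: "errors" / "total", lambda: "error_rate" * 100.0)
        // The first result is named 'error_rate' and is available
        // to the second expression.
        .as('error_rate', 'error_percent')
```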
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### First
Select the first point.
```js
barrier|first(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Flatten
Flatten points with similar times into a single point.
```js
barrier|flatten()
```
Returns: [FlattenNode](/kapacitor/v1.5/nodes/flatten_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### GroupBy
Group the data by a set of tags.
Pass the literal `*` to group by all dimensions.
Example:
```js
|groupBy(*)
```
```js
barrier|groupBy(tag ...interface{})
```
Returns: [GroupByNode](/kapacitor/v1.5/nodes/group_by_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWinters
Compute the Holt-Winters (https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
```js
barrier|holtWinters(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWintersWithFit
Compute the Holt-Winters (https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
This method also outputs all the points used to fit the data in addition to the forecasted data.
```js
barrier|holtWintersWithFit(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpOut
Create an HTTP output node that caches the most recent data it has received.
The cached data is available at the given endpoint.
The endpoint is the relative path from the API endpoint of the running task.
For example, if the task endpoint is at `/kapacitor/v1/tasks/<task_id>` and endpoint is
`top10`, then the data can be requested from `/kapacitor/v1/tasks/<task_id>/top10`.
```js
barrier|httpOut(endpoint string)
```
Returns: [HTTPOutNode](/kapacitor/v1.5/nodes/http_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpPost
Create an HTTP POST node that POSTs received data to the provided HTTP endpoint.
HttpPost expects zero or one arguments. If no arguments are provided, you must specify an
endpoint property method.
```js
barrier|httpPost(url ...string)
```
Returns: [HTTPPostNode](/kapacitor/v1.5/nodes/http_post_node/)
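For example, posting to a hypothetical URL, or to a named endpoint defined in the `[[httppost]]` section of the Kapacitor configuration:

```js
stream
    |from()
        .measurement('requests')
    // POST each point to an explicit URL...
    |httpPost('https://example.com/api/points')

stream
    |from()
        .measurement('errors')
    // ...or to a named endpoint from the configuration file.
    |httpPost()
        .endpoint('example')
```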
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### InfluxDBOut
Create an InfluxDB output node that stores the incoming data in InfluxDB.
```js
barrier|influxDBOut()
```
Returns: [InfluxDBOutNode](/kapacitor/v1.5/nodes/influx_d_b_out_node/)
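For example, writing the processed data to a hypothetical database and measurement:

```js
stream
    |from()
        .measurement('requests')
    |influxDBOut()
        .database('mydb')
        .retentionPolicy('autogen')
        .measurement('processed_requests')
```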
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Join
Join this node with other nodes. The data is joined on timestamp.
```js
barrier|join(others ...Node)
```
Returns: [JoinNode](/kapacitor/v1.5/nodes/join_node/)
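For example, a common sketch joins an error stream with a request stream to compute an error rate (measurement and field names are illustrative):

```js
var errors = stream
    |from()
        .measurement('errors')
var requests = stream
    |from()
        .measurement('requests')

errors
    |join(requests)
        // Prefix the fields from each parent so they can be referenced below.
        .as('errors', 'requests')
    |eval(lambda: "errors.value" / "requests.value")
        .as('error_rate')
```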
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### K8sAutoscale
Create a node that can trigger autoscale events for a kubernetes cluster.
```js
barrier|k8sAutoscale()
```
Returns: [K8sAutoscaleNode](/kapacitor/v1.5/nodes/k8s_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### KapacitorLoopback
Create a Kapacitor loopback node that sends data back into Kapacitor as a stream.
```js
barrier|kapacitorLoopback()
```
Returns: [KapacitorLoopbackNode](/kapacitor/v1.5/nodes/kapacitor_loopback_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Last
Select the last point.
```js
barrier|last(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Log
Create a node that logs all data it receives.
```js
barrier|log()
```
Returns: [LogNode](/kapacitor/v1.5/nodes/log_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Max
Select the maximum point.
```js
barrier|max(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mean
Compute the mean of the data.
```js
barrier|mean(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Median
Compute the median of the data.
> **Note:** This method is not a selector.
If you want the median point, use `.percentile(field, 50.0)`.
```js
barrier|median(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Min
Select the minimum point.
```js
barrier|min(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mode
Compute the mode of the data.
```js
barrier|mode(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### MovingAverage
Compute a moving average of the last window points.
No points are emitted until the window is full.
```js
barrier|movingAverage(field string, window int64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Percentile
Select a point at the given percentile. This is a selector function; no interpolation between points is performed.
```js
barrier|percentile(field string, percentile float64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sample
Create a new node that samples the incoming points or batches.
One point is emitted for every specified count of points or every specified duration.
```js
barrier|sample(rate interface{})
```
Returns: [SampleNode](/kapacitor/v1.5/nodes/sample_node/)
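For example (measurement illustrative):

```js
stream
    |from()
        .measurement('requests')
    // Keep one point out of every 10 received.
    |sample(10)
```

Passing a duration instead, e.g. `sample(10s)`, emits at most one point per interval.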
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Shift
Create a new node that shifts the incoming points or batches in time.
```js
barrier|shift(shift time.Duration)
```
Returns: [ShiftNode](/kapacitor/v1.5/nodes/shift_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sideload
Create a node that can load data from external sources.
```js
barrier|sideload()
```
Returns: [SideloadNode](/kapacitor/v1.5/nodes/sideload_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Spread
Compute the difference between `min` and `max` points.
```js
barrier|spread(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateCount
Create a node that tracks the number of consecutive points in a given state.
```js
barrier|stateCount(expression ast.LambdaNode)
```
Returns: [StateCountNode](/kapacitor/v1.5/nodes/state_count_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateDuration
Create a node that tracks duration in a given state.
```js
barrier|stateDuration(expression ast.LambdaNode)
```
Returns: [StateDurationNode](/kapacitor/v1.5/nodes/state_duration_node/)
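For example, a sketch that alerts when a host's idle CPU has stayed below 10% for five minutes or more (measurement, field, and thresholds are illustrative):

```js
stream
    |from()
        .measurement('cpu')
    |groupBy('host')
    |stateDuration(lambda: "usage_idle" < 10.0)
        // Report the duration in minutes in the `state_duration` field.
        .unit(1m)
    |alert()
        .crit(lambda: "state_duration" >= 5.0)
```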
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stats
Create a new stream of data that contains the internal statistics of the node.
The interval represents how often to emit the statistics based on real time.
This means the interval time is independent of the times of the data points the source node is receiving.
```js
barrier|stats(interval time.Duration)
```
Returns: [StatsNode](/kapacitor/v1.5/nodes/stats_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stddev
Compute the standard deviation.
```js
barrier|stddev(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sum
Compute the sum of all values.
```js
barrier|sum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### SwarmAutoscale
Create a node that can trigger autoscale events for a Docker swarm cluster.
```js
barrier|swarmAutoscale()
```
Returns: [SwarmAutoscaleNode](/kapacitor/v1.5/nodes/swarm_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Top
Select the top `num` points for `field` and sort by any extra tags or fields.
```js
barrier|top(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
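For example, a sketch that selects the three hosts with the highest values in each window (names are illustrative):

```js
stream
    |from()
        .measurement('requests')
    |window()
        .period(10s)
        .every(10s)
    |top(3, 'value', 'host')
```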
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Union
Perform the union of this node and all other given nodes.
```js
barrier|union(node ...Node)
```
Returns: [UnionNode](/kapacitor/v1.5/nodes/union_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Where
Create a new node that filters the data stream by a given expression.
```js
barrier|where(expression ast.LambdaNode)
```
Returns: [WhereNode](/kapacitor/v1.5/nodes/where_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Window
Create a new node that windows the stream by time.
NOTE: Window can only be applied to stream edges.
```js
barrier|window()
```
Returns: [WindowNode](/kapacitor/v1.5/nodes/window_node/)
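For example, collecting overlapping 10-minute windows emitted every 5 minutes and computing their mean (measurement and field are illustrative):

```js
stream
    |from()
        .measurement('cpu')
    |window()
        .period(10m)
        .every(5m)
    |mean('usage_idle')
```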
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
---
title: BatchNode
description: BatchNode handles creating several child QueryNodes. Each call to `query` creates a child batch node that can further be configured. The `batch` variable in batch tasks is an instance of BatchNode.
note: Auto generated by tickdoc
menu:
kapacitor_1_5_ref:
name: BatchNode
identifier: batch_node
weight: 4
parent: nodes
---
The `batch` node handles the creation of several child QueryNodes.
Each call to [`query`](/kapacitor/v1.5/nodes/query_node) creates a child batch node that
can further be configured. *See [QueryNode](/kapacitor/v1.5/nodes/query_node/)*.
The `batch` variable in batch tasks is an instance of
a [BatchNode.](/kapacitor/v1.5/nodes/batch_node/)
> A **QueryNode** is required when using **BatchNode**.
> It defines the source and schedule for batch data and should be used before
> any other [chaining methods](#chaining-methods-1).
Example:
```js
var errors = batch
|query('SELECT value from errors')
...
var views = batch
|query('SELECT value from views')
...
```
Available Statistics:
* query_errors: number of errors when querying
* batches_queried: number of batches returned from queries
* points_queried: total number of points in batches
### Constructor
| Chaining Method | Description |
|:---------|:---------|
| **batch** | Has no constructor signature. |
### Property Methods
| Setters | Description |
|:--------|:------------|
| **[quiet](#quiet)&nbsp;(&nbsp;)** | Suppress all error logging events from this node. |
### Chaining Methods
[Deadman](#deadman),
[Query](#query),
[Stats](#stats)
---
## Properties
Property methods modify state on the calling node.
They do not add another node to the pipeline, and always return a reference to the calling node.
Property methods are marked using the `.` operator.
### Quiet
Suppress all error logging events from this node.
```js
batch.quiet()
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
## Chaining Methods
Chaining methods create a new node in the pipeline as a child of the calling node.
They do not modify the calling node.
Chaining methods are marked using the `|` operator.
### Deadman
Helper function for creating an alert on low throughput, a.k.a. deadman's switch.
- Threshold: trigger alert if throughput drops below threshold in points/interval.
- Interval: how often to check the throughput.
- Expressions: optional list of expressions to also evaluate. Useful for time of day alerting.
Example:
```js
var data = batch
|query()...
// Trigger a critical alert if throughput drops below 100 points per 10s. Checked every 10s.
data
|deadman(100.0, 10s)
//Do normal processing of data
data...
```
The above is equivalent to this example:
```js
var data = batch
|query()...
// Trigger a critical alert if throughput drops below 100 points per 10s. Checked every 10s.
data
|stats(10s)
.align()
|derivative('emitted')
.unit(10s)
.nonNegative()
|alert()
.id('node \'stream0\' in task \'{{ .TaskName }}\'')
.message('{{ .ID }} is {{ if eq .Level "OK" }}alive{{ else }}dead{{ end }}: {{ index .Fields "emitted" | printf "%0.3f" }} points/10s.')
.crit(lambda: "emitted" <= 100.0)
//Do normal processing of data
data...
```
The `id` and `message` alert properties can be configured globally in the `deadman` configuration section.
Since the [AlertNode](/kapacitor/v1.5/nodes/alert_node/) is the last piece of the pipeline, it can be further modified as usual.
Example:
```js
var data = batch
|query()...
// Trigger a critical alert if throughput drops below 100 points per 10s. Checked every 10s.
data
|deadman(100.0, 10s)
.slack()
.channel('#dead_tasks')
//Do normal processing of data
data...
```
You can specify additional lambda expressions to further constrain when the deadman's switch is triggered.
Example:
```js
var data = batch
|query()...
// Trigger a critical alert if throughput drops below 100 points per 10s. Checked every 10s.
// Only trigger the alert if the time of day is between 8 AM and 5 PM.
data
|deadman(100.0, 10s, lambda: hour("time") >= 8 AND hour("time") <= 17)
//Do normal processing of data
data...
```
```js
batch|deadman(threshold float64, interval time.Duration, expr ...ast.LambdaNode)
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Query
The query to execute. Must not contain a time condition
in the `WHERE` clause or contain a `GROUP BY` clause.
The time conditions are added dynamically according to the period, offset, and schedule.
The `GROUP BY` clause is added dynamically according to the dimensions
passed to the `groupBy` method.
```js
batch|query(q string)
```
Returns: [QueryNode](/kapacitor/v1.5/nodes/query_node/)
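For example, a hypothetical batch task that queries mean idle CPU every five minutes (database, retention policy, and field names are illustrative):

```js
batch
    |query('SELECT mean(usage_idle) AS usage_idle FROM "telegraf"."autogen"."cpu"')
        .period(10m)
        .every(5m)
        .groupBy('host')
```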
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stats
Create a new stream of data that contains the internal statistics of the node.
The interval represents how often to emit the statistics based on real time.
This means the interval time is independent of the times of the data points the source node is receiving.
```js
batch|stats(interval time.Duration)
```
Returns: [StatsNode](/kapacitor/v1.5/nodes/stats_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
---
title: ChangeDetectNode (Kapacitor TICKscript node)
description: ChangeDetectNode creates a new node that only emits new points if different from the previous point.
menu:
kapacitor_1_5_ref:
name: ChangeDetectNode
identifier: change_detect_node
weight: 40
parent: nodes
---
The `changeDetect` node creates a new node that emits new points only if different from the previous point.
The `changeDetect` node can monitor multiple fields.
##### Example changeDetect node
```js
stream
|from().measurement('packets')
|changeDetect('field_a','field_b')
```
### Constructor
| Chaining Method | Description |
|:--------------- |:----------- |
| **changeDetect&nbsp;(&nbsp;`fields`&nbsp;`...string`)** | Create a new node that emits new points only if different from the previous point |
### Property methods
| Setters | Description |
|:------- |:----------- |
| **[quiet](#quiet)&nbsp;(&nbsp;)** | Suppresses all error logging events from this node. |
### Chaining methods
[Alert](#alert),
[Barrier](#barrier),
[Bottom](#bottom),
[ChangeDetect](#changedetect),
[Combine](#combine),
[Count](#count),
[CumulativeSum](#cumulativesum),
[Deadman](#deadman),
[Default](#default),
[Delete](#delete),
[Derivative](#derivative),
[Difference](#difference),
[Distinct](#distinct),
[Ec2Autoscale](#ec2autoscale),
[Elapsed](#elapsed),
[Eval](#eval),
[First](#first),
[Flatten](#flatten),
[GroupBy](#groupby),
[HoltWinters](#holtwinters),
[HoltWintersWithFit](#holtwinterswithfit),
[HttpOut](#httpout),
[HttpPost](#httppost),
[InfluxDBOut](#influxdbout),
[Join](#join),
[K8sAutoscale](#k8sautoscale),
[KapacitorLoopback](#kapacitorloopback),
[Last](#last),
[Log](#log),
[Max](#max),
[Mean](#mean),
[Median](#median),
[Min](#min),
[Mode](#mode),
[MovingAverage](#movingaverage),
[Percentile](#percentile),
[Sample](#sample),
[Shift](#shift),
[Sideload](#sideload),
[Spread](#spread),
[StateCount](#statecount),
[StateDuration](#stateduration),
[Stats](#stats),
[Stddev](#stddev),
[Sum](#sum),
[SwarmAutoscale](#swarmautoscale),
[Top](#top),
[Union](#union),
[Where](#where),
[Window](#window)
---
## Properties
Property methods modify state on the calling node.
They do not add another node to the pipeline, and always return a reference to the calling node.
Property methods are marked using the `.` operator.
### Quiet
Suppress all error logging events from this node.
```js
changeDetect.quiet()
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
## Chaining Methods
Chaining methods create a new node in the pipeline as a child of the calling node.
They do not modify the calling node.
Chaining methods are marked using the `|` operator.
### Alert
Create an alert node, which can trigger alerts.
```js
changeDetect|alert()
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Barrier
Create a new Barrier node that emits a BarrierMessage periodically.
One BarrierMessage will be emitted every period duration.
```js
changeDetect|barrier()
```
Returns: [BarrierNode](/kapacitor/v1.5/nodes/barrier_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Bottom
Select the bottom `num` points for `field` and sort by any extra tags or fields.
```js
changeDetect|bottom(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### ChangeDetect
Create a new node that emits new points only if they differ from the previous point.
```js
changeDetect|changeDetect(field string)
```
Returns: [ChangeDetectNode](/kapacitor/v1.5/nodes/change_detect_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Combine
Combine this node with itself. The data is combined on timestamp.
```js
changeDetect|combine(expressions ...ast.LambdaNode)
```
Returns: [CombineNode](/kapacitor/v1.5/nodes/combine_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Count
Count the number of points.
```js
changeDetect|count(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### CumulativeSum
Compute a cumulative sum of each point that is received.
A point is emitted for every point collected.
```js
changeDetect|cumulativeSum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Deadman
Helper function for creating an alert on low throughput, a.k.a. deadman's switch.
- Threshold: trigger alert if throughput drops below threshold in points/interval.
- Interval: how often to check the throughput.
- Expressions: optional list of expressions to also evaluate. Useful for time of day alerting.
Example:
```js
var data = stream
|from()...
// Trigger a critical alert if throughput drops below 100 points per 10s. Checked every 10s.
data
|deadman(100.0, 10s)
//Do normal processing of data
data...
```
The above is equivalent to this example:
```js
var data = stream
|from()...
// Trigger a critical alert if throughput drops below 100 points per 10s. Checked every 10s.
data
|stats(10s)
.align()
|derivative('emitted')
.unit(10s)
.nonNegative()
|alert()
.id('node \'stream0\' in task \'{{ .TaskName }}\'')
.message('{{ .ID }} is {{ if eq .Level "OK" }}alive{{ else }}dead{{ end }}: {{ index .Fields "emitted" | printf "%0.3f" }} points/10s.')
.crit(lambda: "emitted" <= 100.0)
//Do normal processing of data
data...
```
The `id` and `message` alert properties can be configured globally in the `deadman` configuration section.
Since the [AlertNode](/kapacitor/v1.5/nodes/alert_node/) is the last piece of the pipeline, it can be further modified as usual.
Example:
```js
var data = stream
|from()...
// Trigger a critical alert if throughput drops below 100 points per 10s. Checked every 10s.
data
|deadman(100.0, 10s)
.slack()
.channel('#dead_tasks')
//Do normal processing of data
data...
```
You can specify additional lambda expressions to further constrain when the deadman's switch is triggered.
Example:
```js
var data = stream
|from()...
// Trigger a critical alert if throughput drops below 100 points per 10s. Checked every 10s.
// Only trigger the alert if the time of day is between 8 AM and 5 PM.
data
|deadman(100.0, 10s, lambda: hour("time") >= 8 AND hour("time") <= 17)
//Do normal processing of data
data...
```
```js
changeDetect|deadman(threshold float64, interval time.Duration, expr ...ast.LambdaNode)
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Default
Create a node that can set defaults for missing tags or fields.
```js
changeDetect|default()
```
Returns: [DefaultNode](/kapacitor/v1.5/nodes/default_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Delete
Create a node that can delete tags or fields.
```js
changeDetect|delete()
```
Returns: [DeleteNode](/kapacitor/v1.5/nodes/delete_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Derivative
Create a new node that computes the derivative of adjacent points.
```js
changeDetect|derivative(field string)
```
Returns: [DerivativeNode](/kapacitor/v1.5/nodes/derivative_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Difference
Compute the difference between points independent of elapsed time.
```js
changeDetect|difference(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Distinct
Produce a batch containing only the distinct points.
```js
changeDetect|distinct(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Ec2Autoscale
Create a node that can trigger autoscale events for an EC2 autoscale group.
```js
changeDetect|ec2Autoscale()
```
Returns: [Ec2AutoscaleNode](/kapacitor/v1.5/nodes/ec2_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Elapsed
Compute the elapsed time between points.
```js
changeDetect|elapsed(field string, unit time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Eval
Create an eval node that evaluates the given transformation functions for each data point.
A list of expressions may be provided; they are evaluated in the order given,
and the results of earlier expressions are available to later expressions.
```js
changeDetect|eval(expressions ...ast.LambdaNode)
```
Returns: [EvalNode](/kapacitor/v1.5/nodes/eval_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### First
Select the first point.
```js
changeDetect|first(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Flatten
Flatten points with similar times into a single point.
```js
changeDetect|flatten()
```
Returns: [FlattenNode](/kapacitor/v1.5/nodes/flatten_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### GroupBy
Group the data by a set of tags.
Pass the literal `*` to group by all dimensions.
Example:
```js
|groupBy(*)
```
```js
changeDetect|groupBy(tag ...interface{})
```
Returns: [GroupByNode](/kapacitor/v1.5/nodes/group_by_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWinters
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
```js
changeDetect|holtWinters(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWintersWithFit
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
This method also outputs all the points used to fit the data in addition to the forecasted data.
```js
changeDetect|holtWintersWithFit(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpOut
Create an HTTP output node that caches the most recent data it has received.
The cached data is available at the given endpoint.
The endpoint is the relative path from the API endpoint of the running task.
For example, if the task endpoint is at `/kapacitor/v1/tasks/<task_id>` and endpoint is
`top10`, then the data can be requested from `/kapacitor/v1/tasks/<task_id>/top10`.
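For example, to cache the most recent data at an endpoint named `top10` (the endpoint name is illustrative):
```js
|httpOut('top10')
```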
```js
changeDetect|httpOut(endpoint string)
```
Returns: [HTTPOutNode](/kapacitor/v1.5/nodes/http_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpPost
Creates an HTTP Post node that POSTs received data to the provided HTTP endpoint.
HttpPost expects 0 or 1 arguments. If 0 arguments are provided, you must specify an
endpoint property method.
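For example, posting each point to a hypothetical URL, or referencing an endpoint defined in the Kapacitor configuration via the `endpoint` property:
```js
// POST to an explicit URL (URL is illustrative)
|httpPost('http://example.com/api')
```
```js
// or reference a configured endpoint by name
|httpPost()
    .endpoint('example')
```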
```js
changeDetect|httpPost(url ...string)
```
Returns: [HTTPPostNode](/kapacitor/v1.5/nodes/http_post_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### InfluxDBOut
Create an InfluxDB output node that will store the incoming data into InfluxDB.
```js
changeDetect|influxDBOut()
```
Returns: [InfluxDBOutNode](/kapacitor/v1.5/nodes/influx_d_b_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Join
Join this node with other nodes. The data is joined on timestamp.
```js
changeDetect|join(others ...Node)
```
Returns: [JoinNode](/kapacitor/v1.5/nodes/join_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### K8sAutoscale
Create a node that can trigger autoscale events for a Kubernetes cluster.
```js
changeDetect|k8sAutoscale()
```
Returns: [K8sAutoscaleNode](/kapacitor/v1.5/nodes/k8s_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### KapacitorLoopback
Create a Kapacitor loopback node that will send data back into Kapacitor as a stream.
```js
changeDetect|kapacitorLoopback()
```
Returns: [KapacitorLoopbackNode](/kapacitor/v1.5/nodes/kapacitor_loopback_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Last
Select the last point.
```js
changeDetect|last(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Log
Create a node that logs all data it receives.
```js
changeDetect|log()
```
Returns: [LogNode](/kapacitor/v1.5/nodes/log_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Max
Select the maximum point.
```js
changeDetect|max(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mean
Compute the mean of the data.
```js
changeDetect|mean(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Median
Compute the median of the data.
> **Note:** This method is not a selector.
If you want the median point, use `.percentile(field, 50.0)`.
```js
changeDetect|median(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Min
Select the minimum point.
```js
changeDetect|min(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mode
Compute the mode of the data.
```js
changeDetect|mode(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### MovingAverage
Compute a moving average of the last `window` points.
No points are emitted until the window is full.
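For example, a moving average over the last 10 points of a field (field name is illustrative):
```js
|movingAverage('value', 10)
```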
```js
changeDetect|movingAverage(field string, window int64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Percentile
Select a point at the given percentile. This is a selector function; no interpolation between points is performed.
```js
changeDetect|percentile(field string, percentile float64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sample
Create a new node that samples the incoming points or batches.
One point is emitted for every `count` points received or once per specified duration.
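For example, sampling by point count or by duration:
```js
// emit one point out of every 10 received
|sample(10)
```
```js
// or emit one point per minute
|sample(1m)
```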
```js
changeDetect|sample(rate interface{})
```
Returns: [SampleNode](/kapacitor/v1.5/nodes/sample_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Shift
Create a new node that shifts the incoming points or batches in time.
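For example, to shift points five minutes into the past, such as when aligning delayed data with another stream:
```js
|shift(-5m)
```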
```js
changeDetect|shift(shift time.Duration)
```
Returns: [ShiftNode](/kapacitor/v1.5/nodes/shift_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sideload
Create a node that can load data from external sources.
```js
changeDetect|sideload()
```
Returns: [SideloadNode](/kapacitor/v1.5/nodes/sideload_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Spread
Compute the difference between `min` and `max` points.
```js
changeDetect|spread(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateCount
Create a node that tracks the number of consecutive points in a given state.
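For example, counting consecutive points where a field exceeds a threshold (field name and threshold are illustrative):
```js
|stateCount(lambda: "cpu" > 80)
```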
```js
changeDetect|stateCount(expression ast.LambdaNode)
```
Returns: [StateCountNode](/kapacitor/v1.5/nodes/state_count_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateDuration
Create a node that tracks duration in a given state.
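For example, tracking how long a field has exceeded a threshold, reported in minutes (field name and threshold are illustrative):
```js
|stateDuration(lambda: "cpu" > 80)
    .unit(1m)
```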
```js
changeDetect|stateDuration(expression ast.LambdaNode)
```
Returns: [StateDurationNode](/kapacitor/v1.5/nodes/state_duration_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stats
Create a new stream of data that contains the internal statistics of the node.
The interval represents how often to emit the statistics based on real time.
This means the interval time is independent of the times of the data points the source node is receiving.
```js
changeDetect|stats(interval time.Duration)
```
Returns: [StatsNode](/kapacitor/v1.5/nodes/stats_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stddev
Compute the standard deviation.
```js
changeDetect|stddev(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sum
Compute the sum of all values.
```js
changeDetect|sum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### SwarmAutoscale
Create a node that can trigger autoscale events for a Docker swarm cluster.
```js
changeDetect|swarmAutoscale()
```
Returns: [SwarmAutoscaleNode](/kapacitor/v1.5/nodes/swarm_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Top
Select the top `num` points for `field` and sort by any extra tags or fields.
```js
changeDetect|top(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Union
Perform the union of this node and all other given nodes.
```js
changeDetect|union(node ...Node)
```
Returns: [UnionNode](/kapacitor/v1.5/nodes/union_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Where
Create a new node that filters the data stream by a given expression.
```js
changeDetect|where(expression ast.LambdaNode)
```
Returns: [WhereNode](/kapacitor/v1.5/nodes/where_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Window
Create a new node that windows the stream by time.
NOTE: Window can only be applied to stream edges.
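For example, on a stream edge, emitting ten-minute windows of data every five minutes (measurement name is illustrative):
```js
stream
    |from()
        .measurement('cpu')
    |window()
        .period(10m)
        .every(5m)
```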
```js
changeDetect|window()
```
Returns: [WindowNode](/kapacitor/v1.5/nodes/window_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
---
title: CombineNode
description: CombineNode combines data from a single node with itself. Points with the same time are grouped and then combinations are created. The size of the combinations is defined by how many expressions are given.
note: Auto generated by tickdoc
menu:
kapacitor_1_5_ref:
name: CombineNode
identifier: combine_node
weight: 40
parent: nodes
---
The `combine` node combines data from a single node with itself.
Points with the same time are grouped and then combinations are created.
The size of the combinations is defined by how many expressions are given.
Combinations are order-independent and will never include the same point multiple times.
In the following example, data points for the `login` service are combined with
the data points from all other services:
```js
stream
|from()
.measurement('request_latency')
|combine(lambda: "service" == 'login', lambda: TRUE)
.as('login', 'other')
// points that are within 1 second are considered the same time.
.tolerance(1s)
// delimiter for new field and tag names
.delimiter('.')
// Change group by to be new other.service tag
|groupBy('other.service')
// Both the "value" fields from each data point have been prefixed
// with the respective names 'login' and 'other'.
|eval(lambda: "login.value" / "other.value")
.as('ratio')
...
```
In the following example, all combination pairs are created:
```js
|combine(lambda: TRUE, lambda: TRUE)
.as('login', 'other')
```
In the following example, all combinations triples are created:
```js
|combine(lambda: TRUE, lambda: TRUE, lambda: TRUE)
.as('login', 'other', 'another')
```
### Constructor
| Chaining Method | Description |
|:---------|:---------|
| **combine&nbsp;(&nbsp;`expressions`&nbsp;`...ast.LambdaNode`)** | Combine this node with itself. The data is combined on timestamp. |
### Property Methods
| Setters | Description |
|:---|:---|
| **[as](#as)&nbsp;(&nbsp;`names`&nbsp;`...string`)** | Prefix names for all fields from the respective nodes. Each field from the parent nodes will be prefixed with the provided name and a '.'. See the example above. |
| **[delimiter](#delimiter)&nbsp;(&nbsp;`value`&nbsp;`string`)** | The delimiter between the As names and existing field and tag keys. Can be the empty string, but you are responsible for ensuring conflicts are not possible if you use the empty string. |
| **[max](#max)&nbsp;(&nbsp;`value`&nbsp;`int64`)** | Maximum number of possible combinations. Since the number of possible combinations can grow very rapidly you can set a maximum number of combinations allowed. If the max is exceeded, an error is logged and the combinations are not calculated. Default: 10,000 |
| **[quiet](#quiet)&nbsp;(&nbsp;)** | Suppress all error logging events from this node. |
| **[tolerance](#tolerance)&nbsp;(&nbsp;`value`&nbsp;`time.Duration`)** | The maximum duration of time that two incoming points can be apart and still be considered to be equal in time. The joined data point's time will be rounded to the nearest multiple of the tolerance duration. |
### Chaining Methods
[Alert](#alert),
[Barrier](#barrier),
[Bottom](#bottom),
[ChangeDetect](#changedetect),
[Combine](#combine),
[Count](#count),
[CumulativeSum](#cumulativesum),
[Deadman](#deadman),
[Default](#default),
[Delete](#delete),
[Derivative](#derivative),
[Difference](#difference),
[Distinct](#distinct),
[Ec2Autoscale](#ec2autoscale),
[Elapsed](#elapsed),
[Eval](#eval),
[First](#first),
[Flatten](#flatten),
[GroupBy](#groupby),
[HoltWinters](#holtwinters),
[HoltWintersWithFit](#holtwinterswithfit),
[HttpOut](#httpout),
[HttpPost](#httppost),
[InfluxDBOut](#influxdbout),
[Join](#join),
[K8sAutoscale](#k8sautoscale),
[KapacitorLoopback](#kapacitorloopback),
[Last](#last),
[Log](#log),
[Mean](#mean),
[Median](#median),
[Min](#min),
[Mode](#mode),
[MovingAverage](#movingaverage),
[Percentile](#percentile),
[Sample](#sample),
[Shift](#shift),
[Sideload](#sideload),
[Spread](#spread),
[StateCount](#statecount),
[StateDuration](#stateduration),
[Stats](#stats),
[Stddev](#stddev),
[Sum](#sum),
[SwarmAutoscale](#swarmautoscale),
[Top](#top),
[Union](#union),
[Where](#where),
[Window](#window)
---
## Properties
Property methods modify state on the calling node.
They do not add another node to the pipeline, and always return a reference to the calling node.
Property methods are marked using the `.` operator.
### As
Prefix names for all fields from the respective nodes.
Each field from the parent nodes will be prefixed with the provided name and a `.`.
See the example above.
The names cannot have a dot `.` character.
```js
combine.as(names ...string)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Delimiter
The delimiter between the As names and existing field and tag keys.
Can be the empty string, but you are responsible for ensuring conflicts are not possible if you use the empty string.
```js
combine.delimiter(value string)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Max
Maximum number of possible combinations.
Since the number of possible combinations can grow very rapidly,
you can set a maximum number of combinations allowed.
If the max is exceeded, an error is logged and the combinations are not calculated.
**Default:** 10,000
```js
combine.max(value int64)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Quiet
Suppress all error logging events from this node.
```js
combine.quiet()
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Tolerance
The maximum duration of time that two incoming points
can be apart and still be considered to be equal in time.
The joined data point's time will be rounded to the nearest
multiple of the tolerance duration.
```js
combine.tolerance(value time.Duration)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
## Chaining Methods
Chaining methods create a new node in the pipeline as a child of the calling node.
They do not modify the calling node.
Chaining methods are marked using the `|` operator.
### Alert
Create an alert node, which can trigger alerts.
```js
combine|alert()
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Barrier
Create a new Barrier node that emits a BarrierMessage periodically.
One BarrierMessage will be emitted every period duration.
```js
combine|barrier()
```
Returns: [BarrierNode](/kapacitor/v1.5/nodes/barrier_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Bottom
Select the bottom `num` points for `field` and sort by any extra tags or fields.
```js
combine|bottom(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### ChangeDetect
Create a new node that only emits new points if different from the previous point.
```js
combine|changeDetect(field string)
```
Returns: [ChangeDetectNode](/kapacitor/v1.5/nodes/change_detect_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Combine
Combine this node with itself. The data is combined on timestamp.
```js
combine|combine(expressions ...ast.LambdaNode)
```
Returns: [CombineNode](/kapacitor/v1.5/nodes/combine_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Count
Count the number of points.
```js
combine|count(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### CumulativeSum
Compute a cumulative sum of each point that is received.
A point is emitted for every point collected.
```js
combine|cumulativeSum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Deadman
Helper function for creating an alert on low throughput, a.k.a. deadman's switch.
- Threshold: trigger alert if throughput drops below threshold in points/interval.
- Interval: how often to check the throughput.
- Expressions: optional list of expressions to also evaluate. Useful for time of day alerting.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
//Do normal processing of data
data...
```
The above is equivalent to this example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|stats(10s)
.align()
|derivative('emitted')
.unit(10s)
.nonNegative()
|alert()
.id('node \'stream0\' in task \'{{ .TaskName }}\'')
.message('{{ .ID }} is {{ if eq .Level "OK" }}alive{{ else }}dead{{ end }}: {{ index .Fields "emitted" | printf "%0.3f" }} points/10s.')
.crit(lambda: "emitted" <= 100.0)
//Do normal processing of data
data...
```
The `id` and `message` alert properties can be configured globally via the 'deadman' configuration section.
Since the [AlertNode](/kapacitor/v1.5/nodes/alert_node/) is the last piece it can be further modified as usual.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
.slack()
.channel('#dead_tasks')
//Do normal processing of data
data...
```
You can specify additional lambda expressions to further constrain when the deadman's switch is triggered.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
// Only trigger the alert if the time of day is between 8am-5pm.
data
|deadman(100.0, 10s, lambda: hour("time") >= 8 AND hour("time") <= 17)
//Do normal processing of data
data...
```
```js
combine|deadman(threshold float64, interval time.Duration, expr ...ast.LambdaNode)
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Default
Create a node that can set defaults for missing tags or fields.
```js
combine|default()
```
Returns: [DefaultNode](/kapacitor/v1.5/nodes/default_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Delete
Create a node that can delete tags or fields.
```js
combine|delete()
```
Returns: [DeleteNode](/kapacitor/v1.5/nodes/delete_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Derivative
Create a new node that computes the derivative of adjacent points.
```js
combine|derivative(field string)
```
Returns: [DerivativeNode](/kapacitor/v1.5/nodes/derivative_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Difference
Compute the difference between points independent of elapsed time.
```js
combine|difference(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Distinct
Produce a batch containing only the distinct points.
```js
combine|distinct(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Ec2Autoscale
Create a node that can trigger autoscale events for an Amazon EC2 autoscale group.
```js
combine|ec2Autoscale()
```
Returns: [Ec2AutoscaleNode](/kapacitor/v1.5/nodes/ec2_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Elapsed
Compute the elapsed time between points.
```js
combine|elapsed(field string, unit time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Eval
Create an eval node that applies the given transformation functions to each data point.
A list of expressions may be provided and will be evaluated in the order they are given.
The results are available to later expressions.
```js
combine|eval(expressions ...ast.LambdaNode)
```
Returns: [EvalNode](/kapacitor/v1.5/nodes/eval_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### First
Select the first point.
```js
combine|first(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Flatten
Flatten points with similar times into a single point.
```js
combine|flatten()
```
Returns: [FlattenNode](/kapacitor/v1.5/nodes/flatten_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### GroupBy
Group the data by a set of tags.
Pass the literal `*` to group by all dimensions.
Example:
```js
|groupBy(*)
```
```js
combine|groupBy(tag ...interface{})
```
Returns: [GroupByNode](/kapacitor/v1.5/nodes/group_by_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWinters
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
```js
combine|holtWinters(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWintersWithFit
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
This method also outputs all the points used to fit the data in addition to the forecasted data.
```js
combine|holtWintersWithFit(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpOut
Create an HTTP output node that caches the most recent data it has received.
The cached data is available at the given endpoint.
The endpoint is the relative path from the API endpoint of the running task.
For example, if the task endpoint is at `/kapacitor/v1/tasks/<task_id>` and endpoint is
`top10`, then the data can be requested from `/kapacitor/v1/tasks/<task_id>/top10`.
```js
combine|httpOut(endpoint string)
```
Returns: [HTTPOutNode](/kapacitor/v1.5/nodes/http_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpPost
Creates an HTTP Post node that POSTs received data to the provided HTTP endpoint.
HttpPost expects 0 or 1 arguments. If 0 arguments are provided, you must specify an
endpoint property method.
```js
combine|httpPost(url ...string)
```
Returns: [HTTPPostNode](/kapacitor/v1.5/nodes/http_post_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### InfluxDBOut
Create an InfluxDB output node that will store the incoming data into InfluxDB.
```js
combine|influxDBOut()
```
Returns: [InfluxDBOutNode](/kapacitor/v1.5/nodes/influx_d_b_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Join
Join this node with other nodes. The data is joined on timestamp.
```js
combine|join(others ...Node)
```
Returns: [JoinNode](/kapacitor/v1.5/nodes/join_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### K8sAutoscale
Create a node that can trigger autoscale events for a Kubernetes cluster.
```js
combine|k8sAutoscale()
```
Returns: [K8sAutoscaleNode](/kapacitor/v1.5/nodes/k8s_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### KapacitorLoopback
Create a Kapacitor loopback node that will send data back into Kapacitor as a stream.
```js
combine|kapacitorLoopback()
```
Returns: [KapacitorLoopbackNode](/kapacitor/v1.5/nodes/kapacitor_loopback_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Last
Select the last point.
```js
combine|last(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Log
Create a node that logs all data it receives.
```js
combine|log()
```
Returns: [LogNode](/kapacitor/v1.5/nodes/log_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mean
Compute the mean of the data.
```js
combine|mean(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Median
Compute the median of the data.
> **Note:** This method is not a selector.
If you want the median point, use `.percentile(field, 50.0)`.
```js
combine|median(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Min
Select the minimum point.
```js
combine|min(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mode
Compute the mode of the data.
```js
combine|mode(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### MovingAverage
Compute a moving average of the last window points.
No points are emitted until the window is full.
```js
combine|movingAverage(field string, window int64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
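
For example, a task might smooth a noisy series over its last 10 points (the measurement and field names below are illustrative):

```js
stream
    |from()
        .measurement('cpu')
    // Average the last 10 values of 'usage_user'; nothing is
    // emitted until 10 points have been collected.
    |movingAverage('usage_user', 10)
```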
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Percentile
Select a point at the given percentile. This is a selector function; no interpolation between points is performed.
```js
combine|percentile(field string, percentile float64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sample
Create a new node that samples the incoming points or batches.
One point will be emitted every count or duration specified.
```js
combine|sample(rate interface{})
```
Returns: [SampleNode](/kapacitor/v1.5/nodes/sample_node/)
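
The rate may be either an integer count or a duration (the measurement name below is illustrative):

```js
// Keep one point out of every 10 received
stream
    |from()
        .measurement('requests')
    |sample(10)
```

A duration rate, such as `|sample(30s)`, instead emits at most one point per 30-second interval.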
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Shift
Create a new node that shifts the incoming points or batches in time.
```js
combine|shift(shift time.Duration)
```
Returns: [ShiftNode](/kapacitor/v1.5/nodes/shift_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sideload
Create a node that can load data from external sources.
```js
combine|sideload()
```
Returns: [SideloadNode](/kapacitor/v1.5/nodes/sideload_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Spread
Compute the difference between `min` and `max` points.
```js
combine|spread(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateCount
Create a node that tracks the number of consecutive points in a given state.
```js
combine|stateCount(expression ast.LambdaNode)
```
Returns: [StateCountNode](/kapacitor/v1.5/nodes/state_count_node/)
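
For example, a sketch that counts how many consecutive points have been below a threshold (the field name is illustrative):

```js
stream
    |from()
        .measurement('cpu')
    // Each point is annotated with the count of consecutive
    // points for which the expression has evaluated true.
    |stateCount(lambda: "usage_idle" < 10.0)
```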
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateDuration
Create a node that tracks the duration in a given state.
```js
combine|stateDuration(expression ast.LambdaNode)
```
Returns: [StateDurationNode](/kapacitor/v1.5/nodes/state_duration_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stats
Create a new stream of data that contains the internal statistics of the node.
The interval represents how often to emit the statistics based on real time.
This means the interval time is independent of the times of the data points the source node is receiving.
```js
combine|stats(interval time.Duration)
```
Returns: [StatsNode](/kapacitor/v1.5/nodes/stats_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stddev
Compute the standard deviation.
```js
combine|stddev(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sum
Compute the sum of all values.
```js
combine|sum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### SwarmAutoscale
Create a node that can trigger autoscale events for a Docker Swarm cluster.
```js
combine|swarmAutoscale()
```
Returns: [SwarmAutoscaleNode](/kapacitor/v1.5/nodes/swarm_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Top
Select the top `num` points for `field` and sort by any extra tags or fields.
```js
combine|top(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Union
Perform the union of this node and all other given nodes.
```js
combine|union(node ...Node)
```
Returns: [UnionNode](/kapacitor/v1.5/nodes/union_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Where
Create a new node that filters the data stream by a given expression.
```js
combine|where(expression ast.LambdaNode)
```
Returns: [WhereNode](/kapacitor/v1.5/nodes/where_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Window
Create a new node that windows the stream by time.
NOTE: Window can only be applied to stream edges.
```js
combine|window()
```
Returns: [WindowNode](/kapacitor/v1.5/nodes/window_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
---
title: DefaultNode
description: DefaultNode sets default values for fields and tags on data points.
note: Auto generated by tickdoc
menu:
kapacitor_1_5_ref:
name: DefaultNode
identifier: default_node
weight: 50
parent: nodes
---
The `default` node sets default values for fields and tags on data points.
Example:
```js
stream
|default()
.field('value', 0.0)
.tag('host', '')
```
The above example will set the field `value` to float64(0) if it does not already exist.
It will also set the tag `host` to string("") if it does not already exist.
Available Statistics:
* fields_defaulted: number of fields that were missing
* tags_defaulted: number of tags that were missing
### Constructor
| Chaining Method | Description |
|:---------|:---------|
| **default&nbsp;(&nbsp;)** | Create a node that can set defaults for missing tags or fields. |
### Property Methods
| Setters | Description |
|:---|:---|
| **[field](#field)&nbsp;(&nbsp;`name`&nbsp;`string`,&nbsp;`value`&nbsp;`interface{}`)** | Define a field default. |
| **[quiet](#quiet)&nbsp;(&nbsp;)** | Suppress all error logging events from this node. |
| **[tag](#tag)&nbsp;(&nbsp;`name`&nbsp;`string`,&nbsp;`value`&nbsp;`string`)** | Define a tag default. |
### Chaining Methods
[Alert](#alert),
[Barrier](#barrier),
[Bottom](#bottom),
[ChangeDetect](#changedetect),
[Combine](#combine),
[Count](#count),
[CumulativeSum](#cumulativesum),
[Deadman](#deadman),
[Default](#default),
[Delete](#delete),
[Derivative](#derivative),
[Difference](#difference),
[Distinct](#distinct),
[Ec2Autoscale](#ec2autoscale),
[Elapsed](#elapsed),
[Eval](#eval),
[First](#first),
[Flatten](#flatten),
[GroupBy](#groupby),
[HoltWinters](#holtwinters),
[HoltWintersWithFit](#holtwinterswithfit),
[HttpOut](#httpout),
[HttpPost](#httppost),
[InfluxDBOut](#influxdbout),
[Join](#join),
[K8sAutoscale](#k8sautoscale),
[KapacitorLoopback](#kapacitorloopback),
[Last](#last),
[Log](#log),
[Max](#max),
[Mean](#mean),
[Median](#median),
[Min](#min),
[Mode](#mode),
[MovingAverage](#movingaverage),
[Percentile](#percentile),
[Sample](#sample),
[Shift](#shift),
[Sideload](#sideload),
[Spread](#spread),
[StateCount](#statecount),
[StateDuration](#stateduration),
[Stats](#stats),
[Stddev](#stddev),
[Sum](#sum),
[SwarmAutoscale](#swarmautoscale),
[Top](#top),
[Union](#union),
[Where](#where),
[Window](#window)
---
## Properties
Property methods modify state on the calling node.
They do not add another node to the pipeline, and always return a reference to the calling node.
Property methods are marked using the `.` operator.
### Field
Define a field default.
```js
default.field(name string, value interface{})
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Quiet
Suppress all error logging events from this node.
```js
default.quiet()
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Tag
Define a tag default.
```js
default.tag(name string, value string)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
## Chaining Methods
Chaining methods create a new node in the pipeline as a child of the calling node.
They do not modify the calling node.
Chaining methods are marked using the `|` operator.
### Alert
Create an alert node, which can trigger alerts.
```js
default|alert()
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Barrier
Create a new Barrier node that emits a BarrierMessage periodically.
One BarrierMessage will be emitted every period duration.
```js
default|barrier()
```
Returns: [BarrierNode](/kapacitor/v1.5/nodes/barrier_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Bottom
Select the bottom `num` points for `field` and sort by any extra tags or fields.
```js
default|bottom(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### ChangeDetect
Create a new node that only emits new points if different from the previous point.
```js
default|changeDetect(field string)
```
Returns: [ChangeDetectNode](/kapacitor/v1.5/nodes/change_detect_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Combine
Combine this node with itself. The data is combined on timestamp.
```js
default|combine(expressions ...ast.LambdaNode)
```
Returns: [CombineNode](/kapacitor/v1.5/nodes/combine_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Count
Count the number of points.
```js
default|count(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### CumulativeSum
Compute a cumulative sum of each point that is received.
A point is emitted for every point collected.
```js
default|cumulativeSum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Deadman
Helper function for creating an alert on low throughput, a.k.a. deadman's switch.
- Threshold: trigger alert if throughput drops below threshold in points/interval.
- Interval: how often to check the throughput.
- Expressions: optional list of expressions to also evaluate. Useful for time of day alerting.
Example:
```js
var data = stream
|from()...
// Trigger a critical alert if the throughput drops below 100 points per 10s, checked every 10s.
data
|deadman(100.0, 10s)
//Do normal processing of data
data...
```
The above is equivalent to this example:
```js
var data = stream
|from()...
// Trigger a critical alert if the throughput drops below 100 points per 10s, checked every 10s.
data
|stats(10s)
.align()
|derivative('emitted')
.unit(10s)
.nonNegative()
|alert()
.id('node \'stream0\' in task \'{{ .TaskName }}\'')
.message('{{ .ID }} is {{ if eq .Level "OK" }}alive{{ else }}dead{{ end }}: {{ index .Fields "emitted" | printf "%0.3f" }} points/10s.')
.crit(lambda: "emitted" <= 100.0)
//Do normal processing of data
data...
```
The `id` and `message` alert properties can be configured globally via the 'deadman' configuration section.
Since the [AlertNode](/kapacitor/v1.5/nodes/alert_node/) is the last piece, it can be further modified as usual.
Example:
```js
var data = stream
|from()...
// Trigger a critical alert if the throughput drops below 100 points per 10s, checked every 10s.
data
|deadman(100.0, 10s)
.slack()
.channel('#dead_tasks')
//Do normal processing of data
data...
```
You can specify additional lambda expressions to further constrain when the deadman's switch is triggered.
Example:
```js
var data = stream
|from()...
// Trigger a critical alert if the throughput drops below 100 points per 10s, checked every 10s.
// Only trigger the alert if the time of day is between 8am-5pm.
data
|deadman(100.0, 10s, lambda: hour("time") >= 8 AND hour("time") <= 17)
//Do normal processing of data
data...
```
```js
default|deadman(threshold float64, interval time.Duration, expr ...ast.LambdaNode)
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Default
Create a node that can set defaults for missing tags or fields.
```js
default|default()
```
Returns: [DefaultNode](/kapacitor/v1.5/nodes/default_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Delete
Create a node that can delete tags or fields.
```js
default|delete()
```
Returns: [DeleteNode](/kapacitor/v1.5/nodes/delete_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Derivative
Create a new node that computes the derivative of adjacent points.
```js
default|derivative(field string)
```
Returns: [DerivativeNode](/kapacitor/v1.5/nodes/derivative_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Difference
Compute the difference between points independent of elapsed time.
```js
default|difference(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Distinct
Produce a batch of only the distinct points.
```js
default|distinct(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Ec2Autoscale
Create a node that can trigger autoscale events for an EC2 autoscale group.
```js
default|ec2Autoscale()
```
Returns: [Ec2AutoscaleNode](/kapacitor/v1.5/nodes/ec2_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Elapsed
Compute the elapsed time between points.
```js
default|elapsed(field string, unit time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Eval
Create an eval node that will evaluate the given transformation function on each data point.
A list of expressions may be provided and will be evaluated in the order they are given.
The results are available to later expressions.
```js
default|eval(expressions ...ast.LambdaNode)
```
Returns: [EvalNode](/kapacitor/v1.5/nodes/eval_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### First
Select the first point.
```js
default|first(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Flatten
Flatten points with similar times into a single point.
```js
default|flatten()
```
Returns: [FlattenNode](/kapacitor/v1.5/nodes/flatten_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### GroupBy
Group the data by a set of tags.
Pass the literal `*` to group by all dimensions.
Example:
```js
|groupBy(*)
```
```js
default|groupBy(tag ...interface{})
```
Returns: [GroupByNode](/kapacitor/v1.5/nodes/group_by_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWinters
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
```js
default|holtWinters(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWintersWithFit
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
This method also outputs all the points used to fit the data in addition to the forecasted data.
```js
default|holtWintersWithFit(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpOut
Create an HTTP output node that caches the most recent data it has received.
The cached data is available at the given endpoint.
The endpoint is the relative path from the API endpoint of the running task.
For example, if the task endpoint is at `/kapacitor/v1/tasks/<task_id>` and endpoint is
`top10`, then the data can be requested from `/kapacitor/v1/tasks/<task_id>/top10`.
```js
default|httpOut(endpoint string)
```
Returns: [HTTPOutNode](/kapacitor/v1.5/nodes/http_out_node/)
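
A sketch of the `top10` endpoint described above (the measurement and field names are illustrative):

```js
stream
    |from()
        .measurement('cpu')
    |window()
        .period(10s)
        .every(10s)
    |top(10, 'usage_user')
    // Cached data is served at /kapacitor/v1/tasks/<task_id>/top10
    |httpOut('top10')
```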
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpPost
Create an HTTP POST node that POSTs received data to the provided HTTP endpoint.
HttpPost expects 0 or 1 arguments. If 0 arguments are provided, you must specify an
endpoint property method.
```js
default|httpPost(url ...string)
```
Returns: [HTTPPostNode](/kapacitor/v1.5/nodes/http_post_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### InfluxDBOut
Create an InfluxDB output node that will store the incoming data into InfluxDB.
```js
default|influxDBOut()
```
Returns: [InfluxDBOutNode](/kapacitor/v1.5/nodes/influx_d_b_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Join
Join this node with other nodes. The data is joined on timestamp.
```js
default|join(others ...Node)
```
Returns: [JoinNode](/kapacitor/v1.5/nodes/join_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### K8sAutoscale
Create a node that can trigger autoscale events for a Kubernetes cluster.
```js
default|k8sAutoscale()
```
Returns: [K8sAutoscaleNode](/kapacitor/v1.5/nodes/k8s_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### KapacitorLoopback
Create a Kapacitor loopback node that sends data back into Kapacitor as a stream.
```js
default|kapacitorLoopback()
```
Returns: [KapacitorLoopbackNode](/kapacitor/v1.5/nodes/kapacitor_loopback_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Last
Select the last point.
```js
default|last(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Log
Create a node that logs all data it receives.
```js
default|log()
```
Returns: [LogNode](/kapacitor/v1.5/nodes/log_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Max
Select the maximum point.
```js
default|max(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mean
Compute the mean of the data.
```js
default|mean(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Median
Compute the median of the data.
> **Note:** This method is not a selector.
If you want the median point, use `.percentile(field, 50.0)`.
```js
default|median(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Min
Select the minimum point.
```js
default|min(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mode
Compute the mode of the data.
```js
default|mode(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### MovingAverage
Compute a moving average of the last window points.
No points are emitted until the window is full.
```js
default|movingAverage(field string, window int64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Percentile
Select a point at the given percentile. This is a selector function; no interpolation between points is performed.
```js
default|percentile(field string, percentile float64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sample
Create a new node that samples the incoming points or batches.
One point will be emitted every count or duration specified.
```js
default|sample(rate interface{})
```
Returns: [SampleNode](/kapacitor/v1.5/nodes/sample_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Shift
Create a new node that shifts the incoming points or batches in time.
```js
default|shift(shift time.Duration)
```
Returns: [ShiftNode](/kapacitor/v1.5/nodes/shift_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sideload
Create a node that can load data from external sources.
```js
default|sideload()
```
Returns: [SideloadNode](/kapacitor/v1.5/nodes/sideload_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Spread
Compute the difference between `min` and `max` points.
```js
default|spread(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateCount
Create a node that tracks the number of consecutive points in a given state.
```js
default|stateCount(expression ast.LambdaNode)
```
Returns: [StateCountNode](/kapacitor/v1.5/nodes/state_count_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateDuration
Create a node that tracks the duration in a given state.
```js
default|stateDuration(expression ast.LambdaNode)
```
Returns: [StateDurationNode](/kapacitor/v1.5/nodes/state_duration_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stats
Create a new stream of data that contains the internal statistics of the node.
The interval represents how often to emit the statistics based on real time.
This means the interval time is independent of the times of the data points the source node is receiving.
```js
default|stats(interval time.Duration)
```
Returns: [StatsNode](/kapacitor/v1.5/nodes/stats_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stddev
Compute the standard deviation.
```js
default|stddev(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sum
Compute the sum of all values.
```js
default|sum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### SwarmAutoscale
Create a node that can trigger autoscale events for a Docker Swarm cluster.
```js
default|swarmAutoscale()
```
Returns: [SwarmAutoscaleNode](/kapacitor/v1.5/nodes/swarm_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Top
Select the top `num` points for `field` and sort by any extra tags or fields.
```js
default|top(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Union
Perform the union of this node and all other given nodes.
```js
default|union(node ...Node)
```
Returns: [UnionNode](/kapacitor/v1.5/nodes/union_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Where
Create a new node that filters the data stream by a given expression.
```js
default|where(expression ast.LambdaNode)
```
Returns: [WhereNode](/kapacitor/v1.5/nodes/where_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Window
Create a new node that windows the stream by time.
NOTE: Window can only be applied to stream edges.
```js
default|window()
```
Returns: [WindowNode](/kapacitor/v1.5/nodes/window_node/)
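
For example, a sketch of a 10-minute window emitted every 5 minutes, using the window node's `period` and `every` property methods (the measurement name is illustrative):

```js
stream
    |from()
        .measurement('cpu')
    |window()
        .period(10m)  // how much data the window holds
        .every(5m)    // how often the window is emitted
```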
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
---
title: DeleteNode
description: DeleteNode deletes fields and tags from data points.
note: Auto generated by tickdoc
menu:
kapacitor_1_5_ref:
name: DeleteNode
identifier: delete_node
weight: 60
parent: nodes
---
The `delete` node deletes fields and tags from data points.
Example:
```js
stream
|delete()
.field('value')
.tag('host')
```
The above example will remove the field `value` and the tag `host` from each point.
Available Statistics:
* fields_deleted: number of fields that were deleted. Only counts if the field already existed.
* tags_deleted: number of tags that were deleted. Only counts if the tag already existed.
### Constructor
| Chaining Method | Description |
|:---------|:---------|
| **delete&nbsp;(&nbsp;)** | Create a node that can delete tags or fields. |
### Property Methods
| Setters | Description |
|:---|:---|
| **[field](#field)&nbsp;(&nbsp;`name`&nbsp;`string`)** | Delete a field. |
| **[quiet](#quiet)&nbsp;(&nbsp;)** | Suppress all error logging events from this node. |
| **[tag](#tag)&nbsp;(&nbsp;`name`&nbsp;`string`)** | Delete a tag. |
### Chaining Methods
[Alert](#alert),
[Barrier](#barrier),
[Bottom](#bottom),
[ChangeDetect](#changedetect),
[Combine](#combine),
[Count](#count),
[CumulativeSum](#cumulativesum),
[Deadman](#deadman),
[Default](#default),
[Delete](#delete),
[Derivative](#derivative),
[Difference](#difference),
[Distinct](#distinct),
[Ec2Autoscale](#ec2autoscale),
[Elapsed](#elapsed),
[Eval](#eval),
[First](#first),
[Flatten](#flatten),
[GroupBy](#groupby),
[HoltWinters](#holtwinters),
[HoltWintersWithFit](#holtwinterswithfit),
[HttpOut](#httpout),
[HttpPost](#httppost),
[InfluxDBOut](#influxdbout),
[Join](#join),
[K8sAutoscale](#k8sautoscale),
[KapacitorLoopback](#kapacitorloopback),
[Last](#last),
[Log](#log),
[Max](#max),
[Mean](#mean),
[Median](#median),
[Min](#min),
[Mode](#mode),
[MovingAverage](#movingaverage),
[Percentile](#percentile),
[Sample](#sample),
[Shift](#shift),
[Sideload](#sideload),
[Spread](#spread),
[StateCount](#statecount),
[StateDuration](#stateduration),
[Stats](#stats),
[Stddev](#stddev),
[Sum](#sum),
[SwarmAutoscale](#swarmautoscale),
[Top](#top),
[Union](#union),
[Where](#where),
[Window](#window)
---
## Properties
Property methods modify state on the calling node.
They do not add another node to the pipeline, and always return a reference to the calling node.
Property methods are marked using the `.` operator.
### Field
Delete a field.
```js
delete.field(name string)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Quiet
Suppress all error logging events from this node.
```js
delete.quiet()
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Tag
Delete a tag.
```js
delete.tag(name string)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
## Chaining Methods
Chaining methods create a new node in the pipeline as a child of the calling node.
They do not modify the calling node.
Chaining methods are marked using the `|` operator.
### Alert
Create an alert node, which can trigger alerts.
```js
delete|alert()
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Barrier
Create a new Barrier node that emits a BarrierMessage periodically.
One BarrierMessage will be emitted every period duration.
```js
delete|barrier()
```
Returns: [BarrierNode](/kapacitor/v1.5/nodes/barrier_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Bottom
Select the bottom `num` points for `field` and sort by any extra tags or fields.
```js
delete|bottom(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### ChangeDetect
Create a new node that only emits new points if different from the previous point.
```js
delete|changeDetect(field string)
```
Returns: [ChangeDetectNode](/kapacitor/v1.5/nodes/change_detect_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Combine
Combine this node with itself. The data is combined on timestamp.
```js
delete|combine(expressions ...ast.LambdaNode)
```
Returns: [CombineNode](/kapacitor/v1.5/nodes/combine_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Count
Count the number of points.
```js
delete|count(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### CumulativeSum
Compute a cumulative sum of each point that is received.
A point is emitted for every point collected.
```js
delete|cumulativeSum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Deadman
Helper function for creating an alert on low throughput, a.k.a. deadman's switch.
- Threshold: trigger alert if throughput drops below threshold in points/interval.
- Interval: how often to check the throughput.
- Expressions: optional list of expressions to also evaluate. Useful for time of day alerting.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
//Do normal processing of data
data...
```
The above is equivalent to this example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|stats(10s)
.align()
|derivative('emitted')
.unit(10s)
.nonNegative()
|alert()
.id('node \'stream0\' in task \'{{ .TaskName }}\'')
.message('{{ .ID }} is {{ if eq .Level "OK" }}alive{{ else }}dead{{ end }}: {{ index .Fields "emitted" | printf "%0.3f" }} points/10s.')
.crit(lambda: "emitted" <= 100.0)
//Do normal processing of data
data...
```
The `id` and `message` alert properties can be configured globally via the 'deadman' configuration section.
Since the [AlertNode](/kapacitor/v1.5/nodes/alert_node/) is the last piece it can be further modified as usual.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
.slack()
.channel('#dead_tasks')
//Do normal processing of data
data...
```
You can specify additional lambda expressions to further constrain when the deadman's switch is triggered.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
// Only trigger the alert if the time of day is between 8am-5pm.
data
|deadman(100.0, 10s, lambda: hour("time") >= 8 AND hour("time") <= 17)
//Do normal processing of data
data...
```
```js
delete|deadman(threshold float64, interval time.Duration, expr ...ast.LambdaNode)
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Default
Create a node that can set defaults for missing tags or fields.
```js
delete|default()
```
Returns: [DefaultNode](/kapacitor/v1.5/nodes/default_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Delete
Create a node that can delete tags or fields.
```js
delete|delete()
```
Returns: [DeleteNode](/kapacitor/v1.5/nodes/delete_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Derivative
Create a new node that computes the derivative of adjacent points.
```js
delete|derivative(field string)
```
Returns: [DerivativeNode](/kapacitor/v1.5/nodes/derivative_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Difference
Compute the difference between points independent of elapsed time.
```js
delete|difference(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Distinct
Produce a batch containing only the distinct points.
```js
delete|distinct(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Ec2Autoscale
Create a node that can trigger autoscale events for an EC2 autoscale group.
```js
delete|ec2Autoscale()
```
Returns: [Ec2AutoscaleNode](/kapacitor/v1.5/nodes/ec2_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Elapsed
Compute the elapsed time between points.
```js
delete|elapsed(field string, unit time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Eval
Create an eval node that evaluates the given transformation functions against each data point.
A list of expressions may be provided and will be evaluated in the order they are given.
The results are available to later expressions.
```js
delete|eval(expressions ...ast.LambdaNode)
```
Returns: [EvalNode](/kapacitor/v1.5/nodes/eval_node/)
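Because results are available to later expressions, lambdas can build on one another. A sketch (field names are illustrative):

```js
stream
    |from()
        .measurement('requests')
    |eval(lambda: "errors" / "total", lambda: "error_rate" * 100.0)
        // The second expression references 'error_rate',
        // the result of the first expression.
        .as('error_rate', 'error_percent')
```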
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### First
Select the first point.
```js
delete|first(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Flatten
Flatten points with similar times into a single point.
```js
delete|flatten()
```
Returns: [FlattenNode](/kapacitor/v1.5/nodes/flatten_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### GroupBy
Group the data by a set of tags.
Pass the literal `*` to group by all dimensions.
Example:
```js
|groupBy(*)
```
```js
delete|groupBy(tag ...interface{})
```
Returns: [GroupByNode](/kapacitor/v1.5/nodes/group_by_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWinters
Compute the Holt-Winters (https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
```js
delete|holtWinters(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWintersWithFit
Compute the Holt-Winters (https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
This method also outputs all the points used to fit the data in addition to the forecasted data.
```js
delete|holtWintersWithFit(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpOut
Create an HTTP output node that caches the most recent data it has received.
The cached data is available at the given endpoint.
The endpoint is the relative path from the API endpoint of the running task.
For example, if the task endpoint is at `/kapacitor/v1/tasks/<task_id>` and endpoint is
`top10`, then the data can be requested from `/kapacitor/v1/tasks/<task_id>/top10`.
```js
delete|httpOut(endpoint string)
```
Returns: [HTTPOutNode](/kapacitor/v1.5/nodes/http_out_node/)
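For instance, caching the ten highest values under a `top10` endpoint might look like this (the measurement, field, and task ID are illustrative):

```js
stream
    |from()
        .measurement('cpu')
    |window()
        .period(10s)
        .every(10s)
    |top(10, 'usage_idle')
    |httpOut('top10')
// With a task ID of 'cpu_task', the cached data would be served at
// /kapacitor/v1/tasks/cpu_task/top10
```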
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpPost
Creates an HTTP POST node that POSTs received data to the provided HTTP endpoint.
HttpPost expects 0 or 1 arguments. If 0 arguments are provided, you must specify an
endpoint property method.
```js
delete|httpPost(url ...string)
```
Returns: [HTTPPostNode](/kapacitor/v1.5/nodes/http_post_node/)
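Both forms might be sketched as follows (the URL and endpoint name are illustrative; a named endpoint is assumed to be defined in the Kapacitor configuration):

```js
// URL given directly as the argument:
stream
    |from()
        .measurement('cpu')
    |httpPost('http://example.com/mypage')

// No argument: reference a preconfigured endpoint instead.
stream
    |from()
        .measurement('cpu')
    |httpPost()
        .endpoint('example')
```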
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### InfluxDBOut
Create an InfluxDB output node that stores the incoming data in InfluxDB.
```js
delete|influxDBOut()
```
Returns: [InfluxDBOutNode](/kapacitor/v1.5/nodes/influx_d_b_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Join
Join this node with other nodes. The data is joined on timestamp.
```js
delete|join(others ...Node)
```
Returns: [JoinNode](/kapacitor/v1.5/nodes/join_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### K8sAutoscale
Create a node that can trigger autoscale events for a Kubernetes cluster.
```js
delete|k8sAutoscale()
```
Returns: [K8sAutoscaleNode](/kapacitor/v1.5/nodes/k8s_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### KapacitorLoopback
Create a Kapacitor loopback node that sends data back into Kapacitor as a stream.
```js
delete|kapacitorLoopback()
```
Returns: [KapacitorLoopbackNode](/kapacitor/v1.5/nodes/kapacitor_loopback_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Last
Select the last point.
```js
delete|last(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Log
Create a node that logs all data it receives.
```js
delete|log()
```
Returns: [LogNode](/kapacitor/v1.5/nodes/log_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Max
Select the maximum point.
```js
delete|max(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mean
Compute the mean of the data.
```js
delete|mean(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Median
Compute the median of the data.
> **Note:** This method is not a selector.
If you want the median point, use `.percentile(field, 50.0)`.
```js
delete|median(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Min
Select the minimum point.
```js
delete|min(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mode
Compute the mode of the data.
```js
delete|mode(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### MovingAverage
Compute a moving average of the last window points.
No points are emitted until the window is full.
```js
delete|movingAverage(field string, window int64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Percentile
Select a point at the given percentile. This is a selector function; no interpolation between points is performed.
```js
delete|percentile(field string, percentile float64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sample
Create a new node that samples the incoming points or batches.
One point will be emitted for every `count` points received, or once per specified duration.
```js
delete|sample(rate interface{})
```
Returns: [SampleNode](/kapacitor/v1.5/nodes/sample_node/)
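The rate can be given either as a count or as a duration, for example (measurement and values are illustrative):

```js
// Emit one point out of every 10 received:
stream
    |from()
        .measurement('cpu')
    |sample(10)

// Or emit at most one point every 5 minutes:
stream
    |from()
        .measurement('cpu')
    |sample(5m)
```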
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Shift
Create a new node that shifts the incoming points or batches in time.
```js
delete|shift(shift time.Duration)
```
Returns: [ShiftNode](/kapacitor/v1.5/nodes/shift_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sideload
Create a node that can load data from external sources.
```js
delete|sideload()
```
Returns: [SideloadNode](/kapacitor/v1.5/nodes/sideload_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Spread
Compute the difference between `min` and `max` points.
```js
delete|spread(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateCount
Create a node that tracks the number of consecutive points in a given state.
```js
delete|stateCount(expression ast.LambdaNode)
```
Returns: [StateCountNode](/kapacitor/v1.5/nodes/state_count_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateDuration
Create a node that tracks duration in a given state.
```js
delete|stateDuration(expression ast.LambdaNode)
```
Returns: [StateDurationNode](/kapacitor/v1.5/nodes/state_duration_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stats
Create a new stream of data that contains the internal statistics of the node.
The interval represents how often to emit the statistics based on real time.
This means the interval time is independent of the times of the data points the source node is receiving.
```js
delete|stats(interval time.Duration)
```
Returns: [StatsNode](/kapacitor/v1.5/nodes/stats_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stddev
Compute the standard deviation.
```js
delete|stddev(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sum
Compute the sum of all values.
```js
delete|sum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### SwarmAutoscale
Create a node that can trigger autoscale events for a Docker Swarm cluster.
```js
delete|swarmAutoscale()
```
Returns: [SwarmAutoscaleNode](/kapacitor/v1.5/nodes/swarm_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Top
Select the top `num` points for `field` and sort by any extra tags or fields.
```js
delete|top(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Union
Perform the union of this node and all other given nodes.
```js
delete|union(node ...Node)
```
Returns: [UnionNode](/kapacitor/v1.5/nodes/union_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Where
Create a new node that filters the data stream by a given expression.
```js
delete|where(expression ast.LambdaNode)
```
Returns: [WhereNode](/kapacitor/v1.5/nodes/where_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Window
Create a new node that windows the stream by time.
NOTE: Window can only be applied to stream edges.
```js
delete|window()
```
Returns: [WindowNode](/kapacitor/v1.5/nodes/window_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
---
title: DerivativeNode
description: DerivativeNode computes the derivative of a stream or batch. The derivative is computed on a single field and behaves similarly to the InfluxQL derivative function.
note: Auto generated by tickdoc
menu:
kapacitor_1_5_ref:
name: DerivativeNode
identifier: derivative_node
weight: 70
parent: nodes
---
The `derivative` node computes the derivative of a stream or batch.
The derivative is computed on a single field
and behaves similarly to the InfluxQL derivative function.
Kapacitor has its own implementation of the derivative function,
so it is not part of the normal set of InfluxQL functions.
Example:
```js
stream
|from()
.measurement('net_rx_packets')
|derivative('value')
.unit(1s) // default
.nonNegative()
...
```
Computes the derivative via:

`(current - previous) / (time_difference / unit)`

The derivative is computed for each point and,
because of boundary conditions, the first point is dropped.
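As a worked illustration of the formula (the values are hypothetical): with the default unit of `1s`, a counter field that rises from 1000 to 1500 across a 10-second gap yields `(1500 - 1000) / (10 / 1) = 50` per second:

```js
stream
    |from()
        .measurement('net_rx_packets')
    |derivative('value')
        .unit(1s)
        // point A: value=1000 at time t
        // point B: value=1500 at time t+10s
        // emitted: (1500 - 1000) / (10 / 1) = 50 packets per second
```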
### Constructor
| Chaining Method | Description |
|:---------|:---------|
| **derivative&nbsp;(&nbsp;`field`&nbsp;`string`)** | Create a new node that computes the derivative of adjacent points. |
### Property Methods
| Setters | Description |
|:---|:---|
| **[as](#as)&nbsp;(&nbsp;`value`&nbsp;`string`)** | The new name of the derivative field. Default is the name of the field used when calculating the derivative. |
| **[nonNegative](#nonnegative)&nbsp;(&nbsp;)** | If called the derivative will skip negative results. |
| **[quiet](#quiet)&nbsp;(&nbsp;)** | Suppress all error logging events from this node. |
| **[unit](#unit)&nbsp;(&nbsp;`value`&nbsp;`time.Duration`)** | The time unit of the resulting derivative value. Default: 1s |
### Chaining Methods
[Alert](#alert),
[Barrier](#barrier),
[Bottom](#bottom),
[ChangeDetect](#changedetect),
[Combine](#combine),
[Count](#count),
[CumulativeSum](#cumulativesum),
[Deadman](#deadman),
[Default](#default),
[Delete](#delete),
[Derivative](#derivative),
[Difference](#difference),
[Distinct](#distinct),
[Ec2Autoscale](#ec2autoscale),
[Elapsed](#elapsed),
[Eval](#eval),
[First](#first),
[Flatten](#flatten),
[GroupBy](#groupby),
[HoltWinters](#holtwinters),
[HoltWintersWithFit](#holtwinterswithfit),
[HttpOut](#httpout),
[HttpPost](#httppost),
[InfluxDBOut](#influxdbout),
[Join](#join),
[K8sAutoscale](#k8sautoscale),
[KapacitorLoopback](#kapacitorloopback),
[Last](#last),
[Log](#log),
[Max](#max),
[Mean](#mean),
[Median](#median),
[Min](#min),
[Mode](#mode),
[MovingAverage](#movingaverage),
[Percentile](#percentile),
[Sample](#sample),
[Shift](#shift),
[Sideload](#sideload),
[Spread](#spread),
[StateCount](#statecount),
[StateDuration](#stateduration),
[Stats](#stats),
[Stddev](#stddev),
[Sum](#sum),
[SwarmAutoscale](#swarmautoscale),
[Top](#top),
[Union](#union),
[Where](#where),
[Window](#window)
---
## Properties
Property methods modify state on the calling node.
They do not add another node to the pipeline, and always return a reference to the calling node.
Property methods are marked using the `.` operator.
### As
The new name of the derivative field.
Default is the name of the field used
when calculating the derivative.
```js
derivative.as(value string)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### NonNegative
If called the derivative will skip negative results.
```js
derivative.nonNegative()
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Quiet
Suppress all error logging events from this node.
```js
derivative.quiet()
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Unit
The time unit of the resulting derivative value.
Default: 1s
```js
derivative.unit(value time.Duration)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
## Chaining Methods
Chaining methods create a new node in the pipeline as a child of the calling node.
They do not modify the calling node.
Chaining methods are marked using the `|` operator.
### Alert
Create an alert node, which can trigger alerts.
```js
derivative|alert()
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Barrier
Create a new Barrier node that emits a BarrierMessage periodically.
One BarrierMessage will be emitted every period duration.
```js
derivative|barrier()
```
Returns: [BarrierNode](/kapacitor/v1.5/nodes/barrier_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Bottom
Select the bottom `num` points for `field` and sort by any extra tags or fields.
```js
derivative|bottom(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### ChangeDetect
Create a new node that only emits new points if different from the previous point.
```js
derivative|changeDetect(field string)
```
Returns: [ChangeDetectNode](/kapacitor/v1.5/nodes/change_detect_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Combine
Combine this node with itself. The data is combined on timestamp.
```js
derivative|combine(expressions ...ast.LambdaNode)
```
Returns: [CombineNode](/kapacitor/v1.5/nodes/combine_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Count
Count the number of points.
```js
derivative|count(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### CumulativeSum
Compute a cumulative sum of each point that is received.
A point is emitted for every point collected.
```js
derivative|cumulativeSum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Deadman
Helper function for creating an alert on low throughput, a.k.a. deadman's switch.
- Threshold: trigger alert if throughput drops below threshold in points/interval.
- Interval: how often to check the throughput.
- Expressions: optional list of expressions to also evaluate. Useful for time of day alerting.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
//Do normal processing of data
data...
```
The above is equivalent to this example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|stats(10s)
.align()
|derivative('emitted')
.unit(10s)
.nonNegative()
|alert()
.id('node \'stream0\' in task \'{{ .TaskName }}\'')
.message('{{ .ID }} is {{ if eq .Level "OK" }}alive{{ else }}dead{{ end }}: {{ index .Fields "emitted" | printf "%0.3f" }} points/10s.')
.crit(lambda: "emitted" <= 100.0)
//Do normal processing of data
data...
```
The `id` and `message` alert properties can be configured globally via the 'deadman' configuration section.
Since the [AlertNode](/kapacitor/v1.5/nodes/alert_node/) is the last piece it can be further modified as usual.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
.slack()
.channel('#dead_tasks')
//Do normal processing of data
data...
```
You can specify additional lambda expressions to further constrain when the deadman's switch is triggered.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
// Only trigger the alert if the time of day is between 8am-5pm.
data
|deadman(100.0, 10s, lambda: hour("time") >= 8 AND hour("time") <= 17)
//Do normal processing of data
data...
```
```js
derivative|deadman(threshold float64, interval time.Duration, expr ...ast.LambdaNode)
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Default
Create a node that can set defaults for missing tags or fields.
```js
derivative|default()
```
Returns: [DefaultNode](/kapacitor/v1.5/nodes/default_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Delete
Create a node that can delete tags or fields.
```js
derivative|delete()
```
Returns: [DeleteNode](/kapacitor/v1.5/nodes/delete_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Derivative
Create a new node that computes the derivative of adjacent points.
```js
derivative|derivative(field string)
```
Returns: [DerivativeNode](/kapacitor/v1.5/nodes/derivative_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Difference
Compute the difference between points independent of elapsed time.
```js
derivative|difference(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Distinct
Produce a batch containing only the distinct points.
```js
derivative|distinct(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Ec2Autoscale
Create a node that can trigger autoscale events for an EC2 autoscale group.
```js
derivative|ec2Autoscale()
```
Returns: [Ec2AutoscaleNode](/kapacitor/v1.5/nodes/ec2_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Elapsed
Compute the elapsed time between points.
```js
derivative|elapsed(field string, unit time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
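Example (measurement and field names are illustrative; the elapsed time is written to a new field, `elapsed` by default):
```js
stream
    |from()
        .measurement('requests')
    |elapsed('value', 1ms)
```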
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Eval
Create an eval node that evaluates the given expressions against each data point.
A list of expressions may be provided and will be evaluated in the order they are given.
The results are available to later expressions.
```js
derivative|eval(expressions ...ast.LambdaNode)
```
Returns: [EvalNode](/kapacitor/v1.5/nodes/eval_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### First
Select the first point.
```js
derivative|first(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Flatten
Flatten points with similar times into a single point.
```js
derivative|flatten()
```
Returns: [FlattenNode](/kapacitor/v1.5/nodes/flatten_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### GroupBy
Group the data by a set of tags.
Pass the literal `*` to group by all dimensions.
Example:
```js
|groupBy(*)
```
```js
derivative|groupBy(tag ...interface{})
```
Returns: [GroupByNode](/kapacitor/v1.5/nodes/group_by_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWinters
Compute the Holt-Winters (https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
```js
derivative|holtWinters(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWintersWithFit
Compute the Holt-Winters (https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
This method also outputs all the points used to fit the data in addition to the forecasted data.
```js
derivative|holtWintersWithFit(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpOut
Create an HTTP output node that caches the most recent data it has received.
The cached data is available at the given endpoint.
The endpoint is the relative path from the API endpoint of the running task.
For example, if the task endpoint is at `/kapacitor/v1/tasks/<task_id>` and endpoint is
`top10`, then the data can be requested from `/kapacitor/v1/tasks/<task_id>/top10`.
```js
derivative|httpOut(endpoint string)
```
Returns: [HTTPOutNode](/kapacitor/v1.5/nodes/http_out_node/)
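Example (measurement and field names are illustrative) caching the ten busiest series under the relative endpoint `top10`:
```js
stream
    |from()
        .measurement('cpu')
    |window()
        .period(10s)
        .every(10s)
    |top(10, 'usage_idle')
    |httpOut('top10')
```
The cached data is then available at `/kapacitor/v1/tasks/<task_id>/top10`.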
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpPost
Create an HTTP POST node that POSTs received data to the provided HTTP endpoint.
HttpPost expects 0 or 1 arguments. If 0 arguments are provided, you must specify an
endpoint property method.
```js
derivative|httpPost(url ...string)
```
Returns: [HTTPPostNode](/kapacitor/v1.5/nodes/http_post_node/)
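Example (the URL is a hypothetical collector endpoint):
```js
stream
    |from()
        .measurement('cpu')
    |httpPost('http://example.com/api/points')
```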
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### InfluxDBOut
Create an InfluxDB output node that will store the incoming data into InfluxDB.
```js
derivative|influxDBOut()
```
Returns: [InfluxDBOutNode](/kapacitor/v1.5/nodes/influx_d_b_out_node/)
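Example (database, retention policy, and measurement names are illustrative):
```js
stream
    |from()
        .measurement('requests')
    |influxDBOut()
        .database('mydb')
        .retentionPolicy('autogen')
        .measurement('processed_requests')
```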
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Join
Join this node with other nodes. The data is joined on timestamp.
```js
derivative|join(others ...Node)
```
Returns: [JoinNode](/kapacitor/v1.5/nodes/join_node/)
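Example (measurement and field names are illustrative) joining error counts with request counts; the joined fields are referenced using the prefixes given to `.as()`:
```js
var errors = stream
    |from()
        .measurement('errors')

var requests = stream
    |from()
        .measurement('requests')

errors
    |join(requests)
        .as('errors', 'requests')
    |eval(lambda: "errors.value" / "requests.value")
        .as('error_rate')
```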
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### K8sAutoscale
Create a node that can trigger autoscale events for a Kubernetes cluster.
```js
derivative|k8sAutoscale()
```
Returns: [K8sAutoscaleNode](/kapacitor/v1.5/nodes/k8s_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### KapacitorLoopback
Create a Kapacitor loopback node that will send data back into Kapacitor as a stream.
```js
derivative|kapacitorLoopback()
```
Returns: [KapacitorLoopbackNode](/kapacitor/v1.5/nodes/kapacitor_loopback_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Last
Select the last point.
```js
derivative|last(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Log
Create a node that logs all data it receives.
```js
derivative|log()
```
Returns: [LogNode](/kapacitor/v1.5/nodes/log_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Max
Select the maximum point.
```js
derivative|max(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mean
Compute the mean of the data.
```js
derivative|mean(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Median
Compute the median of the data.
> **Note:** This method is not a selector.
If you want the median point, use `.percentile(field, 50.0)`.
```js
derivative|median(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Min
Select the minimum point.
```js
derivative|min(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mode
Compute the mode of the data.
```js
derivative|mode(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### MovingAverage
Compute a moving average of the last window points.
No points are emitted until the window is full.
```js
derivative|movingAverage(field string, window int64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
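Example (measurement and field names are illustrative) emitting a 5-point moving average:
```js
stream
    |from()
        .measurement('cpu')
    |movingAverage('usage_user', 5)
```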
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Percentile
Select a point at the given percentile. This is a selector function; no interpolation between points is performed.
```js
derivative|percentile(field string, percentile float64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sample
Create a new node that samples the incoming points or batches.
One point will be emitted per specified count or duration.
```js
derivative|sample(rate interface{})
```
Returns: [SampleNode](/kapacitor/v1.5/nodes/sample_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Shift
Create a new node that shifts the incoming points or batches in time.
```js
derivative|shift(shift time.Duration)
```
Returns: [ShiftNode](/kapacitor/v1.5/nodes/shift_node/)
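Example (measurement name is illustrative) shifting points backward in time, which is useful when comparing current data against historical data; a negative duration shifts backward, a positive duration forward:
```js
stream
    |from()
        .measurement('cpu')
    |shift(-1h)
```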
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sideload
Create a node that can load data from external sources.
```js
derivative|sideload()
```
Returns: [SideloadNode](/kapacitor/v1.5/nodes/sideload_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Spread
Compute the difference between `min` and `max` points.
```js
derivative|spread(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateCount
Create a node that tracks the number of consecutive points in a given state.
```js
derivative|stateCount(expression ast.LambdaNode)
```
Returns: [StateCountNode](/kapacitor/v1.5/nodes/state_count_node/)
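Example (measurement and field names are illustrative) alerting after five consecutive points in a high-load state; the count is written to a new field, `state_count` by default:
```js
stream
    |from()
        .measurement('cpu')
    |stateCount(lambda: "usage_idle" < 10.0)
    |alert()
        .crit(lambda: "state_count" >= 5)
```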
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateDuration
Create a node that tracks duration in a given state.
```js
derivative|stateDuration(expression ast.LambdaNode)
```
Returns: [StateDurationNode](/kapacitor/v1.5/nodes/state_duration_node/)
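Example (measurement and field names are illustrative) alerting once a high-load state has persisted for five minutes; the duration, expressed in the given unit, is written to a new field, `state_duration` by default:
```js
stream
    |from()
        .measurement('cpu')
    |stateDuration(lambda: "usage_idle" < 10.0)
        .unit(1s)
    |alert()
        .crit(lambda: "state_duration" >= 300)
```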
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stats
Create a new stream of data that contains the internal statistics of the node.
The interval represents how often to emit the statistics based on real time.
This means the interval time is independent of the times of the data points the source node is receiving.
```js
derivative|stats(interval time.Duration)
```
Returns: [StatsNode](/kapacitor/v1.5/nodes/stats_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stddev
Compute the standard deviation.
```js
derivative|stddev(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sum
Compute the sum of all values.
```js
derivative|sum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### SwarmAutoscale
Create a node that can trigger autoscale events for a Docker Swarm cluster.
```js
derivative|swarmAutoscale()
```
Returns: [SwarmAutoscaleNode](/kapacitor/v1.5/nodes/swarm_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Top
Select the top `num` points for `field` and sort by any extra tags or fields.
```js
derivative|top(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Union
Perform the union of this node and all other given nodes.
```js
derivative|union(node ...Node)
```
Returns: [UnionNode](/kapacitor/v1.5/nodes/union_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Where
Create a new node that filters the data stream by a given expression.
```js
derivative|where(expression ast.LambdaNode)
```
Returns: [WhereNode](/kapacitor/v1.5/nodes/where_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Window
Create a new node that windows the stream by time.
NOTE: Window can only be applied to stream edges.
```js
derivative|window()
```
Returns: [WindowNode](/kapacitor/v1.5/nodes/window_node/)
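Example (measurement and field names are illustrative) emitting one minute of data every ten seconds:
```js
stream
    |from()
        .measurement('cpu')
    |window()
        .period(1m)
        .every(10s)
    |mean('usage_user')
```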
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>

---
title: EvalNode
description: EvalNode evaluates expressions on each data point it receives. A list of expressions may be provided and will be evaluated in the order they are given. The results of expressions are available to later expressions in the list.
note: Auto generated by tickdoc
menu:
kapacitor_1_5_ref:
name: EvalNode
identifier: eval_node
weight: 90
parent: nodes
---
The `eval` node evaluates expressions on each data point it receives.
A list of expressions may be provided and will be evaluated in the order they are given.
The results of expressions are available to later expressions in the list.
See the property [EvalNode.As](/kapacitor/v1.5/nodes/eval_node/#as) for details on how to reference the results.
Example:
```js
stream
|eval(lambda: "error_count" / "total_count")
.as('error_percent')
```
The above example will add a new field `error_percent` to each
data point with the result of `error_count / total_count` where
`error_count` and `total_count` are existing fields on the data point.
Available Statistics:
* eval_errors: number of errors evaluating any expressions.
### Constructor
| Chaining Method | Description |
|:---------|:---------|
| **eval&nbsp;(&nbsp;`expressions`&nbsp;`...ast.LambdaNode`)** | Create an eval node that evaluates the given expressions against each data point. A list of expressions may be provided and will be evaluated in the order they are given. The results are available to later expressions. |
### Property Methods
| Setters | Description |
|:---|:---|
| **[as](#as)&nbsp;(&nbsp;`names`&nbsp;`...string`)** | List of names for each expression. The expressions are evaluated in order. The result of an expression may be referenced by later expressions via the name provided. |
| **[keep](#keep)&nbsp;(&nbsp;`fields`&nbsp;`...string`)** | If called, the existing fields will be preserved in addition to the new fields being set. If not called, then only new fields are preserved. (Tags are always preserved regardless of how `keep` is used.) |
| **[quiet](#quiet)&nbsp;(&nbsp;)** | Suppress all error logging events from this node. |
| **[tags](#tags)&nbsp;(&nbsp;`names`&nbsp;`...string`)** | Convert the result of an expression into a tag. The result must be a string. Use the `string()` expression function to convert types. |
### Chaining Methods
[Alert](#alert),
[Barrier](#barrier),
[Bottom](#bottom),
[ChangeDetect](#changedetect),
[Combine](#combine),
[Count](#count),
[CumulativeSum](#cumulativesum),
[Deadman](#deadman),
[Default](#default),
[Delete](#delete),
[Derivative](#derivative),
[Difference](#difference),
[Distinct](#distinct),
[Ec2Autoscale](#ec2autoscale),
[Elapsed](#elapsed),
[Eval](#eval),
[First](#first),
[Flatten](#flatten),
[GroupBy](#groupby),
[HoltWinters](#holtwinters),
[HoltWintersWithFit](#holtwinterswithfit),
[HttpOut](#httpout),
[HttpPost](#httppost),
[InfluxDBOut](#influxdbout),
[Join](#join),
[K8sAutoscale](#k8sautoscale),
[KapacitorLoopback](#kapacitorloopback),
[Last](#last),
[Log](#log),
[Max](#max),
[Mean](#mean),
[Median](#median),
[Min](#min),
[Mode](#mode),
[MovingAverage](#movingaverage),
[Percentile](#percentile),
[Sample](#sample),
[Shift](#shift),
[Sideload](#sideload),
[Spread](#spread),
[StateCount](#statecount),
[StateDuration](#stateduration),
[Stats](#stats),
[Stddev](#stddev),
[Sum](#sum),
[SwarmAutoscale](#swarmautoscale),
[Top](#top),
[Union](#union),
[Where](#where),
[Window](#window)
---
## Properties
Property methods modify state on the calling node.
They do not add another node to the pipeline, and always return a reference to the calling node.
Property methods are marked using the `.` operator.
### As
List of names for each expression.
The expressions are evaluated in order. The result
of an expression may be referenced by later expressions
via the name provided.
Example:
```js
stream
|eval(lambda: "value" * "value", lambda: 1.0 / "value2")
.as('value2', 'inv_value2')
```
The above example calculates two fields from the value and names them
`value2` and `inv_value2` respectively.
```js
eval.as(names ...string)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Keep
If called, the existing fields will be preserved in addition
to the new fields being set.
If not called, then only new fields are preserved. (Tags are
always preserved regardless of how `keep` is used.)
Optionally, intermediate values can be discarded
by passing a list of field names to be kept.
Only fields in the list will be retained, the rest will be discarded.
If no list is given then all fields are retained.
Example:
```js
stream
|eval(lambda: "value" * "value", lambda: 1.0 / "value2")
.as('value2', 'inv_value2')
.keep('value', 'inv_value2')
```
In the above example the original field `value` is preserved.
The new field `value2` is calculated and used in evaluating
`inv_value2` but is discarded before the point is sent on to child nodes.
The resulting point has only two fields: `value` and `inv_value2`.
```js
eval.keep(fields ...string)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Quiet
Suppress all error logging events from this node.
```js
eval.quiet()
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Tags
Convert the result of an expression into a tag.
The result must be a string.
Use the `string()` expression function to convert types.
Example:
```js
stream
|eval(lambda: string(floor("value" / 10.0)))
.as('value_bucket')
.tags('value_bucket')
```
The above example calculates an expression from the field `value`, casts it as a string, and names it `value_bucket`.
The `value_bucket` expression is then converted from a field on the point to a tag `value_bucket` on the point.
Example:
```js
stream
|eval(lambda: string(floor("value" / 10.0)))
.as('value_bucket')
.tags('value_bucket')
.keep('value') // keep the original field `value` as well
```
The above example calculates an expression from the field `value`, casts it as a string, and names it `value_bucket`.
The `value_bucket` expression is then converted from a field on the point to a tag `value_bucket` on the point.
The `keep` property preserves the original field `value`.
Tags are always kept since creating a tag implies you want to keep it.
```js
eval.tags(names ...string)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
## Chaining Methods
Chaining methods create a new node in the pipeline as a child of the calling node.
They do not modify the calling node.
Chaining methods are marked using the `|` operator.
### Alert
Create an alert node, which can trigger alerts.
```js
eval|alert()
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Barrier
Create a new Barrier node that emits a BarrierMessage periodically.
One BarrierMessage will be emitted every period duration.
```js
eval|barrier()
```
Returns: [BarrierNode](/kapacitor/v1.5/nodes/barrier_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Bottom
Select the bottom `num` points for `field` and sort by any extra tags or fields.
```js
eval|bottom(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### ChangeDetect
Create a new node that only emits new points if different from the previous point.
```js
eval|changeDetect(field string)
```
Returns: [ChangeDetectNode](/kapacitor/v1.5/nodes/change_detect_node/)
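Example (measurement and field names are illustrative) emitting a point only when the `state` field differs from the previous point:
```js
stream
    |from()
        .measurement('service_status')
    |changeDetect('state')
```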
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Combine
Combine this node with itself. The data is combined on timestamp.
```js
eval|combine(expressions ...ast.LambdaNode)
```
Returns: [CombineNode](/kapacitor/v1.5/nodes/combine_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Count
Count the number of points.
```js
eval|count(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### CumulativeSum
Compute a cumulative sum of each point that is received.
A point is emitted for every point collected.
```js
eval|cumulativeSum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Deadman
Helper function for creating an alert on low throughput, a.k.a. deadman's switch.
- Threshold: trigger alert if throughput drops below threshold in points/interval.
- Interval: how often to check the throughput.
- Expressions: optional list of expressions to also evaluate. Useful for time of day alerting.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
//Do normal processing of data
data...
```
The above is equivalent to this example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|stats(10s)
.align()
|derivative('emitted')
.unit(10s)
.nonNegative()
|alert()
.id('node \'stream0\' in task \'{{ .TaskName }}\'')
.message('{{ .ID }} is {{ if eq .Level "OK" }}alive{{ else }}dead{{ end }}: {{ index .Fields "emitted" | printf "%0.3f" }} points/10s.')
.crit(lambda: "emitted" <= 100.0)
//Do normal processing of data
data...
```
The `id` and `message` alert properties can be configured globally via the 'deadman' configuration section.
Since the [AlertNode](/kapacitor/v1.5/nodes/alert_node/) is the last piece it can be further modified as usual.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
.slack()
.channel('#dead_tasks')
//Do normal processing of data
data...
```
You can specify additional lambda expressions to further constrain when the deadman's switch is triggered.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
// Only trigger the alert if the time of day is between 8am-5pm.
data
|deadman(100.0, 10s, lambda: hour("time") >= 8 AND hour("time") <= 17)
//Do normal processing of data
data...
```
```js
eval|deadman(threshold float64, interval time.Duration, expr ...ast.LambdaNode)
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Default
Create a node that can set defaults for missing tags or fields.
```js
eval|default()
```
Returns: [DefaultNode](/kapacitor/v1.5/nodes/default_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Delete
Create a node that can delete tags or fields.
```js
eval|delete()
```
Returns: [DeleteNode](/kapacitor/v1.5/nodes/delete_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Derivative
Create a new node that computes the derivative of adjacent points.
```js
eval|derivative(field string)
```
Returns: [DerivativeNode](/kapacitor/v1.5/nodes/derivative_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Difference
Compute the difference between points independent of elapsed time.
```js
eval|difference(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Distinct
Produce a batch containing only the distinct points.
```js
eval|distinct(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Ec2Autoscale
Create a node that can trigger autoscale events for an EC2 autoscale group.
```js
eval|ec2Autoscale()
```
Returns: [Ec2AutoscaleNode](/kapacitor/v1.5/nodes/ec2_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Elapsed
Compute the elapsed time between points.
```js
eval|elapsed(field string, unit time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Eval
Create an eval node that evaluates the given expressions against each data point.
A list of expressions may be provided and will be evaluated in the order they are given.
The results are available to later expressions.
```js
eval|eval(expressions ...ast.LambdaNode)
```
Returns: [EvalNode](/kapacitor/v1.5/nodes/eval_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### First
Select the first point.
```js
eval|first(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Flatten
Flatten points with similar times into a single point.
```js
eval|flatten()
```
Returns: [FlattenNode](/kapacitor/v1.5/nodes/flatten_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### GroupBy
Group the data by a set of tags.
Pass the literal `*` to group by all dimensions.
Example:
```js
|groupBy(*)
```
```js
eval|groupBy(tag ...interface{})
```
Returns: [GroupByNode](/kapacitor/v1.5/nodes/group_by_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWinters
Compute the Holt-Winters (https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
```js
eval|holtWinters(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWintersWithFit
Compute the Holt-Winters (https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
This method also outputs all the points used to fit the data in addition to the forecasted data.
```js
eval|holtWintersWithFit(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpOut
Create an HTTP output node that caches the most recent data it has received.
The cached data is available at the given endpoint.
The endpoint is the relative path from the API endpoint of the running task.
For example, if the task endpoint is at `/kapacitor/v1/tasks/<task_id>` and endpoint is
`top10`, then the data can be requested from `/kapacitor/v1/tasks/<task_id>/top10`.
```js
eval|httpOut(endpoint string)
```
Returns: [HTTPOutNode](/kapacitor/v1.5/nodes/http_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpPost
Create an HTTP POST node that POSTs received data to the provided HTTP endpoint.
HttpPost expects 0 or 1 arguments. If 0 arguments are provided, you must specify an
endpoint property method.
```js
eval|httpPost(url ...string)
```
Returns: [HTTPPostNode](/kapacitor/v1.5/nodes/http_post_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### InfluxDBOut
Create an InfluxDB output node that will store the incoming data into InfluxDB.
```js
eval|influxDBOut()
```
Returns: [InfluxDBOutNode](/kapacitor/v1.5/nodes/influx_d_b_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Join
Join this node with other nodes. The data is joined on timestamp.
```js
eval|join(others ...Node)
```
Returns: [JoinNode](/kapacitor/v1.5/nodes/join_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### K8sAutoscale
Create a node that can trigger autoscale events for a Kubernetes cluster.
```js
eval|k8sAutoscale()
```
Returns: [K8sAutoscaleNode](/kapacitor/v1.5/nodes/k8s_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### KapacitorLoopback
Create a Kapacitor loopback node that sends data back into Kapacitor as a stream.
```js
eval|kapacitorLoopback()
```
Returns: [KapacitorLoopbackNode](/kapacitor/v1.5/nodes/kapacitor_loopback_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Last
Select the last point.
```js
eval|last(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Log
Create a node that logs all data it receives.
```js
eval|log()
```
Returns: [LogNode](/kapacitor/v1.5/nodes/log_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Max
Select the maximum point.
```js
eval|max(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mean
Compute the mean of the data.
```js
eval|mean(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Median
Compute the median of the data.
> **Note:** This method is not a selector.
If you want the median point, use `.percentile(field, 50.0)`.
```js
eval|median(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Min
Select the minimum point.
```js
eval|min(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mode
Compute the mode of the data.
```js
eval|mode(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### MovingAverage
Compute a moving average of the last window points.
No points are emitted until the window is full.
```js
eval|movingAverage(field string, window int64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
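For example, a minimal sketch of a 5-point moving average over a hypothetical `value` field (no points are emitted until 5 points have been collected):

```js
stream
    |from()
        .measurement('requests')
    |movingAverage('value', 5)
```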
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Percentile
Select a point at the given percentile. This is a selector function; no interpolation between points is performed.
```js
eval|percentile(field string, percentile float64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sample
Create a new node that samples the incoming points or batches.
One point will be emitted every count or duration specified.
```js
eval|sample(rate interface{})
```
Returns: [SampleNode](/kapacitor/v1.5/nodes/sample_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Shift
Create a new node that shifts the incoming points or batches in time.
```js
eval|shift(shift time.Duration)
```
Returns: [ShiftNode](/kapacitor/v1.5/nodes/shift_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sideload
Create a node that can load data from external sources.
```js
eval|sideload()
```
Returns: [SideloadNode](/kapacitor/v1.5/nodes/sideload_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Spread
Compute the difference between `min` and `max` points.
```js
eval|spread(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateCount
Create a node that tracks the number of consecutive points in a given state.
```js
eval|stateCount(expression ast.LambdaNode)
```
Returns: [StateCountNode](/kapacitor/v1.5/nodes/state_count_node/)
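For example, a sketch that counts how many consecutive points exceed a hypothetical CPU threshold:

```js
stream
    |from()
        .measurement('cpu')
    |stateCount(lambda: "usage_user" > 80.0)
// Each point gains a count of consecutive points
// for which the expression evaluated to true.
```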
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateDuration
Create a node that tracks duration in a given state.
```js
eval|stateDuration(expression ast.LambdaNode)
```
Returns: [StateDurationNode](/kapacitor/v1.5/nodes/state_duration_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stats
Create a new stream of data that contains the internal statistics of the node.
The interval represents how often to emit the statistics based on real time.
This means the interval time is independent of the times of the data points the source node is receiving.
```js
eval|stats(interval time.Duration)
```
Returns: [StatsNode](/kapacitor/v1.5/nodes/stats_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stddev
Compute the standard deviation.
```js
eval|stddev(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sum
Compute the sum of all values.
```js
eval|sum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### SwarmAutoscale
Create a node that can trigger autoscale events for a Docker swarm cluster.
```js
eval|swarmAutoscale()
```
Returns: [SwarmAutoscaleNode](/kapacitor/v1.5/nodes/swarm_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Top
Select the top `num` points for `field` and sort by any extra tags or fields.
```js
eval|top(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
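A sketch selecting the top 10 points by a hypothetical `usage_user` field, with the `host` tag used to break ties:

```js
stream
    |from()
        .measurement('cpu')
    |top(10, 'usage_user', 'host')
```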
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Union
Perform the union of this node and all other given nodes.
```js
eval|union(node ...Node)
```
Returns: [UnionNode](/kapacitor/v1.5/nodes/union_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Where
Create a new node that filters the data stream by a given expression.
```js
eval|where(expression ast.LambdaNode)
```
Returns: [WhereNode](/kapacitor/v1.5/nodes/where_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Window
Create a new node that windows the stream by time.
NOTE: Window can only be applied to stream edges.
```js
eval|window()
```
Returns: [WindowNode](/kapacitor/v1.5/nodes/window_node/)
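For example, a sliding-window sketch that buffers 10 minutes of data and emits it every minute:

```js
stream
    |from()
        .measurement('cpu')
    |window()
        .period(10m)
        .every(1m)
```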
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>

---
title: FlattenNode
description: FlattenNode flattens a set of points on specific dimensions.
note: Auto generated by tickdoc
menu:
kapacitor_1_5_ref:
name: FlattenNode
identifier: flatten_node
weight: 100
parent: nodes
---
The `flatten` node flattens a set of points on specific dimensions.
For example, given two points:
```
m,host=A,port=80 bytes=3512
m,host=A,port=443 bytes=6723
```
Flattening the points on `port` results in a single point:
```
m,host=A 80.bytes=3512,443.bytes=6723
```
Example:
```js
|flatten()
.on('port')
```
If flattening on multiple dimensions, the order is preserved:
```
m,host=A,port=80 bytes=3512
m,host=A,port=443 bytes=6723
m,host=B,port=443 bytes=7243
```
Flattening the points on `host` and `port` would result in a single point:
```
m A.80.bytes=3512,A.443.bytes=6723,B.443.bytes=7243
```
Example:
```js
|flatten()
.on('host', 'port')
```
Since flattening points generally creates dynamically named fields, the resulting
data is typically passed to a UDF or similar for custom processing.
### Constructor
| Chaining Method | Description |
|:---------|:---------|
| **flatten&nbsp;(&nbsp;)** | Flatten points with similar times into a single point. |
### Property Methods
| Setters | Description |
|:---|:---|
| **[delimiter](#delimiter)&nbsp;(&nbsp;`value`&nbsp;`string`)** | The delimiter between field name parts |
| **[dropOriginalFieldName](#droporiginalfieldname)&nbsp;(&nbsp;`drop`&nbsp;`...bool`)** | DropOriginalFieldName indicates whether the original field name should be dropped when constructing the final field name. |
| **[on](#on)&nbsp;(&nbsp;`dims`&nbsp;`...string`)** | Specify the dimensions on which to flatten the points. |
| **[quiet](#quiet)&nbsp;(&nbsp;)** | Suppress all error logging events from this node. |
| **[tolerance](#tolerance)&nbsp;(&nbsp;`value`&nbsp;`time.Duration`)** | The maximum duration of time that two incoming points can be apart and still be considered to be equal in time. The joined data point's time will be rounded to the nearest multiple of the tolerance duration. |
### Chaining Methods
[Alert](#alert),
[Barrier](#barrier),
[Bottom](#bottom),
[ChangeDetect](#changedetect),
[Combine](#combine),
[Count](#count),
[CumulativeSum](#cumulativesum),
[Deadman](#deadman),
[Default](#default),
[Delete](#delete),
[Derivative](#derivative),
[Difference](#difference),
[Distinct](#distinct),
[Ec2Autoscale](#ec2autoscale),
[Elapsed](#elapsed),
[Eval](#eval),
[First](#first),
[Flatten](#flatten),
[GroupBy](#groupby),
[HoltWinters](#holtwinters),
[HoltWintersWithFit](#holtwinterswithfit),
[HttpOut](#httpout),
[HttpPost](#httppost),
[InfluxDBOut](#influxdbout),
[Join](#join),
[K8sAutoscale](#k8sautoscale),
[KapacitorLoopback](#kapacitorloopback),
[Last](#last),
[Log](#log),
[Max](#max),
[Mean](#mean),
[Median](#median),
[Min](#min),
[Mode](#mode),
[MovingAverage](#movingaverage),
[Percentile](#percentile),
[Sample](#sample),
[Shift](#shift),
[Sideload](#sideload),
[Spread](#spread),
[StateCount](#statecount),
[StateDuration](#stateduration),
[Stats](#stats),
[Stddev](#stddev),
[Sum](#sum),
[SwarmAutoscale](#swarmautoscale),
[Top](#top),
[Union](#union),
[Where](#where),
[Window](#window)
---
## Properties
Property methods modify state on the calling node.
They do not add another node to the pipeline, and always return a reference to the calling node.
Property methods are marked using the `.` operator.
### Delimiter
The delimiter between field name parts
```js
flatten.delimiter(value string)
```
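Using the earlier `port` example, a sketch that joins field name parts with an underscore instead of the default delimiter:

```js
|flatten()
    .on('port')
    .delimiter('_')
// Produces fields like 80_bytes and 443_bytes
// instead of 80.bytes and 443.bytes.
```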
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### DropOriginalFieldName
DropOriginalFieldName indicates whether the original field name should
be dropped when constructing the final field name.
```js
flatten.dropOriginalFieldName(drop ...bool)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### On
Specify the dimensions on which to flatten the points.
```js
flatten.on(dims ...string)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Quiet
Suppress all error logging events from this node.
```js
flatten.quiet()
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Tolerance
The maximum duration of time that two incoming points
can be apart and still be considered to be equal in time.
The joined data point's time will be rounded to the nearest
multiple of the tolerance duration.
```js
flatten.tolerance(value time.Duration)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
## Chaining Methods
Chaining methods create a new node in the pipeline as a child of the calling node.
They do not modify the calling node.
Chaining methods are marked using the `|` operator.
### Alert
Create an alert node, which can trigger alerts.
```js
flatten|alert()
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Barrier
Create a new Barrier node that emits a BarrierMessage periodically.
One BarrierMessage will be emitted every period duration.
```js
flatten|barrier()
```
Returns: [BarrierNode](/kapacitor/v1.5/nodes/barrier_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Bottom
Select the bottom `num` points for `field` and sort by any extra tags or fields.
```js
flatten|bottom(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### ChangeDetect
Create a new node that only emits new points if different from the previous point.
```js
flatten|changeDetect(field string)
```
Returns: [ChangeDetectNode](/kapacitor/v1.5/nodes/change_detect_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Combine
Combine this node with itself. The data is combined on timestamp.
```js
flatten|combine(expressions ...ast.LambdaNode)
```
Returns: [CombineNode](/kapacitor/v1.5/nodes/combine_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Count
Count the number of points.
```js
flatten|count(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### CumulativeSum
Compute a cumulative sum of each point that is received.
A point is emitted for every point collected.
```js
flatten|cumulativeSum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Deadman
Helper function for creating an alert on low throughput, a.k.a. deadman's switch.
- Threshold: trigger alert if throughput drops below threshold in points/interval.
- Interval: how often to check the throughput.
- Expressions: optional list of expressions to also evaluate. Useful for time of day alerting.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
//Do normal processing of data
data...
```
The above is equivalent to this example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|stats(10s)
.align()
|derivative('emitted')
.unit(10s)
.nonNegative()
|alert()
.id('node \'stream0\' in task \'{{ .TaskName }}\'')
.message('{{ .ID }} is {{ if eq .Level "OK" }}alive{{ else }}dead{{ end }}: {{ index .Fields "emitted" | printf "%0.3f" }} points/10s.')
.crit(lambda: "emitted" <= 100.0)
//Do normal processing of data
data...
```
The `id` and `message` alert properties can be configured globally via the 'deadman' configuration section.
Since the [AlertNode](/kapacitor/v1.5/nodes/alert_node/) is the last piece, it can be further modified as usual.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
.slack()
.channel('#dead_tasks')
//Do normal processing of data
data...
```
You can specify additional lambda expressions to further constrain when the deadman's switch is triggered.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
// Only trigger the alert if the time of day is between 8am-5pm.
data
|deadman(100.0, 10s, lambda: hour("time") >= 8 AND hour("time") <= 17)
//Do normal processing of data
data...
```
```js
flatten|deadman(threshold float64, interval time.Duration, expr ...ast.LambdaNode)
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Default
Create a node that can set defaults for missing tags or fields.
```js
flatten|default()
```
Returns: [DefaultNode](/kapacitor/v1.5/nodes/default_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Delete
Create a node that can delete tags or fields.
```js
flatten|delete()
```
Returns: [DeleteNode](/kapacitor/v1.5/nodes/delete_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Derivative
Create a new node that computes the derivative of adjacent points.
```js
flatten|derivative(field string)
```
Returns: [DerivativeNode](/kapacitor/v1.5/nodes/derivative_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Difference
Compute the difference between points independent of elapsed time.
```js
flatten|difference(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Distinct
Produce a batch of only the distinct points.
```js
flatten|distinct(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Ec2Autoscale
Create a node that can trigger autoscale events for an EC2 autoscale group.
```js
flatten|ec2Autoscale()
```
Returns: [Ec2AutoscaleNode](/kapacitor/v1.5/nodes/ec2_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Elapsed
Compute the elapsed time between points.
```js
flatten|elapsed(field string, unit time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Eval
Create an eval node that applies the given transformation function to each data point.
A list of expressions may be provided and will be evaluated in the order they are given.
The results are available to later expressions.
```js
flatten|eval(expressions ...ast.LambdaNode)
```
Returns: [EvalNode](/kapacitor/v1.5/nodes/eval_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### First
Select the first point.
```js
flatten|first(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Flatten
Flatten points with similar times into a single point.
```js
flatten|flatten()
```
Returns: [FlattenNode](/kapacitor/v1.5/nodes/flatten_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### GroupBy
Group the data by a set of tags.
Pass the literal `*` to group by all dimensions.
Example:
```js
|groupBy(*)
```
```js
flatten|groupBy(tag ...interface{})
```
Returns: [GroupByNode](/kapacitor/v1.5/nodes/group_by_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWinters
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
```js
flatten|holtWinters(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWintersWithFit
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
This method also outputs all the points used to fit the data in addition to the forecasted data.
```js
flatten|holtWintersWithFit(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpOut
Create an HTTP output node that caches the most recent data it has received.
The cached data is available at the given endpoint.
The endpoint is the relative path from the API endpoint of the running task.
For example, if the task endpoint is at `/kapacitor/v1/tasks/<task_id>` and the endpoint is
`top10`, then the data can be requested from `/kapacitor/v1/tasks/<task_id>/top10`.
```js
flatten|httpOut(endpoint string)
```
Returns: [HTTPOutNode](/kapacitor/v1.5/nodes/http_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpPost
Creates an HTTP POST node that POSTs received data to the provided HTTP endpoint.
HttpPost expects 0 or 1 arguments. If 0 arguments are provided, you must specify an
endpoint property method.
```js
flatten|httpPost(url ...string)
```
Returns: [HTTPPostNode](/kapacitor/v1.5/nodes/http_post_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### InfluxDBOut
Create an InfluxDB output node that stores the incoming data in InfluxDB.
```js
flatten|influxDBOut()
```
Returns: [InfluxDBOutNode](/kapacitor/v1.5/nodes/influx_d_b_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Join
Join this node with other nodes. The data is joined on timestamp.
```js
flatten|join(others ...Node)
```
Returns: [JoinNode](/kapacitor/v1.5/nodes/join_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### K8sAutoscale
Create a node that can trigger autoscale events for a Kubernetes cluster.
```js
flatten|k8sAutoscale()
```
Returns: [K8sAutoscaleNode](/kapacitor/v1.5/nodes/k8s_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### KapacitorLoopback
Create a Kapacitor loopback node that sends data back into Kapacitor as a stream.
```js
flatten|kapacitorLoopback()
```
Returns: [KapacitorLoopbackNode](/kapacitor/v1.5/nodes/kapacitor_loopback_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Last
Select the last point.
```js
flatten|last(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Log
Create a node that logs all data it receives.
```js
flatten|log()
```
Returns: [LogNode](/kapacitor/v1.5/nodes/log_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Max
Select the maximum point.
```js
flatten|max(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mean
Compute the mean of the data.
```js
flatten|mean(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Median
Compute the median of the data.
> **Note:** This method is not a selector.
If you want the median point, use `.percentile(field, 50.0)`.
```js
flatten|median(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Min
Select the minimum point.
```js
flatten|min(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mode
Compute the mode of the data.
```js
flatten|mode(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### MovingAverage
Compute a moving average of the last window points.
No points are emitted until the window is full.
```js
flatten|movingAverage(field string, window int64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Percentile
Select a point at the given percentile. This is a selector function; no interpolation between points is performed.
```js
flatten|percentile(field string, percentile float64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sample
Create a new node that samples the incoming points or batches.
One point will be emitted every count or duration specified.
```js
flatten|sample(rate interface{})
```
Returns: [SampleNode](/kapacitor/v1.5/nodes/sample_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Shift
Create a new node that shifts the incoming points or batches in time.
```js
flatten|shift(shift time.Duration)
```
Returns: [ShiftNode](/kapacitor/v1.5/nodes/shift_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sideload
Create a node that can load data from external sources.
```js
flatten|sideload()
```
Returns: [SideloadNode](/kapacitor/v1.5/nodes/sideload_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Spread
Compute the difference between `min` and `max` points.
```js
flatten|spread(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateCount
Create a node that tracks the number of consecutive points in a given state.
```js
flatten|stateCount(expression ast.LambdaNode)
```
Returns: [StateCountNode](/kapacitor/v1.5/nodes/state_count_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateDuration
Create a node that tracks duration in a given state.
```js
flatten|stateDuration(expression ast.LambdaNode)
```
Returns: [StateDurationNode](/kapacitor/v1.5/nodes/state_duration_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stats
Create a new stream of data that contains the internal statistics of the node.
The interval represents how often to emit the statistics based on real time.
This means the interval time is independent of the times of the data points the source node is receiving.
```js
flatten|stats(interval time.Duration)
```
Returns: [StatsNode](/kapacitor/v1.5/nodes/stats_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stddev
Compute the standard deviation.
```js
flatten|stddev(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sum
Compute the sum of all values.
```js
flatten|sum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### SwarmAutoscale
Create a node that can trigger autoscale events for a Docker swarm cluster.
```js
flatten|swarmAutoscale()
```
Returns: [SwarmAutoscaleNode](/kapacitor/v1.5/nodes/swarm_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Top
Select the top `num` points for `field` and sort by any extra tags or fields.
```js
flatten|top(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Union
Perform the union of this node and all other given nodes.
```js
flatten|union(node ...Node)
```
Returns: [UnionNode](/kapacitor/v1.5/nodes/union_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Where
Create a new node that filters the data stream by a given expression.
```js
flatten|where(expression ast.LambdaNode)
```
Returns: [WhereNode](/kapacitor/v1.5/nodes/where_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Window
Create a new node that windows the stream by time.
NOTE: Window can only be applied to stream edges.
```js
flatten|window()
```
Returns: [WindowNode](/kapacitor/v1.5/nodes/window_node/)
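For example, a sketch (the measurement name is illustrative) that emits one-minute windows of data every ten seconds:

```js
stream
    |from()
        .measurement('cpu')
    |window()
        // Keep the last 1m of points...
        .period(1m)
        // ...and emit the window every 10s.
        .every(10s)
```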
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>

---
title: GroupByNode
description: GroupByNode groups incoming data. Each group is then processed independently for the rest of the pipeline. Only tags that are dimensions in the grouping will be preserved; all other tags are dropped.
note: Auto generated by tickdoc
menu:
kapacitor_1_5_ref:
name: GroupByNode
identifier: group_by_node
weight: 120
parent: nodes
---
The `groupBy` node will group the incoming data.
Each group is then processed independently for the rest of the pipeline.
#### groupBy with aggregated data
When using `groupBy` with aggregated data, only tags that are dimensions in the grouping are preserved.
All other tags are dropped. With data that is not being aggregated, all tags are preserved.
Example:
```js
stream
|groupBy('service', 'datacenter')
...
```
The above example groups the data along two dimensions `service` and `datacenter`.
Groups are dynamically created as new data arrives and each group is processed
independently.
### Constructor
| Chaining Method | Description |
|:---------|:---------|
| **groupBy&nbsp;(&nbsp;`tag`&nbsp;`...interface{}`)** | Group the data by a set of tags. |
### Property Methods
| Setters | Description |
|:---|:---|
| **[byMeasurement](#bymeasurement)&nbsp;(&nbsp;)** | If set, include the measurement name in the group ID along with any other group-by dimensions. |
| **[quiet](#quiet)&nbsp;(&nbsp;)** | Suppress all error logging events from this node. |
### Chaining Methods
[Alert](#alert),
[Barrier](#barrier),
[Bottom](#bottom),
[ChangeDetect](#changedetect),
[Combine](#combine),
[Count](#count),
[CumulativeSum](#cumulativesum),
[Deadman](#deadman),
[Default](#default),
[Delete](#delete),
[Derivative](#derivative),
[Difference](#difference),
[Distinct](#distinct),
[Ec2Autoscale](#ec2autoscale),
[Elapsed](#elapsed),
[Eval](#eval),
[Exclude](#exclude),
[First](#first),
[Flatten](#flatten),
[GroupBy](#groupby),
[HoltWinters](#holtwinters),
[HoltWintersWithFit](#holtwinterswithfit),
[HttpOut](#httpout),
[HttpPost](#httppost),
[InfluxDBOut](#influxdbout),
[Join](#join),
[K8sAutoscale](#k8sautoscale),
[KapacitorLoopback](#kapacitorloopback),
[Last](#last),
[Log](#log),
[Max](#max),
[Mean](#mean),
[Median](#median),
[Min](#min),
[Mode](#mode),
[MovingAverage](#movingaverage),
[Percentile](#percentile),
[Sample](#sample),
[Shift](#shift),
[Sideload](#sideload),
[Spread](#spread),
[StateCount](#statecount),
[StateDuration](#stateduration),
[Stats](#stats),
[Stddev](#stddev),
[Sum](#sum),
[SwarmAutoscale](#swarmautoscale),
[Top](#top),
[Union](#union),
[Where](#where),
[Window](#window)
---
## Properties
Property methods modify state on the calling node.
They do not add another node to the pipeline, and always return a reference to the calling node.
Property methods are marked using the `.` operator.
### ByMeasurement
If set, the measurement name is included in the group ID along with any other group-by dimensions.
Example:
```js
...
|groupBy('host')
.byMeasurement()
```
The above example groups points by their host tag and measurement name.
To remove the measurement name from the group ID,
group by all existing dimensions without calling `byMeasurement`.
Example:
```js
|groupBy(*)
```
The above example groups by all dimensions and drops the measurement name from the group ID, if it was present.
```js
groupBy.byMeasurement()
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Quiet
Suppress all error logging events from this node.
```js
groupBy.quiet()
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
## Chaining Methods
Chaining methods create a new node in the pipeline as a child of the calling node.
They do not modify the calling node.
Chaining methods are marked using the `|` operator.
### Alert
Create an alert node, which can trigger alerts.
```js
groupBy|alert()
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Barrier
Create a new Barrier node that emits a BarrierMessage periodically.
One BarrierMessage will be emitted every period duration.
```js
groupBy|barrier()
```
Returns: [BarrierNode](/kapacitor/v1.5/nodes/barrier_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Bottom
Select the bottom `num` points for `field` and sort by any extra tags or fields.
```js
groupBy|bottom(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### ChangeDetect
Create a new node that only emits new points if different from the previous point.
```js
groupBy|changeDetect(field string)
```
Returns: [ChangeDetectNode](/kapacitor/v1.5/nodes/change_detect_node/)
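For example, a sketch (assuming a `machine_state` measurement with a `state` field) that emits a point per host only when the state actually changes:

```js
stream
    |from()
        .measurement('machine_state')
    |groupBy('host')
    // Drop points whose "state" value matches the previous point's.
    |changeDetect('state')
```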
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Combine
Combine this node with itself. The data is combined on timestamp.
```js
groupBy|combine(expressions ...ast.LambdaNode)
```
Returns: [CombineNode](/kapacitor/v1.5/nodes/combine_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Count
Count the number of points.
```js
groupBy|count(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### CumulativeSum
Compute a cumulative sum of each point that is received.
A point is emitted for every point collected.
```js
groupBy|cumulativeSum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Deadman
Helper function for creating an alert on low throughput, a.k.a. deadman's switch.
- Threshold: trigger alert if throughput drops below threshold in points/interval.
- Interval: how often to check the throughput.
- Expressions: optional list of expressions to also evaluate. Useful for time of day alerting.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
//Do normal processing of data
data...
```
The above is equivalent to this example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|stats(10s)
.align()
|derivative('emitted')
.unit(10s)
.nonNegative()
|alert()
.id('node \'stream0\' in task \'{{ .TaskName }}\'')
.message('{{ .ID }} is {{ if eq .Level "OK" }}alive{{ else }}dead{{ end }}: {{ index .Fields "emitted" | printf "%0.3f" }} points/10s.')
.crit(lambda: "emitted" <= 100.0)
//Do normal processing of data
data...
```
The `id` and `message` alert properties can be configured globally via the 'deadman' configuration section.
Since the [AlertNode](/kapacitor/v1.5/nodes/alert_node/) is the last piece it can be further modified as usual.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
.slack()
.channel('#dead_tasks')
//Do normal processing of data
data...
```
You can specify additional lambda expressions to further constrain when the deadman's switch is triggered.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
// Only trigger the alert if the time of day is between 8am-5pm.
data
|deadman(100.0, 10s, lambda: hour("time") >= 8 AND hour("time") <= 17)
//Do normal processing of data
data...
```
```js
groupBy|deadman(threshold float64, interval time.Duration, expr ...ast.LambdaNode)
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Default
Create a node that can set defaults for missing tags or fields.
```js
groupBy|default()
```
Returns: [DefaultNode](/kapacitor/v1.5/nodes/default_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Delete
Create a node that can delete tags or fields.
```js
groupBy|delete()
```
Returns: [DeleteNode](/kapacitor/v1.5/nodes/delete_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Derivative
Create a new node that computes the derivative of adjacent points.
```js
groupBy|derivative(field string)
```
Returns: [DerivativeNode](/kapacitor/v1.5/nodes/derivative_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Difference
Compute the difference between points independent of elapsed time.
```js
groupBy|difference(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Distinct
Produce a batch of only the distinct points.
```js
groupBy|distinct(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Ec2Autoscale
Create a node that can trigger autoscale events for an EC2 autoscale group.
```js
groupBy|ec2Autoscale()
```
Returns: [Ec2AutoscaleNode](/kapacitor/v1.5/nodes/ec2_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Elapsed
Compute the elapsed time between points.
```js
groupBy|elapsed(field string, unit time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
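For example, a sketch (measurement and field names are illustrative) that computes, per host, the time between consecutive request points in milliseconds:

```js
stream
    |from()
        .measurement('requests')
    |groupBy('host')
    // Emit the elapsed time between adjacent points, expressed in 1ms units.
    |elapsed('value', 1ms)
```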
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Eval
Create an eval node that evaluates the given transformation functions against each data point.
A list of expressions may be provided and will be evaluated in the order they are given.
The results are available to later expressions.
```js
groupBy|eval(expressions ...ast.LambdaNode)
```
Returns: [EvalNode](/kapacitor/v1.5/nodes/eval_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Exclude
Exclude removes the given tags from the grouping.
```js
groupBy|exclude(dims ...string)
```
Returns: [GroupByNode](/kapacitor/v1.5/nodes/group_by_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### First
Select the first point.
```js
groupBy|first(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Flatten
Flatten points with similar times into a single point.
```js
groupBy|flatten()
```
Returns: [FlattenNode](/kapacitor/v1.5/nodes/flatten_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### GroupBy
Group the data by a set of tags.
Pass the literal `*` to group by all dimensions.
Example:
```js
|groupBy(*)
```
```js
groupBy|groupBy(tag ...interface{})
```
Returns: [GroupByNode](/kapacitor/v1.5/nodes/group_by_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWinters
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
```js
groupBy|holtWinters(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWintersWithFit
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
This method also outputs all the points used to fit the data in addition to the forecasted data.
```js
groupBy|holtWintersWithFit(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpOut
Create an HTTP output node that caches the most recent data it has received.
The cached data is available at the given endpoint.
The endpoint is the relative path from the API endpoint of the running task.
For example, if the task endpoint is at `/kapacitor/v1/tasks/<task_id>` and endpoint is
`top10`, then the data can be requested from `/kapacitor/v1/tasks/<task_id>/top10`.
```js
groupBy|httpOut(endpoint string)
```
Returns: [HTTPOutNode](/kapacitor/v1.5/nodes/http_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpPost
Create an HTTP POST node that POSTs received data to the provided HTTP endpoint.
HttpPost expects zero or one arguments. If no argument is provided, you must specify
an endpoint using the `endpoint` property method.
```js
groupBy|httpPost(url ...string)
```
Returns: [HTTPPostNode](/kapacitor/v1.5/nodes/http_post_node/)
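For example, a sketch that POSTs each group's points to a hypothetical collector URL (the measurement name and URL are illustrative):

```js
stream
    |from()
        .measurement('cpu')
    |groupBy('host')
    // POST each received point to the given endpoint.
    |httpPost('https://example.com/kapacitor/cpu')
```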
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### InfluxDBOut
Create an InfluxDB output node that stores the incoming data in InfluxDB.
```js
groupBy|influxDBOut()
```
Returns: [InfluxDBOutNode](/kapacitor/v1.5/nodes/influx_d_b_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Join
Join this node with other nodes. The data is joined on timestamp.
```js
groupBy|join(others ...Node)
```
Returns: [JoinNode](/kapacitor/v1.5/nodes/join_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### K8sAutoscale
Create a node that can trigger autoscale events for a Kubernetes cluster.
```js
groupBy|k8sAutoscale()
```
Returns: [K8sAutoscaleNode](/kapacitor/v1.5/nodes/k8s_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### KapacitorLoopback
Create a Kapacitor loopback node that sends data back into Kapacitor as a stream.
```js
groupBy|kapacitorLoopback()
```
Returns: [KapacitorLoopbackNode](/kapacitor/v1.5/nodes/kapacitor_loopback_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Last
Select the last point.
```js
groupBy|last(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Log
Create a node that logs all data it receives.
```js
groupBy|log()
```
Returns: [LogNode](/kapacitor/v1.5/nodes/log_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Max
Select the maximum point.
```js
groupBy|max(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mean
Compute the mean of the data.
```js
groupBy|mean(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Median
Compute the median of the data.
> **Note:** This method is not a selector.
If you want the median point, use `.percentile(field, 50.0)`.
```js
groupBy|median(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Min
Select the minimum point.
```js
groupBy|min(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mode
Compute the mode of the data.
```js
groupBy|mode(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### MovingAverage
Compute a moving average of the last window points.
No points are emitted until the window is full.
```js
groupBy|movingAverage(field string, window int64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
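For example, a sketch (measurement and field names are illustrative) that smooths CPU usage per host over the last five points:

```js
stream
    |from()
        .measurement('cpu')
    |groupBy('host')
    // Emit the average of the most recent 5 "usage_user" values.
    |movingAverage('usage_user', 5)
```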
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Percentile
Select a point at the given percentile. This is a selector function; no interpolation between points is performed.
```js
groupBy|percentile(field string, percentile float64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
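For example, a sketch (measurement and field names are illustrative) that selects the 95th-percentile point from each one-minute window, per host:

```js
stream
    |from()
        .measurement('requests')
    |groupBy('host')
    |window()
        .period(1m)
        .every(1m)
    // Select the actual point at the 95th percentile of "value".
    |percentile('value', 95.0)
```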
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sample
Create a new node that samples the incoming points or batches.
One point will be emitted every count or duration specified.
```js
groupBy|sample(rate interface{})
```
Returns: [SampleNode](/kapacitor/v1.5/nodes/sample_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Shift
Create a new node that shifts the incoming points or batches in time.
```js
groupBy|shift(shift time.Duration)
```
Returns: [ShiftNode](/kapacitor/v1.5/nodes/shift_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sideload
Create a node that can load data from external sources.
```js
groupBy|sideload()
```
Returns: [SideloadNode](/kapacitor/v1.5/nodes/sideload_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Spread
Compute the difference between `min` and `max` points.
```js
groupBy|spread(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateCount
Create a node that tracks number of consecutive points in a given state.
```js
groupBy|stateCount(expression ast.LambdaNode)
```
Returns: [StateCountNode](/kapacitor/v1.5/nodes/state_count_node/)
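For example, a sketch (assuming a `cpu` measurement with a `usage_idle` field; `state_count` is the node's default output field) that alerts after three consecutive low-idle points per host:

```js
stream
    |from()
        .measurement('cpu')
    |groupBy('host')
    |stateCount(lambda: "usage_idle" < 10)
    |alert()
        // "state_count" holds the number of consecutive points in the state.
        .crit(lambda: "state_count" >= 3)
```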
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateDuration
Create a node that tracks duration in a given state.
```js
groupBy|stateDuration(expression ast.LambdaNode)
```
Returns: [StateDurationNode](/kapacitor/v1.5/nodes/state_duration_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stats
Create a new stream of data that contains the internal statistics of the node.
The interval represents how often to emit the statistics based on real time.
This means the interval time is independent of the times of the data points the source node is receiving.
```js
groupBy|stats(interval time.Duration)
```
Returns: [StatsNode](/kapacitor/v1.5/nodes/stats_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stddev
Compute the standard deviation.
```js
groupBy|stddev(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sum
Compute the sum of all values.
```js
groupBy|sum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### SwarmAutoscale
Create a node that can trigger autoscale events for a Docker swarm cluster.
```js
groupBy|swarmAutoscale()
```
Returns: [SwarmAutoscaleNode](/kapacitor/v1.5/nodes/swarm_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Top
Select the top `num` points for `field` and sort by any extra tags or fields.
```js
groupBy|top(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Union
Perform the union of this node and all other given nodes.
```js
groupBy|union(node ...Node)
```
Returns: [UnionNode](/kapacitor/v1.5/nodes/union_node/)
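For example, a sketch that unions two illustrative measurement streams into a single pipeline:

```js
var cpu = stream
    |from()
        .measurement('cpu')
var mem = stream
    |from()
        .measurement('mem')
// Merge both streams and log every point that flows through.
cpu
    |union(mem)
    |log()
```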
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Where
Create a new node that filters the data stream by a given expression.
```js
groupBy|where(expression ast.LambdaNode)
```
Returns: [WhereNode](/kapacitor/v1.5/nodes/where_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Window
Create a new node that windows the stream by time.
NOTE: Window can only be applied to stream edges.
```js
groupBy|window()
```
Returns: [WindowNode](/kapacitor/v1.5/nodes/window_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>

---
title: HTTPOutNode
description: HTTPOutNode caches the most recent data for each group it has received. The cached data is available at the given endpoint, which is the relative path from the API endpoint of the running task.
note: Auto generated by tickdoc
menu:
kapacitor_1_5_ref:
name: HTTPOutNode
identifier: http_out_node
weight: 130
parent: nodes
---
The `httpOut` node acts as a simple passthrough and caches the most recent data for each group it has received.
Because of this, any available chaining method can be used to handle the cached data.
The cached data is available at the given endpoint.
The endpoint is the relative path from the API endpoint of the running task.
For example if the task endpoint is at `/kapacitor/v1/tasks/<task_id>` and endpoint is
`top10`, then the data can be requested from `/kapacitor/v1/tasks/<task_id>/top10`.
Example:
```js
stream
|window()
.period(10s)
.every(5s)
|top('value', 10)
//Publish the top 10 results over the last 10s updated every 5s.
|httpOut('top10')
```
Beware of adding a trailing slash (`/`) to the URL; this results in a 404 error, the
same as requesting a task that does not exist.
Note that the example script above comes from the
[scores](https://github.com/influxdata/kapacitor/tree/master/examples/scores) example.
See the complete scores example for a concrete demonstration.
### Constructor
| Chaining Method | Description |
|:---------|:---------|
| **httpOut&nbsp;(&nbsp;`endpoint`&nbsp;`string`)** | Create an HTTP output node that caches the most recent data it has received. The cached data is available at the given endpoint. The endpoint is the relative path from the API endpoint of the running task. For example, if the task endpoint is at `/kapacitor/v1/tasks/<task_id>` and endpoint is `top10`, then the data can be requested from `/kapacitor/v1/tasks/<task_id>/top10`. |
### Property Methods
| Setters | Description |
|:---|:---|
| **[quiet](#quiet)&nbsp;(&nbsp;)** | Suppress all error logging events from this node. |
### Chaining Methods
[Alert](#alert),
[Barrier](#barrier),
[Bottom](#bottom),
[ChangeDetect](#changedetect),
[Combine](#combine),
[Count](#count),
[CumulativeSum](#cumulativesum),
[Deadman](#deadman),
[Default](#default),
[Delete](#delete),
[Derivative](#derivative),
[Difference](#difference),
[Distinct](#distinct),
[Ec2Autoscale](#ec2autoscale),
[Elapsed](#elapsed),
[Eval](#eval),
[First](#first),
[Flatten](#flatten),
[GroupBy](#groupby),
[HoltWinters](#holtwinters),
[HoltWintersWithFit](#holtwinterswithfit),
[HttpOut](#httpout),
[HttpPost](#httppost),
[InfluxDBOut](#influxdbout),
[Join](#join),
[K8sAutoscale](#k8sautoscale),
[KapacitorLoopback](#kapacitorloopback),
[Last](#last),
[Log](#log),
[Max](#max),
[Mean](#mean),
[Median](#median),
[Min](#min),
[Mode](#mode),
[MovingAverage](#movingaverage),
[Percentile](#percentile),
[Sample](#sample),
[Shift](#shift),
[Sideload](#sideload),
[Spread](#spread),
[StateCount](#statecount),
[StateDuration](#stateduration),
[Stats](#stats),
[Stddev](#stddev),
[Sum](#sum),
[SwarmAutoscale](#swarmautoscale),
[Top](#top),
[Union](#union),
[Where](#where),
[Window](#window)
---
## Properties
Property methods modify state on the calling node.
They do not add another node to the pipeline, and always return a reference to the calling node.
Property methods are marked using the `.` operator.
### Quiet
Suppress all error logging events from this node.
```js
httpOut.quiet()
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
## Chaining Methods
Chaining methods create a new node in the pipeline as a child of the calling node.
They do not modify the calling node.
Chaining methods are marked using the `|` operator.
### Alert
Create an alert node, which can trigger alerts.
```js
httpOut|alert()
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Barrier
Create a new Barrier node that emits a BarrierMessage periodically.
One BarrierMessage will be emitted every period duration.
```js
httpOut|barrier()
```
Returns: [BarrierNode](/kapacitor/v1.5/nodes/barrier_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Bottom
Select the bottom `num` points for `field` and sort by any extra tags or fields.
```js
httpOut|bottom(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### ChangeDetect
Create a new node that only emits new points if different from the previous point.
```js
httpOut|changeDetect(field string)
```
Returns: [ChangeDetectNode](/kapacitor/v1.5/nodes/change_detect_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Combine
Combine this node with itself. The data is combined on timestamp.
```js
httpOut|combine(expressions ...ast.LambdaNode)
```
Returns: [CombineNode](/kapacitor/v1.5/nodes/combine_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Count
Count the number of points.
```js
httpOut|count(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### CumulativeSum
Compute a cumulative sum of each point that is received.
A point is emitted for every point collected.
```js
httpOut|cumulativeSum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Deadman
Helper function for creating an alert on low throughput, a.k.a. deadman's switch.
- Threshold: trigger alert if throughput drops below threshold in points/interval.
- Interval: how often to check the throughput.
- Expressions: optional list of expressions to also evaluate. Useful for time of day alerting.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
//Do normal processing of data
data...
```
The above is equivalent to this example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|stats(10s)
.align()
|derivative('emitted')
.unit(10s)
.nonNegative()
|alert()
.id('node \'stream0\' in task \'{{ .TaskName }}\'')
.message('{{ .ID }} is {{ if eq .Level "OK" }}alive{{ else }}dead{{ end }}: {{ index .Fields "emitted" | printf "%0.3f" }} points/10s.')
.crit(lambda: "emitted" <= 100.0)
//Do normal processing of data
data...
```
The `id` and `message` alert properties can be configured globally via the 'deadman' configuration section.
Since the [AlertNode](/kapacitor/v1.5/nodes/alert_node/) is the last piece it can be further modified as usual.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
.slack()
.channel('#dead_tasks')
//Do normal processing of data
data...
```
You can specify additional lambda expressions to further constrain when the deadman's switch is triggered.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
// Only trigger the alert if the time of day is between 8am-5pm.
data
|deadman(100.0, 10s, lambda: hour("time") >= 8 AND hour("time") <= 17)
//Do normal processing of data
data...
```
```js
httpOut|deadman(threshold float64, interval time.Duration, expr ...ast.LambdaNode)
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Default
Create a node that can set defaults for missing tags or fields.
```js
httpOut|default()
```
Returns: [DefaultNode](/kapacitor/v1.5/nodes/default_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Delete
Create a node that can delete tags or fields.
```js
httpOut|delete()
```
Returns: [DeleteNode](/kapacitor/v1.5/nodes/delete_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Derivative
Create a new node that computes the derivative of adjacent points.
```js
httpOut|derivative(field string)
```
Returns: [DerivativeNode](/kapacitor/v1.5/nodes/derivative_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Difference
Compute the difference between points independent of elapsed time.
```js
httpOut|difference(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Distinct
Produce a batch containing only the distinct points.
```js
httpOut|distinct(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Ec2Autoscale
Create a node that can trigger autoscale events for an EC2 autoscale group.
```js
httpOut|ec2Autoscale()
```
Returns: [Ec2AutoscaleNode](/kapacitor/v1.5/nodes/ec2_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Elapsed
Compute the elapsed time between points.
```js
httpOut|elapsed(field string, unit time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
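For example, a sketch (hypothetical names) that reports the time between consecutive points as multiples of one millisecond:
```js
stream
    |from()
        .measurement('requests')
    |elapsed('value', 1ms)
```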
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Eval
Create an eval node that will apply the given transformation function to each data point.
A list of expressions may be provided and will be evaluated in the order they are given.
The results are available to later expressions.
```js
httpOut|eval(expressions ...ast.LambdaNode)
```
Returns: [EvalNode](/kapacitor/v1.5/nodes/eval_node/)
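For example, a sketch (hypothetical field names) that computes an error ratio and names the result with the `.as()` property:
```js
stream
    |from()
        .measurement('requests')
    |eval(lambda: "errors" / "total")
        .as('error_ratio')
```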
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### First
Select the first point.
```js
httpOut|first(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Flatten
Flatten points with similar times into a single point.
```js
httpOut|flatten()
```
Returns: [FlattenNode](/kapacitor/v1.5/nodes/flatten_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### GroupBy
Group the data by a set of tags.
Pass the literal `*` to group by all dimensions.
Example:
```js
|groupBy(*)
```
```js
httpOut|groupBy(tag ...interface{})
```
Returns: [GroupByNode](/kapacitor/v1.5/nodes/group_by_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWinters
Compute the Holt-Winters (https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
```js
httpOut|holtWinters(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWintersWithFit
Compute the Holt-Winters (https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
This method also outputs all the points used to fit the data in addition to the forecasted data.
```js
httpOut|holtWintersWithFit(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpOut
Create an HTTP output node that caches the most recent data it has received.
The cached data is available at the given endpoint.
The endpoint is the relative path from the API endpoint of the running task.
For example, if the task endpoint is at `/kapacitor/v1/tasks/<task_id>` and endpoint is
`top10`, then the data can be requested from `/kapacitor/v1/tasks/<task_id>/top10`.
```js
httpOut|httpOut(endpoint string)
```
Returns: [HTTPOutNode](/kapacitor/v1.5/nodes/http_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpPost
Creates an HTTP Post node that POSTs received data to the provided HTTP endpoint.
HttpPost expects 0 or 1 arguments. If 0 arguments are provided, you must specify an
endpoint property method.
```js
httpOut|httpPost(url ...string)
```
Returns: [HTTPPostNode](/kapacitor/v1.5/nodes/http_post_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### InfluxDBOut
Create an InfluxDB output node that stores the incoming data in InfluxDB.
```js
httpOut|influxDBOut()
```
Returns: [InfluxDBOutNode](/kapacitor/v1.5/nodes/influx_d_b_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Join
Join this node with other nodes. The data is joined on timestamp.
```js
httpOut|join(others ...Node)
```
Returns: [JoinNode](/kapacitor/v1.5/nodes/join_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### K8sAutoscale
Create a node that can trigger autoscale events for a Kubernetes cluster.
```js
httpOut|k8sAutoscale()
```
Returns: [K8sAutoscaleNode](/kapacitor/v1.5/nodes/k8s_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### KapacitorLoopback
Create a Kapacitor loopback node that sends data back into Kapacitor as a stream.
```js
httpOut|kapacitorLoopback()
```
Returns: [KapacitorLoopbackNode](/kapacitor/v1.5/nodes/kapacitor_loopback_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Last
Select the last point.
```js
httpOut|last(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Log
Create a node that logs all data it receives.
```js
httpOut|log()
```
Returns: [LogNode](/kapacitor/v1.5/nodes/log_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Max
Select the maximum point.
```js
httpOut|max(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mean
Compute the mean of the data.
```js
httpOut|mean(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Median
Compute the median of the data.
> **Note:** This method is not a selector.
If you want the median point, use `.percentile(field, 50.0)`.
```js
httpOut|median(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Min
Select the minimum point.
```js
httpOut|min(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mode
Compute the mode of the data.
```js
httpOut|mode(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### MovingAverage
Compute a moving average of the last window points.
No points are emitted until the window is full.
```js
httpOut|movingAverage(field string, window int64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
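For example, a sketch (hypothetical names) that averages the last ten `value` points; no output is produced until ten points have arrived:
```js
stream
    |from()
        .measurement('cpu')
    |movingAverage('value', 10)
```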
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Percentile
Select a point at the given percentile. This is a selector function; no interpolation between points is performed.
```js
httpOut|percentile(field string, percentile float64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
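For example, a sketch (hypothetical names) that selects the point at the 90th percentile of `value` in each one-minute window:
```js
stream
    |from()
        .measurement('latency')
    |window()
        .period(1m)
        .every(1m)
    |percentile('value', 90.0)
```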
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sample
Create a new node that samples the incoming points or batches.
One point will be emitted for every specified count of points or interval of time.
```js
httpOut|sample(rate interface{})
```
Returns: [SampleNode](/kapacitor/v1.5/nodes/sample_node/)
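For example, a sketch (hypothetical names); passing an integer keeps one point out of every N, while passing a duration such as `10s` keeps one point per interval:
```js
stream
    |from()
        .measurement('cpu')
    // Keep one point out of every 10.
    |sample(10)
```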
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Shift
Create a new node that shifts the incoming points or batches in time.
```js
httpOut|shift(shift time.Duration)
```
Returns: [ShiftNode](/kapacitor/v1.5/nodes/shift_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sideload
Create a node that can load data from external sources.
```js
httpOut|sideload()
```
Returns: [SideloadNode](/kapacitor/v1.5/nodes/sideload_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Spread
Compute the difference between `min` and `max` points.
```js
httpOut|spread(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateCount
Create a node that tracks the number of consecutive points in a given state.
```js
httpOut|stateCount(expression ast.LambdaNode)
```
Returns: [StateCountNode](/kapacitor/v1.5/nodes/state_count_node/)
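For example, a sketch (hypothetical names) that counts how many consecutive points have had `value` above 90:
```js
stream
    |from()
        .measurement('cpu')
    |stateCount(lambda: "value" > 90)
```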
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateDuration
Create a node that tracks the duration in a given state.
```js
httpOut|stateDuration(expression ast.LambdaNode)
```
Returns: [StateDurationNode](/kapacitor/v1.5/nodes/state_duration_node/)
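For example, a sketch (hypothetical names) that tracks how long `value` has continuously exceeded 90, reported in minutes via the node's `unit` property:
```js
stream
    |from()
        .measurement('cpu')
    |stateDuration(lambda: "value" > 90)
        .unit(1m)
```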
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stats
Create a new stream of data that contains the internal statistics of the node.
The interval represents how often to emit the statistics based on real time.
This means the interval time is independent of the times of the data points the source node is receiving.
```js
httpOut|stats(interval time.Duration)
```
Returns: [StatsNode](/kapacitor/v1.5/nodes/stats_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stddev
Compute the standard deviation.
```js
httpOut|stddev(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sum
Compute the sum of all values.
```js
httpOut|sum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### SwarmAutoscale
Create a node that can trigger autoscale events for a Docker Swarm cluster.
```js
httpOut|swarmAutoscale()
```
Returns: [SwarmAutoscaleNode](/kapacitor/v1.5/nodes/swarm_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Top
Select the top `num` points for `field` and sort by any extra tags or fields.
```js
httpOut|top(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Union
Perform the union of this node and all other given nodes.
```js
httpOut|union(node ...Node)
```
Returns: [UnionNode](/kapacitor/v1.5/nodes/union_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Where
Create a new node that filters the data stream by a given expression.
```js
httpOut|where(expression ast.LambdaNode)
```
Returns: [WhereNode](/kapacitor/v1.5/nodes/where_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Window
Create a new node that windows the stream by time.
NOTE: Window can only be applied to stream edges.
```js
httpOut|window()
```
Returns: [WindowNode](/kapacitor/v1.5/nodes/window_node/)
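For example, a sketch (hypothetical measurement name) that collects ten seconds of data and emits the window every five seconds:
```js
stream
    |from()
        .measurement('cpu')
    |window()
        .period(10s)
        .every(5s)
```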
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
---
title: HTTPPostNode
description: HTTPPostNode takes the incoming data stream and will POST it to an HTTP endpoint. That endpoint may be specified as a positional argument, or as an endpoint property method on httpPost. Multiple endpoint property methods may be specified.
note: Auto generated by tickdoc
menu:
kapacitor_1_5_ref:
name: HTTPPostNode
identifier: http_post_node
weight: 140
parent: nodes
---
The `httpPost` node will take the incoming data stream and POST it to an HTTP endpoint.
That endpoint may be specified as a positional argument, or as an endpoint property
method on httpPost. Multiple endpoint property methods may be specified.
Example:
```js
stream
|window()
.period(10s)
.every(5s)
    |top(10, 'value')
//Post the top 10 results over the last 10s updated every 5s.
|httpPost('http://example.com/api/top10')
```
Example:
```js
stream
|window()
.period(10s)
.every(5s)
    |top(10, 'value')
//Post the top 10 results over the last 10s updated every 5s.
|httpPost()
.endpoint('example')
```
### Constructor
| Chaining Method | Description |
|:---------|:---------|
| **httpPost&nbsp;(&nbsp;`url`&nbsp;`...string`)** | Creates an HTTP Post node that POSTs received data to the provided HTTP endpoint. HttpPost expects 0 or 1 arguments. If 0 arguments are provided, you must specify an endpoint property method. |
### Property Methods
| Setters | Description |
|:---|:---|
| **[captureResponse](#captureresponse)&nbsp;(&nbsp;)** | CaptureResponse indicates that the HTTP response should be read and logged if the status code was not a 2xx code. |
| **[codeField](#codefield)&nbsp;(&nbsp;`value`&nbsp;`string`)** | CodeField is the name of the field in which to place the HTTP status code. If the HTTP request fails at a layer below HTTP (for example, a rejected TCP connection), the status code is set to 0. |
| **[endpoint](#endpoint)&nbsp;(&nbsp;`endpoint`&nbsp;`string`)** | Name of the endpoint to be used, as is defined in the configuration file. |
| **[header](#header)&nbsp;(&nbsp;`k`&nbsp;`string`,&nbsp;`v`&nbsp;`string`)** | Add a header to the POST request. |
| **[quiet](#quiet)&nbsp;(&nbsp;)** | Suppress all error logging events from this node. |
| **[timeout](#timeout)&nbsp;(&nbsp;`value`&nbsp;`time.Duration`)** | Timeout for the HTTP POST request. |
### Chaining Methods
[Alert](#alert),
[Barrier](#barrier),
[Bottom](#bottom),
[ChangeDetect](#changedetect),
[Combine](#combine),
[Count](#count),
[CumulativeSum](#cumulativesum),
[Deadman](#deadman),
[Default](#default),
[Delete](#delete),
[Derivative](#derivative),
[Difference](#difference),
[Distinct](#distinct),
[Ec2Autoscale](#ec2autoscale),
[Elapsed](#elapsed),
[Eval](#eval),
[First](#first),
[Flatten](#flatten),
[GroupBy](#groupby),
[HoltWinters](#holtwinters),
[HoltWintersWithFit](#holtwinterswithfit),
[HttpOut](#httpout),
[HttpPost](#httppost),
[InfluxDBOut](#influxdbout),
[Join](#join),
[K8sAutoscale](#k8sautoscale),
[KapacitorLoopback](#kapacitorloopback),
[Last](#last),
[Log](#log),
[Max](#max),
[Mean](#mean),
[Median](#median),
[Min](#min),
[Mode](#mode),
[MovingAverage](#movingaverage),
[Percentile](#percentile),
[Sample](#sample),
[Shift](#shift),
[Sideload](#sideload),
[Spread](#spread),
[StateCount](#statecount),
[StateDuration](#stateduration),
[Stats](#stats),
[Stddev](#stddev),
[Sum](#sum),
[SwarmAutoscale](#swarmautoscale),
[Top](#top),
[Union](#union),
[Where](#where),
[Window](#window)
---
## Properties
Property methods modify state on the calling node.
They do not add another node to the pipeline, and always return a reference to the calling node.
Property methods are marked using the `.` operator.
### CaptureResponse
CaptureResponse indicates that the HTTP response should be read and logged if
the status code was not a 2xx code.
```js
httpPost.captureResponse()
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### CodeField
CodeField is the name of the field in which to place the HTTP status code.
If the HTTP request fails at a layer below HTTP (for example, a rejected TCP connection), the status code is set to 0.
```js
httpPost.codeField(value string)
```
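For example, a sketch (the field name is hypothetical) that records each response's HTTP status code in a `code` field:
```js
stream
    |httpPost()
        .endpoint('example')
        .codeField('code')
```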
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Endpoint
Name of the endpoint to be used, as is defined in the configuration file.
Example:
```js
stream
|httpPost()
.endpoint('example')
```
```js
httpPost.endpoint(endpoint string)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Header
Add a header to the POST request.
Example:
```js
stream
|httpPost()
.endpoint('example')
.header('my', 'header')
```
```js
httpPost.header(k string, v string)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Quiet
Suppress all error logging events from this node.
```js
httpPost.quiet()
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Timeout
Timeout for the HTTP POST request.
```js
httpPost.timeout(value time.Duration)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
## Chaining Methods
Chaining methods create a new node in the pipeline as a child of the calling node.
They do not modify the calling node.
Chaining methods are marked using the `|` operator.
### Alert
Create an alert node, which can trigger alerts.
```js
httpPost|alert()
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Barrier
Create a new Barrier node that emits a BarrierMessage periodically.
One BarrierMessage will be emitted every period duration.
```js
httpPost|barrier()
```
Returns: [BarrierNode](/kapacitor/v1.5/nodes/barrier_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Bottom
Select the bottom `num` points for `field` and sort by any extra tags or fields.
```js
httpPost|bottom(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### ChangeDetect
Create a new node that emits a point only when it differs from the previous point.
```js
httpPost|changeDetect(field string)
```
Returns: [ChangeDetectNode](/kapacitor/v1.5/nodes/change_detect_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Combine
Combine this node with itself. The data is combined on timestamp.
```js
httpPost|combine(expressions ...ast.LambdaNode)
```
Returns: [CombineNode](/kapacitor/v1.5/nodes/combine_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Count
Count the number of points.
```js
httpPost|count(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### CumulativeSum
Compute a cumulative sum of each point that is received.
A point is emitted for every point collected.
```js
httpPost|cumulativeSum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Deadman
Helper function for creating an alert on low throughput, a.k.a. deadman's switch.
- Threshold: trigger alert if throughput drops below threshold in points/interval.
- Interval: how often to check the throughput.
- Expressions: optional list of expressions to also evaluate. Useful for time of day alerting.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
//Do normal processing of data
data...
```
The above is equivalent to this example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|stats(10s)
.align()
|derivative('emitted')
.unit(10s)
.nonNegative()
|alert()
.id('node \'stream0\' in task \'{{ .TaskName }}\'')
.message('{{ .ID }} is {{ if eq .Level "OK" }}alive{{ else }}dead{{ end }}: {{ index .Fields "emitted" | printf "%0.3f" }} points/10s.')
.crit(lambda: "emitted" <= 100.0)
//Do normal processing of data
data...
```
The `id` and `message` alert properties can be configured globally via the 'deadman' configuration section.
Since the [AlertNode](/kapacitor/v1.5/nodes/alert_node/) is the last piece it can be further modified as usual.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
.slack()
.channel('#dead_tasks')
//Do normal processing of data
data...
```
You can specify additional lambda expressions to further constrain when the deadman's switch is triggered.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
// Only trigger the alert if the time of day is between 8am-5pm.
data
|deadman(100.0, 10s, lambda: hour("time") >= 8 AND hour("time") <= 17)
//Do normal processing of data
data...
```
```js
httpPost|deadman(threshold float64, interval time.Duration, expr ...ast.LambdaNode)
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Default
Create a node that can set defaults for missing tags or fields.
```js
httpPost|default()
```
Returns: [DefaultNode](/kapacitor/v1.5/nodes/default_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Delete
Create a node that can delete tags or fields.
```js
httpPost|delete()
```
Returns: [DeleteNode](/kapacitor/v1.5/nodes/delete_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Derivative
Create a new node that computes the derivative of adjacent points.
```js
httpPost|derivative(field string)
```
Returns: [DerivativeNode](/kapacitor/v1.5/nodes/derivative_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Difference
Compute the difference between points independent of elapsed time.
```js
httpPost|difference(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Distinct
Produce a batch containing only the distinct points.
```js
httpPost|distinct(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Ec2Autoscale
Create a node that can trigger autoscale events for an EC2 autoscale group.
```js
httpPost|ec2Autoscale()
```
Returns: [Ec2AutoscaleNode](/kapacitor/v1.5/nodes/ec2_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Elapsed
Compute the elapsed time between points.
```js
httpPost|elapsed(field string, unit time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Eval
Create an eval node that will apply the given transformation function to each data point.
A list of expressions may be provided and will be evaluated in the order they are given.
The results are available to later expressions.
```js
httpPost|eval(expressions ...ast.LambdaNode)
```
Returns: [EvalNode](/kapacitor/v1.5/nodes/eval_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### First
Select the first point.
```js
httpPost|first(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Flatten
Flatten points with similar times into a single point.
```js
httpPost|flatten()
```
Returns: [FlattenNode](/kapacitor/v1.5/nodes/flatten_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### GroupBy
Group the data by a set of tags.
Pass the literal `*` to group by all dimensions.
Example:
```js
|groupBy(*)
```
```js
httpPost|groupBy(tag ...interface{})
```
Returns: [GroupByNode](/kapacitor/v1.5/nodes/group_by_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWinters
Compute the Holt-Winters (https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
```js
httpPost|holtWinters(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
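For example (the measurement, field name, and forecast parameters below are hypothetical), a task might forecast the next ten 1-minute intervals from overlapping hourly windows:
```js
stream
    |from()
        .measurement('water_level')
    |window()
        .period(1h)
        .every(10m)
    // Forecast 10 future points at 1m intervals; 0 disables seasonal fitting
    |holtWinters('value', 10, 0, 1m)
```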
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWintersWithFit
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
This method also outputs all the points used to fit the data in addition to the forecasted data.
```js
httpPost|holtWintersWithFit(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpOut
Create an HTTP output node that caches the most recent data it has received.
The cached data is available at the given endpoint.
The endpoint is the relative path from the API endpoint of the running task.
For example, if the task endpoint is at `/kapacitor/v1/tasks/<task_id>` and endpoint is
`top10`, then the data can be requested from `/kapacitor/v1/tasks/<task_id>/top10`.
```js
httpPost|httpOut(endpoint string)
```
Returns: [HTTPOutNode](/kapacitor/v1.5/nodes/http_out_node/)
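Following the `top10` endpoint described above, a sketch of such a task might look like this (the measurement and field names are hypothetical):
```js
stream
    |from()
        .measurement('requests')
    |window()
        .period(10s)
        .every(10s)
    |top(10, 'value')
    // Cached data becomes available at /kapacitor/v1/tasks/<task_id>/top10
    |httpOut('top10')
```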
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpPost
Creates an HTTP POST node that POSTs received data to the provided HTTP endpoint.
HttpPost expects zero or one arguments. If no arguments are provided, you must specify an
endpoint using the `endpoint` property method.
```js
httpPost|httpPost(url ...string)
```
Returns: [HTTPPostNode](/kapacitor/v1.5/nodes/http_post_node/)
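As a sketch of the two forms (the URL and endpoint name below are hypothetical):
```js
// 1 argument: POST directly to the given URL
    |httpPost('https://example.com/kapacitor/data')

// 0 arguments: reference an endpoint defined in the [[httppost]]
// section of the Kapacitor configuration
    |httpPost()
        .endpoint('example')
```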
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### InfluxDBOut
Create an InfluxDB output node that will store the incoming data into InfluxDB.
```js
httpPost|influxDBOut()
```
Returns: [InfluxDBOutNode](/kapacitor/v1.5/nodes/influx_d_b_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Join
Join this node with other nodes. The data is joined on timestamp.
```js
httpPost|join(others ...Node)
```
Returns: [JoinNode](/kapacitor/v1.5/nodes/join_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### K8sAutoscale
Create a node that can trigger autoscale events for a Kubernetes cluster.
```js
httpPost|k8sAutoscale()
```
Returns: [K8sAutoscaleNode](/kapacitor/v1.5/nodes/k8s_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### KapacitorLoopback
Create a Kapacitor loopback node that will send data back into Kapacitor as a stream.
```js
httpPost|kapacitorLoopback()
```
Returns: [KapacitorLoopbackNode](/kapacitor/v1.5/nodes/kapacitor_loopback_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Last
Select the last point.
```js
httpPost|last(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Log
Create a node that logs all data it receives.
```js
httpPost|log()
```
Returns: [LogNode](/kapacitor/v1.5/nodes/log_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Max
Select the maximum point.
```js
httpPost|max(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mean
Compute the mean of the data.
```js
httpPost|mean(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Median
Compute the median of the data.
> **Note:** This method is not a selector.
If you want the median point, use `.percentile(field, 50.0)`.
```js
httpPost|median(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Min
Select the minimum point.
```js
httpPost|min(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mode
Compute the mode of the data.
```js
httpPost|mode(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### MovingAverage
Compute a moving average of the last window points.
No points are emitted until the window is full.
```js
httpPost|movingAverage(field string, window int64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
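For instance (the field name and window size below are hypothetical):
```js
stream
    |from()
        .measurement('cpu')
    // Average of the 5 most recent points; nothing is emitted
    // until 5 points have been received
    |movingAverage('usage_idle', 5)
```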
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Percentile
Select a point at the given percentile. This is a selector function; no interpolation between points is performed.
```js
httpPost|percentile(field string, percentile float64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sample
Create a new node that samples the incoming points or batches.
One point will be emitted for every specified count of points or every specified duration.
```js
httpPost|sample(rate interface{})
```
Returns: [SampleNode](/kapacitor/v1.5/nodes/sample_node/)
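As a sketch (the measurement name is hypothetical), the rate may be a count:
```js
stream
    |from()
        .measurement('cpu')
    // Keep one point out of every 10 received
    |sample(10)
```
Passing a duration instead, e.g. `|sample(10s)`, emits one point per 10-second interval.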
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Shift
Create a new node that shifts the incoming points or batches in time.
```js
httpPost|shift(shift time.Duration)
```
Returns: [ShiftNode](/kapacitor/v1.5/nodes/shift_node/)
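For example (the measurement name is hypothetical):
```js
stream
    |from()
        .measurement('requests')
    // Shift point timestamps forward by 10 seconds;
    // a negative duration such as -10s shifts them backward
    |shift(10s)
```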
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sideload
Create a node that can load data from external sources.
```js
httpPost|sideload()
```
Returns: [SideloadNode](/kapacitor/v1.5/nodes/sideload_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Spread
Compute the difference between `min` and `max` points.
```js
httpPost|spread(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateCount
Create a node that tracks the number of consecutive points in a given state.
```js
httpPost|stateCount(expression ast.LambdaNode)
```
Returns: [StateCountNode](/kapacitor/v1.5/nodes/state_count_node/)
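As an illustrative sketch (the measurement, field, and threshold are hypothetical):
```js
stream
    |from()
        .measurement('cpu')
    // Count consecutive points where idle CPU is below 10%;
    // the count resets whenever the expression evaluates to false
    |stateCount(lambda: "usage_idle" < 10.0)
```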
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateDuration
Create a node that tracks duration in a given state.
```js
httpPost|stateDuration(expression ast.LambdaNode)
```
Returns: [StateDurationNode](/kapacitor/v1.5/nodes/state_duration_node/)
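For example (the measurement, field, and threshold are hypothetical):
```js
stream
    |from()
        .measurement('cpu')
    // Track how long idle CPU has been below 10%, reported in minutes
    |stateDuration(lambda: "usage_idle" < 10.0)
        .unit(1m)
```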
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stats
Create a new stream of data that contains the internal statistics of the node.
The interval represents how often to emit the statistics based on real time.
This means the interval time is independent of the times of the data points the source node is receiving.
```js
httpPost|stats(interval time.Duration)
```
Returns: [StatsNode](/kapacitor/v1.5/nodes/stats_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stddev
Compute the standard deviation.
```js
httpPost|stddev(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sum
Compute the sum of all values.
```js
httpPost|sum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### SwarmAutoscale
Create a node that can trigger autoscale events for a Docker swarm cluster.
```js
httpPost|swarmAutoscale()
```
Returns: [SwarmAutoscaleNode](/kapacitor/v1.5/nodes/swarm_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Top
Select the top `num` points for `field` and sort by any extra tags or fields.
```js
httpPost|top(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
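For example (the measurement, field, and tag names are hypothetical):
```js
stream
    |from()
        .measurement('requests')
    |window()
        .period(10s)
        .every(10s)
    // Select the 3 highest values per window, keeping the host tag
    |top(3, 'value', 'host')
```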
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Union
Perform the union of this node and all other given nodes.
```js
httpPost|union(node ...Node)
```
Returns: [UnionNode](/kapacitor/v1.5/nodes/union_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Where
Create a new node that filters the data stream by a given expression.
```js
httpPost|where(expression ast.LambdaNode)
```
Returns: [WhereNode](/kapacitor/v1.5/nodes/where_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Window
Create a new node that windows the stream by time.
NOTE: Window can only be applied to stream edges.
```js
httpPost|window()
```
Returns: [WindowNode](/kapacitor/v1.5/nodes/window_node/)
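A minimal sketch (the measurement name is hypothetical):
```js
stream
    |from()
        .measurement('cpu')
    |window()
        // Emit 1m of data every 30s (overlapping windows)
        .period(1m)
        .every(30s)
```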
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>

---
title: InfluxDBOutNode
description: InfluxDBOutNode writes data to an InfluxDB database as it is received.
note: Auto generated by tickdoc
menu:
kapacitor_1_5_ref:
name: InfluxDBOutNode
identifier: influx_d_b_out_node
weight: 150
parent: nodes
---
The `influxDBOut` node writes data to InfluxDB as it is received.
Example:
```js
stream
|from()
.measurement('requests')
|eval(lambda: "errors" / "total")
.as('error_percent')
// Write the transformed data to InfluxDB
|influxDBOut()
.database('mydb')
.retentionPolicy('myrp')
.measurement('errors')
.tag('kapacitor', 'true')
.tag('version', '0.2')
```
Available Statistics:
* points_written: number of points written to InfluxDB
* write_errors: number of errors attempting to write to InfluxDB
### Constructor
| Chaining Method | Description |
|:---------|:---------|
| **influxDBOut&nbsp;(&nbsp;)** | Create an InfluxDB output node that will store the incoming data into InfluxDB. |
### Property Methods
| Setters | Description |
|:---|:---|
| **[buffer](#buffer)&nbsp;(&nbsp;`value`&nbsp;`int64`)** | Number of points to buffer when writing to InfluxDB. Default: 1000 |
| **[cluster](#cluster)&nbsp;(&nbsp;`value`&nbsp;`string`)** | The name of the InfluxDB instance to connect to. If empty the configured default will be used. |
| **[create](#create)&nbsp;(&nbsp;)** | Create indicates that both the database and retention policy will be created when the task is started. If the retention policy name is empty then no retention policy will be specified and the default retention policy name will be created. |
| **[database](#database)&nbsp;(&nbsp;`value`&nbsp;`string`)** | The name of the database. |
| **[flushInterval](#flushinterval)&nbsp;(&nbsp;`value`&nbsp;`time.Duration`)** | Write points to InfluxDB after interval even if buffer is not full. Default: 10s |
| **[measurement](#measurement)&nbsp;(&nbsp;`value`&nbsp;`string`)** | The name of the measurement. |
| **[precision](#precision)&nbsp;(&nbsp;`value`&nbsp;`string`)** | The precision to use when writing the data. |
| **[quiet](#quiet)&nbsp;(&nbsp;)** | Suppress all error logging events from this node. |
| **[retentionPolicy](#retentionpolicy)&nbsp;(&nbsp;`value`&nbsp;`string`)** | The name of the retention policy. |
| **[tag](#tag)&nbsp;(&nbsp;`key`&nbsp;`string`,&nbsp;`value`&nbsp;`string`)** | Add a static tag to all data points. Tag can be called more than once. |
| **[writeConsistency](#writeconsistency)&nbsp;(&nbsp;`value`&nbsp;`string`)** | The write consistency to use when writing the data. |
### Chaining Methods
[Deadman](#deadman),
[Stats](#stats)
---
## Properties
Property methods modify state on the calling node.
They do not add another node to the pipeline, and always return a reference to the calling node.
Property methods are marked using the `.` operator.
### Buffer
Number of points to buffer when writing to InfluxDB.
Default: 1000
```js
influxDBOut.buffer(value int64)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Cluster
The name of the InfluxDB instance to connect to.
If empty the configured default will be used.
```js
influxDBOut.cluster(value string)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Create
Create indicates that both the database and retention policy
will be created when the task is started.
If the retention policy name is empty then no
retention policy will be specified and
the default retention policy name will be created.
If the database already exists nothing happens.
```js
influxDBOut.create()
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Database
The name of the database.
```js
influxDBOut.database(value string)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### FlushInterval
Write points to InfluxDB after interval even if buffer is not full.
Default: 10s
```js
influxDBOut.flushInterval(value time.Duration)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Measurement
The name of the measurement.
```js
influxDBOut.measurement(value string)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Precision
The precision to use when writing the data.
```js
influxDBOut.precision(value string)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Quiet
Suppress all error logging events from this node.
```js
influxDBOut.quiet()
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### RetentionPolicy
The name of the retention policy.
```js
influxDBOut.retentionPolicy(value string)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Tag
Add a static tag to all data points.
Tag can be called more than once.
```js
influxDBOut.tag(key string, value string)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### WriteConsistency
The write consistency to use when writing the data.
```js
influxDBOut.writeConsistency(value string)
```
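As a sketch, assuming a clustered InfluxDB deployment where consistency levels apply (the database name is hypothetical):
```js
    |influxDBOut()
        .database('mydb')
        // e.g. 'any', 'one', 'quorum', or 'all'
        .writeConsistency('all')
```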
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
## Chaining Methods
Chaining methods create a new node in the pipeline as a child of the calling node.
They do not modify the calling node.
Chaining methods are marked using the `|` operator.
### Deadman
Helper function for creating an alert on low throughput, a.k.a. deadman's switch.
- Threshold: trigger alert if throughput drops below threshold in points/interval.
- Interval: how often to check the throughput.
- Expressions: optional list of expressions to also evaluate. Useful for time of day alerting.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
//Do normal processing of data
data...
```
The above is equivalent to this example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|stats(10s)
.align()
|derivative('emitted')
.unit(10s)
.nonNegative()
|alert()
.id('node \'stream0\' in task \'{{ .TaskName }}\'')
.message('{{ .ID }} is {{ if eq .Level "OK" }}alive{{ else }}dead{{ end }}: {{ index .Fields "emitted" | printf "%0.3f" }} points/10s.')
.crit(lambda: "emitted" <= 100.0)
//Do normal processing of data
data...
```
The `id` and `message` alert properties can be configured globally via the 'deadman' configuration section.
Since the [AlertNode](/kapacitor/v1.5/nodes/alert_node/) is the last piece it can be further modified as usual.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
.slack()
.channel('#dead_tasks')
//Do normal processing of data
data...
```
You can specify additional lambda expressions to further constrain when the deadman's switch is triggered.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
// Only trigger the alert if the time of day is between 8am-5pm.
data
|deadman(100.0, 10s, lambda: hour("time") >= 8 AND hour("time") <= 17)
//Do normal processing of data
data...
```
```js
influxDBOut|deadman(threshold float64, interval time.Duration, expr ...ast.LambdaNode)
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stats
Create a new stream of data that contains the internal statistics of the node.
The interval represents how often to emit the statistics based on real time.
This means the interval time is independent of the times of the data points the source node is receiving.
```js
influxDBOut|stats(interval time.Duration)
```
Returns: [StatsNode](/kapacitor/v1.5/nodes/stats_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>

---
title: InfluxQLNode
description: InfluxQLNode performs the available function from the InfluxQL language. The function can be performed on a stream or batch edge. The resulting edge is dependent on the function.
note: Auto generated by tickdoc
menu:
kapacitor_1_5_ref:
name: InfluxQLNode
identifier: influx_q_l_node
weight: 160
parent: nodes
---
The `influxQL` node performs [InfluxQL functions](/influxdb/v1.5/query_language/functions/).
The function can be performed on a stream or batch edge.
The resulting edge is dependent on the function.
For a stream edge, all points with the same time are accumulated into the function.
For a batch edge, all points in the batch are accumulated into the function.
Example:
```js
stream
|window()
.period(10s)
.every(10s)
// Sum the values for each 10s window of data.
|sum('value')
```
Note: Derivative has its own implementation as a [DerivativeNode](/kapacitor/v1.5/nodes/derivative_node/) instead of as part of the
InfluxQL functions.
### Constructor
| Chaining Method | Description |
|:---------|:---------|
| **influxQL** | Has no constructor signature. |
### Property Methods
| Setters | Description |
|:---|:---|
| **[as](#as)&nbsp;(&nbsp;`value`&nbsp;`string`)** | The name of the field; defaults to the name of the function used (i.e. .mean -> 'mean') |
| **[quiet](#quiet)&nbsp;(&nbsp;)** | Suppress all error logging events from this node. |
| **[usePointTimes](#usepointtimes)&nbsp;(&nbsp;)** | Use the time of the selected point instead of the time of the batch. |
### Chaining Methods
[Alert](#alert),
[Barrier](#barrier),
[Bottom](#bottom),
[ChangeDetect](#changedetect),
[Combine](#combine),
[Count](#count),
[CumulativeSum](#cumulativesum),
[Deadman](#deadman),
[Default](#default),
[Delete](#delete),
[Derivative](#derivative),
[Difference](#difference),
[Distinct](#distinct),
[Ec2Autoscale](#ec2autoscale),
[Elapsed](#elapsed),
[Eval](#eval),
[First](#first),
[Flatten](#flatten),
[GroupBy](#groupby),
[HoltWinters](#holtwinters),
[HoltWintersWithFit](#holtwinterswithfit),
[HttpOut](#httpout),
[HttpPost](#httppost),
[InfluxDBOut](#influxdbout),
[Join](#join),
[K8sAutoscale](#k8sautoscale),
[KapacitorLoopback](#kapacitorloopback),
[Last](#last),
[Log](#log),
[Max](#max),
[Mean](#mean),
[Median](#median),
[Min](#min),
[Mode](#mode),
[MovingAverage](#movingaverage),
[Percentile](#percentile),
[Sample](#sample),
[Shift](#shift),
[Sideload](#sideload),
[Spread](#spread),
[StateCount](#statecount),
[StateDuration](#stateduration),
[Stats](#stats),
[Stddev](#stddev),
[Sum](#sum),
[SwarmAutoscale](#swarmautoscale),
[Top](#top),
[Union](#union),
[Where](#where),
[Window](#window)
---
## Properties
Property methods modify state on the calling node.
They do not add another node to the pipeline, and always return a reference to the calling node.
Property methods are marked using the `.` operator.
### As
The name of the field; defaults to the name of the
function used (i.e. .mean -> 'mean').
```js
influxQL.as(value string)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Quiet
Suppress all error logging events from this node.
```js
influxQL.quiet()
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### UsePointTimes
Use the time of the selected point instead of the time of the batch.
Only applies to selector functions like first, last, top, bottom, etc.
Aggregation functions always use the batch time.
```js
influxQL.usePointTimes()
```
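For example (the measurement and field names are hypothetical):
```js
stream
    |from()
        .measurement('cpu')
    |window()
        .period(10s)
        .every(10s)
    // max is a selector, so the emitted point keeps the time
    // of the original maximum point rather than the window time
    |max('usage_user')
        .usePointTimes()
```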
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
## Chaining Methods
Chaining methods create a new node in the pipeline as a child of the calling node.
They do not modify the calling node.
Chaining methods are marked using the `|` operator.
### Alert
Create an alert node, which can trigger alerts.
```js
influxQL|alert()
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Barrier
Create a new Barrier node that emits a BarrierMessage periodically.
One BarrierMessage will be emitted every period duration.
```js
influxQL|barrier()
```
Returns: [BarrierNode](/kapacitor/v1.5/nodes/barrier_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Bottom
Select the bottom `num` points for `field` and sort by any extra tags or fields.
```js
influxQL|bottom(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### ChangeDetect
Create a new node that only emits new points if different from the previous point.
```js
influxQL|changeDetect(field string)
```
Returns: [ChangeDetectNode](/kapacitor/v1.5/nodes/change_detect_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Combine
Combine this node with itself. The data is combined on timestamp.
```js
influxQL|combine(expressions ...ast.LambdaNode)
```
Returns: [CombineNode](/kapacitor/v1.5/nodes/combine_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Count
Count the number of points.
```js
influxQL|count(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### CumulativeSum
Compute a cumulative sum of each point that is received.
A point is emitted for every point collected.
```js
influxQL|cumulativeSum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Deadman
Helper function for creating an alert on low throughput, a.k.a. deadman's switch.
- Threshold: trigger alert if throughput drops below or equals the threshold (`<=`), in points/interval.
- Interval: how often to check the throughput.
- Expressions: optional list of expressions to also evaluate. Useful for time of day alerting.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below or equal to 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
//Do normal processing of data
data...
```
The above is equivalent to this example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below or equal to 100 points per 10s and checked every 10s.
data
|stats(10s)
.align()
|derivative('emitted')
.unit(10s)
.nonNegative()
|alert()
.id('node \'stream0\' in task \'{{ .TaskName }}\'')
.message('{{ .ID }} is {{ if eq .Level "OK" }}alive{{ else }}dead{{ end }}: {{ index .Fields "emitted" | printf "%0.3f" }} points/10s.')
.crit(lambda: "emitted" <= 100.0)
//Do normal processing of data
data...
```
The `id` and `message` alert properties can be configured globally via the 'deadman' configuration section.
Since the [AlertNode](/kapacitor/v1.5/nodes/alert_node/) is the last piece it can be further modified as usual.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below or equal to 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
.slack()
.channel('#dead_tasks')
//Do normal processing of data
data...
```
You can specify additional lambda expressions to further constrain when the deadman's switch is triggered.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below or equal to 100 points per 10s and checked every 10s.
// Only trigger the alert if the time of day is between 8am-5pm.
data
|deadman(100.0, 10s, lambda: hour("time") >= 8 AND hour("time") <= 17)
//Do normal processing of data
data...
```
```js
influxQL|deadman(threshold float64, interval time.Duration, expr ...ast.LambdaNode)
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Default
Create a node that can set defaults for missing tags or fields.
```js
influxQL|default()
```
Returns: [DefaultNode](/kapacitor/v1.5/nodes/default_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Delete
Create a node that can delete tags or fields.
```js
influxQL|delete()
```
Returns: [DeleteNode](/kapacitor/v1.5/nodes/delete_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Derivative
Create a new node that computes the derivative of adjacent points.
```js
influxQL|derivative(field string)
```
Returns: [DerivativeNode](/kapacitor/v1.5/nodes/derivative_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Difference
Compute the difference between points independent of elapsed time.
```js
influxQL|difference(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Distinct
Produce a batch of only the distinct points.
```js
influxQL|distinct(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Ec2Autoscale
Create a node that can trigger autoscale events for an EC2 autoscale group.
```js
influxQL|ec2Autoscale()
```
Returns: [Ec2AutoscaleNode](/kapacitor/v1.5/nodes/ec2_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Elapsed
Compute the elapsed time between points.
```js
influxQL|elapsed(field string, unit time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Eval
Create an eval node that will evaluate the given transformation function to each data point.
A list of expressions may be provided and will be evaluated in the order they are given.
The results are available to later expressions.
```js
influxQL|eval(expressions ...ast.LambdaNode)
```
Returns: [EvalNode](/kapacitor/v1.5/nodes/eval_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### First
Select the first point.
```js
influxQL|first(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Flatten
Flatten points with similar times into a single point.
```js
influxQL|flatten()
```
Returns: [FlattenNode](/kapacitor/v1.5/nodes/flatten_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### GroupBy
Group the data by a set of tags.
Pass the literal `*` to group by all dimensions.
Example:
```js
|groupBy(*)
```
```js
influxQL|groupBy(tag ...interface{})
```
Returns: [GroupByNode](/kapacitor/v1.5/nodes/group_by_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWinters
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
```js
influxQL|holtWinters(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWintersWithFit
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
This method also outputs all the points used to fit the data in addition to the forecasted data.
```js
influxQL|holtWintersWithFit(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpOut
Create an HTTP output node that caches the most recent data it has received.
The cached data is available at the given endpoint.
The endpoint is the relative path from the API endpoint of the running task.
For example, if the task endpoint is at `/kapacitor/v1/tasks/<task_id>` and endpoint is
`top10`, then the data can be requested from `/kapacitor/v1/tasks/<task_id>/top10`.
```js
influxQL|httpOut(endpoint string)
```
Returns: [HTTPOutNode](/kapacitor/v1.5/nodes/http_out_node/)
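For example, a sketch that exposes the ten busiest hosts at `/kapacitor/v1/tasks/<task_id>/top10` (the measurement, field, and tag names are assumptions):

```js
stream
    |from()
        .measurement('cpu')
    |window()
        .period(10s)
        .every(10s)
    |top(10, 'usage_user', 'host')
    |httpOut('top10')
```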
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpPost
Create an HTTP POST node that POSTs received data to the provided HTTP endpoint.
HttpPost expects zero or one arguments. If no arguments are provided, you must specify an
endpoint property method.
```js
influxQL|httpPost(url ...string)
```
Returns: [HTTPPostNode](/kapacitor/v1.5/nodes/http_post_node/)
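For example, two sketches of both forms: posting directly to a URL, and referencing an endpoint assumed to be configured in the `[[httppost]]` section of `kapacitor.conf` (the URL and endpoint name are assumptions):

```js
// POST each point or batch to an explicit URL.
|httpPost('https://example.com/kapacitor/data')

// Or, with no arguments, reference a configured endpoint by name.
|httpPost()
    .endpoint('example')
```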
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### InfluxDBOut
Create an InfluxDB output node that stores the incoming data in InfluxDB.
```js
influxQL|influxDBOut()
```
Returns: [InfluxDBOutNode](/kapacitor/v1.5/nodes/influx_d_b_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Join
Join this node with other nodes. The data is joined on timestamp.
```js
influxQL|join(others ...Node)
```
Returns: [JoinNode](/kapacitor/v1.5/nodes/join_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### K8sAutoscale
Create a node that can trigger autoscale events for a Kubernetes cluster.
```js
influxQL|k8sAutoscale()
```
Returns: [K8sAutoscaleNode](/kapacitor/v1.5/nodes/k8s_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### KapacitorLoopback
Create a Kapacitor loopback node that sends data back into Kapacitor as a stream.
```js
influxQL|kapacitorLoopback()
```
Returns: [KapacitorLoopbackNode](/kapacitor/v1.5/nodes/kapacitor_loopback_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Last
Select the last point.
```js
influxQL|last(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Log
Create a node that logs all data it receives.
```js
influxQL|log()
```
Returns: [LogNode](/kapacitor/v1.5/nodes/log_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Max
Select the maximum point.
```js
influxQL|max(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mean
Compute the mean of the data.
```js
influxQL|mean(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Median
Compute the median of the data.
> **Note:** This method is not a selector.
If you want the median point, use `.percentile(field, 50.0)`.
```js
influxQL|median(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Min
Select the minimum point.
```js
influxQL|min(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mode
Compute the mode of the data.
```js
influxQL|mode(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### MovingAverage
Compute a moving average of the last window points.
No points are emitted until the window is full.
```js
influxQL|movingAverage(field string, window int64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
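For example, a sketch that smooths a CPU field over the last 10 points (the measurement and field names are assumptions):

```js
stream
    |from()
        .measurement('cpu')
    // Emits nothing until 10 points have arrived.
    |movingAverage('usage_user', 10)
```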
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Percentile
Select a point at the given percentile. This is a selector function; no interpolation between points is performed.
```js
influxQL|percentile(field string, percentile float64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
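For example, a sketch selecting the 95th-percentile response time per window (the measurement and field names are assumptions):

```js
stream
    |from()
        .measurement('requests')
    |window()
        .period(1m)
        .every(1m)
    |percentile('response_time', 95.0)
```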
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sample
Create a new node that samples the incoming points or batches.
One point is emitted for every specified count or duration.
```js
influxQL|sample(rate interface{})
```
Returns: [SampleNode](/kapacitor/v1.5/nodes/sample_node/)
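The rate may be either a count or a duration. For example:

```js
// Keep every 10th point.
|sample(10)

// Or keep one point every 30 seconds.
|sample(30s)
```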
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Shift
Create a new node that shifts the incoming points or batches in time.
```js
influxQL|shift(shift time.Duration)
```
Returns: [ShiftNode](/kapacitor/v1.5/nodes/shift_node/)
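For example, a sketch that shifts points 10 seconds into the past (negative durations shift backward in time), e.g. to line one stream up with another before a join:

```js
|shift(-10s)
```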
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sideload
Create a node that can load data from external sources.
```js
influxQL|sideload()
```
Returns: [SideloadNode](/kapacitor/v1.5/nodes/sideload_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Spread
Compute the difference between `min` and `max` points.
```js
influxQL|spread(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateCount
Create a node that tracks the number of consecutive points in a given state.
```js
influxQL|stateCount(expression ast.LambdaNode)
```
Returns: [StateCountNode](/kapacitor/v1.5/nodes/state_count_node/)
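For example, a sketch counting consecutive points where CPU usage is above 80, alerting once five such points arrive in a row (the count is written to the `state_count` field by default; the measurement and field names are assumptions):

```js
stream
    |from()
        .measurement('cpu')
    |stateCount(lambda: "usage_user" > 80)
    |alert()
        .crit(lambda: "state_count" >= 5)
```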
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateDuration
Create a node that tracks the duration in a given state.
```js
influxQL|stateDuration(expression ast.LambdaNode)
```
Returns: [StateDurationNode](/kapacitor/v1.5/nodes/state_duration_node/)
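For example, a sketch tracking how long CPU usage has been above 80, reported in minutes (the duration is written to the `state_duration` field by default; the measurement and field names are assumptions):

```js
stream
    |from()
        .measurement('cpu')
    |stateDuration(lambda: "usage_user" > 80)
        .unit(1m)
```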
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stats
Create a new stream of data that contains the internal statistics of the node.
The interval represents how often to emit the statistics based on real time.
This means the interval time is independent of the times of the data points the source node is receiving.
```js
influxQL|stats(interval time.Duration)
```
Returns: [StatsNode](/kapacitor/v1.5/nodes/stats_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stddev
Compute the standard deviation.
```js
influxQL|stddev(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sum
Compute the sum of all values.
```js
influxQL|sum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### SwarmAutoscale
Create a node that can trigger autoscale events for a Docker swarm cluster.
```js
influxQL|swarmAutoscale()
```
Returns: [SwarmAutoscaleNode](/kapacitor/v1.5/nodes/swarm_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Top
Select the top `num` points for `field` and sort by any extra tags or fields.
```js
influxQL|top(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
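For example, a sketch selecting the five hosts with the highest CPU usage per window, carrying the `host` tag through (the measurement, field, and tag names are assumptions):

```js
stream
    |from()
        .measurement('cpu')
    |window()
        .period(10s)
        .every(10s)
    |top(5, 'usage_user', 'host')
```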
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Union
Perform the union of this node and all other given nodes.
```js
influxQL|union(node ...Node)
```
Returns: [UnionNode](/kapacitor/v1.5/nodes/union_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Where
Create a new node that filters the data stream by a given expression.
```js
influxQL|where(expression ast.LambdaNode)
```
Returns: [WhereNode](/kapacitor/v1.5/nodes/where_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Window
Create a new node that windows the stream by time.
NOTE: Window can only be applied to stream edges.
```js
influxQL|window()
```
Returns: [WindowNode](/kapacitor/v1.5/nodes/window_node/)
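For example, a sketch emitting one minute of data every 30 seconds, so consecutive windows overlap (the measurement name is an assumption):

```js
stream
    |from()
        .measurement('requests')
    |window()
        .period(1m)
        .every(30s)
```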
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>

---
title: KapacitorLoopbackNode
description: KapacitorLoopbackNode writes data back into the Kapacitor stream.
note: Auto generated by tickdoc
menu:
  kapacitor_1_5_ref:
    name: KapacitorLoopback
    identifier: kapacitor_loopback_node
    weight: 190
    parent: nodes
---
The `kapacitorLoopback` node writes data back into the Kapacitor stream.
To write data to a remote Kapacitor instance use the [InfluxDBOutNode](/kapacitor/v1.5/nodes/influx_d_b_out_node/).
Example:
```js
|kapacitorLoopback()
.database('mydb')
.retentionPolicy('myrp')
.measurement('errors')
.tag('kapacitor', 'true')
.tag('version', '0.2')
```
{{% note %}}
#### Beware of infinite loops
It is possible to create infinite loops using the KapacitorLoopback node.
Take care to ensure you do not chain tasks together creating a loop.
{{% /note %}}
{{% warn %}}
#### Avoid name collisions with multiple subscriptions
When using the KapacitorLoopback node, don't subscribe to identically named
databases and retention policies in multiple InfluxDB instances or clusters.
If Kapacitor is subscribed to multiple instances of InfluxDB, make each database
and retention policy combination unique. For example:
```
influxdb_1
└─ db1/rp1
influxdb_2
└─ db2/rp2
```
{{% /warn %}}
Available Statistics:
* `points_written`: number of points written back to Kapacitor
### Constructor
| Chaining Method | Description |
|:---------|:---------|
| **kapacitorLoopback&nbsp;(&nbsp;)** | Create a Kapacitor loopback node that sends data back into Kapacitor as a stream. |
### Property Methods
| Setters | Description |
|:---|:---|
| **[database](#database)&nbsp;(&nbsp;`value`&nbsp;`string`)** | The name of the database. |
| **[measurement](#measurement)&nbsp;(&nbsp;`value`&nbsp;`string`)** | The name of the measurement. |
| **[quiet](#quiet)&nbsp;(&nbsp;)** | Suppress all error logging events from this node. |
| **[retentionPolicy](#retentionpolicy)&nbsp;(&nbsp;`value`&nbsp;`string`)** | The name of the retention policy. |
| **[tag](#tag)&nbsp;(&nbsp;`key`&nbsp;`string`,&nbsp;`value`&nbsp;`string`)** | Add a static tag to all data points. Tag can be called more than once. |
### Chaining Methods
[Deadman](#deadman),
[Stats](#stats)
---
## Properties
Property methods modify state on the calling node.
They do not add another node to the pipeline, and always return a reference to the calling node.
Property methods are marked using the `.` operator.
### Database
The name of the database.
```js
kapacitorLoopback.database(value string)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Measurement
The name of the measurement.
```js
kapacitorLoopback.measurement(value string)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Quiet
Suppress all error logging events from this node.
```js
kapacitorLoopback.quiet()
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### RetentionPolicy
The name of the retention policy.
```js
kapacitorLoopback.retentionPolicy(value string)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Tag
Add a static tag to all data points.
Tag can be called more than once.
```js
kapacitorLoopback.tag(key string, value string)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
## Chaining Methods
Chaining methods create a new node in the pipeline as a child of the calling node.
They do not modify the calling node.
Chaining methods are marked using the `|` operator.
### Deadman
Helper function for creating an alert on low throughput, a.k.a. deadman's switch.
- Threshold: trigger alert if throughput drops below threshold in points/interval.
- Interval: how often to check the throughput.
- Expressions: optional list of expressions to also evaluate. Useful for time of day alerting.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
//Do normal processing of data
data...
```
The above is equivalent to this example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|stats(10s)
.align()
|derivative('emitted')
.unit(10s)
.nonNegative()
|alert()
.id('node \'stream0\' in task \'{{ .TaskName }}\'')
.message('{{ .ID }} is {{ if eq .Level "OK" }}alive{{ else }}dead{{ end }}: {{ index .Fields "emitted" | printf "%0.3f" }} points/10s.')
.crit(lambda: "emitted" <= 100.0)
//Do normal processing of data
data...
```
The `id` and `message` alert properties can be configured globally via the `deadman` configuration section.
Since the [AlertNode](/kapacitor/v1.5/nodes/alert_node/) is the last piece it can be further modified as usual.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
.slack()
.channel('#dead_tasks')
//Do normal processing of data
data...
```
You can specify additional lambda expressions to further constrain when the deadman's switch is triggered.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
// Only trigger the alert if the time of day is between 8am-5pm.
data
|deadman(100.0, 10s, lambda: hour("time") >= 8 AND hour("time") <= 17)
//Do normal processing of data
data...
```
```js
kapacitorLoopback|deadman(threshold float64, interval time.Duration, expr ...ast.LambdaNode)
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stats
Create a new stream of data that contains the internal statistics of the node.
The interval represents how often to emit the statistics based on real time.
This means the interval time is independent of the times of the data points the source node is receiving.
```js
kapacitorLoopback|stats(interval time.Duration)
```
Returns: [StatsNode](/kapacitor/v1.5/nodes/stats_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>

---
title: LogNode
description: LogNode logs all data that passes through the node.
note: Auto generated by tickdoc
menu:
  kapacitor_1_5_ref:
    name: LogNode
    identifier: log_node
    weight: 200
    parent: nodes
---
The `log` node logs all data that passes through it.
Example:
```js
stream.from()...
|window()
.period(10s)
.every(10s)
|log()
|count('value')
```
### Constructor
| Chaining Method | Description |
|:---------|:---------|
| **log&nbsp;(&nbsp;)** | Create a node that logs all data it receives. |
### Property Methods
| Setters | Description |
|:---|:---|
| **[level](#level)&nbsp;(&nbsp;`value`&nbsp;`string`)** | The level at which to log the data. One of: DEBUG, INFO, WARN, ERROR Default: INFO |
| **[prefix](#prefix)&nbsp;(&nbsp;`value`&nbsp;`string`)** | Optional prefix to add to all log messages. |
| **[quiet](#quiet)&nbsp;(&nbsp;)** | Suppress all error logging events from this node. |
### Chaining Methods
[Alert](#alert),
[Barrier](#barrier),
[Bottom](#bottom),
[ChangeDetect](#changedetect),
[Combine](#combine),
[Count](#count),
[CumulativeSum](#cumulativesum),
[Deadman](#deadman),
[Default](#default),
[Delete](#delete),
[Derivative](#derivative),
[Difference](#difference),
[Distinct](#distinct),
[Ec2Autoscale](#ec2autoscale),
[Elapsed](#elapsed),
[Eval](#eval),
[First](#first),
[Flatten](#flatten),
[GroupBy](#groupby),
[HoltWinters](#holtwinters),
[HoltWintersWithFit](#holtwinterswithfit),
[HttpOut](#httpout),
[HttpPost](#httppost),
[InfluxDBOut](#influxdbout),
[Join](#join),
[K8sAutoscale](#k8sautoscale),
[KapacitorLoopback](#kapacitorloopback),
[Last](#last),
[Log](#log),
[Max](#max),
[Mean](#mean),
[Median](#median),
[Min](#min),
[Mode](#mode),
[MovingAverage](#movingaverage),
[Percentile](#percentile),
[Sample](#sample),
[Shift](#shift),
[Sideload](#sideload),
[Spread](#spread),
[StateCount](#statecount),
[StateDuration](#stateduration),
[Stats](#stats),
[Stddev](#stddev),
[Sum](#sum),
[SwarmAutoscale](#swarmautoscale),
[Top](#top),
[Union](#union),
[Where](#where),
[Window](#window)
---
## Properties
Property methods modify state on the calling node.
They do not add another node to the pipeline, and always return a reference to the calling node.
Property methods are marked using the `.` operator.
### Level
The level at which to log the data.
One of: DEBUG, INFO, WARN, ERROR
Default: INFO
```js
log.level(value string)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Prefix
Optional prefix to add to all log messages.
```js
log.prefix(value string)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Quiet
Suppress all error logging events from this node.
```js
log.quiet()
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
## Chaining Methods
Chaining methods create a new node in the pipeline as a child of the calling node.
They do not modify the calling node.
Chaining methods are marked using the `|` operator.
### Alert
Create an alert node, which can trigger alerts.
```js
log|alert()
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Barrier
Create a new Barrier node that emits a BarrierMessage periodically.
One BarrierMessage will be emitted every period duration.
```js
log|barrier()
```
Returns: [BarrierNode](/kapacitor/v1.5/nodes/barrier_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Bottom
Select the bottom `num` points for `field` and sort by any extra tags or fields.
```js
log|bottom(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### ChangeDetect
Create a new node that emits a point only when it differs from the previous point.
```js
log|changeDetect(field string)
```
Returns: [ChangeDetectNode](/kapacitor/v1.5/nodes/change_detect_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Combine
Combine this node with itself. The data is combined on timestamp.
```js
log|combine(expressions ...ast.LambdaNode)
```
Returns: [CombineNode](/kapacitor/v1.5/nodes/combine_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Count
Count the number of points.
```js
log|count(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### CumulativeSum
Compute a cumulative sum of each point that is received.
A point is emitted for every point collected.
```js
log|cumulativeSum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Deadman
Helper function for creating an alert on low throughput, a.k.a. deadman's switch.
- Threshold: trigger alert if throughput drops below threshold in points/interval.
- Interval: how often to check the throughput.
- Expressions: optional list of expressions to also evaluate. Useful for time of day alerting.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
//Do normal processing of data
data...
```
The above is equivalent to this example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|stats(10s)
.align()
|derivative('emitted')
.unit(10s)
.nonNegative()
|alert()
.id('node \'stream0\' in task \'{{ .TaskName }}\'')
.message('{{ .ID }} is {{ if eq .Level "OK" }}alive{{ else }}dead{{ end }}: {{ index .Fields "emitted" | printf "%0.3f" }} points/10s.')
.crit(lambda: "emitted" <= 100.0)
//Do normal processing of data
data...
```
The `id` and `message` alert properties can be configured globally via the `deadman` configuration section.
Since the [AlertNode](/kapacitor/v1.5/nodes/alert_node/) is the last piece it can be further modified as usual.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
.slack()
.channel('#dead_tasks')
//Do normal processing of data
data...
```
You can specify additional lambda expressions to further constrain when the deadman's switch is triggered.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
// Only trigger the alert if the time of day is between 8am-5pm.
data
|deadman(100.0, 10s, lambda: hour("time") >= 8 AND hour("time") <= 17)
//Do normal processing of data
data...
```
```js
log|deadman(threshold float64, interval time.Duration, expr ...ast.LambdaNode)
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Default
Create a node that can set defaults for missing tags or fields.
```js
log|default()
```
Returns: [DefaultNode](/kapacitor/v1.5/nodes/default_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Delete
Create a node that can delete tags or fields.
```js
log|delete()
```
Returns: [DeleteNode](/kapacitor/v1.5/nodes/delete_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Derivative
Create a new node that computes the derivative of adjacent points.
```js
log|derivative(field string)
```
Returns: [DerivativeNode](/kapacitor/v1.5/nodes/derivative_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Difference
Compute the difference between points independent of elapsed time.
```js
log|difference(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Distinct
Produce a batch containing only the distinct points.
```js
log|distinct(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Ec2Autoscale
Create a node that can trigger autoscale events for an EC2 autoscale group.
```js
log|ec2Autoscale()
```
Returns: [Ec2AutoscaleNode](/kapacitor/v1.5/nodes/ec2_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Elapsed
Compute the elapsed time between points.
```js
log|elapsed(field string, unit time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Eval
Create an eval node that applies the given transformation functions to each data point.
A list of expressions may be provided and will be evaluated in the order they are given.
The results are available to later expressions.
```js
log|eval(expressions ...ast.LambdaNode)
```
Returns: [EvalNode](/kapacitor/v1.5/nodes/eval_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### First
Select the first point.
```js
log|first(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Flatten
Flatten points with similar times into a single point.
```js
log|flatten()
```
Returns: [FlattenNode](/kapacitor/v1.5/nodes/flatten_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### GroupBy
Group the data by a set of tags.
Pass the literal `*` to group by all dimensions.
Example:
```js
|groupBy(*)
```
```js
log|groupBy(tag ...interface{})
```
Returns: [GroupByNode](/kapacitor/v1.5/nodes/group_by_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWinters
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
```js
log|holtWinters(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWintersWithFit
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
This method also outputs all the points used to fit the data in addition to the forecasted data.
```js
log|holtWintersWithFit(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpOut
Create an HTTP output node that caches the most recent data it has received.
The cached data is available at the given endpoint.
The endpoint is the relative path from the API endpoint of the running task.
For example, if the task endpoint is at `/kapacitor/v1/tasks/<task_id>` and endpoint is
`top10`, then the data can be requested from `/kapacitor/v1/tasks/<task_id>/top10`.
```js
log|httpOut(endpoint string)
```
Returns: [HTTPOutNode](/kapacitor/v1.5/nodes/http_out_node/)
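For example, a task that exposes the ten series with the highest values at `.../top10` might look like the following sketch (the measurement and field names are hypothetical):

```js
stream
    |from()
        .measurement('cpu')
    |window()
        .period(10s)
        .every(10s)
    |top(10, 'usage_user')
    |httpOut('top10')
```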
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpPost
Create an HTTP POST node that POSTs received data to the provided HTTP endpoint.
HttpPost expects zero or one arguments. If no argument is provided, you must specify an
endpoint property method.
```js
log|httpPost(url ...string)
```
Returns: [HTTPPostNode](/kapacitor/v1.5/nodes/http_post_node/)
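As a sketch, data can be POSTed either to a URL given directly or to an endpoint defined in the `[[httppost]]` configuration section (the URL and endpoint name below are hypothetical):

```js
// POST to an explicit URL
stream
    |httpPost('http://example.com/api/data')
```

```js
// POST to an endpoint named in the [[httppost]] configuration
stream
    |httpPost()
        .endpoint('example')
```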
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### InfluxDBOut
Create an InfluxDB output node that stores the incoming data in InfluxDB.
```js
log|influxDBOut()
```
Returns: [InfluxDBOutNode](/kapacitor/v1.5/nodes/influx_d_b_out_node/)
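A minimal sketch writing results back into a hypothetical `mydb` database (database, retention policy, and measurement names are assumptions):

```js
stream
    |from()
        .measurement('requests')
    |influxDBOut()
        .database('mydb')
        .retentionPolicy('autogen')
        .measurement('requests_copy')
```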
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Join
Join this node with other nodes. The data is joined on timestamp.
```js
log|join(others ...Node)
```
Returns: [JoinNode](/kapacitor/v1.5/nodes/join_node/)
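A sketch joining two hypothetical streams and naming each side with `.as()` so their fields can be referenced downstream:

```js
var errors = stream
    |from()
        .measurement('errors')

var requests = stream
    |from()
        .measurement('requests')

errors
    |join(requests)
        .as('errors', 'requests')
```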
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### K8sAutoscale
Create a node that can trigger autoscale events for a Kubernetes cluster.
```js
log|k8sAutoscale()
```
Returns: [K8sAutoscaleNode](/kapacitor/v1.5/nodes/k8s_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### KapacitorLoopback
Create a Kapacitor loopback node that sends data back into Kapacitor as a stream.
```js
log|kapacitorLoopback()
```
Returns: [KapacitorLoopbackNode](/kapacitor/v1.5/nodes/kapacitor_loopback_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Last
Select the last point.
```js
log|last(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Log
Create a node that logs all data it receives.
```js
log|log()
```
Returns: [LogNode](/kapacitor/v1.5/nodes/log_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Max
Select the maximum point.
```js
log|max(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mean
Compute the mean of the data.
```js
log|mean(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Median
Compute the median of the data.
> **Note:** This method is not a selector.
If you want the median point, use `.percentile(field, 50.0)`.
```js
log|median(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Min
Select the minimum point.
```js
log|min(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mode
Compute the mode of the data.
```js
log|mode(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### MovingAverage
Compute a moving average of the last window points.
No points are emitted until the window is full.
```js
log|movingAverage(field string, window int64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
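For example, a 10-point moving average over a hypothetical `usage_user` field:

```js
stream
    |from()
        .measurement('cpu')
    |movingAverage('usage_user', 10)
```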
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Percentile
Select a point at the given percentile. This is a selector function; no interpolation between points is performed.
```js
log|percentile(field string, percentile float64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sample
Create a new node that samples the incoming points or batches.
One point will be emitted every count or duration specified.
```js
log|sample(rate interface{})
```
Returns: [SampleNode](/kapacitor/v1.5/nodes/sample_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Shift
Create a new node that shifts the incoming points or batches in time.
```js
log|shift(shift time.Duration)
```
Returns: [ShiftNode](/kapacitor/v1.5/nodes/shift_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sideload
Create a node that can load data from external sources.
```js
log|sideload()
```
Returns: [SideloadNode](/kapacitor/v1.5/nodes/sideload_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Spread
Compute the difference between `min` and `max` points.
```js
log|spread(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateCount
Create a node that tracks the number of consecutive points in a given state.
```js
log|stateCount(expression ast.LambdaNode)
```
Returns: [StateCountNode](/kapacitor/v1.5/nodes/state_count_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateDuration
Create a node that tracks duration in a given state.
```js
log|stateDuration(expression ast.LambdaNode)
```
Returns: [StateDurationNode](/kapacitor/v1.5/nodes/state_duration_node/)
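For example, tracking how long a hypothetical `usage_idle` field has stayed below 10, reported in seconds:

```js
stream
    |from()
        .measurement('cpu')
    |stateDuration(lambda: "usage_idle" < 10)
        .unit(1s)
```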
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stats
Create a new stream of data that contains the internal statistics of the node.
The interval represents how often to emit the statistics based on real time.
This means the interval time is independent of the times of the data points the source node is receiving.
```js
log|stats(interval time.Duration)
```
Returns: [StatsNode](/kapacitor/v1.5/nodes/stats_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stddev
Compute the standard deviation.
```js
log|stddev(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sum
Compute the sum of all values.
```js
log|sum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### SwarmAutoscale
Create a node that can trigger autoscale events for a Docker Swarm cluster.
```js
log|swarmAutoscale()
```
Returns: [SwarmAutoscaleNode](/kapacitor/v1.5/nodes/swarm_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Top
Select the top `num` points for `field` and sort by any extra tags or fields.
```js
log|top(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Union
Perform the union of this node and all other given nodes.
```js
log|union(node ...Node)
```
Returns: [UnionNode](/kapacitor/v1.5/nodes/union_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Where
Create a new node that filters the data stream by a given expression.
```js
log|where(expression ast.LambdaNode)
```
Returns: [WhereNode](/kapacitor/v1.5/nodes/where_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Window
Create a new node that windows the stream by time.
NOTE: Window can only be applied to stream edges.
```js
log|window()
```
Returns: [WindowNode](/kapacitor/v1.5/nodes/window_node/)
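A sketch emitting one minute of hypothetical `cpu` data every ten seconds:

```js
stream
    |from()
        .measurement('cpu')
    |window()
        .period(1m)
        .every(10s)
```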
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>

---
title: NoOpNode
note: Auto generated by tickdoc
menu:
kapacitor_1_5_ref:
name: NoOpNode
identifier: no_op_node
weight: 210
parent: nodes
---
The `noOp` node does not perform any operation.
> Do not use this node in a TICKscript. There should be no need for it.
If a node does not have any children, then its emitted count remains zero.
Using a [NoOpNode](/kapacitor/v1.5/nodes/no_op_node/) is a workaround so that statistics are accurately reported
for nodes with no real children.
A [NoOpNode](/kapacitor/v1.5/nodes/no_op_node/) is automatically appended to any node that is a source for a [StatsNode](/kapacitor/v1.5/nodes/stats_node/)
and does not have any children.
### Constructor
| Chaining Method | Description |
|:---------|:---------|
| **noOp** | Has no constructor signature. |
### Property Methods
| Setters | Description |
|:---|:---|
| **[quiet](#quiet)&nbsp;(&nbsp;)** | Suppress all error logging events from this node. |
### Chaining Methods
[Alert](#alert),
[Barrier](#barrier),
[Bottom](#bottom),
[ChangeDetect](#changedetect),
[Combine](#combine),
[Count](#count),
[CumulativeSum](#cumulativesum),
[Deadman](#deadman),
[Default](#default),
[Delete](#delete),
[Derivative](#derivative),
[Difference](#difference),
[Distinct](#distinct),
[Ec2Autoscale](#ec2autoscale),
[Elapsed](#elapsed),
[Eval](#eval),
[First](#first),
[Flatten](#flatten),
[GroupBy](#groupby),
[HoltWinters](#holtwinters),
[HoltWintersWithFit](#holtwinterswithfit),
[HttpOut](#httpout),
[HttpPost](#httppost),
[InfluxDBOut](#influxdbout),
[Join](#join),
[K8sAutoscale](#k8sautoscale),
[KapacitorLoopback](#kapacitorloopback),
[Last](#last),
[Log](#log),
[Max](#max),
[Mean](#mean),
[Median](#median),
[Min](#min),
[Mode](#mode),
[MovingAverage](#movingaverage),
[Percentile](#percentile),
[Sample](#sample),
[Shift](#shift),
[Sideload](#sideload),
[Spread](#spread),
[StateCount](#statecount),
[StateDuration](#stateduration),
[Stats](#stats),
[Stddev](#stddev),
[Sum](#sum),
[SwarmAutoscale](#swarmautoscale),
[Top](#top),
[Union](#union),
[Where](#where),
[Window](#window)
---
## Properties
Property methods modify state on the calling node.
They do not add another node to the pipeline, and always return a reference to the calling node.
Property methods are marked using the `.` operator.
### Quiet
Suppress all error logging events from this node.
```js
noOp.quiet()
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
## Chaining Methods
Chaining methods create a new node in the pipeline as a child of the calling node.
They do not modify the calling node.
Chaining methods are marked using the `|` operator.
### Alert
Create an alert node, which can trigger alerts.
```js
noOp|alert()
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Barrier
Create a new Barrier node that emits a BarrierMessage periodically.
One BarrierMessage will be emitted every period duration.
```js
noOp|barrier()
```
Returns: [BarrierNode](/kapacitor/v1.5/nodes/barrier_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Bottom
Select the bottom `num` points for `field` and sort by any extra tags or fields.
```js
noOp|bottom(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### ChangeDetect
Create a new node that emits a point only if it differs from the previous point.
```js
noOp|changeDetect(field string)
```
Returns: [ChangeDetectNode](/kapacitor/v1.5/nodes/change_detect_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Combine
Combine this node with itself. The data is combined on timestamp.
```js
noOp|combine(expressions ...ast.LambdaNode)
```
Returns: [CombineNode](/kapacitor/v1.5/nodes/combine_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Count
Count the number of points.
```js
noOp|count(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### CumulativeSum
Compute a cumulative sum of each point that is received.
A point is emitted for every point collected.
```js
noOp|cumulativeSum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Deadman
Helper function for creating an alert on low throughput, a.k.a. deadman's switch.
- Threshold: trigger alert if throughput drops below threshold in points/interval.
- Interval: how often to check the throughput.
- Expressions: optional list of expressions to also evaluate. Useful for time of day alerting.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
//Do normal processing of data
data...
```
The above is equivalent to this example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|stats(10s)
.align()
|derivative('emitted')
.unit(10s)
.nonNegative()
|alert()
.id('node \'stream0\' in task \'{{ .TaskName }}\'')
.message('{{ .ID }} is {{ if eq .Level "OK" }}alive{{ else }}dead{{ end }}: {{ index .Fields "emitted" | printf "%0.3f" }} points/10s.')
.crit(lambda: "emitted" <= 100.0)
//Do normal processing of data
data...
```
The `id` and `message` alert properties can be configured globally via the 'deadman' configuration section.
Since the [AlertNode](/kapacitor/v1.5/nodes/alert_node/) is the last piece it can be further modified as usual.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
.slack()
.channel('#dead_tasks')
//Do normal processing of data
data...
```
You can specify additional lambda expressions to further constrain when the deadman's switch is triggered.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
// Only trigger the alert if the time of day is between 8am-5pm.
data
|deadman(100.0, 10s, lambda: hour("time") >= 8 AND hour("time") <= 17)
//Do normal processing of data
data...
```
```js
noOp|deadman(threshold float64, interval time.Duration, expr ...ast.LambdaNode)
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Default
Create a node that can set defaults for missing tags or fields.
```js
noOp|default()
```
Returns: [DefaultNode](/kapacitor/v1.5/nodes/default_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Delete
Create a node that can delete tags or fields.
```js
noOp|delete()
```
Returns: [DeleteNode](/kapacitor/v1.5/nodes/delete_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Derivative
Create a new node that computes the derivative of adjacent points.
```js
noOp|derivative(field string)
```
Returns: [DerivativeNode](/kapacitor/v1.5/nodes/derivative_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Difference
Compute the difference between points independent of elapsed time.
```js
noOp|difference(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Distinct
Produce a batch containing only the distinct points.
```js
noOp|distinct(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Ec2Autoscale
Create a node that can trigger autoscale events for an EC2 autoscale group.
```js
noOp|ec2Autoscale()
```
Returns: [Ec2AutoscaleNode](/kapacitor/v1.5/nodes/ec2_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Elapsed
Compute the elapsed time between points.
```js
noOp|elapsed(field string, unit time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Eval
Create an eval node that evaluates the given transformation functions against each data point.
A list of expressions may be provided and will be evaluated in the order they are given.
The results are available to later expressions.
```js
noOp|eval(expressions ...ast.LambdaNode)
```
Returns: [EvalNode](/kapacitor/v1.5/nodes/eval_node/)
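For example, summing two hypothetical fields into a new `usage_total` field named with `.as()`:

```js
stream
    |from()
        .measurement('cpu')
    |eval(lambda: "usage_user" + "usage_system")
        .as('usage_total')
```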
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### First
Select the first point.
```js
noOp|first(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Flatten
Flatten points with similar times into a single point.
```js
noOp|flatten()
```
Returns: [FlattenNode](/kapacitor/v1.5/nodes/flatten_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### GroupBy
Group the data by a set of tags.
Pass the literal `*` to group by all dimensions.
Example:
```js
|groupBy(*)
```
```js
noOp|groupBy(tag ...interface{})
```
Returns: [GroupByNode](/kapacitor/v1.5/nodes/group_by_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWinters
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
```js
noOp|holtWinters(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWintersWithFit
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
This method also outputs all the points used to fit the data in addition to the forecasted data.
```js
noOp|holtWintersWithFit(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpOut
Create an HTTP output node that caches the most recent data it has received.
The cached data is available at the given endpoint.
The endpoint is the relative path from the API endpoint of the running task.
For example, if the task endpoint is at `/kapacitor/v1/tasks/<task_id>` and endpoint is
`top10`, then the data can be requested from `/kapacitor/v1/tasks/<task_id>/top10`.
```js
noOp|httpOut(endpoint string)
```
Returns: [HTTPOutNode](/kapacitor/v1.5/nodes/http_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpPost
Create an HTTP POST node that POSTs received data to the provided HTTP endpoint.
HttpPost expects zero or one arguments. If no argument is provided, you must specify an
endpoint property method.
```js
noOp|httpPost(url ...string)
```
Returns: [HTTPPostNode](/kapacitor/v1.5/nodes/http_post_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### InfluxDBOut
Create an InfluxDB output node that stores the incoming data in InfluxDB.
```js
noOp|influxDBOut()
```
Returns: [InfluxDBOutNode](/kapacitor/v1.5/nodes/influx_d_b_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Join
Join this node with other nodes. The data is joined on timestamp.
```js
noOp|join(others ...Node)
```
Returns: [JoinNode](/kapacitor/v1.5/nodes/join_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### K8sAutoscale
Create a node that can trigger autoscale events for a Kubernetes cluster.
```js
noOp|k8sAutoscale()
```
Returns: [K8sAutoscaleNode](/kapacitor/v1.5/nodes/k8s_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### KapacitorLoopback
Create a Kapacitor loopback node that sends data back into Kapacitor as a stream.
```js
noOp|kapacitorLoopback()
```
Returns: [KapacitorLoopbackNode](/kapacitor/v1.5/nodes/kapacitor_loopback_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Last
Select the last point.
```js
noOp|last(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Log
Create a node that logs all data it receives.
```js
noOp|log()
```
Returns: [LogNode](/kapacitor/v1.5/nodes/log_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Max
Select the maximum point.
```js
noOp|max(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mean
Compute the mean of the data.
```js
noOp|mean(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Median
Compute the median of the data.
> **Note:** This method is not a selector.
If you want the median point, use `.percentile(field, 50.0)`.
```js
noOp|median(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Min
Select the minimum point.
```js
noOp|min(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mode
Compute the mode of the data.
```js
noOp|mode(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### MovingAverage
Compute a moving average of the last window points.
No points are emitted until the window is full.
```js
noOp|movingAverage(field string, window int64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Percentile
Select a point at the given percentile. This is a selector function; no interpolation between points is performed.
```js
noOp|percentile(field string, percentile float64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sample
Create a new node that samples the incoming points or batches.
One point will be emitted every count or duration specified.
```js
noOp|sample(rate interface{})
```
Returns: [SampleNode](/kapacitor/v1.5/nodes/sample_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Shift
Create a new node that shifts the incoming points or batches in time.
```js
noOp|shift(shift time.Duration)
```
Returns: [ShiftNode](/kapacitor/v1.5/nodes/shift_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sideload
Create a node that can load data from external sources.
```js
noOp|sideload()
```
Returns: [SideloadNode](/kapacitor/v1.5/nodes/sideload_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Spread
Compute the difference between `min` and `max` points.
```js
noOp|spread(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateCount
Create a node that tracks the number of consecutive points in a given state.
```js
noOp|stateCount(expression ast.LambdaNode)
```
Returns: [StateCountNode](/kapacitor/v1.5/nodes/state_count_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateDuration
Create a node that tracks duration in a given state.
```js
noOp|stateDuration(expression ast.LambdaNode)
```
Returns: [StateDurationNode](/kapacitor/v1.5/nodes/state_duration_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stats
Create a new stream of data that contains the internal statistics of the node.
The interval represents how often to emit the statistics based on real time.
This means the interval time is independent of the times of the data points the source node is receiving.
```js
noOp|stats(interval time.Duration)
```
Returns: [StatsNode](/kapacitor/v1.5/nodes/stats_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stddev
Compute the standard deviation.
```js
noOp|stddev(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sum
Compute the sum of all values.
```js
noOp|sum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### SwarmAutoscale
Create a node that can trigger autoscale events for a Docker Swarm cluster.
```js
noOp|swarmAutoscale()
```
Returns: [SwarmAutoscaleNode](/kapacitor/v1.5/nodes/swarm_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Top
Select the top `num` points for `field` and sort by any extra tags or fields.
```js
noOp|top(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Union
Perform the union of this node and all other given nodes.
```js
noOp|union(node ...Node)
```
Returns: [UnionNode](/kapacitor/v1.5/nodes/union_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Where
Create a new node that filters the data stream by a given expression.
```js
noOp|where(expression ast.LambdaNode)
```
Returns: [WhereNode](/kapacitor/v1.5/nodes/where_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Window
Create a new node that windows the stream by time.
NOTE: Window can only be applied to stream edges.
```js
noOp|window()
```
Returns: [WindowNode](/kapacitor/v1.5/nodes/window_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>

---
title: SampleNode
description: SampleNode samples points or batches. One point will be emitted every count or duration specified.
note: Auto generated by tickdoc
menu:
kapacitor_1_5_ref:
name: SampleNode
identifier: sample_node
weight: 230
parent: nodes
---
The `sample` node samples points or batches.
One point will be emitted every count or duration specified.
Example:
```js
stream
|sample(3)
```
Keep every third data point or batch.
Example:
```js
stream
|sample(10s)
```
Keep only samples that land on the 10s boundary.
See [FromNode.Truncate](/kapacitor/v1.5/nodes/from_node/#truncate), [QueryNode.GroupBy](/kapacitor/v1.5/nodes/query_node/#groupby) time, or [WindowNode.Align](/kapacitor/v1.5/nodes/window_node/#align)
for ways to ensure data is aligned with a boundary.
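For example, a minimal sketch (the measurement name is illustrative) that truncates point times in the `from` node so that `sample(10s)` reliably keeps one point per boundary:

```js
stream
    |from()
        .measurement('cpu')
        // Truncate timestamps to the 10s boundary so sampled points align with it.
        .truncate(10s)
    |sample(10s)
```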
### Constructor
| Chaining Method | Description |
|:---------|:---------|
| **sample&nbsp;(&nbsp;`rate`&nbsp;`interface{}`)** | Create a new node that samples the incoming points or batches. |
### Property Methods
| Setters | Description |
|:---|:---|
| **[quiet](#quiet)&nbsp;(&nbsp;)** | Suppress all error logging events from this node. |
### Chaining Methods
[Alert](#alert),
[Barrier](#barrier),
[Bottom](#bottom),
[ChangeDetect](#changedetect),
[Combine](#combine),
[Count](#count),
[CumulativeSum](#cumulativesum),
[Deadman](#deadman),
[Default](#default),
[Delete](#delete),
[Derivative](#derivative),
[Difference](#difference),
[Distinct](#distinct),
[Ec2Autoscale](#ec2autoscale),
[Elapsed](#elapsed),
[Eval](#eval),
[First](#first),
[Flatten](#flatten),
[GroupBy](#groupby),
[HoltWinters](#holtwinters),
[HoltWintersWithFit](#holtwinterswithfit),
[HttpOut](#httpout),
[HttpPost](#httppost),
[InfluxDBOut](#influxdbout),
[Join](#join),
[K8sAutoscale](#k8sautoscale),
[KapacitorLoopback](#kapacitorloopback),
[Last](#last),
[Log](#log),
[Max](#max),
[Mean](#mean),
[Median](#median),
[Min](#min),
[Mode](#mode),
[MovingAverage](#movingaverage),
[Percentile](#percentile),
[Sample](#sample),
[Shift](#shift),
[Sideload](#sideload),
[Spread](#spread),
[StateCount](#statecount),
[StateDuration](#stateduration),
[Stats](#stats),
[Stddev](#stddev),
[Sum](#sum),
[SwarmAutoscale](#swarmautoscale),
[Top](#top),
[Union](#union),
[Where](#where),
[Window](#window)
---
## Properties
Property methods modify state on the calling node.
They do not add another node to the pipeline, and always return a reference to the calling node.
Property methods are marked using the `.` operator.
### Quiet
Suppress all error logging events from this node.
```js
sample.quiet()
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
## Chaining Methods
Chaining methods create a new node in the pipeline as a child of the calling node.
They do not modify the calling node.
Chaining methods are marked using the `|` operator.
### Alert
Create an alert node, which can trigger alerts.
```js
sample|alert()
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Barrier
Create a new Barrier node that emits a BarrierMessage periodically.
One BarrierMessage will be emitted every period duration.
```js
sample|barrier()
```
Returns: [BarrierNode](/kapacitor/v1.5/nodes/barrier_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Bottom
Select the bottom `num` points for `field` and sort by any extra tags or fields.
```js
sample|bottom(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### ChangeDetect
Create a new node that emits a point only if it differs from the previous point.
```js
sample|changeDetect(field string)
```
Returns: [ChangeDetectNode](/kapacitor/v1.5/nodes/change_detect_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Combine
Combine this node with itself. The data is combined on timestamp.
```js
sample|combine(expressions ...ast.LambdaNode)
```
Returns: [CombineNode](/kapacitor/v1.5/nodes/combine_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Count
Count the number of points.
```js
sample|count(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### CumulativeSum
Compute a cumulative sum of each point that is received.
A point is emitted for every point collected.
```js
sample|cumulativeSum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Deadman
Helper function for creating an alert on low throughput, a.k.a. deadman's switch.
- Threshold: trigger alert if throughput drops below threshold in points/interval.
- Interval: how often to check the throughput.
- Expressions: optional list of expressions to also evaluate. Useful for time of day alerting.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
//Do normal processing of data
data...
```
The above is equivalent to this example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|stats(10s)
.align()
|derivative('emitted')
.unit(10s)
.nonNegative()
|alert()
.id('node \'stream0\' in task \'{{ .TaskName }}\'')
.message('{{ .ID }} is {{ if eq .Level "OK" }}alive{{ else }}dead{{ end }}: {{ index .Fields "emitted" | printf "%0.3f" }} points/10s.')
.crit(lambda: "emitted" <= 100.0)
//Do normal processing of data
data...
```
The `id` and `message` alert properties can be configured globally via the 'deadman' configuration section.
Since the [AlertNode](/kapacitor/v1.5/nodes/alert_node/) is the last piece it can be further modified as usual.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
.slack()
.channel('#dead_tasks')
//Do normal processing of data
data...
```
You can specify additional lambda expressions to further constrain when the deadman's switch is triggered.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
// Only trigger the alert if the time of day is between 8am-5pm.
data
|deadman(100.0, 10s, lambda: hour("time") >= 8 AND hour("time") <= 17)
//Do normal processing of data
data...
```
```js
sample|deadman(threshold float64, interval time.Duration, expr ...ast.LambdaNode)
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Default
Create a node that can set defaults for missing tags or fields.
```js
sample|default()
```
Returns: [DefaultNode](/kapacitor/v1.5/nodes/default_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Delete
Create a node that can delete tags or fields.
```js
sample|delete()
```
Returns: [DeleteNode](/kapacitor/v1.5/nodes/delete_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Derivative
Create a new node that computes the derivative of adjacent points.
```js
sample|derivative(field string)
```
Returns: [DerivativeNode](/kapacitor/v1.5/nodes/derivative_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Difference
Compute the difference between points independent of elapsed time.
```js
sample|difference(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Distinct
Produce a batch of only the distinct points.
```js
sample|distinct(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Ec2Autoscale
Create a node that can trigger autoscale events for an EC2 autoscale group.
```js
sample|ec2Autoscale()
```
Returns: [Ec2AutoscaleNode](/kapacitor/v1.5/nodes/ec2_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Elapsed
Compute the elapsed time between points.
```js
sample|elapsed(field string, unit time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Eval
Create an eval node that evaluates the given transformation functions for each data point.
A list of expressions may be provided and will be evaluated in the order they are given.
The results are available to later expressions.
```js
sample|eval(expressions ...ast.LambdaNode)
```
Returns: [EvalNode](/kapacitor/v1.5/nodes/eval_node/)
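As an illustrative sketch (the field names are assumptions), an eval expression can compute a derived field and name it with the `as` property method:

```js
sample
    |eval(lambda: "errors" / "total")
        .as('error_rate')
```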
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### First
Select the first point.
```js
sample|first(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Flatten
Flatten points with similar times into a single point.
```js
sample|flatten()
```
Returns: [FlattenNode](/kapacitor/v1.5/nodes/flatten_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### GroupBy
Group the data by a set of tags.
Pass the literal `*` to group by all dimensions.
Example:
```js
|groupBy(*)
```
```js
sample|groupBy(tag ...interface{})
```
Returns: [GroupByNode](/kapacitor/v1.5/nodes/group_by_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWinters
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
```js
sample|holtWinters(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWintersWithFit
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
This method also outputs all the points used to fit the data in addition to the forecasted data.
```js
sample|holtWintersWithFit(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpOut
Create an HTTP output node that caches the most recent data it has received.
The cached data is available at the given endpoint.
The endpoint is the relative path from the API endpoint of the running task.
For example, if the task endpoint is at `/kapacitor/v1/tasks/<task_id>` and endpoint is
`top10`, then the data can be requested from `/kapacitor/v1/tasks/<task_id>/top10`.
```js
sample|httpOut(endpoint string)
```
Returns: [HTTPOutNode](/kapacitor/v1.5/nodes/http_out_node/)
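For instance, a sketch of a task (the measurement name and endpoint are illustrative) that caches its latest windowed result at a relative endpoint:

```js
stream
    |from()
        .measurement('requests')
    |window()
        .period(10s)
        .every(10s)
    |httpOut('latest')
```

If this task had the ID `requests_task`, the cached window would be available at `/kapacitor/v1/tasks/requests_task/latest`.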
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpPost
Create an HTTP POST node that POSTs received data to the provided HTTP endpoint.
HttpPost expects zero or one arguments. If no argument is provided, you must specify an
endpoint property method.
```js
sample|httpPost(url ...string)
```
Returns: [HTTPPostNode](/kapacitor/v1.5/nodes/http_post_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### InfluxDBOut
Create an InfluxDB output node that stores the incoming data in InfluxDB.
```js
sample|influxDBOut()
```
Returns: [InfluxDBOutNode](/kapacitor/v1.5/nodes/influx_d_b_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Join
Join this node with other nodes. The data is joined on timestamp.
```js
sample|join(others ...Node)
```
Returns: [JoinNode](/kapacitor/v1.5/nodes/join_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### K8sAutoscale
Create a node that can trigger autoscale events for a Kubernetes cluster.
```js
sample|k8sAutoscale()
```
Returns: [K8sAutoscaleNode](/kapacitor/v1.5/nodes/k8s_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### KapacitorLoopback
Create a Kapacitor loopback node that sends data back into Kapacitor as a stream.
```js
sample|kapacitorLoopback()
```
Returns: [KapacitorLoopbackNode](/kapacitor/v1.5/nodes/kapacitor_loopback_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Last
Select the last point.
```js
sample|last(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Log
Create a node that logs all data it receives.
```js
sample|log()
```
Returns: [LogNode](/kapacitor/v1.5/nodes/log_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Max
Select the maximum point.
```js
sample|max(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mean
Compute the mean of the data.
```js
sample|mean(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Median
Compute the median of the data.
> **Note:** This method is not a selector.
If you want the median point, use `.percentile(field, 50.0)`.
```js
sample|median(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Min
Select the minimum point.
```js
sample|min(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mode
Compute the mode of the data.
```js
sample|mode(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### MovingAverage
Compute a moving average of the last window points.
No points are emitted until the window is full.
```js
sample|movingAverage(field string, window int64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Percentile
Select a point at the given percentile. This is a selector function; no interpolation between points is performed.
```js
sample|percentile(field string, percentile float64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sample
Create a new node that samples the incoming points or batches.
One point will be emitted every count or duration specified.
```js
sample|sample(rate interface{})
```
Returns: [SampleNode](/kapacitor/v1.5/nodes/sample_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Shift
Create a new node that shifts the incoming points or batches in time.
```js
sample|shift(shift time.Duration)
```
Returns: [ShiftNode](/kapacitor/v1.5/nodes/shift_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sideload
Create a node that can load data from external sources.
```js
sample|sideload()
```
Returns: [SideloadNode](/kapacitor/v1.5/nodes/sideload_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Spread
Compute the difference between `min` and `max` points.
```js
sample|spread(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateCount
Create a node that tracks number of consecutive points in a given state.
```js
sample|stateCount(expression ast.LambdaNode)
```
Returns: [StateCountNode](/kapacitor/v1.5/nodes/state_count_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateDuration
Create a node that tracks duration in a given state.
```js
sample|stateDuration(expression ast.LambdaNode)
```
Returns: [StateDurationNode](/kapacitor/v1.5/nodes/state_duration_node/)
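As an illustrative sketch (the field name and threshold are assumptions), track how long CPU usage stays above a threshold, reported in minutes via the `unit` property method:

```js
sample
    |stateDuration(lambda: "cpu" > 80.0)
        .unit(1m)
```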
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stats
Create a new stream of data that contains the internal statistics of the node.
The interval represents how often to emit the statistics based on real time.
This means the interval time is independent of the times of the data points the source node is receiving.
```js
sample|stats(interval time.Duration)
```
Returns: [StatsNode](/kapacitor/v1.5/nodes/stats_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stddev
Compute the standard deviation.
```js
sample|stddev(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sum
Compute the sum of all values.
```js
sample|sum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### SwarmAutoscale
Create a node that can trigger autoscale events for a Docker Swarm cluster.
```js
sample|swarmAutoscale()
```
Returns: [SwarmAutoscaleNode](/kapacitor/v1.5/nodes/swarm_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Top
Select the top `num` points for `field` and sort by any extra tags or fields.
```js
sample|top(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Union
Perform the union of this node and all other given nodes.
```js
sample|union(node ...Node)
```
Returns: [UnionNode](/kapacitor/v1.5/nodes/union_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Where
Create a new node that filters the data stream by a given expression.
```js
sample|where(expression ast.LambdaNode)
```
Returns: [WhereNode](/kapacitor/v1.5/nodes/where_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Window
Create a new node that windows the stream by time.
NOTE: Window can only be applied to stream edges.
```js
sample|window()
```
Returns: [WindowNode](/kapacitor/v1.5/nodes/window_node/)
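A minimal sketch of a windowed pipeline (the durations are illustrative), emitting the last 10 minutes of data every minute using the `period` and `every` property methods:

```js
sample
    |window()
        .period(10m)
        .every(1m)
```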
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>

---
title: ShiftNode
description: ShiftNode shifts points and batches in time. This is useful for comparing batches or points from different times.
note: Auto generated by tickdoc
menu:
kapacitor_1_5_ref:
name: ShiftNode
identifier: shift_node
weight: 240
parent: nodes
---
The `shift` node shifts points and batches in time.
This is useful for comparing batches or points from different times.
Example:
```js
stream
|shift(5m)
```
Shift all data points 5m forward in time.
Example:
```js
stream
|shift(-10s)
```
Shift all data points 10s backward in time.
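Shifting is commonly paired with a join to compare current data against a past period. A sketch (the measurement name is illustrative) comparing this week's data to last week's:

```js
var current = stream
    |from()
        .measurement('requests')

var lastWeek = stream
    |from()
        .measurement('requests')
    // Shift last week's points forward 7d so they align with current timestamps.
    |shift(7d)

current
    |join(lastWeek)
        .as('current', 'last_week')
```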
### Constructor
| Chaining Method | Description |
|:---------|:---------|
| **shift&nbsp;(&nbsp;`shift`&nbsp;`time.Duration`)** | Create a new node that shifts the incoming points or batches in time. |
### Property Methods
| Setters | Description |
|:---|:---|
| **[quiet](#quiet)&nbsp;(&nbsp;)** | Suppress all error logging events from this node. |
### Chaining Methods
[Alert](#alert),
[Barrier](#barrier),
[Bottom](#bottom),
[ChangeDetect](#changedetect),
[Combine](#combine),
[Count](#count),
[CumulativeSum](#cumulativesum),
[Deadman](#deadman),
[Default](#default),
[Delete](#delete),
[Derivative](#derivative),
[Difference](#difference),
[Distinct](#distinct),
[Ec2Autoscale](#ec2autoscale),
[Elapsed](#elapsed),
[Eval](#eval),
[First](#first),
[Flatten](#flatten),
[GroupBy](#groupby),
[HoltWinters](#holtwinters),
[HoltWintersWithFit](#holtwinterswithfit),
[HttpOut](#httpout),
[HttpPost](#httppost),
[InfluxDBOut](#influxdbout),
[Join](#join),
[K8sAutoscale](#k8sautoscale),
[KapacitorLoopback](#kapacitorloopback),
[Last](#last),
[Log](#log),
[Max](#max),
[Mean](#mean),
[Median](#median),
[Min](#min),
[Mode](#mode),
[MovingAverage](#movingaverage),
[Percentile](#percentile),
[Sample](#sample),
[Shift](#shift),
[Sideload](#sideload),
[Spread](#spread),
[StateCount](#statecount),
[StateDuration](#stateduration),
[Stats](#stats),
[Stddev](#stddev),
[Sum](#sum),
[SwarmAutoscale](#swarmautoscale),
[Top](#top),
[Union](#union),
[Where](#where),
[Window](#window)
---
## Properties
Property methods modify state on the calling node.
They do not add another node to the pipeline, and always return a reference to the calling node.
Property methods are marked using the `.` operator.
### Quiet
Suppress all error logging events from this node.
```js
shift.quiet()
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
## Chaining Methods
Chaining methods create a new node in the pipeline as a child of the calling node.
They do not modify the calling node.
Chaining methods are marked using the `|` operator.
### Alert
Create an alert node, which can trigger alerts.
```js
shift|alert()
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Barrier
Create a new Barrier node that emits a BarrierMessage periodically.
One BarrierMessage will be emitted every period duration.
```js
shift|barrier()
```
Returns: [BarrierNode](/kapacitor/v1.5/nodes/barrier_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Bottom
Select the bottom `num` points for `field` and sort by any extra tags or fields.
```js
shift|bottom(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### ChangeDetect
Create a new node that emits a point only if it differs from the previous point.
```js
shift|changeDetect(field string)
```
Returns: [ChangeDetectNode](/kapacitor/v1.5/nodes/change_detect_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Combine
Combine this node with itself. The data is combined on timestamp.
```js
shift|combine(expressions ...ast.LambdaNode)
```
Returns: [CombineNode](/kapacitor/v1.5/nodes/combine_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Count
Count the number of points.
```js
shift|count(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### CumulativeSum
Compute a cumulative sum of each point that is received.
A point is emitted for every point collected.
```js
shift|cumulativeSum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Deadman
Helper function for creating an alert on low throughput, a.k.a. deadman's switch.
- Threshold: trigger alert if throughput drops below threshold in points/interval.
- Interval: how often to check the throughput.
- Expressions: optional list of expressions to also evaluate. Useful for time of day alerting.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
//Do normal processing of data
data...
```
The above is equivalent to this example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|stats(10s)
.align()
|derivative('emitted')
.unit(10s)
.nonNegative()
|alert()
.id('node \'stream0\' in task \'{{ .TaskName }}\'')
.message('{{ .ID }} is {{ if eq .Level "OK" }}alive{{ else }}dead{{ end }}: {{ index .Fields "emitted" | printf "%0.3f" }} points/10s.')
.crit(lambda: "emitted" <= 100.0)
//Do normal processing of data
data...
```
The `id` and `message` alert properties can be configured globally via the 'deadman' configuration section.
Since the [AlertNode](/kapacitor/v1.5/nodes/alert_node/) is the last piece it can be further modified as usual.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
.slack()
.channel('#dead_tasks')
//Do normal processing of data
data...
```
You can specify additional lambda expressions to further constrain when the deadman's switch is triggered.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
// Only trigger the alert if the time of day is between 8am-5pm.
data
|deadman(100.0, 10s, lambda: hour("time") >= 8 AND hour("time") <= 17)
//Do normal processing of data
data...
```
```js
shift|deadman(threshold float64, interval time.Duration, expr ...ast.LambdaNode)
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Default
Create a node that can set defaults for missing tags or fields.
```js
shift|default()
```
Returns: [DefaultNode](/kapacitor/v1.5/nodes/default_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Delete
Create a node that can delete tags or fields.
```js
shift|delete()
```
Returns: [DeleteNode](/kapacitor/v1.5/nodes/delete_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Derivative
Create a new node that computes the derivative of adjacent points.
```js
shift|derivative(field string)
```
Returns: [DerivativeNode](/kapacitor/v1.5/nodes/derivative_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Difference
Compute the difference between points independent of elapsed time.
```js
shift|difference(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Distinct
Produce a batch of only the distinct points.
```js
shift|distinct(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Ec2Autoscale
Create a node that can trigger autoscale events for an EC2 autoscale group.
```js
shift|ec2Autoscale()
```
Returns: [Ec2AutoscaleNode](/kapacitor/v1.5/nodes/ec2_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Elapsed
Compute the elapsed time between points.
```js
shift|elapsed(field string, unit time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Eval
Create an eval node that will evaluate the given transformation function for each data point.
A list of expressions may be provided and will be evaluated in the order they are given.
The results are available to later expressions.
```js
shift|eval(expressions ...ast.LambdaNode)
```
Returns: [EvalNode](/kapacitor/v1.5/nodes/eval_node/)
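For example, a sketch of `eval` with the `.as()` property used to name each result (the measurement and field names are illustrative):

```js
stream
    |from()
        .measurement('cpu')
    // Each expression result is named by the corresponding .as() argument.
    // Later expressions can reference earlier results, e.g. "total".
    |eval(lambda: "usage_user" + "usage_system", lambda: "total" / 100.0)
        .as('total', 'ratio')
```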
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### First
Select the first point.
```js
shift|first(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Flatten
Flatten points with similar times into a single point.
```js
shift|flatten()
```
Returns: [FlattenNode](/kapacitor/v1.5/nodes/flatten_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### GroupBy
Group the data by a set of tags.
Pass the literal `*` to group by all dimensions.
Example:
```js
|groupBy(*)
```
```js
shift|groupBy(tag ...interface{})
```
Returns: [GroupByNode](/kapacitor/v1.5/nodes/group_by_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWinters
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
```js
shift|holtWinters(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWintersWithFit
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
This method also outputs all the points used to fit the data in addition to the forecasted data.
```js
shift|holtWintersWithFit(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpOut
Create an HTTP output node that caches the most recent data it has received.
The cached data is available at the given endpoint.
The endpoint is the relative path from the API endpoint of the running task.
For example, if the task endpoint is at `/kapacitor/v1/tasks/<task_id>` and endpoint is
`top10`, then the data can be requested from `/kapacitor/v1/tasks/<task_id>/top10`.
```js
shift|httpOut(endpoint string)
```
Returns: [HTTPOutNode](/kapacitor/v1.5/nodes/http_out_node/)
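As a sketch, a task that caches windowed means for HTTP access (the measurement, field, and endpoint names are illustrative):

```js
stream
    |from()
        .measurement('cpu')
    |window()
        .period(10s)
        .every(10s)
    |mean('usage_user')
    // The cached data is then available at
    // /kapacitor/v1/tasks/<task_id>/mean_cpu
    |httpOut('mean_cpu')
```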
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpPost
Creates an HTTP POST node that POSTs received data to the provided HTTP endpoint.
HttpPost expects 0 or 1 arguments. If 0 arguments are provided, you must specify an
endpoint property method.
```js
shift|httpPost(url ...string)
```
Returns: [HTTPPostNode](/kapacitor/v1.5/nodes/http_post_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### InfluxDBOut
Create an InfluxDB output node that will store the incoming data into InfluxDB.
```js
shift|influxDBOut()
```
Returns: [InfluxDBOutNode](/kapacitor/v1.5/nodes/influx_d_b_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Join
Join this node with other nodes. The data is joined on timestamp.
```js
shift|join(others ...Node)
```
Returns: [JoinNode](/kapacitor/v1.5/nodes/join_node/)
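For example, joining two streams to compute an error rate (the measurement and field names are illustrative); the `.as()` property assigns the prefixes used to reference each side's fields:

```js
var errors = stream
    |from()
        .measurement('errors')
var requests = stream
    |from()
        .measurement('requests')
// Join the two streams on timestamp and name each side.
errors
    |join(requests)
        .as('errors', 'requests')
    // Fields are referenced using the prefixes assigned in .as().
    |eval(lambda: "errors.value" / "requests.value")
        .as('error_rate')
```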
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### K8sAutoscale
Create a node that can trigger autoscale events for a kubernetes cluster.
```js
shift|k8sAutoscale()
```
Returns: [K8sAutoscaleNode](/kapacitor/v1.5/nodes/k8s_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### KapacitorLoopback
Create a Kapacitor loopback node that will send data back into Kapacitor as a stream.
```js
shift|kapacitorLoopback()
```
Returns: [KapacitorLoopbackNode](/kapacitor/v1.5/nodes/kapacitor_loopback_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Last
Select the last point.
```js
shift|last(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Log
Create a node that logs all data it receives.
```js
shift|log()
```
Returns: [LogNode](/kapacitor/v1.5/nodes/log_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Max
Select the maximum point.
```js
shift|max(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mean
Compute the mean of the data.
```js
shift|mean(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Median
Compute the median of the data.
> **Note:** This method is not a selector.
If you want the median point, use `.percentile(field, 50.0)`.
```js
shift|median(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Min
Select the minimum point.
```js
shift|min(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mode
Compute the mode of the data.
```js
shift|mode(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### MovingAverage
Compute a moving average of the last window points.
No points are emitted until the window is full.
```js
shift|movingAverage(field string, window int64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Percentile
Select a point at the given percentile. This is a selector function; no interpolation between points is performed.
```js
shift|percentile(field string, percentile float64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sample
Create a new node that samples the incoming points or batches.
One point will be emitted every count or duration specified.
```js
shift|sample(rate interface{})
```
Returns: [SampleNode](/kapacitor/v1.5/nodes/sample_node/)
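The rate may be either a count or a duration. A minimal sketch (the measurement name is illustrative):

```js
stream
    |from()
        .measurement('cpu')
    // Keep one point out of every ten received.
    // A duration works too, e.g. |sample(10s) keeps one point every 10s.
    |sample(10)
```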
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Shift
Create a new node that shifts the incoming points or batches in time.
```js
shift|shift(shift time.Duration)
```
Returns: [ShiftNode](/kapacitor/v1.5/nodes/shift_node/)
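A negative duration shifts points back in time and a positive duration shifts them forward. For example, to align delayed data with realtime data (the measurement name is illustrative):

```js
stream
    |from()
        .measurement('cpu')
    // Shift each point's timestamp 10 seconds into the past.
    |shift(-10s)
```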
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sideload
Create a node that can load data from external sources.
```js
shift|sideload()
```
Returns: [SideloadNode](/kapacitor/v1.5/nodes/sideload_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Spread
Compute the difference between `min` and `max` points.
```js
shift|spread(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateCount
Create a node that tracks the number of consecutive points in a given state.
```js
shift|stateCount(expression ast.LambdaNode)
```
Returns: [StateCountNode](/kapacitor/v1.5/nodes/state_count_node/)
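The node emits a `state_count` field that increments while the expression holds and resets when it does not. A sketch, with illustrative field names:

```js
stream
    |from()
        .measurement('cpu')
    // state_count increments for each consecutive point where the
    // expression is true, and resets when it is false.
    |stateCount(lambda: "usage_idle" < 10.0)
    |alert()
        // Alert once the condition has held for 5 consecutive points.
        .crit(lambda: "state_count" >= 5)
```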
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateDuration
Create a node that tracks duration in a given state.
```js
shift|stateDuration(expression ast.LambdaNode)
```
Returns: [StateDurationNode](/kapacitor/v1.5/nodes/state_duration_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stats
Create a new stream of data that contains the internal statistics of the node.
The interval represents how often to emit the statistics based on real time.
This means the interval time is independent of the times of the data points the source node is receiving.
```js
shift|stats(interval time.Duration)
```
Returns: [StatsNode](/kapacitor/v1.5/nodes/stats_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stddev
Compute the standard deviation.
```js
shift|stddev(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sum
Compute the sum of all values.
```js
shift|sum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### SwarmAutoscale
Create a node that can trigger autoscale events for a Docker swarm cluster.
```js
shift|swarmAutoscale()
```
Returns: [SwarmAutoscaleNode](/kapacitor/v1.5/nodes/swarm_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Top
Select the top `num` points for `field` and sort by any extra tags or fields.
```js
shift|top(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Union
Perform the union of this node and all other given nodes.
```js
shift|union(node ...Node)
```
Returns: [UnionNode](/kapacitor/v1.5/nodes/union_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Where
Create a new node that filters the data stream by a given expression.
```js
shift|where(expression ast.LambdaNode)
```
Returns: [WhereNode](/kapacitor/v1.5/nodes/where_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Window
Create a new node that windows the stream by time.
NOTE: Window can only be applied to stream edges.
```js
shift|window()
```
Returns: [WindowNode](/kapacitor/v1.5/nodes/window_node/)
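A typical use pairs a window with an aggregate: the period controls how much data each window holds, and every controls how often windows are emitted. For example (the measurement and field names are illustrative):

```js
stream
    |from()
        .measurement('cpu')
    |window()
        // Each emitted window contains the last 1m of data,
        // and a window is emitted every 10s.
        .period(1m)
        .every(10s)
    |mean('usage_user')
```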
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
---
title: SideloadNode
description: SideloadNode adds fields and tags to points based on hierarchical data from various sources.
note: Auto generated by tickdoc
menu:
kapacitor_1_5_ref:
name: SideloadNode
identifier: sideload_node
weight: 250
parent: nodes
---
The `sideload` node adds fields and tags to points based on hierarchical data from various sources.
Example:
```js
|sideload()
.source('file:///path/to/dir')
.order('host/{{.host}}.yml', 'hostgroup/{{.hostgroup}}.yml')
.field('cpu_threshold', 0.0)
.tag('foo', 'unknown')
```
Add a field `cpu_threshold` and a tag `foo` to each point based on the value loaded from the hierarchical source.
The list of templates in the `.order()` property is evaluated using the point's tags.
The file paths are then checked in order for the specified keys, and the first value found is used.
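To illustrate, given a point with tag `host=h1`, the first template above resolves to `host/h1.yml`. A hypothetical source file might look like:

```yaml
# /path/to/dir/host/h1.yml
cpu_threshold: 95.0
foo: bar
```

Points from `h1` would then receive `cpu_threshold=95.0` and `foo=bar`; points whose resolved files define neither key fall back to the defaults given to `.field()` and `.tag()`.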
### Constructor
| Chaining Method | Description |
|:---------|:---------|
| **sideload&nbsp;(&nbsp;)** | Create a node that can load data from external sources |
### Property Methods
| Setters | Description |
|:---|:---|
| **[field](#field)&nbsp;(&nbsp;`f`&nbsp;`string`,&nbsp;`v`&nbsp;`interface{}`)** | Field is the name of a field to load from the source and its default value. The type loaded must match the type of the default value. Otherwise an error is recorded and the default value is used. |
| **[order](#order)&nbsp;(&nbsp;`order`&nbsp;`...string`)** | Order is a list of paths that indicate the hierarchical order. The paths are relative to the source and can have template markers like `{{.tagname}}` that will be replaced with the tag value of the point. The paths are then searched in order for the keys and the first value that is found is used. This allows for values to be overridden based on a hierarchy of tags. |
| **[quiet](#quiet)&nbsp;(&nbsp;)** | Suppress all error logging events from this node. |
| **[source](#source)&nbsp;(&nbsp;`value`&nbsp;`string`)** | Source for the data. Currently only `file://`-based sources are supported. |
| **[tag](#tag)&nbsp;(&nbsp;`t`&nbsp;`string`,&nbsp;`v`&nbsp;`string`)** | Tag is the name of a tag to load from the source and its default value. The loaded values must be strings, otherwise an error is recorded and the default value is used. |
### Chaining Methods
[Alert](#alert),
[Barrier](#barrier),
[Bottom](#bottom),
[ChangeDetect](#changedetect),
[Combine](#combine),
[Count](#count),
[CumulativeSum](#cumulativesum),
[Deadman](#deadman),
[Default](#default),
[Delete](#delete),
[Derivative](#derivative),
[Difference](#difference),
[Distinct](#distinct),
[Ec2Autoscale](#ec2autoscale),
[Elapsed](#elapsed),
[Eval](#eval),
[First](#first),
[Flatten](#flatten),
[GroupBy](#groupby),
[HoltWinters](#holtwinters),
[HoltWintersWithFit](#holtwinterswithfit),
[HttpOut](#httpout),
[HttpPost](#httppost),
[InfluxDBOut](#influxdbout),
[Join](#join),
[K8sAutoscale](#k8sautoscale),
[KapacitorLoopback](#kapacitorloopback),
[Last](#last),
[Log](#log),
[Max](#max),
[Mean](#mean),
[Median](#median),
[Min](#min),
[Mode](#mode),
[MovingAverage](#movingaverage),
[Percentile](#percentile),
[Sample](#sample),
[Shift](#shift),
[Sideload](#sideload),
[Spread](#spread),
[StateCount](#statecount),
[StateDuration](#stateduration),
[Stats](#stats),
[Stddev](#stddev),
[Sum](#sum),
[SwarmAutoscale](#swarmautoscale),
[Top](#top),
[Union](#union),
[Where](#where),
[Window](#window)
---
## Properties
Property methods modify state on the calling node.
They do not add another node to the pipeline, and always return a reference to the calling node.
Property methods are marked using the `.` operator.
### Field
Field is the name of a field to load from the source and its default value.
The type loaded must match the type of the default value.
Otherwise an error is recorded and the default value is used.
```js
sideload.field(f string, v interface{})
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Order
Order is a list of paths that indicate the hierarchical order.
The paths are relative to the source and can have template markers like `{{.tagname}}` that will be replaced with the tag value of the point.
The paths are then searched in order for the keys and the first value that is found is used.
This allows for values to be overridden based on a hierarchy of tags.
```js
sideload.order(order ...string)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Quiet
Suppress all error logging events from this node.
```js
sideload.quiet()
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Source
Source for the data. Currently only `file://`-based sources are supported.
```js
sideload.source(value string)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Tag
Tag is the name of a tag to load from the source and its default value.
The loaded values must be strings; otherwise an error is recorded and the default value is used.
```js
sideload.tag(t string, v string)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
## Chaining Methods
Chaining methods create a new node in the pipeline as a child of the calling node.
They do not modify the calling node.
Chaining methods are marked using the `|` operator.
### Alert
Create an alert node, which can trigger alerts.
```js
sideload|alert()
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Barrier
Create a new Barrier node that emits a BarrierMessage periodically.
One BarrierMessage will be emitted every period duration.
```js
sideload|barrier()
```
Returns: [BarrierNode](/kapacitor/v1.5/nodes/barrier_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Bottom
Select the bottom `num` points for `field` and sort by any extra tags or fields.
```js
sideload|bottom(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### ChangeDetect
Create a new node that only emits new points if different from the previous point.
```js
sideload|changeDetect(field string)
```
Returns: [ChangeDetectNode](/kapacitor/v1.5/nodes/change_detect_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Combine
Combine this node with itself. The data is combined on timestamp.
```js
sideload|combine(expressions ...ast.LambdaNode)
```
Returns: [CombineNode](/kapacitor/v1.5/nodes/combine_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Count
Count the number of points.
```js
sideload|count(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### CumulativeSum
Compute a cumulative sum of each point that is received.
A point is emitted for every point collected.
```js
sideload|cumulativeSum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Deadman
Helper function for creating an alert on low throughput, a.k.a. deadman's switch.
- Threshold: trigger alert if throughput drops below threshold in points/interval.
- Interval: how often to check the throughput.
- Expressions: optional list of expressions to also evaluate. Useful for time of day alerting.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
//Do normal processing of data
data...
```
The above is equivalent to this example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|stats(10s)
.align()
|derivative('emitted')
.unit(10s)
.nonNegative()
|alert()
.id('node \'stream0\' in task \'{{ .TaskName }}\'')
.message('{{ .ID }} is {{ if eq .Level "OK" }}alive{{ else }}dead{{ end }}: {{ index .Fields "emitted" | printf "%0.3f" }} points/10s.')
.crit(lambda: "emitted" <= 100.0)
//Do normal processing of data
data...
```
The `id` and `message` alert properties can be configured globally via the 'deadman' configuration section.
Since the [AlertNode](/kapacitor/v1.5/nodes/alert_node/) is the last piece, it can be further modified as usual.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
.slack()
.channel('#dead_tasks')
//Do normal processing of data
data...
```
You can specify additional lambda expressions to further constrain when the deadman's switch is triggered.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
// Only trigger the alert if the time of day is between 8am-5pm.
data
|deadman(100.0, 10s, lambda: hour("time") >= 8 AND hour("time") <= 17)
//Do normal processing of data
data...
```
```js
sideload|deadman(threshold float64, interval time.Duration, expr ...ast.LambdaNode)
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Default
Create a node that can set defaults for missing tags or fields.
```js
sideload|default()
```
Returns: [DefaultNode](/kapacitor/v1.5/nodes/default_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Delete
Create a node that can delete tags or fields.
```js
sideload|delete()
```
Returns: [DeleteNode](/kapacitor/v1.5/nodes/delete_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Derivative
Create a new node that computes the derivative of adjacent points.
```js
sideload|derivative(field string)
```
Returns: [DerivativeNode](/kapacitor/v1.5/nodes/derivative_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Difference
Compute the difference between points independent of elapsed time.
```js
sideload|difference(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Distinct
Produce a batch of only the distinct points.
```js
sideload|distinct(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Ec2Autoscale
Create a node that can trigger autoscale events for an EC2 autoscale group.
```js
sideload|ec2Autoscale()
```
Returns: [Ec2AutoscaleNode](/kapacitor/v1.5/nodes/ec2_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Elapsed
Compute the elapsed time between points.
```js
sideload|elapsed(field string, unit time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Eval
Create an eval node that will evaluate the given transformation function for each data point.
A list of expressions may be provided and will be evaluated in the order they are given.
The results are available to later expressions.
```js
sideload|eval(expressions ...ast.LambdaNode)
```
Returns: [EvalNode](/kapacitor/v1.5/nodes/eval_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### First
Select the first point.
```js
sideload|first(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Flatten
Flatten points with similar times into a single point.
```js
sideload|flatten()
```
Returns: [FlattenNode](/kapacitor/v1.5/nodes/flatten_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### GroupBy
Group the data by a set of tags.
Pass the literal `*` to group by all dimensions.
Example:
```js
|groupBy(*)
```
```js
sideload|groupBy(tag ...interface{})
```
Returns: [GroupByNode](/kapacitor/v1.5/nodes/group_by_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWinters
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
```js
sideload|holtWinters(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWintersWithFit
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
This method also outputs all the points used to fit the data in addition to the forecasted data.
```js
sideload|holtWintersWithFit(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpOut
Create an HTTP output node that caches the most recent data it has received.
The cached data is available at the given endpoint.
The endpoint is the relative path from the API endpoint of the running task.
For example, if the task endpoint is at `/kapacitor/v1/tasks/<task_id>` and endpoint is
`top10`, then the data can be requested from `/kapacitor/v1/tasks/<task_id>/top10`.
```js
sideload|httpOut(endpoint string)
```
Returns: [HTTPOutNode](/kapacitor/v1.5/nodes/http_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpPost
Creates an HTTP POST node that POSTs received data to the provided HTTP endpoint.
HttpPost expects 0 or 1 arguments. If 0 arguments are provided, you must specify an
endpoint property method.
```js
sideload|httpPost(url ...string)
```
Returns: [HTTPPostNode](/kapacitor/v1.5/nodes/http_post_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### InfluxDBOut
Create an InfluxDB output node that will store the incoming data into InfluxDB.
```js
sideload|influxDBOut()
```
Returns: [InfluxDBOutNode](/kapacitor/v1.5/nodes/influx_d_b_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Join
Join this node with other nodes. The data is joined on timestamp.
```js
sideload|join(others ...Node)
```
Returns: [JoinNode](/kapacitor/v1.5/nodes/join_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### K8sAutoscale
Create a node that can trigger autoscale events for a kubernetes cluster.
```js
sideload|k8sAutoscale()
```
Returns: [K8sAutoscaleNode](/kapacitor/v1.5/nodes/k8s_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### KapacitorLoopback
Create a Kapacitor loopback node that sends data back into Kapacitor as a stream.
```js
sideload|kapacitorLoopback()
```
Returns: [KapacitorLoopbackNode](/kapacitor/v1.5/nodes/kapacitor_loopback_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Last
Select the last point.
```js
sideload|last(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Log
Create a node that logs all data it receives.
```js
sideload|log()
```
Returns: [LogNode](/kapacitor/v1.5/nodes/log_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Max
Select the maximum point.
```js
sideload|max(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mean
Compute the mean of the data.
```js
sideload|mean(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Median
Compute the median of the data.
> **Note:** This method is not a selector.
If you want the median point, use `.percentile(field, 50.0)`.
```js
sideload|median(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Min
Select the minimum point.
```js
sideload|min(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mode
Compute the mode of the data.
```js
sideload|mode(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### MovingAverage
Compute a moving average of the last window points.
No points are emitted until the window is full.
```js
sideload|movingAverage(field string, window int64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
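For example, a sketch that smooths a hypothetical `usage_user` field over the last ten points:

```js
stream
    |from()
        .measurement('cpu')
    // Emit the average of the last 10 points;
    // nothing is emitted until 10 points have arrived
    |movingAverage('usage_user', 10)
```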
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Percentile
Select a point at the given percentile. This is a selector function; no interpolation between points is performed.
```js
sideload|percentile(field string, percentile float64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
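For example, selecting the point at the 90th percentile of a hypothetical `usage_user` field in each one-minute window:

```js
stream
    |from()
        .measurement('cpu')
    |window()
        .period(1m)
        .every(1m)
    // Selector: emits an actual point from the window, with no interpolation
    |percentile('usage_user', 90.0)
```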
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sample
Create a new node that samples the incoming points or batches.
One point will be emitted every count or duration specified.
```js
sideload|sample(rate interface{})
```
Returns: [SampleNode](/kapacitor/v1.5/nodes/sample_node/)
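The `rate` argument accepts either an integer count or a duration. For example (the measurement name is illustrative):

```js
stream
    |from()
        .measurement('cpu')
    // Keep one point out of every 10
    |sample(10)
```

Passing a duration instead, such as `|sample(10s)`, emits at most one point per 10-second interval.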
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Shift
Create a new node that shifts the incoming points or batches in time.
```js
sideload|shift(shift time.Duration)
```
Returns: [ShiftNode](/kapacitor/v1.5/nodes/shift_node/)
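A negative duration shifts points into the past and a positive duration shifts them into the future. For example:

```js
stream
    |from()
        .measurement('cpu')
    // Shift each point 10 seconds into the past
    |shift(-10s)
```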
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sideload
Create a node that can load data from external sources.
```js
sideload|sideload()
```
Returns: [SideloadNode](/kapacitor/v1.5/nodes/sideload_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Spread
Compute the difference between `min` and `max` points.
```js
sideload|spread(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateCount
Create a node that tracks the number of consecutive points in a given state.
```js
sideload|stateCount(expression ast.LambdaNode)
```
Returns: [StateCountNode](/kapacitor/v1.5/nodes/state_count_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateDuration
Create a node that tracks duration in a given state.
```js
sideload|stateDuration(expression ast.LambdaNode)
```
Returns: [StateDurationNode](/kapacitor/v1.5/nodes/state_duration_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stats
Create a new stream of data that contains the internal statistics of the node.
The interval represents how often to emit the statistics based on real time.
This means the interval time is independent of the times of the data points the source node is receiving.
```js
sideload|stats(interval time.Duration)
```
Returns: [StatsNode](/kapacitor/v1.5/nodes/stats_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stddev
Compute the standard deviation.
```js
sideload|stddev(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sum
Compute the sum of all values.
```js
sideload|sum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### SwarmAutoscale
Create a node that can trigger autoscale events for a Docker swarm cluster.
```js
sideload|swarmAutoscale()
```
Returns: [SwarmAutoscaleNode](/kapacitor/v1.5/nodes/swarm_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Top
Select the top `num` points for `field` and sort by any extra tags or fields.
```js
sideload|top(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Union
Perform the union of this node and all other given nodes.
```js
sideload|union(node ...Node)
```
Returns: [UnionNode](/kapacitor/v1.5/nodes/union_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Where
Create a new node that filters the data stream by a given expression.
```js
sideload|where(expression ast.LambdaNode)
```
Returns: [WhereNode](/kapacitor/v1.5/nodes/where_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Window
Create a new node that windows the stream by time.
NOTE: Window can only be applied to stream edges.
```js
sideload|window()
```
Returns: [WindowNode](/kapacitor/v1.5/nodes/window_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>

---
title: StateCountNode
description: StateCountNode computes the number of consecutive points in a given state (defined using a lambda expression).
note: Auto generated by tickdoc
menu:
kapacitor_1_5_ref:
name: StateCountNode
identifier: state_count_node
weight: 260
parent: nodes
---
The `stateCount` node computes the number of consecutive points in a given state.
The state is defined via a lambda expression. For each consecutive point for
which the expression evaluates as `true`, the state count is incremented.
When a point evaluates as `false`, the state count is reset.
The state count is added as an additional `int64` field to each point.
If the expression evaluates as `false`, the value will be `-1`.
If the expression generates an error during evaluation, the point is discarded, and does not affect the state count.
Example:
```js
stream
|from()
.measurement('cpu')
|where(lambda: "cpu" == 'cpu-total')
|groupBy('host')
|stateCount(lambda: "usage_idle" <= 10)
|alert()
// Warn after 1 point
.warn(lambda: "state_count" >= 1)
// Critical after 5 points
.crit(lambda: "state_count" >= 5)
```
### Constructor
| Chaining Method | Description |
|:---------|:---------|
| **stateCount&nbsp;(&nbsp;`expression`&nbsp;`ast.LambdaNode`)** | Create a node that tracks the number of consecutive points in a given state. |
### Property Methods
| Setters | Description |
|:---|:---|
| **[as](#as)&nbsp;(&nbsp;`value`&nbsp;`string`)** | The new name of the resulting count field. Default: 'state_count' |
| **[quiet](#quiet)&nbsp;(&nbsp;)** | Suppress all error logging events from this node. |
### Chaining Methods
[Alert](#alert),
[Barrier](#barrier),
[Bottom](#bottom),
[ChangeDetect](#changedetect),
[Combine](#combine),
[Count](#count),
[CumulativeSum](#cumulativesum),
[Deadman](#deadman),
[Default](#default),
[Delete](#delete),
[Derivative](#derivative),
[Difference](#difference),
[Distinct](#distinct),
[Ec2Autoscale](#ec2autoscale),
[Elapsed](#elapsed),
[Eval](#eval),
[First](#first),
[Flatten](#flatten),
[GroupBy](#groupby),
[HoltWinters](#holtwinters),
[HoltWintersWithFit](#holtwinterswithfit),
[HttpOut](#httpout),
[HttpPost](#httppost),
[InfluxDBOut](#influxdbout),
[Join](#join),
[K8sAutoscale](#k8sautoscale),
[KapacitorLoopback](#kapacitorloopback),
[Last](#last),
[Log](#log),
[Max](#max),
[Mean](#mean),
[Median](#median),
[Min](#min),
[Mode](#mode),
[MovingAverage](#movingaverage),
[Percentile](#percentile),
[Sample](#sample),
[Shift](#shift),
[Sideload](#sideload),
[Spread](#spread),
[StateCount](#statecount),
[StateDuration](#stateduration),
[Stats](#stats),
[Stddev](#stddev),
[Sum](#sum),
[SwarmAutoscale](#swarmautoscale),
[Top](#top),
[Union](#union),
[Where](#where),
[Window](#window)
---
## Properties
Property methods modify state on the calling node.
They do not add another node to the pipeline, and always return a reference to the calling node.
Property methods are marked using the `.` operator.
### As
The new name of the resulting count field.
Default: 'state_count'
```js
stateCount.as(value string)
```
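For example, a sketch that renames the count field so downstream alerts can reference a clearer name (the chosen name is illustrative):

```js
stream
    |from()
        .measurement('cpu')
    |stateCount(lambda: "usage_idle" <= 10)
        // Write the count to 'low_idle_count' instead of the default 'state_count'
        .as('low_idle_count')
```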
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Quiet
Suppress all error logging events from this node.
```js
stateCount.quiet()
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
## Chaining Methods
Chaining methods create a new node in the pipeline as a child of the calling node.
They do not modify the calling node.
Chaining methods are marked using the `|` operator.
### Alert
Create an alert node, which can trigger alerts.
```js
stateCount|alert()
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Barrier
Create a new Barrier node that emits a BarrierMessage periodically.
One BarrierMessage will be emitted every period duration.
```js
stateCount|barrier()
```
Returns: [BarrierNode](/kapacitor/v1.5/nodes/barrier_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Bottom
Select the bottom `num` points for `field` and sort by any extra tags or fields.
```js
stateCount|bottom(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### ChangeDetect
Create a new node that only emits new points if different from the previous point.
```js
stateCount|changeDetect(field string)
```
Returns: [ChangeDetectNode](/kapacitor/v1.5/nodes/change_detect_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Combine
Combine this node with itself. The data is combined on timestamp.
```js
stateCount|combine(expressions ...ast.LambdaNode)
```
Returns: [CombineNode](/kapacitor/v1.5/nodes/combine_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Count
Count the number of points.
```js
stateCount|count(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### CumulativeSum
Compute a cumulative sum of each point that is received.
A point is emitted for every point collected.
```js
stateCount|cumulativeSum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Deadman
Helper function for creating an alert on low throughput, a.k.a. deadman's switch.
- Threshold: trigger alert if throughput drops below threshold in points/interval.
- Interval: how often to check the throughput.
- Expressions: optional list of expressions to also evaluate. Useful for time of day alerting.
Example:
```js
var data = stream
|from()...
// Trigger a critical alert if the throughput drops below 100 points per 10s, checked every 10s.
data
|deadman(100.0, 10s)
//Do normal processing of data
data...
```
The above is equivalent to this example:
```js
var data = stream
|from()...
// Trigger a critical alert if the throughput drops below 100 points per 10s, checked every 10s.
data
|stats(10s)
.align()
|derivative('emitted')
.unit(10s)
.nonNegative()
|alert()
.id('node \'stream0\' in task \'{{ .TaskName }}\'')
.message('{{ .ID }} is {{ if eq .Level "OK" }}alive{{ else }}dead{{ end }}: {{ index .Fields "emitted" | printf "%0.3f" }} points/10s.')
.crit(lambda: "emitted" <= 100.0)
//Do normal processing of data
data...
```
The `id` and `message` alert properties can be configured globally via the 'deadman' configuration section.
Since the [AlertNode](/kapacitor/v1.5/nodes/alert_node/) is the last piece, it can be further modified as usual.
Example:
```js
var data = stream
|from()...
// Trigger a critical alert if the throughput drops below 100 points per 10s, checked every 10s.
data
|deadman(100.0, 10s)
.slack()
.channel('#dead_tasks')
//Do normal processing of data
data...
```
You can specify additional lambda expressions to further constrain when the deadman's switch is triggered.
Example:
```js
var data = stream
|from()...
// Trigger a critical alert if the throughput drops below 100 points per 10s, checked every 10s.
// Only trigger the alert if the time of day is between 8am-5pm.
data
|deadman(100.0, 10s, lambda: hour("time") >= 8 AND hour("time") <= 17)
//Do normal processing of data
data...
```
```js
stateCount|deadman(threshold float64, interval time.Duration, expr ...ast.LambdaNode)
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Default
Create a node that can set defaults for missing tags or fields.
```js
stateCount|default()
```
Returns: [DefaultNode](/kapacitor/v1.5/nodes/default_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Delete
Create a node that can delete tags or fields.
```js
stateCount|delete()
```
Returns: [DeleteNode](/kapacitor/v1.5/nodes/delete_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Derivative
Create a new node that computes the derivative of adjacent points.
```js
stateCount|derivative(field string)
```
Returns: [DerivativeNode](/kapacitor/v1.5/nodes/derivative_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Difference
Compute the difference between points independent of elapsed time.
```js
stateCount|difference(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Distinct
Produce a batch of only the distinct points.
```js
stateCount|distinct(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Ec2Autoscale
Create a node that can trigger autoscale events for an EC2 autoscale group.
```js
stateCount|ec2Autoscale()
```
Returns: [Ec2AutoscaleNode](/kapacitor/v1.5/nodes/ec2_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Elapsed
Compute the elapsed time between points.
```js
stateCount|elapsed(field string, unit time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
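For example, computing the time between consecutive points of a hypothetical `value` field, expressed in milliseconds:

```js
stream
    |from()
        .measurement('requests')
    // Adds an 'elapsed' field holding the time between adjacent points,
    // measured in units of 1ms
    |elapsed('value', 1ms)
```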
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Eval
Create an eval node that evaluates the given transformation functions against each data point.
A list of expressions may be provided and will be evaluated in the order they are given.
The results are available to later expressions.
```js
stateCount|eval(expressions ...ast.LambdaNode)
```
Returns: [EvalNode](/kapacitor/v1.5/nodes/eval_node/)
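For example, a sketch that derives new fields from existing ones (the field names are illustrative); each expression can reference the results of earlier ones:

```js
stream
    |from()
        .measurement('cpu')
    |eval(lambda: "usage_user" + "usage_system", lambda: "total" / 100.0)
        // Name the results of the two expressions, in order
        .as('total', 'fraction')
```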
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### First
Select the first point.
```js
stateCount|first(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Flatten
Flatten points with similar times into a single point.
```js
stateCount|flatten()
```
Returns: [FlattenNode](/kapacitor/v1.5/nodes/flatten_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### GroupBy
Group the data by a set of tags.
Pass the literal `*` to group by all dimensions.
Example:
```js
|groupBy(*)
```
```js
stateCount|groupBy(tag ...interface{})
```
Returns: [GroupByNode](/kapacitor/v1.5/nodes/group_by_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWinters
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
```js
stateCount|holtWinters(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWintersWithFit
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
This method also outputs all the points used to fit the data in addition to the forecasted data.
```js
stateCount|holtWintersWithFit(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpOut
Create an HTTP output node that caches the most recent data it has received.
The cached data is available at the given endpoint.
The endpoint is the relative path from the API endpoint of the running task.
For example, if the task endpoint is at `/kapacitor/v1/tasks/<task_id>` and the endpoint is
`top10`, then the data can be requested from `/kapacitor/v1/tasks/<task_id>/top10`.
```js
stateCount|httpOut(endpoint string)
```
Returns: [HTTPOutNode](/kapacitor/v1.5/nodes/http_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpPost
Creates an HTTP POST node that POSTs received data to the provided HTTP endpoint.
HttpPost expects 0 or 1 arguments. If 0 arguments are provided, you must specify an
endpoint property method.
```js
stateCount|httpPost(url ...string)
```
Returns: [HTTPPostNode](/kapacitor/v1.5/nodes/http_post_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### InfluxDBOut
Create an InfluxDB output node that stores the incoming data in InfluxDB.
```js
stateCount|influxDBOut()
```
Returns: [InfluxDBOutNode](/kapacitor/v1.5/nodes/influx_d_b_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Join
Join this node with other nodes. The data is joined on timestamp.
```js
stateCount|join(others ...Node)
```
Returns: [JoinNode](/kapacitor/v1.5/nodes/join_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### K8sAutoscale
Create a node that can trigger autoscale events for a Kubernetes cluster.
```js
stateCount|k8sAutoscale()
```
Returns: [K8sAutoscaleNode](/kapacitor/v1.5/nodes/k8s_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### KapacitorLoopback
Create a Kapacitor loopback node that sends data back into Kapacitor as a stream.
```js
stateCount|kapacitorLoopback()
```
Returns: [KapacitorLoopbackNode](/kapacitor/v1.5/nodes/kapacitor_loopback_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Last
Select the last point.
```js
stateCount|last(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Log
Create a node that logs all data it receives.
```js
stateCount|log()
```
Returns: [LogNode](/kapacitor/v1.5/nodes/log_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Max
Select the maximum point.
```js
stateCount|max(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mean
Compute the mean of the data.
```js
stateCount|mean(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Median
Compute the median of the data.
> **Note:** This method is not a selector.
If you want the median point, use `.percentile(field, 50.0)`.
```js
stateCount|median(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Min
Select the minimum point.
```js
stateCount|min(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mode
Compute the mode of the data.
```js
stateCount|mode(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### MovingAverage
Compute a moving average of the last window points.
No points are emitted until the window is full.
```js
stateCount|movingAverage(field string, window int64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Percentile
Select a point at the given percentile. This is a selector function; no interpolation between points is performed.
```js
stateCount|percentile(field string, percentile float64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sample
Create a new node that samples the incoming points or batches.
One point will be emitted every count or duration specified.
```js
stateCount|sample(rate interface{})
```
Returns: [SampleNode](/kapacitor/v1.5/nodes/sample_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Shift
Create a new node that shifts the incoming points or batches in time.
```js
stateCount|shift(shift time.Duration)
```
Returns: [ShiftNode](/kapacitor/v1.5/nodes/shift_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sideload
Create a node that can load data from external sources.
```js
stateCount|sideload()
```
Returns: [SideloadNode](/kapacitor/v1.5/nodes/sideload_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Spread
Compute the difference between `min` and `max` points.
```js
stateCount|spread(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateCount
Create a node that tracks the number of consecutive points in a given state.
```js
stateCount|stateCount(expression ast.LambdaNode)
```
Returns: [StateCountNode](/kapacitor/v1.5/nodes/state_count_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateDuration
Create a node that tracks duration in a given state.
```js
stateCount|stateDuration(expression ast.LambdaNode)
```
Returns: [StateDurationNode](/kapacitor/v1.5/nodes/state_duration_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stats
Create a new stream of data that contains the internal statistics of the node.
The interval represents how often to emit the statistics based on real time.
This means the interval time is independent of the times of the data points the source node is receiving.
```js
stateCount|stats(interval time.Duration)
```
Returns: [StatsNode](/kapacitor/v1.5/nodes/stats_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stddev
Compute the standard deviation.
```js
stateCount|stddev(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sum
Compute the sum of all values.
```js
stateCount|sum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### SwarmAutoscale
Create a node that can trigger autoscale events for a Docker swarm cluster.
```js
stateCount|swarmAutoscale()
```
Returns: [SwarmAutoscaleNode](/kapacitor/v1.5/nodes/swarm_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Top
Select the top `num` points for `field` and sort by any extra tags or fields.
```js
stateCount|top(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Union
Perform the union of this node and all other given nodes.
```js
stateCount|union(node ...Node)
```
Returns: [UnionNode](/kapacitor/v1.5/nodes/union_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Where
Create a new node that filters the data stream by a given expression.
```js
stateCount|where(expression ast.LambdaNode)
```
Returns: [WhereNode](/kapacitor/v1.5/nodes/where_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Window
Create a new node that windows the stream by time.
NOTE: Window can only be applied to stream edges.
```js
stateCount|window()
```
Returns: [WindowNode](/kapacitor/v1.5/nodes/window_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
---
title: StateDurationNode
description: StateDurationNode computes the duration of a given state (defined using a lambda expression).
note: Auto generated by tickdoc
menu:
kapacitor_1_5_ref:
name: StateDurationNode
identifier: state_duration_node
weight: 270
parent: nodes
---
The `stateDuration` node computes the duration of a given state.
The state is defined via a lambda expression. For each consecutive point for
which the expression evaluates as `true`, the state duration will be
incremented by the duration between points. When a point evaluates as `false`,
the state duration is reset.
The state duration will be added as an additional `float64` field to each point.
If the expression evaluates as false, the value will be `-1`.
If the expression generates an error during evaluation, the point is discarded, and does not affect the state duration.
Example:
```js
stream
|from()
.measurement('cpu')
|where(lambda: "cpu" == 'cpu-total')
|groupBy('host')
|stateDuration(lambda: "usage_idle" <= 10)
.unit(1m)
|alert()
// Warn after 1 minute
.warn(lambda: "state_duration" >= 1)
// Critical after 5 minutes
.crit(lambda: "state_duration" >= 5)
```
Note that as the first point in the given state has no previous point, its
state duration will be 0.
> Currently, the StateDurationNode only emits a point when it receives data.
> It does not assume the previous evaluation if no data is received at the "expected"
> interval or data resolution.
> If no data is sent, the StateDurationNode cannot evaluate the state and cannot calculate a duration.
>
> More information about this is available in this [comment thread](https://github.com/influxdata/kapacitor/issues/1757) on GitHub.
### Constructor
| Chaining Method | Description |
|:---------|:---------|
| **stateDuration&nbsp;(&nbsp;`expression`&nbsp;`ast.LambdaNode`)** | Create a node that tracks duration in a given state. |
### Property Methods
| Setters | Description |
|:---|:---|
| **[as](#as)&nbsp;(&nbsp;`value`&nbsp;`string`)** | The new name of the resulting duration field. Default: 'state_duration' |
| **[quiet](#quiet)&nbsp;(&nbsp;)** | Suppress all error logging events from this node. |
| **[unit](#unit)&nbsp;(&nbsp;`value`&nbsp;`time.Duration`)** | The time unit of the resulting duration value. Default: 1s. |
### Chaining Methods
[Alert](#alert),
[Barrier](#barrier),
[Bottom](#bottom),
[ChangeDetect](#changedetect),
[Combine](#combine),
[Count](#count),
[CumulativeSum](#cumulativesum),
[Deadman](#deadman),
[Default](#default),
[Delete](#delete),
[Derivative](#derivative),
[Difference](#difference),
[Distinct](#distinct),
[Ec2Autoscale](#ec2autoscale),
[Elapsed](#elapsed),
[Eval](#eval),
[First](#first),
[Flatten](#flatten),
[GroupBy](#groupby),
[HoltWinters](#holtwinters),
[HoltWintersWithFit](#holtwinterswithfit),
[HttpOut](#httpout),
[HttpPost](#httppost),
[InfluxDBOut](#influxdbout),
[Join](#join),
[K8sAutoscale](#k8sautoscale),
[KapacitorLoopback](#kapacitorloopback),
[Last](#last),
[Log](#log),
[Max](#max),
[Mean](#mean),
[Median](#median),
[Min](#min),
[Mode](#mode),
[MovingAverage](#movingaverage),
[Percentile](#percentile),
[Sample](#sample),
[Shift](#shift),
[Sideload](#sideload),
[Spread](#spread),
[StateCount](#statecount),
[StateDuration](#stateduration),
[Stats](#stats),
[Stddev](#stddev),
[Sum](#sum),
[SwarmAutoscale](#swarmautoscale),
[Top](#top),
[Union](#union),
[Where](#where),
[Window](#window)
---
## Properties
Property methods modify state on the calling node.
They do not add another node to the pipeline, and always return a reference to the calling node.
Property methods are marked using the `.` operator.
### As
The new name of the resulting duration field.
Default: 'state_duration'
```js
stateDuration.as(value string)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Quiet
Suppress all error logging events from this node.
```js
stateDuration.quiet()
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Unit
The time unit of the resulting duration value.
Default: 1s.
```js
stateDuration.unit(value time.Duration)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
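As a minimal sketch, both property methods can be combined on a single `stateDuration` node (the measurement and field names here are illustrative, not part of this reference):

```js
stream
    |from()
        .measurement('cpu')
    |stateDuration(lambda: "usage_idle" <= 10)
        // Report the duration in minutes rather than the default seconds
        .unit(1m)
        // Store the duration under a custom field name
        .as('low_idle_duration')
```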
## Chaining Methods
Chaining methods create a new node in the pipeline as a child of the calling node.
They do not modify the calling node.
Chaining methods are marked using the `|` operator.
### Alert
Create an alert node, which can trigger alerts.
```js
stateDuration|alert()
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Barrier
Create a new Barrier node that emits a BarrierMessage periodically.
One BarrierMessage will be emitted every period duration.
```js
stateDuration|barrier()
```
Returns: [BarrierNode](/kapacitor/v1.5/nodes/barrier_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Bottom
Select the bottom `num` points for `field` and sort by any extra tags or fields.
```js
stateDuration|bottom(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### ChangeDetect
Create a new node that only emits new points if different from the previous point.
```js
stateDuration|changeDetect(field string)
```
Returns: [ChangeDetectNode](/kapacitor/v1.5/nodes/change_detect_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Combine
Combine this node with itself. The data is combined on timestamp.
```js
stateDuration|combine(expressions ...ast.LambdaNode)
```
Returns: [CombineNode](/kapacitor/v1.5/nodes/combine_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Count
Count the number of points.
```js
stateDuration|count(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### CumulativeSum
Compute a cumulative sum of each point that is received.
A point is emitted for every point collected.
```js
stateDuration|cumulativeSum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Deadman
Helper function for creating an alert on low throughput, a.k.a. deadman's switch.
- Threshold: trigger alert if throughput drops below threshold in points/interval.
- Interval: how often to check the throughput.
- Expressions: optional list of expressions to also evaluate. Useful for time of day alerting.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
//Do normal processing of data
data...
```
The above is equivalent to this example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|stats(10s)
.align()
|derivative('emitted')
.unit(10s)
.nonNegative()
|alert()
.id('node \'stream0\' in task \'{{ .TaskName }}\'')
.message('{{ .ID }} is {{ if eq .Level "OK" }}alive{{ else }}dead{{ end }}: {{ index .Fields "emitted" | printf "%0.3f" }} points/10s.')
.crit(lambda: "emitted" <= 100.0)
//Do normal processing of data
data...
```
The `id` and `message` alert properties can be configured globally via the 'deadman' configuration section.
Since the [AlertNode](/kapacitor/v1.5/nodes/alert_node/) is the last piece it can be further modified as usual.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
.slack()
.channel('#dead_tasks')
//Do normal processing of data
data...
```
You can specify additional lambda expressions to further constrain when the deadman's switch is triggered.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
// Only trigger the alert if the time of day is between 8am-5pm.
data
|deadman(100.0, 10s, lambda: hour("time") >= 8 AND hour("time") <= 17)
//Do normal processing of data
data...
```
```js
stateDuration|deadman(threshold float64, interval time.Duration, expr ...ast.LambdaNode)
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Default
Create a node that can set defaults for missing tags or fields.
```js
stateDuration|default()
```
Returns: [DefaultNode](/kapacitor/v1.5/nodes/default_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Delete
Create a node that can delete tags or fields.
```js
stateDuration|delete()
```
Returns: [DeleteNode](/kapacitor/v1.5/nodes/delete_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Derivative
Create a new node that computes the derivative of adjacent points.
```js
stateDuration|derivative(field string)
```
Returns: [DerivativeNode](/kapacitor/v1.5/nodes/derivative_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Difference
Compute the difference between points independent of elapsed time.
```js
stateDuration|difference(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Distinct
Produce a batch containing only the distinct points.
```js
stateDuration|distinct(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Ec2Autoscale
Create a node that can trigger autoscale events for an EC2 autoscale group.
```js
stateDuration|ec2Autoscale()
```
Returns: [Ec2AutoscaleNode](/kapacitor/v1.5/nodes/ec2_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Elapsed
Compute the elapsed time between points.
```js
stateDuration|elapsed(field string, unit time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Eval
Create an eval node that evaluates the given transformation expressions for each data point.
A list of expressions may be provided; they are evaluated in the order given,
and the results of earlier expressions are available to later ones.
```js
stateDuration|eval(expressions ...ast.LambdaNode)
```
Returns: [EvalNode](/kapacitor/v1.5/nodes/eval_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### First
Select the first point.
```js
stateDuration|first(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Flatten
Flatten points with similar times into a single point.
```js
stateDuration|flatten()
```
Returns: [FlattenNode](/kapacitor/v1.5/nodes/flatten_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### GroupBy
Group the data by a set of tags.
Pass the literal `*` to group by all dimensions.
Example:
```js
|groupBy(*)
```
```js
stateDuration|groupBy(tag ...interface{})
```
Returns: [GroupByNode](/kapacitor/v1.5/nodes/group_by_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWinters
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
```js
stateDuration|holtWinters(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWintersWithFit
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
This method also outputs all the points used to fit the data in addition to the forecasted data.
```js
stateDuration|holtWintersWithFit(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpOut
Create an HTTP output node that caches the most recent data it has received.
The cached data is available at the given endpoint.
The endpoint is the relative path from the API endpoint of the running task.
For example, if the task endpoint is at `/kapacitor/v1/tasks/<task_id>` and endpoint is
`top10`, then the data can be requested from `/kapacitor/v1/tasks/<task_id>/top10`.
```js
stateDuration|httpOut(endpoint string)
```
Returns: [HTTPOutNode](/kapacitor/v1.5/nodes/http_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
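For example, a task could cache its most recent durations and expose them over the HTTP API (the endpoint name `state` and task ID `cpu_state` below are illustrative):

```js
stateDuration
    |httpOut('state')
// With a task ID of 'cpu_state', the cached data is available at
// /kapacitor/v1/tasks/cpu_state/state
```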
### HttpPost
Create an HTTP POST node that POSTs received data to the provided HTTP endpoint.
HttpPost expects 0 or 1 arguments. If 0 arguments are provided, you must specify an
endpoint property method.
```js
stateDuration|httpPost(url ...string)
```
Returns: [HTTPPostNode](/kapacitor/v1.5/nodes/http_post_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### InfluxDBOut
Create an InfluxDB output node that will store the incoming data into InfluxDB.
```js
stateDuration|influxDBOut()
```
Returns: [InfluxDBOutNode](/kapacitor/v1.5/nodes/influx_d_b_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Join
Join this node with other nodes. The data is joined on timestamp.
```js
stateDuration|join(others ...Node)
```
Returns: [JoinNode](/kapacitor/v1.5/nodes/join_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### K8sAutoscale
Create a node that can trigger autoscale events for a kubernetes cluster.
```js
stateDuration|k8sAutoscale()
```
Returns: [K8sAutoscaleNode](/kapacitor/v1.5/nodes/k8s_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### KapacitorLoopback
Create a Kapacitor loopback node that sends data back into Kapacitor as a stream.
```js
stateDuration|kapacitorLoopback()
```
Returns: [KapacitorLoopbackNode](/kapacitor/v1.5/nodes/kapacitor_loopback_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Last
Select the last point.
```js
stateDuration|last(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Log
Create a node that logs all data it receives.
```js
stateDuration|log()
```
Returns: [LogNode](/kapacitor/v1.5/nodes/log_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Max
Select the maximum point.
```js
stateDuration|max(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mean
Compute the mean of the data.
```js
stateDuration|mean(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Median
Compute the median of the data.
> **Note:** This method is not a selector.
If you want the median point, use `.percentile(field, 50.0)`.
```js
stateDuration|median(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Min
Select the minimum point.
```js
stateDuration|min(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mode
Compute the mode of the data.
```js
stateDuration|mode(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### MovingAverage
Compute a moving average of the last window points.
No points are emitted until the window is full.
```js
stateDuration|movingAverage(field string, window int64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Percentile
Select a point at the given percentile. This is a selector function; no interpolation between points is performed.
```js
stateDuration|percentile(field string, percentile float64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sample
Create a new node that samples the incoming points or batches.
One point will be emitted every count or duration specified.
```js
stateDuration|sample(rate interface{})
```
Returns: [SampleNode](/kapacitor/v1.5/nodes/sample_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Shift
Create a new node that shifts the incoming points or batches in time.
```js
stateDuration|shift(shift time.Duration)
```
Returns: [ShiftNode](/kapacitor/v1.5/nodes/shift_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sideload
Create a node that can load data from external sources.
```js
stateDuration|sideload()
```
Returns: [SideloadNode](/kapacitor/v1.5/nodes/sideload_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Spread
Compute the difference between `min` and `max` points.
```js
stateDuration|spread(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateCount
Create a node that tracks the number of consecutive points in a given state.
```js
stateDuration|stateCount(expression ast.LambdaNode)
```
Returns: [StateCountNode](/kapacitor/v1.5/nodes/state_count_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateDuration
Create a node that tracks duration in a given state.
```js
stateDuration|stateDuration(expression ast.LambdaNode)
```
Returns: [StateDurationNode](/kapacitor/v1.5/nodes/state_duration_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stats
Create a new stream of data that contains the internal statistics of the node.
The interval represents how often to emit the statistics based on real time.
This means the interval time is independent of the times of the data points the source node is receiving.
```js
stateDuration|stats(interval time.Duration)
```
Returns: [StatsNode](/kapacitor/v1.5/nodes/stats_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stddev
Compute the standard deviation.
```js
stateDuration|stddev(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sum
Compute the sum of all values.
```js
stateDuration|sum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### SwarmAutoscale
Create a node that can trigger autoscale events for a Docker swarm cluster.
```js
stateDuration|swarmAutoscale()
```
Returns: [SwarmAutoscaleNode](/kapacitor/v1.5/nodes/swarm_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Top
Select the top `num` points for `field` and sort by any extra tags or fields.
```js
stateDuration|top(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Union
Perform the union of this node and all other given nodes.
```js
stateDuration|union(node ...Node)
```
Returns: [UnionNode](/kapacitor/v1.5/nodes/union_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Where
Create a new node that filters the data stream by a given expression.
```js
stateDuration|where(expression ast.LambdaNode)
```
Returns: [WhereNode](/kapacitor/v1.5/nodes/where_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Window
Create a new node that windows the stream by time.
NOTE: Window can only be applied to stream edges.
```js
stateDuration|window()
```
Returns: [WindowNode](/kapacitor/v1.5/nodes/window_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
---
title: StatsNode
description: StatsNode emits internal statistics about another node at a given interval.
note: Auto generated by tickdoc
menu:
kapacitor_1_5_ref:
name: StatsNode
identifier: stats_node
weight: 280
parent: nodes
---
The `stats` node emits internal statistics about another node at a given interval.
The interval represents how often to emit the statistics based on real time.
This means the interval time is independent of the times of the data points the other node is receiving.
As a result the [StatsNode](/kapacitor/v1.5/nodes/stats_node/) is a root node in the task pipeline.
The currently available internal statistics:
* emitted: the number of points or batches this node has sent to its children.
Each stat is available as a field in the data stream.
The stats are grouped according to the original data.
For example, if the source node is grouped by the tag 'host',
then the counts are output per host with the appropriate 'host' tag.
Since it's possible for groups to change when crossing a node, only the emitted groups
are considered.
Example:
```js
var data = stream
|from()...
// Emit statistics every 1 minute and cache them via the HTTP API.
data
|stats(1m)
|httpOut('stats')
// Continue normal processing of the data stream
data...
```
{{% warn %}}
<strong>WARNING:</strong> It is not recommended to join the stats stream with the original data stream.
Since they operate on different clocks you could potentially create a deadlock.
This is a limitation of the current implementation and may be removed in the future.
{{% /warn %}}
### Constructor
| Chaining Method | Description |
|:---------|:---------|
| **stats&nbsp;(&nbsp;`interval`&nbsp;`time.Duration`)** | Create a new stream of data that contains the internal statistics of the node. The interval represents how often to emit the statistics based on real time. This means the interval time is independent of the times of the data points the source node is receiving. |
### Property Methods
| Setters | Description |
|:---|:---|
| **[align](#align)&nbsp;(&nbsp;)** | Round times to the StatsNode.Interval value. |
| **[quiet](#quiet)&nbsp;(&nbsp;)** | Suppress all error logging events from this node. |
### Chaining Methods
[Alert](#alert),
[Barrier](#barrier),
[Bottom](#bottom),
[ChangeDetect](#changedetect),
[Combine](#combine),
[Count](#count),
[CumulativeSum](#cumulativesum),
[Deadman](#deadman),
[Default](#default),
[Delete](#delete),
[Derivative](#derivative),
[Difference](#difference),
[Distinct](#distinct),
[Ec2Autoscale](#ec2autoscale),
[Elapsed](#elapsed),
[Eval](#eval),
[First](#first),
[Flatten](#flatten),
[GroupBy](#groupby),
[HoltWinters](#holtwinters),
[HoltWintersWithFit](#holtwinterswithfit),
[HttpOut](#httpout),
[HttpPost](#httppost),
[InfluxDBOut](#influxdbout),
[Join](#join),
[K8sAutoscale](#k8sautoscale),
[KapacitorLoopback](#kapacitorloopback),
[Last](#last),
[Log](#log),
[Max](#max),
[Mean](#mean),
[Median](#median),
[Min](#min),
[Mode](#mode),
[MovingAverage](#movingaverage),
[Percentile](#percentile),
[Sample](#sample),
[Shift](#shift),
[Sideload](#sideload),
[Spread](#spread),
[StateCount](#statecount),
[StateDuration](#stateduration),
[Stats](#stats),
[Stddev](#stddev),
[Sum](#sum),
[SwarmAutoscale](#swarmautoscale),
[Top](#top),
[Union](#union),
[Where](#where),
[Window](#window)
---
## Properties
Property methods modify state on the calling node.
They do not add another node to the pipeline, and always return a reference to the calling node.
Property methods are marked using the `.` operator.
### Align
Round times to the StatsNode.Interval value.
```js
stats.align()
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
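A minimal sketch of an aligned stats stream (the measurement and endpoint names are illustrative):

```js
stream
    |from()
        .measurement('cpu')
    |stats(30s)
        // Round emission times to the 30s interval boundary
        .align()
    |httpOut('node_stats')
```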
### Quiet
Suppress all error logging events from this node.
```js
stats.quiet()
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
## Chaining Methods
Chaining methods create a new node in the pipeline as a child of the calling node.
They do not modify the calling node.
Chaining methods are marked using the `|` operator.
### Alert
Create an alert node, which can trigger alerts.
```js
stats|alert()
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Barrier
Create a new Barrier node that emits a BarrierMessage periodically.
One BarrierMessage will be emitted every period duration.
```js
stats|barrier()
```
Returns: [BarrierNode](/kapacitor/v1.5/nodes/barrier_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Bottom
Select the bottom `num` points for `field` and sort by any extra tags or fields.
```js
stats|bottom(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### ChangeDetect
Create a new node that only emits new points if different from the previous point.
```js
stats|changeDetect(field string)
```
Returns: [ChangeDetectNode](/kapacitor/v1.5/nodes/change_detect_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Combine
Combine this node with itself. The data is combined on timestamp.
```js
stats|combine(expressions ...ast.LambdaNode)
```
Returns: [CombineNode](/kapacitor/v1.5/nodes/combine_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Count
Count the number of points.
```js
stats|count(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### CumulativeSum
Compute a cumulative sum of each point that is received.
A point is emitted for every point collected.
```js
stats|cumulativeSum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Deadman
Helper function for creating an alert on low throughput, a.k.a. deadman's switch.
- Threshold: trigger alert if throughput drops below threshold in points/interval.
- Interval: how often to check the throughput.
- Expressions: optional list of expressions to also evaluate. Useful for time of day alerting.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
//Do normal processing of data
data...
```
The above is equivalent to this example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|stats(10s)
.align()
|derivative('emitted')
.unit(10s)
.nonNegative()
|alert()
.id('node \'stream0\' in task \'{{ .TaskName }}\'')
.message('{{ .ID }} is {{ if eq .Level "OK" }}alive{{ else }}dead{{ end }}: {{ index .Fields "emitted" | printf "%0.3f" }} points/10s.')
.crit(lambda: "emitted" <= 100.0)
//Do normal processing of data
data...
```
The `id` and `message` alert properties can be configured globally via the 'deadman' configuration section.
Since the [AlertNode](/kapacitor/v1.5/nodes/alert_node/) is the last piece it can be further modified as usual.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
.slack()
.channel('#dead_tasks')
//Do normal processing of data
data...
```
You can specify additional lambda expressions to further constrain when the deadman's switch is triggered.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
// Only trigger the alert if the time of day is between 8am-5pm.
data
|deadman(100.0, 10s, lambda: hour("time") >= 8 AND hour("time") <= 17)
//Do normal processing of data
data...
```
```js
stats|deadman(threshold float64, interval time.Duration, expr ...ast.LambdaNode)
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Default
Create a node that can set defaults for missing tags or fields.
```js
stats|default()
```
Returns: [DefaultNode](/kapacitor/v1.5/nodes/default_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Delete
Create a node that can delete tags or fields.
```js
stats|delete()
```
Returns: [DeleteNode](/kapacitor/v1.5/nodes/delete_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Derivative
Create a new node that computes the derivative of adjacent points.
```js
stats|derivative(field string)
```
Returns: [DerivativeNode](/kapacitor/v1.5/nodes/derivative_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Difference
Compute the difference between points independent of elapsed time.
```js
stats|difference(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Distinct
Produce a batch containing only the distinct points.
```js
stats|distinct(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Ec2Autoscale
Create a node that can trigger autoscale events for an EC2 autoscale group.
```js
stats|ec2Autoscale()
```
Returns: [Ec2AutoscaleNode](/kapacitor/v1.5/nodes/ec2_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Elapsed
Compute the elapsed time between points.
```js
stats|elapsed(field string, unit time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Eval
Create an eval node that will evaluate the given transformation functions on each data point.
A list of expressions may be provided and will be evaluated in the order they are given.
The results are available to later expressions.
```js
stats|eval(expressions ...ast.LambdaNode)
```
Returns: [EvalNode](/kapacitor/v1.5/nodes/eval_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
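For example, a minimal sketch using hypothetical `errors` and `total` fields, where the second expression reuses the result of the first via its `.as()` name:
```js
stream
    |from()
        .measurement('requests')
    |eval(lambda: "errors" / "total", lambda: "ratio" * 100.0)
        .as('ratio', 'ratio_pct')
```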
### First
Select the first point.
```js
stats|first(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Flatten
Flatten points with similar times into a single point.
```js
stats|flatten()
```
Returns: [FlattenNode](/kapacitor/v1.5/nodes/flatten_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### GroupBy
Group the data by a set of tags.
Pass the literal `*` to group by all dimensions.
Example:
```js
|groupBy(*)
```
```js
stats|groupBy(tag ...interface{})
```
Returns: [GroupByNode](/kapacitor/v1.5/nodes/group_by_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWinters
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
```js
stats|holtWinters(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWintersWithFit
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
This method also outputs all the points used to fit the data in addition to the forecasted data.
```js
stats|holtWintersWithFit(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpOut
Create an HTTP output node that caches the most recent data it has received.
The cached data is available at the given endpoint.
The endpoint is the relative path from the API endpoint of the running task.
For example, if the task endpoint is at `/kapacitor/v1/tasks/<task_id>` and the endpoint is
`top10`, then the data can be requested from `/kapacitor/v1/tasks/<task_id>/top10`.
```js
stats|httpOut(endpoint string)
```
Returns: [HTTPOutNode](/kapacitor/v1.5/nodes/http_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
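For example, a sketch that caches a windowed mean (measurement and field names are hypothetical); the cached data would then be available at `/kapacitor/v1/tasks/<task_id>/mean_idle`:
```js
stream
    |from()
        .measurement('cpu')
    |window()
        .period(10s)
        .every(10s)
    |mean('usage_idle')
    |httpOut('mean_idle')
```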
### HttpPost
Creates an HTTP POST node that POSTs received data to the provided HTTP endpoint.
HttpPost expects zero or one arguments. If no argument is provided, you must specify an
endpoint property method.
```js
stats|httpPost(url ...string)
```
Returns: [HTTPPostNode](/kapacitor/v1.5/nodes/http_post_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### InfluxDBOut
Create an InfluxDB output node that will store the incoming data into InfluxDB.
```js
stats|influxDBOut()
```
Returns: [InfluxDBOutNode](/kapacitor/v1.5/nodes/influx_d_b_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Join
Join this node with other nodes. The data is joined on timestamp.
```js
stats|join(others ...Node)
```
Returns: [JoinNode](/kapacitor/v1.5/nodes/join_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### K8sAutoscale
Create a node that can trigger autoscale events for a Kubernetes cluster.
```js
stats|k8sAutoscale()
```
Returns: [K8sAutoscaleNode](/kapacitor/v1.5/nodes/k8s_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### KapacitorLoopback
Create a Kapacitor loopback node that will send data back into Kapacitor as a stream.
```js
stats|kapacitorLoopback()
```
Returns: [KapacitorLoopbackNode](/kapacitor/v1.5/nodes/kapacitor_loopback_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Last
Select the last point.
```js
stats|last(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Log
Create a node that logs all data it receives.
```js
stats|log()
```
Returns: [LogNode](/kapacitor/v1.5/nodes/log_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Max
Select the maximum point.
```js
stats|max(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mean
Compute the mean of the data.
```js
stats|mean(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Median
Compute the median of the data.
> **Note:** This method is not a selector.
If you want the median point, use `.percentile(field, 50.0)`.
```js
stats|median(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Min
Select the minimum point.
```js
stats|min(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mode
Compute the mode of the data.
```js
stats|mode(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### MovingAverage
Compute a moving average of the last window points.
No points are emitted until the window is full.
```js
stats|movingAverage(field string, window int64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
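For example, a 20-point moving average over a hypothetical `usage_user` field:
```js
stream
    |from()
        .measurement('cpu')
    |movingAverage('usage_user', 20)
        .as('avg_usage_user')
```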
### Percentile
Select a point at the given percentile. This is a selector function; no interpolation between points is performed.
```js
stats|percentile(field string, percentile float64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
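For example, selecting the 95th-percentile point per window (measurement and field names are hypothetical):
```js
stream
    |from()
        .measurement('response_times')
    |window()
        .period(1m)
        .every(1m)
    |percentile('value', 95.0)
```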
### Sample
Create a new node that samples the incoming points or batches.
One point will be emitted every count or duration specified.
```js
stats|sample(rate interface{})
```
Returns: [SampleNode](/kapacitor/v1.5/nodes/sample_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
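The rate may be either a count or a duration. For example, with a hypothetical measurement name:
```js
// Emit one point for every 10 points collected.
stream
    |from()
        .measurement('cpu')
    |sample(10)
```
Passing a duration instead, e.g. `|sample(10s)`, emits one point every 10 seconds.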
### Shift
Create a new node that shifts the incoming points or batches in time.
```js
stats|shift(shift time.Duration)
```
Returns: [ShiftNode](/kapacitor/v1.5/nodes/shift_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
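For example, shifting points 10 seconds into the past, such as to align with a delayed stream (measurement name is hypothetical):
```js
stream
    |from()
        .measurement('cpu')
    |shift(-10s)
```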
### Sideload
Create a node that can load data from external sources.
```js
stats|sideload()
```
Returns: [SideloadNode](/kapacitor/v1.5/nodes/sideload_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Spread
Compute the difference between `min` and `max` points.
```js
stats|spread(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateCount
Create a node that tracks number of consecutive points in a given state.
```js
stats|stateCount(expression ast.LambdaNode)
```
Returns: [StateCountNode](/kapacitor/v1.5/nodes/state_count_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
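For example, counting consecutive points where a hypothetical `usage_idle` field is below 20%:
```js
stream
    |from()
        .measurement('cpu')
    |stateCount(lambda: "usage_idle" < 20.0)
        .as('low_idle_count')
```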
### StateDuration
Create a node that tracks duration in a given state.
```js
stats|stateDuration(expression ast.LambdaNode)
```
Returns: [StateDurationNode](/kapacitor/v1.5/nodes/state_duration_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
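For example, tracking how long a hypothetical `usage_idle` field has stayed below 20%, reported in minutes:
```js
stream
    |from()
        .measurement('cpu')
    |stateDuration(lambda: "usage_idle" < 20.0)
        .unit(1m)
        .as('low_idle_minutes')
```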
### Stats
Create a new stream of data that contains the internal statistics of the node.
The interval represents how often to emit the statistics based on real time.
This means the interval time is independent of the times of the data points the source node is receiving.
```js
stats|stats(interval time.Duration)
```
Returns: [StatsNode](/kapacitor/v1.5/nodes/stats_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stddev
Compute the standard deviation.
```js
stats|stddev(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sum
Compute the sum of all values.
```js
stats|sum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### SwarmAutoscale
Create a node that can trigger autoscale events for a Docker Swarm cluster.
```js
stats|swarmAutoscale()
```
Returns: [SwarmAutoscaleNode](/kapacitor/v1.5/nodes/swarm_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Top
Select the top `num` points for `field` and sort by any extra tags or fields.
```js
stats|top(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
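For example, keeping the five points with the highest request count per window, sorted by a `host` tag (measurement, field, and tag names are hypothetical):
```js
stream
    |from()
        .measurement('requests')
    |window()
        .period(1m)
        .every(1m)
    |top(5, 'count', 'host')
```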
### Union
Perform the union of this node and all other given nodes.
```js
stats|union(node ...Node)
```
Returns: [UnionNode](/kapacitor/v1.5/nodes/union_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Where
Create a new node that filters the data stream by a given expression.
```js
stats|where(expression ast.LambdaNode)
```
Returns: [WhereNode](/kapacitor/v1.5/nodes/where_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Window
Create a new node that windows the stream by time.
NOTE: Window can only be applied to stream edges.
```js
stats|window()
```
Returns: [WindowNode](/kapacitor/v1.5/nodes/window_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
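For example, a sketch of overlapping windows that collect 10 minutes of data and emit every 5 minutes (measurement and field names are hypothetical):
```js
stream
    |from()
        .measurement('cpu')
    |window()
        .period(10m)
        .every(5m)
    |mean('usage_user')
```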
---
title: StreamNode
description: StreamNode represents the source of data being streamed to Kapacitor through any of its inputs.
note: Auto generated by tickdoc
menu:
kapacitor_1_5_ref:
name: StreamNode
identifier: stream_node
weight: 5
parent: nodes
---
The `stream` node represents the source of data being
streamed to Kapacitor via any of its inputs.
The `stream` variable in stream tasks is an instance of
a [StreamNode.](/kapacitor/v1.5/nodes/stream_node/)
[StreamNode.From](/kapacitor/v1.5/nodes/stream_node/#from) is the method/property of this node.
### Constructor
| Chaining Method | Description |
|:---------|:---------|
| **stream** | Has no constructor signature. |
### Property Methods
| Setters | Description |
|:---|:---|
| **[quiet](#quiet)&nbsp;(&nbsp;)** | Suppress all error logging events from this node. |
### Chaining Methods
[Deadman](#deadman),
[From](#from),
[Stats](#stats)
---
## Properties
Property methods modify state on the calling node.
They do not add another node to the pipeline, and always return a reference to the calling node.
Property methods are marked using the `.` operator.
### Quiet
Suppress all error logging events from this node.
```js
stream.quiet()
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
## Chaining Methods
Chaining methods create a new node in the pipeline as a child of the calling node.
They do not modify the calling node.
Chaining methods are marked using the `|` operator.
### Deadman
Helper function for creating an alert on low throughput, a.k.a. deadman's switch.
- Threshold: trigger alert if throughput drops below threshold in points/interval.
- Interval: how often to check the throughput.
- Expressions: optional list of expressions to also evaluate. Useful for time of day alerting.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
//Do normal processing of data
data...
```
The above is equivalent to this example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|stats(10s)
.align()
|derivative('emitted')
.unit(10s)
.nonNegative()
|alert()
.id('node \'stream0\' in task \'{{ .TaskName }}\'')
.message('{{ .ID }} is {{ if eq .Level "OK" }}alive{{ else }}dead{{ end }}: {{ index .Fields "emitted" | printf "%0.3f" }} points/10s.')
.crit(lambda: "emitted" <= 100.0)
//Do normal processing of data
data...
```
The `id` and `message` alert properties can be configured globally via the 'deadman' configuration section.
Since the [AlertNode](/kapacitor/v1.5/nodes/alert_node/) is the last piece it can be further modified as usual.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
.slack()
.channel('#dead_tasks')
//Do normal processing of data
data...
```
You can specify additional lambda expressions to further constrain when the deadman's switch is triggered.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
// Only trigger the alert if the time of day is between 8am-5pm.
data
|deadman(100.0, 10s, lambda: hour("time") >= 8 AND hour("time") <= 17)
//Do normal processing of data
data...
```
```js
stream|deadman(threshold float64, interval time.Duration, expr ...ast.LambdaNode)
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### From
Creates a new [FromNode](/kapacitor/v1.5/nodes/from_node/) that can be further
filtered using the Database, RetentionPolicy, Measurement and Where properties.
From can be called multiple times to create multiple
independent forks of the data stream.
Example:
```js
// Select the 'cpu' measurement from just the database 'mydb'
// and retention policy 'myrp'.
var cpu = stream
|from()
.database('mydb')
.retentionPolicy('myrp')
.measurement('cpu')
// Select the 'load' measurement from any database and retention policy.
var load = stream
|from()
.measurement('load')
// Join cpu and load streams and do further processing.
cpu
|join(load)
.as('cpu', 'load')
...
```
```js
stream|from()
```
Returns: [FromNode](/kapacitor/v1.5/nodes/from_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stats
Create a new stream of data that contains the internal statistics of the node.
The interval represents how often to emit the statistics based on real time.
This means the interval time is independent of the times of the data points the source node is receiving.
```js
stream|stats(interval time.Duration)
```
Returns: [StatsNode](/kapacitor/v1.5/nodes/stats_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
---
title: UDFNode
description: UDFNode runs a user defined function (UDF) in a separate process.
note: Auto generated by tickdoc
menu:
kapacitor_1_5_ref:
name: UDFNode
identifier: u_d_f_node
weight: 310
parent: nodes
---
The `udf` node can run a User Defined Function (UDF) in a separate process.
A UDF is a custom script or binary that can communicate via Kapacitor's UDF RPC protocol.
The path and arguments to the UDF program are specified in Kapacitor's configuration.
Using TICKscripts you can invoke and configure your UDF for each task.
See the [README.md](https://github.com/influxdata/kapacitor/tree/master/udf/agent/)
for details on how to write your own UDF.
UDFs are configured via Kapacitor's main configuration file.
Example:
```toml
[udf]
[udf.functions]
# Example moving average UDF.
[udf.functions.movingAverage]
prog = "/path/to/executable/moving_avg"
args = []
timeout = "10s"
```
UDFs are first-class objects in TICKscripts and are referenced via their configuration name.
Example:
```js
// Given you have a UDF that computes a moving average
// The UDF can define what its options are and then can be
// invoked via a TICKscript like so:
stream
|from()...
@movingAverage()
.field('value')
.size(100)
.as('mavg')
|httpOut('movingaverage')
```
> **NOTE:** The UDF process runs as the same user as the Kapacitor daemon.
As a result, make sure the user is properly secured, as well as the configuration file.
### Constructor
| Chaining Method | Description |
|:---------|:---------|
| **uDF** | Has no constructor signature. |
### Property Methods
| Setters | Description |
|:---|:---|
| **[quiet](#quiet)&nbsp;(&nbsp;)** | Suppress all error logging events from this node. |
| **[uDFName](#udfname)&nbsp;(&nbsp;`value`&nbsp;`string`)** | |
### Chaining Methods
[Alert](#alert),
[Barrier](#barrier),
[Bottom](#bottom),
[ChangeDetect](#changedetect),
[Combine](#combine),
[Count](#count),
[CumulativeSum](#cumulativesum),
[Deadman](#deadman),
[Default](#default),
[Delete](#delete),
[Derivative](#derivative),
[Difference](#difference),
[Distinct](#distinct),
[Ec2Autoscale](#ec2autoscale),
[Elapsed](#elapsed),
[Eval](#eval),
[First](#first),
[Flatten](#flatten),
[GroupBy](#groupby),
[HoltWinters](#holtwinters),
[HoltWintersWithFit](#holtwinterswithfit),
[HttpOut](#httpout),
[HttpPost](#httppost),
[InfluxDBOut](#influxdbout),
[Join](#join),
[K8sAutoscale](#k8sautoscale),
[KapacitorLoopback](#kapacitorloopback),
[Last](#last),
[Log](#log),
[Max](#max),
[Mean](#mean),
[Median](#median),
[Min](#min),
[Mode](#mode),
[MovingAverage](#movingaverage),
[Percentile](#percentile),
[Sample](#sample),
[Shift](#shift),
[Sideload](#sideload),
[Spread](#spread),
[StateCount](#statecount),
[StateDuration](#stateduration),
[Stats](#stats),
[Stddev](#stddev),
[Sum](#sum),
[SwarmAutoscale](#swarmautoscale),
[Top](#top),
[Union](#union),
[Where](#where),
[Window](#window)
---
## Properties
Property methods modify state on the calling node.
They do not add another node to the pipeline, and always return a reference to the calling node.
Property methods are marked using the `.` operator.
### Quiet
Suppress all error logging events from this node.
```js
uDF.quiet()
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### UDFName
```js
uDF.uDFName(value string)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
## Chaining Methods
Chaining methods create a new node in the pipeline as a child of the calling node.
They do not modify the calling node.
Chaining methods are marked using the `|` operator.
### Alert
Create an alert node, which can trigger alerts.
```js
uDF|alert()
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Barrier
Create a new Barrier node that emits a BarrierMessage periodically.
One BarrierMessage will be emitted every period duration.
```js
uDF|barrier()
```
Returns: [BarrierNode](/kapacitor/v1.5/nodes/barrier_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Bottom
Select the bottom `num` points for `field` and sort by any extra tags or fields.
```js
uDF|bottom(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### ChangeDetect
Create a new node that only emits new points if different from the previous point.
```js
uDF|changeDetect(field string)
```
Returns: [ChangeDetectNode](/kapacitor/v1.5/nodes/change_detect_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Combine
Combine this node with itself. The data is combined on timestamp.
```js
uDF|combine(expressions ...ast.LambdaNode)
```
Returns: [CombineNode](/kapacitor/v1.5/nodes/combine_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Count
Count the number of points.
```js
uDF|count(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### CumulativeSum
Compute a cumulative sum of each point that is received.
A point is emitted for every point collected.
```js
uDF|cumulativeSum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Deadman
Helper function for creating an alert on low throughput, a.k.a. deadman's switch.
- Threshold: trigger alert if throughput drops below threshold in points/interval.
- Interval: how often to check the throughput.
- Expressions: optional list of expressions to also evaluate. Useful for time of day alerting.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
//Do normal processing of data
data...
```
The above is equivalent to this example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|stats(10s)
.align()
|derivative('emitted')
.unit(10s)
.nonNegative()
|alert()
.id('node \'stream0\' in task \'{{ .TaskName }}\'')
.message('{{ .ID }} is {{ if eq .Level "OK" }}alive{{ else }}dead{{ end }}: {{ index .Fields "emitted" | printf "%0.3f" }} points/10s.')
.crit(lambda: "emitted" <= 100.0)
//Do normal processing of data
data...
```
The `id` and `message` alert properties can be configured globally via the 'deadman' configuration section.
Since the [AlertNode](/kapacitor/v1.5/nodes/alert_node/) is the last piece it can be further modified as usual.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
.slack()
.channel('#dead_tasks')
//Do normal processing of data
data...
```
You can specify additional lambda expressions to further constrain when the deadman's switch is triggered.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
// Only trigger the alert if the time of day is between 8am-5pm.
data
|deadman(100.0, 10s, lambda: hour("time") >= 8 AND hour("time") <= 17)
//Do normal processing of data
data...
```
```js
uDF|deadman(threshold float64, interval time.Duration, expr ...ast.LambdaNode)
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Default
Create a node that can set defaults for missing tags or fields.
```js
uDF|default()
```
Returns: [DefaultNode](/kapacitor/v1.5/nodes/default_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Delete
Create a node that can delete tags or fields.
```js
uDF|delete()
```
Returns: [DeleteNode](/kapacitor/v1.5/nodes/delete_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Derivative
Create a new node that computes the derivative of adjacent points.
```js
uDF|derivative(field string)
```
Returns: [DerivativeNode](/kapacitor/v1.5/nodes/derivative_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Difference
Compute the difference between points independent of elapsed time.
```js
uDF|difference(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Distinct
Produce a batch containing only the distinct points.
```js
uDF|distinct(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Ec2Autoscale
Create a node that can trigger autoscale events for an EC2 autoscale group.
```js
uDF|ec2Autoscale()
```
Returns: [Ec2AutoscaleNode](/kapacitor/v1.5/nodes/ec2_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Elapsed
Compute the elapsed time between points.
```js
uDF|elapsed(field string, unit time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Eval
Create an eval node that evaluates the given transformation functions on each data point.
A list of expressions may be provided and will be evaluated in the order they are given.
The results are available to later expressions.
```js
uDF|eval(expressions ...ast.LambdaNode)
```
Returns: [EvalNode](/kapacitor/v1.5/nodes/eval_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
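As an illustrative sketch (the `cpu` measurement and `usage_user`/`usage_system` fields are assumptions), later expressions can reference the results of earlier ones through the names given to `.as()`:

```js
stream
    |from()
        .measurement('cpu')
    |eval(lambda: "usage_user" + "usage_system", lambda: "total" / 100.0)
        .as('total', 'total_ratio')
```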
### First
Select the first point.
```js
uDF|first(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Flatten
Flatten points with similar times into a single point.
```js
uDF|flatten()
```
Returns: [FlattenNode](/kapacitor/v1.5/nodes/flatten_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### GroupBy
Group the data by a set of tags.
Pass the literal `*` to group by all dimensions.
Example:
```js
|groupBy(*)
```
```js
uDF|groupBy(tag ...interface{})
```
Returns: [GroupByNode](/kapacitor/v1.5/nodes/group_by_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWinters
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
```js
uDF|holtWinters(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWintersWithFit
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
This method also outputs all the points used to fit the data in addition to the forecasted data.
```js
uDF|holtWintersWithFit(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpOut
Create an HTTP output node that caches the most recent data it has received.
The cached data is available at the given endpoint.
The endpoint is the relative path from the API endpoint of the running task.
For example, if the task endpoint is at `/kapacitor/v1/tasks/<task_id>` and endpoint is
`top10`, then the data can be requested from `/kapacitor/v1/tasks/<task_id>/top10`.
```js
uDF|httpOut(endpoint string)
```
Returns: [HTTPOutNode](/kapacitor/v1.5/nodes/http_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
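A minimal sketch, assuming a hypothetical `requests` measurement; the cached window would then be served at `/kapacitor/v1/tasks/<task_id>/recent`:

```js
stream
    |from()
        .measurement('requests')
    |window()
        .period(10s)
        .every(10s)
    |httpOut('recent')
```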
### HttpPost
Creates an HTTP POST node that POSTs received data to the provided HTTP endpoint.
HttpPost expects 0 or 1 arguments. If 0 arguments are provided, you must specify an
endpoint property method.
```js
uDF|httpPost(url ...string)
```
Returns: [HTTPPostNode](/kapacitor/v1.5/nodes/http_post_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
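A sketch of both forms; the URL and endpoint name are placeholders:

```js
// POST each point to an explicit URL
stream
    |from()
        .measurement('cpu')
    |httpPost('http://example.com/kapacitor/cpu')

// Or, with zero arguments, reference an endpoint configured
// in the [[httppost]] section of the Kapacitor configuration:
//    |httpPost()
//        .endpoint('example')
```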
### InfluxDBOut
Create an InfluxDB output node that stores the incoming data in InfluxDB.
```js
uDF|influxDBOut()
```
Returns: [InfluxDBOutNode](/kapacitor/v1.5/nodes/influx_d_b_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Join
Join this node with other nodes. The data is joined on timestamp.
```js
uDF|join(others ...Node)
```
Returns: [JoinNode](/kapacitor/v1.5/nodes/join_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
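A sketch joining two hypothetical measurements; `.as()` assigns a prefix to the fields arriving from each parent:

```js
var errors = stream
    |from()
        .measurement('errors')

var requests = stream
    |from()
        .measurement('requests')

errors
    |join(requests)
        .as('errors', 'requests')
```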
### K8sAutoscale
Create a node that can trigger autoscale events for a kubernetes cluster.
```js
uDF|k8sAutoscale()
```
Returns: [K8sAutoscaleNode](/kapacitor/v1.5/nodes/k8s_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### KapacitorLoopback
Create a Kapacitor loopback node that sends data back into Kapacitor as a stream.
```js
uDF|kapacitorLoopback()
```
Returns: [KapacitorLoopbackNode](/kapacitor/v1.5/nodes/kapacitor_loopback_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Last
Select the last point.
```js
uDF|last(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Log
Create a node that logs all data it receives.
```js
uDF|log()
```
Returns: [LogNode](/kapacitor/v1.5/nodes/log_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Max
Select the maximum point.
```js
uDF|max(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mean
Compute the mean of the data.
```js
uDF|mean(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Median
Compute the median of the data.
> **Note:** This method is not a selector.
If you want the median point, use `.percentile(field, 50.0)`.
```js
uDF|median(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Min
Select the minimum point.
```js
uDF|min(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mode
Compute the mode of the data.
```js
uDF|mode(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### MovingAverage
Compute a moving average of the last `window` points.
No points are emitted until the window is full.
```js
uDF|movingAverage(field string, window int64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Percentile
Select a point at the given percentile. This is a selector function; no interpolation between points is performed.
```js
uDF|percentile(field string, percentile float64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sample
Create a new node that samples the incoming points or batches.
One point is emitted for every specified count of points or interval of time.
```js
uDF|sample(rate interface{})
```
Returns: [SampleNode](/kapacitor/v1.5/nodes/sample_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
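A sketch of both rate forms (the measurement name is an assumption):

```js
// Keep every tenth point
stream
    |from()
        .measurement('cpu')
    |sample(10)

// Or keep one point every ten seconds
//    |sample(10s)
```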
### Shift
Create a new node that shifts the incoming points or batches in time.
```js
uDF|shift(shift time.Duration)
```
Returns: [ShiftNode](/kapacitor/v1.5/nodes/shift_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sideload
Create a node that can load data from external sources.
```js
uDF|sideload()
```
Returns: [SideloadNode](/kapacitor/v1.5/nodes/sideload_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Spread
Compute the difference between `min` and `max` points.
```js
uDF|spread(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateCount
Create a node that tracks the number of consecutive points in a given state.
```js
uDF|stateCount(expression ast.LambdaNode)
```
Returns: [StateCountNode](/kapacitor/v1.5/nodes/state_count_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
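A sketch, assuming a `usage_idle` field; by default the running count is written to a `state_count` field:

```js
stream
    |from()
        .measurement('cpu')
    |stateCount(lambda: "usage_idle" < 10)
```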
### StateDuration
Create a node that tracks the duration in a given state.
```js
uDF|stateDuration(expression ast.LambdaNode)
```
Returns: [StateDurationNode](/kapacitor/v1.5/nodes/state_duration_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
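A sketch, assuming a `usage_idle` field; `.unit()` sets the units in which the duration is reported:

```js
stream
    |from()
        .measurement('cpu')
    |stateDuration(lambda: "usage_idle" < 10)
        .unit(1m)
```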
### Stats
Create a new stream of data that contains the internal statistics of the node.
The interval represents how often to emit the statistics based on real time.
This means the interval time is independent of the times of the data points the source node is receiving.
```js
uDF|stats(interval time.Duration)
```
Returns: [StatsNode](/kapacitor/v1.5/nodes/stats_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
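A sketch that logs the node's internal statistics every 10 seconds (the measurement name is an assumption):

```js
var data = stream
    |from()
        .measurement('cpu')

data
    |stats(10s)
    |log()
```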
### Stddev
Compute the standard deviation.
```js
uDF|stddev(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sum
Compute the sum of all values.
```js
uDF|sum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### SwarmAutoscale
Create a node that can trigger autoscale events for a Docker swarm cluster.
```js
uDF|swarmAutoscale()
```
Returns: [SwarmAutoscaleNode](/kapacitor/v1.5/nodes/swarm_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Top
Select the top `num` points for `field` and sort by any extra tags or fields.
```js
uDF|top(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
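A sketch, assuming windowed request data with a `value` field and a `host` tag:

```js
stream
    |from()
        .measurement('requests')
    |window()
        .period(1m)
        .every(1m)
    |top(10, 'value', 'host')
```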
### Union
Perform the union of this node and all other given nodes.
```js
uDF|union(node ...Node)
```
Returns: [UnionNode](/kapacitor/v1.5/nodes/union_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Where
Create a new node that filters the data stream by a given expression.
```js
uDF|where(expression ast.LambdaNode)
```
Returns: [WhereNode](/kapacitor/v1.5/nodes/where_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Window
Create a new node that windows the stream by time.
NOTE: Window can only be applied to stream edges.
```js
uDF|window()
```
Returns: [WindowNode](/kapacitor/v1.5/nodes/window_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
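A sketch of a sliding window (measurement and field names are assumptions): every 5 minutes, emit the last 10 minutes of data and compute its mean.

```js
stream
    |from()
        .measurement('cpu')
    |window()
        .period(10m)
        .every(5m)
    |mean('usage_idle')
```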

---
title: UnionNode
description: UnionNode takes the union of all of its parents as a simple pass-through. Data points received from each parent are passed to children nodes without modification.
note: Auto generated by tickdoc
menu:
kapacitor_1_5_ref:
name: UnionNode
identifier: union_node
weight: 320
parent: nodes
---
The `union` node takes the union of all of its parents as a simple pass-through.
Data points received from each parent are passed onto children nodes without modification.
Example:
```js
var logins = stream
|from()
.measurement('logins')
var logouts = stream
|from()
.measurement('logouts')
var frontpage = stream
|from()
.measurement('frontpage')
// Union all user actions into a single stream
logins
|union(logouts, frontpage)
.rename('user_actions')
...
```
### Constructor
| Chaining Method | Description |
|:---------|:---------|
| **union&nbsp;(&nbsp;`node`&nbsp;`...Node`)** | Perform the union of this node and all other given nodes. |
### Property Methods
| Setters | Description |
|:---|:---|
| **[quiet](#quiet)&nbsp;(&nbsp;)** | Suppress all error logging events from this node. |
| **[rename](#rename)&nbsp;(&nbsp;`value`&nbsp;`string`)** | The new name of the stream. If empty the name of the left node (i.e. `leftNode.union(otherNode1, otherNode2)`) is used. |
### Chaining Methods
[Alert](#alert),
[Barrier](#barrier),
[Bottom](#bottom),
[ChangeDetect](#changedetect),
[Combine](#combine),
[Count](#count),
[CumulativeSum](#cumulativesum),
[Deadman](#deadman),
[Default](#default),
[Delete](#delete),
[Derivative](#derivative),
[Difference](#difference),
[Distinct](#distinct),
[Ec2Autoscale](#ec2autoscale),
[Elapsed](#elapsed),
[Eval](#eval),
[First](#first),
[Flatten](#flatten),
[GroupBy](#groupby),
[HoltWinters](#holtwinters),
[HoltWintersWithFit](#holtwinterswithfit),
[HttpOut](#httpout),
[HttpPost](#httppost),
[InfluxDBOut](#influxdbout),
[Join](#join),
[K8sAutoscale](#k8sautoscale),
[KapacitorLoopback](#kapacitorloopback),
[Last](#last),
[Log](#log),
[Max](#max),
[Mean](#mean),
[Median](#median),
[Min](#min),
[Mode](#mode),
[MovingAverage](#movingaverage),
[Percentile](#percentile),
[Sample](#sample),
[Shift](#shift),
[Sideload](#sideload),
[Spread](#spread),
[StateCount](#statecount),
[StateDuration](#stateduration),
[Stats](#stats),
[Stddev](#stddev),
[Sum](#sum),
[SwarmAutoscale](#swarmautoscale),
[Top](#top),
[Union](#union),
[Where](#where),
[Window](#window)
---
## Properties
Property methods modify state on the calling node.
They do not add another node to the pipeline, and always return a reference to the calling node.
Property methods are marked using the `.` operator.
### Quiet
Suppress all error logging events from this node.
```js
union.quiet()
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Rename
The new name of the stream.
If empty the name of the left node
(i.e. `leftNode.union(otherNode1, otherNode2)`) is used.
```js
union.rename(value string)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
## Chaining Methods
Chaining methods create a new node in the pipeline as a child of the calling node.
They do not modify the calling node.
Chaining methods are marked using the `|` operator.
### Alert
Create an alert node, which can trigger alerts.
```js
union|alert()
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Barrier
Create a new barrier node that periodically emits a `BarrierMessage`.
One `BarrierMessage` is emitted every `period` duration.
```js
union|barrier()
```
Returns: [BarrierNode](/kapacitor/v1.5/nodes/barrier_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
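A sketch building on the stream variables from the example at the top of this page; `.idle()` (an assumed alternative to `.period()`) emits a barrier after the stream has been idle for the given duration:

```js
logins
    |union(logouts, frontpage)
    |barrier()
        .idle(10s)
```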
### Bottom
Select the bottom `num` points for `field` and sort by any extra tags or fields.
```js
union|bottom(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### ChangeDetect
Create a new node that emits a point only if it differs from the previous point.
```js
union|changeDetect(field string)
```
Returns: [ChangeDetectNode](/kapacitor/v1.5/nodes/change_detect_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Combine
Combine this node with itself. The data is combined on timestamp.
```js
union|combine(expressions ...ast.LambdaNode)
```
Returns: [CombineNode](/kapacitor/v1.5/nodes/combine_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Count
Count the number of points.
```js
union|count(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### CumulativeSum
Compute a cumulative sum of each point that is received.
A point is emitted for every point collected.
```js
union|cumulativeSum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Deadman
Helper function for creating an alert on low throughput, a.k.a. deadman's switch.
- Threshold: trigger an alert if throughput drops below the threshold, in points/interval.
- Interval: how often to check the throughput.
- Expressions: optional list of expressions to also evaluate. Useful for time-of-day alerting.
Example:
```js
var data = stream
|from()...
// Trigger a critical alert if throughput drops below 100 points per 10s, checked every 10s.
data
|deadman(100.0, 10s)
//Do normal processing of data
data...
```
The above is equivalent to this example:
```js
var data = stream
|from()...
// Trigger a critical alert if throughput drops below 100 points per 10s, checked every 10s.
data
|stats(10s)
.align()
|derivative('emitted')
.unit(10s)
.nonNegative()
|alert()
.id('node \'stream0\' in task \'{{ .TaskName }}\'')
.message('{{ .ID }} is {{ if eq .Level "OK" }}alive{{ else }}dead{{ end }}: {{ index .Fields "emitted" | printf "%0.3f" }} points/10s.')
.crit(lambda: "emitted" <= 100.0)
//Do normal processing of data
data...
```
The `id` and `message` alert properties can be configured globally via the `deadman` configuration section.
Since the [AlertNode](/kapacitor/v1.5/nodes/alert_node/) is the last piece, it can be further modified as usual.
Example:
```js
var data = stream
|from()...
// Trigger a critical alert if throughput drops below 100 points per 10s, checked every 10s.
data
|deadman(100.0, 10s)
.slack()
.channel('#dead_tasks')
//Do normal processing of data
data...
```
You can specify additional lambda expressions to further constrain when the deadman's switch is triggered.
Example:
```js
var data = stream
|from()...
// Trigger a critical alert if throughput drops below 100 points per 10s, checked every 10s.
// Only trigger the alert if the time of day is between 8am-5pm.
data
|deadman(100.0, 10s, lambda: hour("time") >= 8 AND hour("time") <= 17)
//Do normal processing of data
data...
```
```js
union|deadman(threshold float64, interval time.Duration, expr ...ast.LambdaNode)
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Default
Create a node that can set defaults for missing tags or fields.
```js
union|default()
```
Returns: [DefaultNode](/kapacitor/v1.5/nodes/default_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Delete
Create a node that can delete tags or fields.
```js
union|delete()
```
Returns: [DeleteNode](/kapacitor/v1.5/nodes/delete_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Derivative
Create a new node that computes the derivative of adjacent points.
```js
union|derivative(field string)
```
Returns: [DerivativeNode](/kapacitor/v1.5/nodes/derivative_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Difference
Compute the difference between points independent of elapsed time.
```js
union|difference(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Distinct
Produce a batch containing only the distinct points.
```js
union|distinct(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Ec2Autoscale
Create a node that can trigger autoscale events for an EC2 autoscale group.
```js
union|ec2Autoscale()
```
Returns: [Ec2AutoscaleNode](/kapacitor/v1.5/nodes/ec2_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Elapsed
Compute the elapsed time between points.
```js
union|elapsed(field string, unit time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Eval
Create an eval node that evaluates the given transformation functions on each data point.
A list of expressions may be provided and will be evaluated in the order they are given.
The results are available to later expressions.
```js
union|eval(expressions ...ast.LambdaNode)
```
Returns: [EvalNode](/kapacitor/v1.5/nodes/eval_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### First
Select the first point.
```js
union|first(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Flatten
Flatten points with similar times into a single point.
```js
union|flatten()
```
Returns: [FlattenNode](/kapacitor/v1.5/nodes/flatten_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### GroupBy
Group the data by a set of tags.
Pass the literal `*` to group by all dimensions.
Example:
```js
|groupBy(*)
```
```js
union|groupBy(tag ...interface{})
```
Returns: [GroupByNode](/kapacitor/v1.5/nodes/group_by_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWinters
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
```js
union|holtWinters(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWintersWithFit
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
This method also outputs all the points used to fit the data in addition to the forecasted data.
```js
union|holtWintersWithFit(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpOut
Create an HTTP output node that caches the most recent data it has received.
The cached data is available at the given endpoint.
The endpoint is the relative path from the API endpoint of the running task.
For example, if the task endpoint is at `/kapacitor/v1/tasks/<task_id>` and endpoint is
`top10`, then the data can be requested from `/kapacitor/v1/tasks/<task_id>/top10`.
```js
union|httpOut(endpoint string)
```
Returns: [HTTPOutNode](/kapacitor/v1.5/nodes/http_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpPost
Creates an HTTP POST node that POSTs received data to the provided HTTP endpoint.
HttpPost expects 0 or 1 arguments. If 0 arguments are provided, you must specify an
endpoint property method.
```js
union|httpPost(url ...string)
```
Returns: [HTTPPostNode](/kapacitor/v1.5/nodes/http_post_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### InfluxDBOut
Create an InfluxDB output node that stores the incoming data in InfluxDB.
```js
union|influxDBOut()
```
Returns: [InfluxDBOutNode](/kapacitor/v1.5/nodes/influx_d_b_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Join
Join this node with other nodes. The data is joined on timestamp.
```js
union|join(others ...Node)
```
Returns: [JoinNode](/kapacitor/v1.5/nodes/join_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### K8sAutoscale
Create a node that can trigger autoscale events for a kubernetes cluster.
```js
union|k8sAutoscale()
```
Returns: [K8sAutoscaleNode](/kapacitor/v1.5/nodes/k8s_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### KapacitorLoopback
Create a Kapacitor loopback node that sends data back into Kapacitor as a stream.
```js
union|kapacitorLoopback()
```
Returns: [KapacitorLoopbackNode](/kapacitor/v1.5/nodes/kapacitor_loopback_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Last
Select the last point.
```js
union|last(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Log
Create a node that logs all data it receives.
```js
union|log()
```
Returns: [LogNode](/kapacitor/v1.5/nodes/log_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Max
Select the maximum point.
```js
union|max(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mean
Compute the mean of the data.
```js
union|mean(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Median
Compute the median of the data.
> **Note:** This method is not a selector.
If you want the median point, use `.percentile(field, 50.0)`.
```js
union|median(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Min
Select the minimum point.
```js
union|min(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mode
Compute the mode of the data.
```js
union|mode(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### MovingAverage
Compute a moving average of the last `window` points.
No points are emitted until the window is full.
```js
union|movingAverage(field string, window int64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Percentile
Select a point at the given percentile. This is a selector function; no interpolation between points is performed.
```js
union|percentile(field string, percentile float64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sample
Create a new node that samples the incoming points or batches.
One point will be emitted every count or duration specified.
```js
union|sample(rate interface{})
```
Returns: [SampleNode](/kapacitor/v1.5/nodes/sample_node/)
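For example, the rate can be either a point count or a duration (the measurement name below is illustrative):

```js
// Emit every tenth point received.
stream
    |from()
        .measurement('cpu')
    |sample(10)
```

Passing a duration instead, such as `sample(1m)`, emits at most one point per minute.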
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Shift
Create a new node that shifts the incoming points or batches in time.
```js
union|shift(shift time.Duration)
```
Returns: [ShiftNode](/kapacitor/v1.5/nodes/shift_node/)
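As a sketch, a shift can move data backward in time for period-over-period comparison (the measurement and duration are illustrative):

```js
// Shift incoming points one hour into the past.
stream
    |from()
        .measurement('requests')
    |shift(-1h)
```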
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sideload
Create a node that can load data from external sources.
```js
union|sideload()
```
Returns: [SideloadNode](/kapacitor/v1.5/nodes/sideload_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Spread
Compute the difference between `min` and `max` points.
```js
union|spread(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateCount
Create a node that tracks the number of consecutive points in a given state.
```js
union|stateCount(expression ast.LambdaNode)
```
Returns: [StateCountNode](/kapacitor/v1.5/nodes/state_count_node/)
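For example, a minimal sketch that counts consecutive points above a threshold (the field name `usage` is illustrative):

```js
// Track how many consecutive points have usage above 80%.
stream
    |from()
        .measurement('cpu')
    |stateCount(lambda: "usage" > 80.0)
```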
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateDuration
Create a node that tracks the duration in a given state.
```js
union|stateDuration(expression ast.LambdaNode)
```
Returns: [StateDurationNode](/kapacitor/v1.5/nodes/state_duration_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stats
Create a new stream of data that contains the internal statistics of the node.
The interval represents how often to emit the statistics based on real time.
This means the interval time is independent of the times of the data points the source node is receiving.
```js
union|stats(interval time.Duration)
```
Returns: [StatsNode](/kapacitor/v1.5/nodes/stats_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stddev
Compute the standard deviation.
```js
union|stddev(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sum
Compute the sum of all values.
```js
union|sum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### SwarmAutoscale
Create a node that can trigger autoscale events for a Docker Swarm cluster.
```js
union|swarmAutoscale()
```
Returns: [SwarmAutoscaleNode](/kapacitor/v1.5/nodes/swarm_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Top
Select the top `num` points for `field` and sort by any extra tags or fields.
```js
union|top(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
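For example, a sketch that selects the five highest values per window, breaking ties by the `host` tag (the measurement, field, and tag names are illustrative):

```js
stream
    |from()
        .measurement('requests')
    |window()
        .period(1m)
        .every(1m)
    // Select the top 5 'value' points, sorted by the extra 'host' tag.
    |top(5, 'value', 'host')
```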
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Union
Perform the union of this node and all other given nodes.
```js
union|union(node ...Node)
```
Returns: [UnionNode](/kapacitor/v1.5/nodes/union_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Where
Create a new node that filters the data stream by a given expression.
```js
union|where(expression ast.LambdaNode)
```
Returns: [WhereNode](/kapacitor/v1.5/nodes/where_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Window
Create a new node that windows the stream by time.
NOTE: Window can only be applied to stream edges.
```js
union|window()
```
Returns: [WindowNode](/kapacitor/v1.5/nodes/window_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
---
title: WhereNode
description: WhereNode filters a data stream by a given expression.
note: Auto generated by tickdoc
menu:
kapacitor_1_5_ref:
name: WhereNode
identifier: where_node
weight: 330
parent: nodes
---
The `where` node filters a data stream by a given expression.
Example:
```js
var sums = stream
|from()
.groupBy('service', 'host')
|sum('value')
//Watch particular host for issues.
sums
|where(lambda: "host" == 'h001.example.com')
|alert()
.crit(lambda: TRUE)
.email().to('user@example.com')
```
### Constructor
| Chaining Method | Description |
|:---------|:---------|
| **where&nbsp;(&nbsp;`expression`&nbsp;`ast.LambdaNode`)** | Create a new node that filters the data stream by a given expression. |
### Property Methods
| Setters | Description |
|:---|:---|
| **[quiet](#quiet)&nbsp;(&nbsp;)** | Suppress all error logging events from this node. |
### Chaining Methods
[Alert](#alert),
[Barrier](#barrier),
[Bottom](#bottom),
[ChangeDetect](#changedetect),
[Combine](#combine),
[Count](#count),
[CumulativeSum](#cumulativesum),
[Deadman](#deadman),
[Default](#default),
[Delete](#delete),
[Derivative](#derivative),
[Difference](#difference),
[Distinct](#distinct),
[Ec2Autoscale](#ec2autoscale),
[Elapsed](#elapsed),
[Eval](#eval),
[First](#first),
[Flatten](#flatten),
[GroupBy](#groupby),
[HoltWinters](#holtwinters),
[HoltWintersWithFit](#holtwinterswithfit),
[HttpOut](#httpout),
[HttpPost](#httppost),
[InfluxDBOut](#influxdbout),
[Join](#join),
[K8sAutoscale](#k8sautoscale),
[KapacitorLoopback](#kapacitorloopback),
[Last](#last),
[Log](#log),
[Max](#max),
[Mean](#mean),
[Median](#median),
[Min](#min),
[Mode](#mode),
[MovingAverage](#movingaverage),
[Percentile](#percentile),
[Sample](#sample),
[Shift](#shift),
[Sideload](#sideload),
[Spread](#spread),
[StateCount](#statecount),
[StateDuration](#stateduration),
[Stats](#stats),
[Stddev](#stddev),
[Sum](#sum),
[SwarmAutoscale](#swarmautoscale),
[Top](#top),
[Union](#union),
[Where](#where),
[Window](#window)
---
## Properties
Property methods modify state on the calling node.
They do not add another node to the pipeline, and always return a reference to the calling node.
Property methods are marked using the `.` operator.
### Quiet
Suppress all error logging events from this node.
```js
where.quiet()
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
## Chaining Methods
Chaining methods create a new node in the pipeline as a child of the calling node.
They do not modify the calling node.
Chaining methods are marked using the `|` operator.
### Alert
Create an alert node, which can trigger alerts.
```js
where|alert()
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Barrier
Create a new Barrier node that emits a BarrierMessage periodically.
One BarrierMessage will be emitted every period duration.
```js
where|barrier()
```
Returns: [BarrierNode](/kapacitor/v1.5/nodes/barrier_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Bottom
Select the bottom `num` points for `field` and sort by any extra tags or fields.
```js
where|bottom(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### ChangeDetect
Create a new node that emits a point only when it differs from the previous point.
```js
where|changeDetect(field string)
```
Returns: [ChangeDetectNode](/kapacitor/v1.5/nodes/change_detect_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Combine
Combine this node with itself. The data is combined on timestamp.
```js
where|combine(expressions ...ast.LambdaNode)
```
Returns: [CombineNode](/kapacitor/v1.5/nodes/combine_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Count
Count the number of points.
```js
where|count(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### CumulativeSum
Compute a cumulative sum of each point that is received.
A point is emitted for every point collected.
```js
where|cumulativeSum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Deadman
Helper function for creating an alert on low throughput, a.k.a. deadman's switch.
- Threshold: trigger alert if throughput drops below threshold in points/interval.
- Interval: how often to check the throughput.
- Expressions: optional list of expressions to also evaluate. Useful for time of day alerting.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
//Do normal processing of data
data...
```
The above is equivalent to this example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|stats(10s)
.align()
|derivative('emitted')
.unit(10s)
.nonNegative()
|alert()
.id('node \'stream0\' in task \'{{ .TaskName }}\'')
.message('{{ .ID }} is {{ if eq .Level "OK" }}alive{{ else }}dead{{ end }}: {{ index .Fields "emitted" | printf "%0.3f" }} points/10s.')
.crit(lambda: "emitted" <= 100.0)
//Do normal processing of data
data...
```
The `id` and `message` alert properties can be configured globally via the 'deadman' configuration section.
Since the [AlertNode](/kapacitor/v1.5/nodes/alert_node/) is the last piece it can be further modified as usual.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
.slack()
.channel('#dead_tasks')
//Do normal processing of data
data...
```
You can specify additional lambda expressions to further constrain when the deadman's switch is triggered.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
// Only trigger the alert if the time of day is between 8am-5pm.
data
|deadman(100.0, 10s, lambda: hour("time") >= 8 AND hour("time") <= 17)
//Do normal processing of data
data...
```
```js
where|deadman(threshold float64, interval time.Duration, expr ...ast.LambdaNode)
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Default
Create a node that can set defaults for missing tags or fields.
```js
where|default()
```
Returns: [DefaultNode](/kapacitor/v1.5/nodes/default_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Delete
Create a node that can delete tags or fields.
```js
where|delete()
```
Returns: [DeleteNode](/kapacitor/v1.5/nodes/delete_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Derivative
Create a new node that computes the derivative of adjacent points.
```js
where|derivative(field string)
```
Returns: [DerivativeNode](/kapacitor/v1.5/nodes/derivative_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Difference
Compute the difference between points independent of elapsed time.
```js
where|difference(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Distinct
Produce a batch containing only the distinct points.
```js
where|distinct(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Ec2Autoscale
Create a node that can trigger autoscale events for an EC2 autoscale group.
```js
where|ec2Autoscale()
```
Returns: [Ec2AutoscaleNode](/kapacitor/v1.5/nodes/ec2_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Elapsed
Compute the elapsed time between points.
```js
where|elapsed(field string, unit time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Eval
Create an eval node that evaluates the given transformation functions for each data point.
A list of expressions may be provided; they are evaluated in the order given,
and the results are available to later expressions.
```js
where|eval(expressions ...ast.LambdaNode)
```
Returns: [EvalNode](/kapacitor/v1.5/nodes/eval_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### First
Select the first point.
```js
where|first(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Flatten
Flatten points with similar times into a single point.
```js
where|flatten()
```
Returns: [FlattenNode](/kapacitor/v1.5/nodes/flatten_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### GroupBy
Group the data by a set of tags.
Pass the literal `*` to group by all dimensions.
Example:
```js
|groupBy(*)
```
```js
where|groupBy(tag ...interface{})
```
Returns: [GroupByNode](/kapacitor/v1.5/nodes/group_by_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWinters
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
```js
where|holtWinters(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWintersWithFit
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
This method also outputs all the points used to fit the data in addition to the forecasted data.
```js
where|holtWintersWithFit(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpOut
Create an HTTP output node that caches the most recent data it has received.
The cached data is available at the given endpoint.
The endpoint is the relative path from the API endpoint of the running task.
For example, if the task endpoint is at `/kapacitor/v1/tasks/<task_id>` and endpoint is
`top10`, then the data can be requested from `/kapacitor/v1/tasks/<task_id>/top10`.
```js
where|httpOut(endpoint string)
```
Returns: [HTTPOutNode](/kapacitor/v1.5/nodes/http_out_node/)
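For example, a sketch that caches the most recent window at the relative endpoint `top10` (the measurement and endpoint names are illustrative):

```js
stream
    |from()
        .measurement('requests')
    |window()
        .period(10s)
        .every(10s)
    // Data becomes available at /kapacitor/v1/tasks/<task_id>/top10
    |httpOut('top10')
```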
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpPost
Create an HTTP POST node that POSTs received data to the provided HTTP endpoint.
HttpPost expects 0 or 1 arguments. If 0 arguments are provided, you must specify an
endpoint property method.
```js
where|httpPost(url ...string)
```
Returns: [HTTPPostNode](/kapacitor/v1.5/nodes/http_post_node/)
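For example, a sketch that mirrors each point to a URL (the URL is illustrative):

```js
// POST each point directly to a URL.
stream
    |from()
        .measurement('cpu')
    |httpPost('http://example.com/mirror')
```

With zero arguments, use the `endpoint` property method instead, e.g. `|httpPost().endpoint('example')`, where `example` refers to an endpoint defined in the Kapacitor configuration.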
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### InfluxDBOut
Create an InfluxDB output node that will store the incoming data into InfluxDB.
```js
where|influxDBOut()
```
Returns: [InfluxDBOutNode](/kapacitor/v1.5/nodes/influx_d_b_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Join
Join this node with other nodes. The data is joined on timestamp.
```js
where|join(others ...Node)
```
Returns: [JoinNode](/kapacitor/v1.5/nodes/join_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### K8sAutoscale
Create a node that can trigger autoscale events for a Kubernetes cluster.
```js
where|k8sAutoscale()
```
Returns: [K8sAutoscaleNode](/kapacitor/v1.5/nodes/k8s_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### KapacitorLoopback
Create a Kapacitor loopback node that will send data back into Kapacitor as a stream.
```js
where|kapacitorLoopback()
```
Returns: [KapacitorLoopbackNode](/kapacitor/v1.5/nodes/kapacitor_loopback_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Last
Select the last point.
```js
where|last(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Log
Create a node that logs all data it receives.
```js
where|log()
```
Returns: [LogNode](/kapacitor/v1.5/nodes/log_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Max
Select the maximum point.
```js
where|max(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mean
Compute the mean of the data.
```js
where|mean(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Median
Compute the median of the data.
> **Note:** This method is not a selector.
If you want the median point, use `.percentile(field, 50.0)`.
```js
where|median(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
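For example, a sketch computing the median over one-minute windows (the measurement and field names are illustrative):

```js
stream
    |from()
        .measurement('cpu')
    |window()
        .period(1m)
        .every(1m)
    |median('usage')
```

To select the actual median point rather than a computed value, use `|percentile('usage', 50.0)` instead.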
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Min
Select the minimum point.
```js
where|min(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mode
Compute the mode of the data.
```js
where|mode(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### MovingAverage
Compute a moving average of the last `window` points.
No points are emitted until the window is full.
```js
where|movingAverage(field string, window int64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
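For example, a sketch averaging each point with the nine preceding points (the measurement and field names are illustrative):

```js
// Emit the average of the current point and the previous 9.
stream
    |from()
        .measurement('cpu')
    |movingAverage('usage', 10)
```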
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Percentile
Select a point at the given percentile. This is a selector function; no interpolation between points is performed.
```js
where|percentile(field string, percentile float64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sample
Create a new node that samples the incoming points or batches.
One point will be emitted every count or duration specified.
```js
where|sample(rate interface{})
```
Returns: [SampleNode](/kapacitor/v1.5/nodes/sample_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Shift
Create a new node that shifts the incoming points or batches in time.
```js
where|shift(shift time.Duration)
```
Returns: [ShiftNode](/kapacitor/v1.5/nodes/shift_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sideload
Create a node that can load data from external sources.
```js
where|sideload()
```
Returns: [SideloadNode](/kapacitor/v1.5/nodes/sideload_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Spread
Compute the difference between `min` and `max` points.
```js
where|spread(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateCount
Create a node that tracks the number of consecutive points in a given state.
```js
where|stateCount(expression ast.LambdaNode)
```
Returns: [StateCountNode](/kapacitor/v1.5/nodes/state_count_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateDuration
Create a node that tracks the duration in a given state.
```js
where|stateDuration(expression ast.LambdaNode)
```
Returns: [StateDurationNode](/kapacitor/v1.5/nodes/state_duration_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stats
Create a new stream of data that contains the internal statistics of the node.
The interval represents how often to emit the statistics based on real time.
This means the interval time is independent of the times of the data points the source node is receiving.
```js
where|stats(interval time.Duration)
```
Returns: [StatsNode](/kapacitor/v1.5/nodes/stats_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stddev
Compute the standard deviation.
```js
where|stddev(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sum
Compute the sum of all values.
```js
where|sum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### SwarmAutoscale
Create a node that can trigger autoscale events for a Docker Swarm cluster.
```js
where|swarmAutoscale()
```
Returns: [SwarmAutoscaleNode](/kapacitor/v1.5/nodes/swarm_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Top
Select the top `num` points for `field` and sort by any extra tags or fields.
```js
where|top(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Union
Perform the union of this node and all other given nodes.
```js
where|union(node ...Node)
```
Returns: [UnionNode](/kapacitor/v1.5/nodes/union_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Where
Create a new node that filters the data stream by a given expression.
```js
where|where(expression ast.LambdaNode)
```
Returns: [WhereNode](/kapacitor/v1.5/nodes/where_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Window
Create a new node that windows the stream by time.
NOTE: Window can only be applied to stream edges.
```js
where|window()
```
Returns: [WindowNode](/kapacitor/v1.5/nodes/window_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
---
title: WindowNode
description: WindowNode caches data within a moving time range.
note: Auto generated by tickdoc
menu:
kapacitor_1_5_ref:
name: WindowNode
identifier: window_node
weight: 340
parent: nodes
---
The `window` node caches data within a moving time range.
The `period` property of `window` defines the time range covered by `window`.
The `every` property of `window` defines the frequency at which the window
is emitted to the next node in the pipeline.
The `align` property of `window` defines how to align the window edges.
(By default, the edges are defined relative to the first data point the `window`
node receives.)
Example:
```js
stream
|window()
.period(10m)
.every(5m)
|httpOut('recent')
```
This example emits the last `10 minute` period every `5 minutes` to the pipeline's `httpOut` node.
Because `every` is less than `period`, each time the window is emitted it contains `5 minutes` of
new data and `5 minutes` of the previous period's data.
> **NOTE:** Because no `align` property is defined, the `window` edge is defined relative to the first data point.
### Constructor
| Chaining Method | Description |
|:---------|:---------|
| **window&nbsp;(&nbsp;)** | Create a new node that windows the stream by time. |
### Property Methods
| Setters | Description |
|:---|:---|
| **[align](#align)&nbsp;(&nbsp;)** | If the `align` property is not used to modify the `window` node, then the window alignment is assumed to start at the time of the first data point it receives. If `align` property is set, the window time edges will be truncated to the `every` property (For example, if a data point's time is 12:06 and the `every` property is `5m` then the data point's window will range from 12:05 to 12:10). |
| **[every](#every)&nbsp;(&nbsp;`value`&nbsp;`time.Duration`)** | How often the current window is emitted into the pipeline. If equal to zero, then every new point will emit the current window. |
| **[everyCount](#everycount)&nbsp;(&nbsp;`value`&nbsp;`int64`)** | EveryCount determines how often the window is emitted based on the count of points. A value of 1 means that every new point will emit the window. |
| **[fillPeriod](#fillperiod)&nbsp;(&nbsp;)** | FillPeriod instructs the WindowNode to wait until the period has elapsed before emitting the first batch. This only applies if the period is greater than the every value. |
| **[period](#period)&nbsp;(&nbsp;`value`&nbsp;`time.Duration`)** | The period, or length in time, of the window. |
| **[periodCount](#periodcount)&nbsp;(&nbsp;`value`&nbsp;`int64`)** | PeriodCount is the number of points per window. |
| **[quiet](#quiet)&nbsp;(&nbsp;)** | Suppress all error logging events from this node. |
### Chaining Methods
[Alert](#alert),
[Barrier](#barrier),
[Bottom](#bottom),
[ChangeDetect](#changedetect),
[Combine](#combine),
[Count](#count),
[CumulativeSum](#cumulativesum),
[Deadman](#deadman),
[Default](#default),
[Delete](#delete),
[Derivative](#derivative),
[Difference](#difference),
[Distinct](#distinct),
[Ec2Autoscale](#ec2autoscale),
[Elapsed](#elapsed),
[Eval](#eval),
[First](#first),
[Flatten](#flatten),
[GroupBy](#groupby),
[HoltWinters](#holtwinters),
[HoltWintersWithFit](#holtwinterswithfit),
[HttpOut](#httpout),
[HttpPost](#httppost),
[InfluxDBOut](#influxdbout),
[Join](#join),
[K8sAutoscale](#k8sautoscale),
[KapacitorLoopback](#kapacitorloopback),
[Last](#last),
[Log](#log),
[Max](#max),
[Mean](#mean),
[Median](#median),
[Min](#min),
[Mode](#mode),
[MovingAverage](#movingaverage),
[Percentile](#percentile),
[Sample](#sample),
[Shift](#shift),
[Sideload](#sideload),
[Spread](#spread),
[StateCount](#statecount),
[StateDuration](#stateduration),
[Stats](#stats),
[Stddev](#stddev),
[Sum](#sum),
[SwarmAutoscale](#swarmautoscale),
[Top](#top),
[Union](#union),
[Where](#where),
[Window](#window)
---
## Properties
Property methods modify state on the calling node.
They do not add another node to the pipeline, and always return a reference to the calling node.
Property methods are marked using the `.` operator.
### Align
Set the `align` property to truncate the window time edges to the `every` property. For example, if a data point's time is 12:06 and the `every` property is `5m`, then the data point's window ranges from 12:05 to 12:10.
If the `align` property isn't used to modify the `window` node, the window alignment starts at the time the first data point is received.
```js
window.align()
```
> Note: When ingesting data at irregular intervals, we recommend using `window.align()` to group data.
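For example, the following sketch aligns a sliding 10-minute window to 5-minute boundaries (the measurement name is illustrative):

```js
stream
    |from()
        .measurement('cpu')
    |window()
        .period(10m)
        .every(5m)
        .align()
```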
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Every
How often the current window is emitted into the pipeline.
If equal to zero, then every new point will emit the current window.
```js
window.every(value time.Duration)
```
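As a sketch, a sliding 10-minute window emitted once per minute (the measurement name is illustrative):

```js
stream
    |from()
        .measurement('cpu')
    |window()
        .period(10m)
        .every(1m)
```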
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### EveryCount
EveryCount determines how often the window is emitted based on the count of points.
A value of 1 means that every new point will emit the window.
```js
window.everyCount(value int64)
```
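For example, a hypothetical count-based window holding the last 100 points and emitted every 10 points:

```js
stream
    |from()
        .measurement('events')
    |window()
        .periodCount(100)
        .everyCount(10)
```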
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### FillPeriod
FillPeriod instructs the [WindowNode](/kapacitor/v1.5/nodes/window_node/) to wait until the period has elapsed before emitting the first batch.
This only applies if the period is greater than the every value.
```js
window.fillPeriod()
```
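As a sketch, this window waits a full 10 minutes before emitting its first batch, then emits every 5 minutes thereafter (the measurement name is illustrative):

```js
stream
    |from()
        .measurement('requests')
    |window()
        .period(10m)
        .every(5m)
        .fillPeriod()
```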
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Period
The period, or length in time, of the window.
```js
window.period(value time.Duration)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### PeriodCount
PeriodCount is the number of points per window.
```js
window.periodCount(value int64)
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Quiet
Suppress all error logging events from this node.
```js
window.quiet()
```
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
## Chaining Methods
Chaining methods create a new node in the pipeline as a child of the calling node.
They do not modify the calling node.
Chaining methods are marked using the `|` operator.
### Alert
Create an alert node, which can trigger alerts.
```js
window|alert()
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Barrier
Create a new Barrier node that emits a BarrierMessage periodically.
One BarrierMessage will be emitted every period duration.
```js
window|barrier()
```
Returns: [BarrierNode](/kapacitor/v1.5/nodes/barrier_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Bottom
Select the bottom `num` points for `field` and sort by any extra tags or fields.
```js
window|bottom(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
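For example, selecting the three points with the lowest value of a hypothetical `usage_idle` field per window, with ties broken by the `host` tag:

```js
stream
    |from()
        .measurement('cpu')
    |window()
        .period(10s)
        .every(10s)
    |bottom(3, 'usage_idle', 'host')
```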
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### ChangeDetect
Create a new node that only emits new points if different from the previous point.
```js
window|changeDetect(field string)
```
Returns: [ChangeDetectNode](/kapacitor/v1.5/nodes/change_detect_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Combine
Combine this node with itself. The data is combined on timestamp.
```js
window|combine(expressions ...ast.LambdaNode)
```
Returns: [CombineNode](/kapacitor/v1.5/nodes/combine_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Count
Count the number of points.
```js
window|count(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### CumulativeSum
Compute a cumulative sum of each point that is received.
A point is emitted for every point collected.
```js
window|cumulativeSum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Deadman
Helper function for creating an alert on low throughput, a.k.a. deadman's switch.
- Threshold: trigger alert if throughput drops below threshold in points/interval.
- Interval: how often to check the throughput.
- Expressions: optional list of expressions to also evaluate. Useful for time of day alerting.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
//Do normal processing of data
data...
```
The above is equivalent to this example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|stats(10s)
.align()
|derivative('emitted')
.unit(10s)
.nonNegative()
|alert()
.id('node \'stream0\' in task \'{{ .TaskName }}\'')
.message('{{ .ID }} is {{ if eq .Level "OK" }}alive{{ else }}dead{{ end }}: {{ index .Fields "emitted" | printf "%0.3f" }} points/10s.')
.crit(lambda: "emitted" <= 100.0)
//Do normal processing of data
data...
```
The `id` and `message` alert properties can be configured globally via the 'deadman' configuration section.
Since the [AlertNode](/kapacitor/v1.5/nodes/alert_node/) is the last piece it can be further modified as usual.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
data
|deadman(100.0, 10s)
.slack()
.channel('#dead_tasks')
//Do normal processing of data
data...
```
You can specify additional lambda expressions to further constrain when the deadman's switch is triggered.
Example:
```js
var data = stream
|from()...
// Trigger critical alert if the throughput drops below 100 points per 10s and checked every 10s.
// Only trigger the alert if the time of day is between 8am-5pm.
data
|deadman(100.0, 10s, lambda: hour("time") >= 8 AND hour("time") <= 17)
//Do normal processing of data
data...
```
```js
window|deadman(threshold float64, interval time.Duration, expr ...ast.LambdaNode)
```
Returns: [AlertNode](/kapacitor/v1.5/nodes/alert_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Default
Create a node that can set defaults for missing tags or fields.
```js
window|default()
```
Returns: [DefaultNode](/kapacitor/v1.5/nodes/default_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Delete
Create a node that can delete tags or fields.
```js
window|delete()
```
Returns: [DeleteNode](/kapacitor/v1.5/nodes/delete_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Derivative
Create a new node that computes the derivative of adjacent points.
```js
window|derivative(field string)
```
Returns: [DerivativeNode](/kapacitor/v1.5/nodes/derivative_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Difference
Compute the difference between points independent of elapsed time.
```js
window|difference(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Distinct
Produce a batch containing only the distinct points.
```js
window|distinct(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Ec2Autoscale
Create a node that can trigger autoscale events for an EC2 autoscale group.
```js
window|ec2Autoscale()
```
Returns: [Ec2AutoscaleNode](/kapacitor/v1.5/nodes/ec2_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Elapsed
Compute the elapsed time between points.
```js
window|elapsed(field string, unit time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Eval
Create an eval node that evaluates the given transformation functions for each data point.
A list of expressions may be provided and will be evaluated in the order they are given.
The results are available to later expressions.
```js
window|eval(expressions ...ast.LambdaNode)
```
Returns: [EvalNode](/kapacitor/v1.5/nodes/eval_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### First
Select the first point.
```js
window|first(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Flatten
Flatten points with similar times into a single point.
```js
window|flatten()
```
Returns: [FlattenNode](/kapacitor/v1.5/nodes/flatten_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### GroupBy
Group the data by a set of tags.
Pass the literal `*` to group by all dimensions.
Example:
```js
|groupBy(*)
```
```js
window|groupBy(tag ...interface{})
```
Returns: [GroupByNode](/kapacitor/v1.5/nodes/group_by_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWinters
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
```js
window|holtWinters(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HoltWintersWithFit
Compute the [Holt-Winters](https://docs.influxdata.com/influxdb/latest/query_language/functions/#holt-winters) forecast of a data set.
This method also outputs all the points used to fit the data in addition to the forecasted data.
```js
window|holtWintersWithFit(field string, h int64, m int64, interval time.Duration)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpOut
Create an HTTP output node that caches the most recent data it has received.
The cached data is available at the given endpoint.
The endpoint is the relative path from the API endpoint of the running task.
For example, if the task endpoint is at `/kapacitor/v1/tasks/<task_id>` and endpoint is
`top10`, then the data can be requested from `/kapacitor/v1/tasks/<task_id>/top10`.
```js
window|httpOut(endpoint string)
```
Returns: [HTTPOutNode](/kapacitor/v1.5/nodes/http_out_node/)
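As a sketch, caching the top ten points of a hypothetical `usage_user` field at the relative endpoint `top10`:

```js
stream
    |from()
        .measurement('cpu')
    |window()
        .period(10s)
        .every(10s)
    |top(10, 'usage_user')
    |httpOut('top10')
```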
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### HttpPost
Create an HTTP POST node that POSTs received data to the provided HTTP endpoint.
HttpPost expects zero or one arguments. If no argument is provided, you must specify an
endpoint property method.
```js
window|httpPost(url ...string)
```
Returns: [HTTPPostNode](/kapacitor/v1.5/nodes/http_post_node/)
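For example, posting each windowed batch to a hypothetical collector URL:

```js
stream
    |from()
        .measurement('cpu')
    |window()
        .period(10s)
        .every(10s)
    |httpPost('https://example.com/kapacitor/ingest')
```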
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### InfluxDBOut
Create an InfluxDB output node that stores the incoming data in InfluxDB.
```js
window|influxDBOut()
```
Returns: [InfluxDBOutNode](/kapacitor/v1.5/nodes/influx_d_b_out_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Join
Join this node with other nodes. The data is joined on timestamp.
```js
window|join(others ...Node)
```
Returns: [JoinNode](/kapacitor/v1.5/nodes/join_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### K8sAutoscale
Create a node that can trigger autoscale events for a Kubernetes cluster.
```js
window|k8sAutoscale()
```
Returns: [K8sAutoscaleNode](/kapacitor/v1.5/nodes/k8s_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### KapacitorLoopback
Create a Kapacitor loopback node that sends data back into Kapacitor as a stream.
```js
window|kapacitorLoopback()
```
Returns: [KapacitorLoopbackNode](/kapacitor/v1.5/nodes/kapacitor_loopback_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Last
Select the last point.
```js
window|last(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Log
Create a node that logs all data it receives.
```js
window|log()
```
Returns: [LogNode](/kapacitor/v1.5/nodes/log_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Max
Select the maximum point.
```js
window|max(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mean
Compute the mean of the data.
```js
window|mean(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Median
Compute the median of the data.
> **Note:** This method is not a selector.
If you want the median point, use `.percentile(field, 50.0)`.
```js
window|median(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Min
Select the minimum point.
```js
window|min(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Mode
Compute the mode of the data.
```js
window|mode(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### MovingAverage
Compute a moving average of the last window points.
No points are emitted until the window is full.
```js
window|movingAverage(field string, window int64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Percentile
Select a point at the given percentile. This is a selector function; no interpolation between points is performed.
```js
window|percentile(field string, percentile float64)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
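As a sketch, alerting on the 90th-percentile point of a hypothetical `value` field (the field name and threshold are illustrative):

```js
stream
    |from()
        .measurement('response_times')
    |window()
        .period(10s)
        .every(10s)
    |percentile('value', 90.0)
        .as('p90')
    |alert()
        .crit(lambda: "p90" > 500.0)
```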
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sample
Create a new node that samples the incoming points or batches.
One point will be emitted every count or duration specified.
```js
window|sample(rate interface{})
```
Returns: [SampleNode](/kapacitor/v1.5/nodes/sample_node/)
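For example, a count-based sample:

```js
// keep every tenth point
window|sample(10)
```

A duration also works, e.g. `window|sample(1s)` to keep at most one point per second.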
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Shift
Create a new node that shifts the incoming points or batches in time.
```js
window|shift(shift time.Duration)
```
Returns: [ShiftNode](/kapacitor/v1.5/nodes/shift_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sideload
Create a node that can load data from external sources.
```js
window|sideload()
```
Returns: [SideloadNode](/kapacitor/v1.5/nodes/sideload_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Spread
Compute the difference between `min` and `max` points.
```js
window|spread(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateCount
Create a node that tracks the number of consecutive points in a given state.
```js
window|stateCount(expression ast.LambdaNode)
```
Returns: [StateCountNode](/kapacitor/v1.5/nodes/state_count_node/)
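For example, counting consecutive points where a hypothetical `usage_idle` field is below 10:

```js
stream
    |from()
        .measurement('cpu')
    |stateCount(lambda: "usage_idle" < 10.0)
        .as('low_idle_count')
```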
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### StateDuration
Create a node that tracks duration in a given state.
```js
window|stateDuration(expression ast.LambdaNode)
```
Returns: [StateDurationNode](/kapacitor/v1.5/nodes/state_duration_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stats
Create a new stream of data that contains the internal statistics of the node.
The interval represents how often to emit the statistics based on real time.
This means the interval time is independent of the times of the data points the source node is receiving.
```js
window|stats(interval time.Duration)
```
Returns: [StatsNode](/kapacitor/v1.5/nodes/stats_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Stddev
Compute the standard deviation.
```js
window|stddev(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Sum
Compute the sum of all values.
```js
window|sum(field string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### SwarmAutoscale
Create a node that can trigger autoscale events for a Docker Swarm cluster.
```js
window|swarmAutoscale()
```
Returns: [SwarmAutoscaleNode](/kapacitor/v1.5/nodes/swarm_autoscale_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Top
Select the top `num` points for `field` and sort by any extra tags or fields.
```js
window|top(num int64, field string, fieldsAndTags ...string)
```
Returns: [InfluxQLNode](/kapacitor/v1.5/nodes/influx_q_l_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Union
Perform the union of this node and all other given nodes.
```js
window|union(node ...Node)
```
Returns: [UnionNode](/kapacitor/v1.5/nodes/union_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Where
Create a new node that filters the data stream by a given expression.
```js
window|where(expression ast.LambdaNode)
```
Returns: [WhereNode](/kapacitor/v1.5/nodes/where_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>
### Window
Create a new node that windows the stream by time.
NOTE: Window can only be applied to stream edges.
```js
window|window()
```
Returns: [WindowNode](/kapacitor/v1.5/nodes/window_node/)
<a class="top" href="javascript:document.getElementsByClassName('article-heading')[0].scrollIntoView();" title="top"><span class="icon arrow-up"></span></a>

---
title: TICKscript specification
menu:
kapacitor_1_5_ref:
name: TICKscript specification
identifier: specification
weight: 10
---
Introduction
------------
The TICKscript language is an invocation chaining language used to define data processing pipelines.
Notation
-------
The syntax is specified using Extended Backus-Naur Form ("EBNF").
EBNF is the same notation used in the [Go](http://golang.org/) programming language specification, which can be found [here](https://golang.org/ref/spec).
```
Production = production_name "=" [ Expression ] "." .
Expression = Alternative { "|" Alternative } .
Alternative = Term { Term } .
Term = production_name | token [ "…" token ] | Group | Option | Repetition .
Group = "(" Expression ")" .
Option = "[" Expression "]" .
Repetition = "{" Expression "}" .
```
Notation operators in order of increasing precedence:
```
| alternation
() grouping
[] option (0 or 1 times)
{} repetition (0 to n times)
```
Grammar
-------
The following is the EBNF grammar definition of TICKscript.
```
unicode_char = (* an arbitrary Unicode code point except newline *) .
digit = "0" … "9" .
ascii_letter = "A" … "Z" | "a" … "z" .
letter = ascii_letter | "_" .
identifier = ( letter ) { letter | digit } .
boolean_lit = "TRUE" | "FALSE" .
int_lit = "1" … "9" { digit } .
number_lit = digit { digit } { "." {digit} } .
duration_lit = int_lit duration_unit .
duration_unit = "u" | "µ" | "ms" | "s" | "m" | "h" | "d" | "w" .
string_lit = `'` { unicode_char } `'` .
star_lit = "*" .
regex_lit = `/` { unicode_char } `/` .
operator_lit = "+" | "-" | "*" | "/" | "==" | "!=" |
"<" | "<=" | ">" | ">=" | "=~" | "!~" |
"AND" | "OR" .
Program = Statement { Statement } .
Statement = Declaration | Expression .
Declaration = "var" identifier "=" Expression .
Expression = identifier { Chain } | Function { Chain } | Primary .
Chain = "@" Function | "|" Function { Chain } | "." Function { Chain} | "." identifier { Chain } .
Function = identifier "(" Parameters ")" .
Parameters = { Parameter "," } [ Parameter ] .
Parameter = Expression | "lambda:" LambdaExpr | Primary .
Primary = "(" LambdaExpr ")" | number_lit | string_lit |
boolean_lit | duration_lit | regex_lit | star_lit |
LFunc | identifier | Reference | "-" Primary | "!" Primary .
Reference = `"` { unicode_char } `"` .
LambdaExpr = Primary operator_lit Primary .
LFunc = identifier "(" LParameters ")" .
LParameters = { LParameter "," } [ LParameter ] .
LParameter = LambdaExpr | Primary .
```
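To relate the grammar to a concrete script, the following short TICKscript is annotated with the productions it exercises (the measurement and field names are illustrative):

```js
// Declaration = "var" identifier "=" Expression
var errors = stream
    // Chain = "|" Function
    |from()
        // Chain = "." Function (a property method)
        .measurement('errors')
    // Parameter = "lambda:" LambdaExpr, where
    // LambdaExpr = Primary operator_lit Primary
    |where(lambda: "status" >= 500)
```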

---
title: TICKscript language reference
menu:
kapacitor_1_5_ref:
name: TICKscript language reference
identifier: tick
weight: 40
---
## What is in this section?
This section provides introductory information on working with TICKscript.
* [Introduction](/kapacitor/v1.5/tick/introduction/) - this document presents the fundamental concepts of working with TICKscript in Kapacitor and Chronograf.
* [TICKscript Syntax](/kapacitor/v1.5/tick/syntax/) - this covers the essentials of how TICKscript statements and structures are organized.
* [Lambda expressions](/kapacitor/v1.5/tick/expr/) - this section provides essential information about working with these argument types, which are commonly provided to TICKscript nodes.
* [TICKscript specification](/kapacitor/v1.5/reference/spec/) - (in the reference section) this introduces the specification defining TICKscript.
Outside of this section, the following articles may also be of interest.
* [Getting started with Kapacitor](/kapacitor/v1.5/introduction/getting-started/) - an introduction to Kapacitor, which presents TICKscript basics.
* [Node overview](/kapacitor/v1.5/nodes/) - a catalog of the types of nodes available in TICKscript.
* [Guides](/kapacitor/v1.5/guides/) - a collection of intermediate to advanced solutions using the TICKscript language.
<br/>

---
title: Lambda expressions
menu:
kapacitor_1_5_ref:
identifier: expr
weight: 5
parent: tick
---
# Overview
TICKscript uses lambda expressions to define transformations on data points as
well as define Boolean conditions that act as filters. Lambda expressions wrap
mathematical operations, Boolean operations, internal function calls or a
combination of all three. TICKscript tries to be similar to InfluxQL in that
most expressions that you would use in an InfluxQL `WHERE` clause will work as
expressions in TICKscript, but with its own syntax:
* All field or tag identifiers must be double quoted.
* The comparison operator for equality is `==` not `=`.
All lambda expressions in TICKscript begin with the `lambda:` keyword.
```js
.where(lambda: "host" == 'server001.example.com')
```
In some nodes the results of a lambda expression can be captured into a new
field as a named result using the property setter `.as()`.
In this way they can be used in other nodes further down the pipeline.
<!--
Stateful
--------
-->
The internal functions of lambda expressions can be either stateless or
stateful. Stateful means that each time the function is evaluated the internal
state can change and will persist until the next evaluation.
<!-- This may seem odd as part of an expression language but it has a powerful use
case. Within the language a function can be defined that is essentially an
on-line/streaming algorithm and with each call the function state is updated. -->
For example the built-in function `sigma` calculates a running mean and standard
deviation and returns the number of standard deviations the current data point
is away from the mean.
**Example 1 &ndash; the sigma function**
```js
sigma("value") > 3.0
```
Each time that the expression is evaluated it updates the running statistics and
then returns the deviation. The simple expression in Example 1 evaluates to
`false` while the stream of data points it has received remains within `3.0`
standard deviations of the running mean. As soon as a value is processed that
is more than `3.0` standard deviations from the mean it evaluates to `true`.
Such an expression can be used inside of a TICKscript to define powerful
alerts, as illustrated in Example 2 below.
**Example 2 &ndash; TICKscript with lambda expression**
```js
stream
|from()
...
|alert()
// use an expression to define when an alert should go critical.
.crit(lambda: sigma("value") > 3.0)
```
**Note on inadvertent type casting**
Beware that numerical values declared in the TICKscript follow the parsing rules
for literals introduced in the
[Syntax](/kapacitor/v1.5/tick/syntax/#literal-values) document. They may not be
of a suitable type for the function or operation in which they will be used.
Numerical values that include a decimal will be interpreted as floats.
Numerical values without a decimal will be interpreted as integers. When
integers and floats are used within the same expression the integer values need
to use the `float()` type conversion function if a float result is desired.
Failure to observe this rule can yield unexpected results. For example, when
using a lambda expression to calculate a percentage from two integer fields, it
might be assumed that a subset field can simply be divided by the total field
(e.g. `subset/total * 100`). Such an integer-by-integer division, however,
results in the integer value 0 whenever the subset is smaller than the total,
and multiplying that result by the integer literal `100` still yields 0.
Casting the integer values to float produces a valid ratio in the range
between 0 and 1, and multiplying by the float literal `100.0` then yields a
valid percentage. Correctly written, such an operation should look like this:
`eval(lambda: float("total_error_responses")/float("total_responses") * 100.0)`.
If an error of the type `E! mismatched type to binary operator...` appears in
the logs, check that the values on both sides of the operator are of the same
type, and that it is the desired type.
In short, to ensure that the type of a field value is correct, use the built-in
type conversion functions (see [below](#above-header-type-conversion)).
# Built-in functions
### Stateful functions
##### Count
Count takes no arguments but returns the number of times the expression has been
evaluated.
```js
count() int64
```
##### Sigma
Computes the number of standard deviations a given value is away from the
running mean. Each time the expression is evaluated the running mean and
standard deviation are updated.
```js
sigma(value float64) float64
```
##### Spread
Computes the running range of all values passed into it. The range is the
difference between the maximum and minimum values received.
```js
spread(value float64) float64
```
<a id="above-header-type-conversion"></a>
### Stateless functions
#### Type conversion functions
##### Bool
Converts a string into a Boolean via Golang's
[strconv.ParseBool](https://golang.org/pkg/strconv/#ParseBool) function. Numeric
types can also be converted to a Boolean, where 0 -> false and 1 -> true.
```js
bool(value) bool
```
##### Int
Converts a string or float64 into an int64 via Golang's
[strconv.ParseInt](https://golang.org/pkg/strconv/#ParseInt) or simple
`int64()` coercion. Strings are assumed to be decimal numbers. Durations are
converted into an int64 with nanosecond units. A Boolean is converted to an
int64 where false -> 0 and true -> 1.
```js
int(value) int64
```
##### Float
Converts a string or int64 into a float64 via Golang's
[strconv.ParseFloat](https://golang.org/pkg/strconv/#ParseFloat) or simple
`float64()` coercion.
A Boolean is converted to a float64 where false -> 0.0 and true -> 1.0.
```js
float(value) float64
```
##### String
Converts a bool, int64 or float64 into a string via Golang's
[strconv.Format*](https://golang.org/pkg/strconv/#FormatBool) functions.
Durations are converted to a string representation of the duration.
```js
string(value) string
```
##### Duration
Converts an int64 or a float64 into a duration, assuming the unit specified as the second argument.
Strings are converted to durations using the same form as duration literals in TICKscript.
```js
duration(value int64|float64, unit duration) duration
duration(value string) duration
```
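For example, an integer field holding a latency in milliseconds can be converted to a duration for comparison against a duration literal. A sketch (the field name is illustrative):

```js
// "response_time_ms" is assumed to be an integer field counted in
// milliseconds; duration() converts it so it can be compared
// against the duration literal 500ms.
|where(lambda: duration("response_time_ms", 1ms) > 500ms)
```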
#### Existence
##### IsPresent
Returns a Boolean value based on whether the specified field or tag key is present.
Useful for filtering out data that is missing the specified field or tag.
```js
|where(lambda: isPresent("myfield"))
```
This returns `TRUE` if `myfield` is present in the data point and `FALSE` otherwise.
#### Time functions
##### The `time` field
Within each expression the `time` field contains the time of the current data point.
The following functions can be used on the `time` field.
Each function returns an int64.
| Function | Description |
| ---------- | ------------- |
| `unixNano(t time) int64` | the number of nanoseconds elapsed since January 1, 1970 UTC (Unix time) |
| `minute(t time) int64` | the minute within the hour: range [0,59] |
| `hour(t time) int64` | the hour within the day: range [0,23] |
| `weekday(t time) int64` | the weekday within the week: range [0,6], 0 is Sunday |
| `day(t time) int64` | the day within the month: range [1,31] |
| `month(t time) int64` | the month within the year: range [1,12] |
| `year(t time) int64` | the year |
Example usage:
```js
lambda: hour("time") >= 9 AND hour("time") < 19
```
The above expression evaluates to `true` if the hour of the day for the data
point falls between 0900 hours (inclusive) and 1900 hours (exclusive).
##### Now
Returns the current time.
```js
now() time
```
Example usage:
```js
lambda: "expiration" < unixNano(now())
```
#### Math functions
The following mathematical functions are available.
Each function is implemented via the equivalent Go function.
| Function | Description |
| ---------- | ------------- |
| [abs(x float64) float64](https://golang.org/pkg/math/#Abs) | Abs returns the absolute value of x. |
| [acos(x float64) float64](https://golang.org/pkg/math/#Acos) | Acos returns the arccosine, in radians, of x. |
| [acosh(x float64) float64](https://golang.org/pkg/math/#Acosh) | Acosh returns the inverse hyperbolic cosine of x. |
| [asin(x float64) float64](https://golang.org/pkg/math/#Asin) | Asin returns the arcsine, in radians, of x. |
| [asinh(x float64) float64](https://golang.org/pkg/math/#Asinh) | Asinh returns the inverse hyperbolic sine of x. |
| [atan(x float64) float64](https://golang.org/pkg/math/#Atan) | Atan returns the arctangent, in radians, of x. |
| [atan2(y, x float64) float64](https://golang.org/pkg/math/#Atan2) | Atan2 returns the arc tangent of y/x, using the signs of the two to determine the quadrant of the return value. |
| [atanh(x float64) float64](https://golang.org/pkg/math/#Atanh) | Atanh returns the inverse hyperbolic tangent of x. |
| [cbrt(x float64) float64](https://golang.org/pkg/math/#Cbrt) | Cbrt returns the cube root of x. |
| [ceil(x float64) float64](https://golang.org/pkg/math/#Ceil) | Ceil returns the least integer value greater than or equal to x. |
| [cos(x float64) float64](https://golang.org/pkg/math/#Cos) | Cos returns the cosine of the radian argument x. |
| [cosh(x float64) float64](https://golang.org/pkg/math/#Cosh) | Cosh returns the hyperbolic cosine of x. |
| [erf(x float64) float64](https://golang.org/pkg/math/#Erf) | Erf returns the error function of x. |
| [erfc(x float64) float64](https://golang.org/pkg/math/#Erfc) | Erfc returns the complementary error function of x. |
| [exp(x float64) float64](https://golang.org/pkg/math/#Exp) | Exp returns e**x, the base-e exponential of x. |
| [exp2(x float64) float64](https://golang.org/pkg/math/#Exp2) | Exp2 returns 2**x, the base-2 exponential of x. |
| [expm1(x float64) float64](https://golang.org/pkg/math/#Expm1) | Expm1 returns e**x - 1, the base-e exponential of x minus 1. It is more accurate than Exp(x) - 1 when x is near zero. |
| [floor(x float64) float64](https://golang.org/pkg/math/#Floor) | Floor returns the greatest integer value less than or equal to x. |
| [gamma(x float64) float64](https://golang.org/pkg/math/#Gamma) | Gamma returns the Gamma function of x. |
| [hypot(p, q float64) float64](https://golang.org/pkg/math/#Hypot) | Hypot returns Sqrt(p*p + q*q), taking care to avoid unnecessary overflow and underflow. |
| [j0(x float64) float64](https://golang.org/pkg/math/#J0) | J0 returns the order-zero Bessel function of the first kind. |
| [j1(x float64) float64](https://golang.org/pkg/math/#J1) | J1 returns the order-one Bessel function of the first kind. |
| [jn(n int64, x float64) float64](https://golang.org/pkg/math/#Jn) | Jn returns the order-n Bessel function of the first kind. |
| [log(x float64) float64](https://golang.org/pkg/math/#Log) | Log returns the natural logarithm of x. |
| [log10(x float64) float64](https://golang.org/pkg/math/#Log10) | Log10 returns the decimal logarithm of x. |
| [log1p(x float64) float64](https://golang.org/pkg/math/#Log1p) | Log1p returns the natural logarithm of 1 plus its argument x. It is more accurate than Log(1 + x) when x is near zero. |
| [log2(x float64) float64](https://golang.org/pkg/math/#Log2) | Log2 returns the binary logarithm of x. |
| [logb(x float64) float64](https://golang.org/pkg/math/#Logb) | Logb returns the binary exponent of x. |
| [max(x, y float64) float64](https://golang.org/pkg/math/#Max) | Max returns the larger of x or y. |
| [min(x, y float64) float64](https://golang.org/pkg/math/#Min) | Min returns the smaller of x or y. |
| [mod(x, y float64) float64](https://golang.org/pkg/math/#Mod) | Mod returns the floating-point remainder of x/y. The magnitude of the result is less than y and its sign agrees with that of x. |
| [pow(x, y float64) float64](https://golang.org/pkg/math/#Pow) | Pow returns x**y, the base-x exponential of y. |
| [pow10(x int64) float64](https://golang.org/pkg/math/#Pow10) | Pow10 returns 10**x, the base-10 exponential of x. |
| [sin(x float64) float64](https://golang.org/pkg/math/#Sin) | Sin returns the sine of the radian argument x. |
| [sinh(x float64) float64](https://golang.org/pkg/math/#Sinh) | Sinh returns the hyperbolic sine of x. |
| [sqrt(x float64) float64](https://golang.org/pkg/math/#Sqrt) | Sqrt returns the square root of x. |
| [tan(x float64) float64](https://golang.org/pkg/math/#Tan) | Tan returns the tangent of the radian argument x. |
| [tanh(x float64) float64](https://golang.org/pkg/math/#Tanh) | Tanh returns the hyperbolic tangent of x. |
| [trunc(x float64) float64](https://golang.org/pkg/math/#Trunc) | Trunc returns the integer value of x. |
| [y0(x float64) float64](https://golang.org/pkg/math/#Y0) | Y0 returns the order-zero Bessel function of the second kind. |
| [y1(x float64) float64](https://golang.org/pkg/math/#Y1) | Y1 returns the order-one Bessel function of the second kind. |
| [yn(n int64, x float64) float64](https://golang.org/pkg/math/#Yn) | Yn returns the order-n Bessel function of the second kind. |
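These functions can be used anywhere a lambda expression is accepted, for example to derive a new field with an `eval` node. A sketch (the field names are illustrative):

```js
// Derive a base-10 logarithm of an integer field; the field is first
// cast to float because log10() expects a float64 argument.
|eval(lambda: log10(float("bytes_received")))
    .as('log_bytes')
    .keep()
```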
#### String functions
The following string manipulation functions are available.
Each function is implemented via the equivalent Go function.
| Function | Description |
| ---------- | ------------- |
| [strContains(s,&nbsp;substr&nbsp;string)&nbsp;bool](https://golang.org/pkg/strings/#Contains) | StrContains reports whether substr is within s. |
| [strContainsAny(s,&nbsp;chars&nbsp;string)&nbsp;bool](https://golang.org/pkg/strings/#ContainsAny) | StrContainsAny reports whether any Unicode code points in chars are within s. |
| [strCount(s,&nbsp;sep&nbsp;string)&nbsp;int64](https://golang.org/pkg/strings/#Count) | StrCount counts the number of non-overlapping instances of sep in s. If sep is an empty string, Count returns 1 + the number of Unicode code points in s. |
| [strHasPrefix(s,&nbsp;prefix&nbsp;string)&nbsp;bool](https://golang.org/pkg/strings/#HasPrefix) | StrHasPrefix tests whether the string s begins with prefix. |
| [strHasSuffix(s,&nbsp;suffix&nbsp;string)&nbsp;bool](https://golang.org/pkg/strings/#HasSuffix) | StrHasSuffix tests whether the string s ends with suffix. |
| [strIndex(s,&nbsp;sep&nbsp;string)&nbsp;int64](https://golang.org/pkg/strings/#Index) | StrIndex returns the index of the first instance of sep in s, or -1 if sep is not present in s. |
| [strIndexAny(s,&nbsp;chars&nbsp;string)&nbsp;int64](https://golang.org/pkg/strings/#IndexAny) | StrIndexAny returns the index of the first instance of any Unicode code point from chars in s, or -1 if no Unicode code point from chars is present in s. |
| [strLastIndex(s,&nbsp;sep&nbsp;string)&nbsp;int64](https://golang.org/pkg/strings/#LastIndex) | StrLastIndex returns the index of the last instance of sep in s, or -1 if sep is not present in s. |
| [strLastIndexAny(s,&nbsp;chars&nbsp;string)&nbsp;int64](https://golang.org/pkg/strings/#LastIndexAny) | StrLastIndexAny returns the index of the last instance of any Unicode code point from chars in s, or -1 if no Unicode code point from chars is present in s. |
| [strLength(s string) int64](https://golang.org/ref/spec#Length_and_capacity) | StrLength returns the length of the string. |
| [strReplace(s,&nbsp;old,&nbsp;new&nbsp;string,&nbsp;n&nbsp;int64)&nbsp;string](https://golang.org/pkg/strings/#Replace) | StrReplace returns a copy of the string s with the first n non-overlapping instances of old replaced by new. |
| [strSubstring(s&nbsp;string,&nbsp;start,&nbsp;stop&nbsp;int64)&nbsp;string](https://golang.org/ref/spec#Index_expressions) | StrSubstring returns a substring based on the given indexes, strSubstring(str, start, stop) is equivalent to str[start:stop] in Go. |
| [strToLower(s&nbsp;string)&nbsp;string](https://golang.org/pkg/strings/#ToLower) | StrToLower returns a copy of the string s with all Unicode letters mapped to their lower case. |
| [strToUpper(s&nbsp;string)&nbsp;string](https://golang.org/pkg/strings/#ToUpper) | StrToUpper returns a copy of the string s with all Unicode letters mapped to their upper case. |
| [strTrim(s,&nbsp;cutset&nbsp;string)&nbsp;string](https://golang.org/pkg/strings/#Trim) | StrTrim returns a slice of the string s with all leading and trailing Unicode code points contained in cutset removed. |
| [strTrimLeft(s,&nbsp;cutset&nbsp;string)&nbsp;string](https://golang.org/pkg/strings/#TrimLeft) | StrTrimLeft returns a slice of the string s with all leading Unicode code points contained in cutset removed. |
| [strTrimPrefix(s,&nbsp;prefix&nbsp;string)&nbsp;string](https://golang.org/pkg/strings/#TrimPrefix) | StrTrimPrefix returns s without the provided leading prefix string. If s doesn't start with prefix, s is returned unchanged. |
| [strTrimRight(s,&nbsp;cutset&nbsp;string)&nbsp;string](https://golang.org/pkg/strings/#TrimRight) | StrTrimRight returns a slice of the string s, with all trailing Unicode code points contained in cutset removed. |
| [strTrimSpace(s&nbsp;string)&nbsp;string](https://golang.org/pkg/strings/#TrimSpace) | StrTrimSpace returns a slice of the string s, with all leading and trailing white space removed, as defined by Unicode. |
| [strTrimSuffix(s,&nbsp;suffix&nbsp;string)&nbsp;string)](https://golang.org/pkg/strings/#TrimSuffix) | StrTrimSuffix returns s without the provided trailing suffix string. If s doesn't end with suffix, s is returned unchanged. |
| [regexReplace(r&nbsp;regex,&nbsp;s,&nbsp;pattern&nbsp;string)&nbsp;string](https://golang.org/pkg/regexp/#Regexp.ReplaceAllString) | RegexReplace replaces matches of the regular expression in the input string with the output string. For example regexReplace(/a(b*)c/, 'abbbc', 'group is $1') -> 'group is bbb'. The original string is returned if no matches are found. |
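String functions are useful, for example, when filtering on tag values. A minimal sketch (the tag name and prefix are illustrative):

```js
// Keep only points from hosts whose name starts with 'web-'.
|where(lambda: strHasPrefix("host", 'web-'))
```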
#### Human string functions
##### HumanBytes
Converts an int64 or a float64, in units of bytes, into a human-readable string representing the number of bytes.
```js
humanBytes(value) string
```
#### Conditional functions
##### If
Returns one of its operands depending on the value of the first argument.
The second and third arguments must return the same type.
Example:
```js
|eval(lambda: if("field" > threshold AND "field" != 0, 'true', 'false'))
.as('value')
```
The value of the field `value` in the above example will be the string `true` or `false`, depending on the condition passed as the first argument.
The `if` function's return type is the same type as its second and third arguments.
```js
if(condition, true expression, false expression)
```

---
title: Introducing the TICKscript language
menu:
kapacitor_1_5_ref:
name: Introduction
identifier: tick_intro
parent: tick
weight: 1
---
# Contents
* [Overview](#overview)
* [Nodes](#nodes)
* [Pipelines](#pipelines)
* [Basic examples](#basic-examples)
* [Where to next](#where-to-next)
# Overview
Kapacitor uses a Domain Specific Language (DSL) named **TICKscript** to define **tasks** that extract, transform, and load data, and that also track arbitrary changes and detect events within data. One common task is defining alerts. TICKscript is used in `.tick` files to define **pipelines** for processing data. The TICKscript language is designed to chain together the invocation of data processing operations defined in **nodes**. The Kapacitor [Getting Started](/kapacitor/v1.5/introduction/getting-started/) guide introduces TICKscript basics in the context of that product. For a better understanding of what follows, it is recommended that the reader review that document first.
Each script has a flat scope and each variable in the scope can reference a literal value, such as a string, an integer or a float value, or a node instance with methods that can then be called.
These methods come in two forms.
* **Property methods** &ndash; A property method modifies the internal properties of a node and returns a reference to the same node. Property methods are called using dot ('.') notation.
* **Chaining methods** &ndash; A chaining method creates a new child node and returns a reference to it. Chaining methods are called using pipe ('|') notation.
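The two notations can be seen together in a fragment like the following (a minimal sketch; the measurement and field names are illustrative):

```js
stream
    |from()                     // chaining method: creates a new from() node
        .measurement('cpu')     // property method: configures the from() node
        .where(lambda: "host" == 'server001')
    |alert()                    // chaining method: creates a new alert() node
        .crit(lambda: "usage_idle" < 10.0)  // property method: configures alert()
```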
# Nodes
In TICKscript the fundamental type is the **node**. A node has **properties** and, as mentioned, chaining methods. A new node can be created from a parent or sibling node using a chaining method of that parent or sibling node. For each **node type** the signature of this method will be the same, regardless of the parent or sibling node type. The chaining method can accept zero or more arguments used to initialize internal properties of the new node instance. Common node types are `batch`, `query`, `stream`, `from`, `eval` and `alert`, though there are dozens of others.
The top-level nodes `stream` and `batch`, which establish the processing type of the task to be defined, are simply declared and take no arguments. Nodes with more complex sets of properties rely on property methods for their internal configuration.
Each node type **wants** data in either batch or stream mode. Some can handle both. Each node type also **provides** data in batch or stream mode. Some can provide both. This _wants/provides_ pattern is key to understanding how nodes work together. Taking into consideration the _wants/provides_ pattern, four general node use cases can be defined:
* _want_ a batch and _provide_ a stream - for example, when computing an average or a minimum or a maximum.
* _want_ a batch and _provide_ a batch - for example, when identifying outliers in a batch of data.
* _want_ a stream and _provide_ a batch - for example, when grouping together similar data points.
* _want_ a stream and _provide_ a stream - for example, when applying a mathematical function like a logarithm to a value in a point.
The [node reference documentation](/kapacitor/v1.5/nodes/) lists the property and chaining methods of each node along with examples and descriptions.
# Pipelines
Every TICKscript is broken into one or more **pipelines**. Pipelines are chains of nodes logically organized along edges that cannot cycle back to earlier nodes in the chain. The nodes within a pipeline can be assigned to variables. This allows the results of different pipelines to be combined using, for example, a `join` or a `union` node. It also allows for sections of the pipeline to be broken into reasonably understandable self-descriptive functional units. In a simple TICKscript there may be no need to assign pipeline nodes to variables. The initial node in the pipeline sets the processing type for the Kapacitor task they define. These can be either `stream` or `batch`. These two types of pipelines cannot be combined.
### Stream or batch?
With `stream` processing, datapoints are read, as in a classic data stream, point by point as they arrive. With `stream` Kapacitor subscribes to all writes of interest in InfluxDB. With `batch` processing a frame of 'historic' data is read from the database and then processed. With `stream` processing data can be transformed before being written to InfluxDB. With `batch` processing, the data should already be stored in InfluxDB. After processing, it can also be written back to it.
Which to use depends upon system resources and the kind of computation being undertaken. When working with a large set of data over a long time frame `batch` is preferred. It leaves data stored on the disk until it is required, though the query, when triggered, will result in a sudden high load on the database. Processing a large set of data over a long time frame with `stream` means needlessly holding potentially billions of data points in memory. When working with smaller time frames `stream` is preferred. It lowers the query load on InfluxDB.
### Pipelines as graphs
Pipelines in Kapacitor are directed acyclic graphs ([DAGs](https://en.wikipedia.org/wiki/Directed_acyclic_graph)). This means that
each edge has a direction down which data flows, and that there cannot be any cycles in the pipeline. An edge can also be thought of as the data-flow relationship that exists between a parent node and its child.
One of two fundamental edges is declared at the start of any pipeline. This first edge establishes the type of processing for the task; however, each ensuing node establishes the edge type between itself and its children.
* `stream`&rarr;`from()`&ndash; an edge that transfers data a single data point at a time.
* `batch`&rarr;`query()`&ndash; an edge that transfers data in chunks instead of one point at a time.
### Pipeline validity
When nodes are connected and a new Kapacitor task is created, Kapacitor checks whether the TICKscript syntax is well formed and whether the new edges are applicable to the most recent node. However, the full functionality of the pipeline is not validated until runtime, when error messages can appear in the Kapacitor log.
**Example 1 &ndash; a runtime error**
```bash
...
[cpu_alert:alert4] 2017/10/24 14:42:59 E! error evaluating expression for level CRITICAL: left reference value "usage_idle" is missing value
[cpu_alert:alert4] 2017/10/24 14:42:59 E! error evaluating expression for level CRITICAL: left reference value "usage_idle" is missing value
...
```
Example 1 shows a runtime error thrown because a field value has gone missing from the pipeline. This can often happen following an `eval` node when its `keep()` property has not been set. In general, Kapacitor cannot anticipate all the modalities of the data that a task will encounter at runtime. Some tasks may not be written to handle all deviations or exceptions from the norm, such as when fields or tags go missing. In these cases Kapacitor will log an error.
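For example, an error like this can occur when an `eval` node drops fields that a downstream node still references. Keeping the original fields avoids it. A sketch (the measurement and field names are illustrative):

```js
stream
    |from()
        .measurement('cpu')
    |eval(lambda: 100.0 - "usage_idle")
        .as('usage_busy')
        // Without .keep(), only 'usage_busy' survives this node, and
        // downstream references to 'usage_idle' fail at runtime.
        .keep('usage_idle', 'usage_busy')
    |alert()
        .crit(lambda: "usage_idle" < 10.0)
```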
# Basic examples
**Example 2 &ndash; An elementary stream &rarr; from() pipeline**
```js
dbrp "telegraf"."autogen"
stream
|from()
.measurement('cpu')
|httpOut('dump')
```
The simple script in Example 2 can be used to create a task with the default Telegraf database.
```bash
$ kapacitor define sf_task -tick sf.tick
```
The task, `sf_task`, will simply cache the latest cpu datapoint as JSON to the HTTP REST endpoint (e.g. `http://localhost:9092/kapacitor/v1/tasks/sf_task/dump`).
This example contains a database and retention policy statement: `dbrp`.
This example also contains three nodes:
* The base `stream` node.
* The requisite `from()` node, that defines the stream of data points.
* The processing node `httpOut()`, that caches the data it receives to the REST service of Kapacitor.
It contains two edges.
* `stream`&rarr;`from()`&ndash; sets the processing type of the task and the data stream.
* `from()`&rarr;`httpOut()`&ndash; passes the data stream to the HTTP output processing node.
It contains one property method, which is the call on the `from()` node to `.measurement('cpu')` defining the measurement to be used for further processing.
**Example 3 &ndash; An elementary batch &rarr; query() pipeline**
```js
batch
|query('SELECT * FROM "telegraf"."autogen".cpu WHERE time > now() - 10s')
.period(10s)
.every(10s)
|httpOut('dump')
```
The script in Example 3 can be used to define a task with the default Telegraf database.
```bash
$ kapacitor define bq_task -tick bq.tick -dbrp "telegraf"."autogen"
```
When used to create the `bq_task` with the default Telegraf database, the TICKscript in Example 3 will simply cache the last cpu datapoint of the batch of measurements representing the last 10 seconds of activity to the HTTP REST endpoint (e.g. `http://localhost:9092/kapacitor/v1/tasks/bq_task/dump`).
This example contains three nodes:
* The base `batch` node.
* The requisite `query()` node, that defines the data set.
* The processing node `httpOut()`, that defines the one step in processing the data set. In this case it is to publish it to the REST service of Kapacitor.
It contains two edges.
* `batch`&rarr;`query()`&ndash; sets the processing style and data set.
* `query()`&rarr;`httpOut()`&ndash; passes the data set to the HTTP output processing node.
It contains two property methods, which are called from the `query()` node.
* `period()`&ndash; sets the period of time that the batch of data will cover.
* `every()`&ndash; sets the frequency at which the batch of data will be processed.
### Where to next?
For basic examples of working with TICKscript see the latest examples in the code base on [GitHub](https://github.com/influxdata/kapacitor/tree/master/examples).
For TICKscript solutions for intermediate to advanced use cases, see the [Guides](/kapacitor/v1.5/guides/) documentation.
The next section covers [TICKscript syntax](/kapacitor/v1.5/tick/syntax/) in more detail. [Continue...](/kapacitor/v1.5/tick/syntax/)

---
title: Troubleshooting Kapacitor
menu:
kapacitor_1_5:
name: Troubleshooting
weight: 110
---
## [Frequently asked questions](/kapacitor/v1.5/troubleshooting/frequently-asked-questions/)
This page addresses frequent sources of confusion or important things to know related to Kapacitor.
Where applicable, it links to outstanding issues on Github.

---
title: Kapacitor frequently asked questions
menu:
kapacitor_1_5:
name: Frequently asked questions (FAQ)
weight: 10
parent: Troubleshooting
---
This page addresses frequent sources of confusion or important things to know related to Kapacitor.
Where applicable, it links to outstanding issues on Github.
**Administration**
- [Is the alert state and alert data lost when updating a script?](#is-the-alert-state-and-alert-data-lost-when-updating-a-script)
- [How do I verify that Kapacitor is receiving data from InfluxDB?](#how-do-i-verify-that-kapacitor-is-receiving-data-from-influxdb)
**TICKscript**
- [Batches work but streams do not. Why?](#batches-work-but-streams-do-not-why)
- [Is there a limit on the number of scripts Kapacitor can handle?](#is-there-a-limit-on-the-number-of-scripts-kapacitor-can-handle)
- [What causes unexpected or additional values with the same timestamp?](#what-causes-unexpected-or-additional-values-with-same-timestamp)
**Performance**
- [Do you get better performance with running one complex script or having multiple scripts running in parallel?](#do-you-get-better-performance-with-running-one-complex-script-or-having-multiple-scripts-running-in-parallel)
- [Do template-based scripts use less resources or are they just an ease-of-use tool?](#do-template-based-scripts-use-less-resources-or-are-they-just-an-ease-of-use-tool)
- [How does Kapacitor handle high load?](#how-does-kapacitor-handle-high-load)
- [How can I optimize Kapacitor tasks?](#how-can-i-optimize-kapacitor-tasks)
## Administration
### Is the alert state and alert data lost when updating a script?
Kapacitor will remember the last level of an alert, but other state-like data, such as data buffered in a window, will be lost.
### How do I verify that Kapacitor is receiving data from InfluxDB?
There are a few ways to determine whether or not Kapacitor is receiving data from InfluxDB.
The [`kapacitor stats ingress`](/kapacitor/v1.5/working/cli_client/#stats-ingress) command
outputs InfluxDB measurements stored in the Kapacitor database as well as the number
of data points that pass through the Kapacitor server.
```bash
$ kapacitor stats ingress
Database Retention Policy Measurement Points Received
_internal monitor cq 5274
_internal monitor database 52740
_internal monitor httpd 5274
_internal monitor queryExecutor 5274
_internal monitor runtime 5274
_internal monitor shard 300976
# ...
```
You can also use Kapacitor's [`/debug/vars` API endpoint](/kapacitor/v1.5/working/api/#debug-vars-http-endpoint)
to view and monitor ingest rates.
Using this endpoint and [Telegraf's Kapacitor input plugin](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/kapacitor),
you can create visualizations to monitor Kapacitor ingest rates.
Below are example queries that use Kapacitor data written into InfluxDB using
Telegraf's Kapacitor input plugin:
_**Kapacitor ingest rate (points/sec)**_
```sql
SELECT sum(points_received_rate) FROM (SELECT non_negative_derivative(first("points_received"),1s) as points_received_rate FROM "_kapacitor"."autogen"."ingress" WHERE time > :dashboardTime: GROUP BY "database", "retention_policy", "measurement", time(1m)) WHERE time > :dashboardTime: GROUP BY time(1m)
```
_**Kapacitor ingest by task (points/sec)**_
```sql
SELECT non_negative_derivative("collected",1s) FROM "_kapacitor"."autogen"."edges" WHERE time > now() - 15m AND ("parent"='stream' OR "parent"='batch') GROUP BY task
```
## TICKscript
### Batches work but streams do not. Why?
Make sure port `9092` is open to inbound connections.
Stream data is pushed to port `9092`, so it must be allowed through the firewall.
### Is there a limit on the number of scripts Kapacitor can handle?
There is no software limit, but it will be limited by available server resources.
### What causes unexpected or additional values with same timestamp?
If data is ingested at irregular intervals and you see unexpected results with the same timestamp, use the [`log` node](/kapacitor/v1.5/nodes/log_node) when ingesting data in your TICKscript to debug issues. This surfaces issues such as duplicate data otherwise hidden by `httpOut`.
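For example, a log node can be inserted between two nodes of an existing pipeline to inspect the points flowing across that edge. A sketch (the measurement name is illustrative):

```js
stream
    |from()
        .measurement('cpu')
    // Write each point passing this edge to the Kapacitor log,
    // making duplicate timestamps visible.
    |log()
        .level('DEBUG')
    |httpOut('dump')
```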
## Performance
### Do you get better performance with running one complex script or having multiple scripts running in parallel?
Taking things to the extreme, the best case is one task that consumes all the data and does all the work, since there is added overhead when managing multiple tasks.
However, significant effort has gone into reducing the overhead of each task.
Use tasks in a way that makes logical sense for your project and organization.
If you run into performance issues with multiple tasks, [let us know](https://github.com/influxdata/kapacitor/issues/new).
_**As a last resort**_, merge tasks into more complex tasks.
### Do template-based scripts use less resources or are they just an ease-of-use tool?
Templates are just an ease-of-use tool and make no difference with regard to performance.
### How does Kapacitor handle high load?
If Kapacitor is unable to ingest and process incoming data before it receives new data,
Kapacitor queues incoming data in memory and processes it when able.
Memory requirements of queued data depend on the ingest rate and shape of the incoming data.
Once Kapacitor is able to process all queued data, it slowly releases memory
as the internal garbage collector reclaims memory.
Extended periods of high data ingestion can overwhelm available system resources,
forcing the operating system to stop the `kapacitord` process.
The primary means for avoiding this issue are:
- Ensure your hardware provides enough system resources to handle additional load.
- Optimize your Kapacitor tasks. _[See below](#how-can-i-optimize-kapacitor-tasks)_.
{{% note %}}
As Kapacitor processes data in the queue, it may consume other system resources such as
CPU, disk and network IO, etc., which will affect the overall performance of your Kapacitor server.
{{% /note %}}
### How can I optimize Kapacitor tasks?
As you optimize Kapacitor tasks, consider the following:
#### "Batch" incoming data
[`batch`](/kapacitor/v1.5/nodes/batch_node/) queries data from InfluxDB in batches.
As long as Kapacitor is able to process a batch before the next batch is queried,
it won't need to queue anything.
[`stream`](/kapacitor/v1.5/nodes/stream_node/) mirrors all InfluxDB writes to
Kapacitor in real time and is more prone to queueing.
If using `stream`, segment incoming data into time-based batches using
[`window`](/kapacitor/v1.5/nodes/window_node/).
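A stream task can be made more batch-like by windowing, so that downstream nodes operate on buffered frames of data instead of individual points. A sketch (the measurement and field names are illustrative):

```js
stream
    |from()
        .measurement('cpu')
    // Buffer 10s of data and emit it every 10s, so downstream
    // nodes process one frame at a time.
    |window()
        .period(10s)
        .every(10s)
    |mean('usage_idle')
        .as('mean_usage_idle')
```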

---
title: Working with Kapacitor
menu:
kapacitor_1_5:
name: Working with Kapacitor
identifier: work-w-kapacitor
weight: 30
---
The documents in this section present the key features of the Kapacitor daemon
(`kapacitord`) and the Kapacitor client (`kapacitor`).
* [Kapacitor and Chronograf](/kapacitor/v1.5/working/kapa-and-chrono/) &ndash; presents how Kapacitor is integrated with the Chronograf graphical user interface application for managing tasks and alerts.
* [Kapacitor API Reference documentation](/kapacitor/v1.5/working/api/) &ndash; presents the HTTP API and how to use it to update tasks and the Kapacitor configuration.
* [Alerts - Overview](/kapacitor/v1.5/working/alerts/) &ndash; presents an overview of the Kapacitor alerting system.
* [Alerts - Using topics](/kapacitor/v1.5/working/using_alert_topics/) &ndash; a walk-through on creating and using alert topics.
* [Alerts - Event handler setup](/kapacitor/v1.5/working/event-handler-setup/) &ndash; presents setting up event handlers for HipChat and Telegraf, which can serve as a blueprint for other event handlers.
* [Dynamic data scraping](/kapacitor/v1.5/working/scraping-and-discovery/) &ndash; introduces the discovery and scraping features, which allow metrics to be dynamically pulled into Kapacitor and then written to InfluxDB.
---
title: Kapacitor alerts overview
menu:
kapacitor_1_5:
name: Alerts overview
weight: 3
parent: work-w-kapacitor
---
Kapacitor makes it possible to handle alert messages in two different ways.
* The messages can be pushed directly to an event handler exposed through the
[Alert](/kapacitor/v1.5/nodes/alert_node/) node.
* The messages can be published to a topic namespace to which one or more alert
handlers can subscribe.
<!--
In addition to defining alert handler in TICKscript Kapacitor supports an alert system that follows a publish subscribe design pattern.
Alerts are published to a `topic` and `handlers` subscribe to a topic.
-->
No matter which approach is used, the handlers need to be enabled and configured
in the [configuration](/kapacitor/v1.5/administration/configuration/#optional-table-groupings)
file. If the handler requires sensitive information such as tokens and
passwords, it can also be configured using the [Kapacitor HTTP API](/kapacitor/v1.5/working/api/#overriding-configurations).
## Push to handler
Pushing messages to a handler is the basic approach presented in the
[Getting started with Kapacitor](/kapacitor/v1.5/introduction/getting-started/#triggering-alerts-from-stream-data)
guide. This involves simply calling the relevant chaining method made available
through the `alert` node. Messages can be pushed to `log()` files, the `email()`
service, the `httpOut()` cache and many [third party services](#list-of-handlers).
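As a minimal sketch, the following TICKscript pushes alert events directly to a log file and a Slack channel. The measurement, threshold, log path, and channel name are illustrative:

```js
stream
    |from()
        .measurement('cpu')
    |alert()
        .crit(lambda: "usage_idle" < 10)
        // Push each alert event directly to the bound handlers
        .log('/tmp/cpu_alert.log')
        .slack()
        .channel('#alerts')
```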
## Publish and subscribe
An alert topic is simply a namespace where alerts are grouped.
When an alert event fires, it can be published to a topic.
Multiple handlers can subscribe (be bound) to that topic, and every handler
processes each alert event for the topic. Handlers are bound to topics through
the `kapacitor` command line client and handler binding files. Handler binding
files can be written in YAML or JSON. They contain four key fields and one
optional one.
* `topic`: declares the topic to which the handler will subscribe.
* `id`: declares the identity of the binding.
* `kind`: declares the type of event handler to be used. Note that this
needs to be enabled in the `kapacitord` configuration.
* `match`: (optional) declares a match expression used to filter which
alert events will be processed. See the [Match Expressions](#match-expressions)
section below.
* `options`: options specific to the handler in question. These are
  listed below in the section [List of handlers](#list-of-handlers).
**Example 1: A handler binding file for the _slack_ handler and _cpu_ topic**
```yaml
topic: cpu
id: slack
kind: slack
options:
channel: '#kapacitor'
```
Example 1 could be saved into a file named `slack_cpu_handler.yaml`.
The handler binding can then be defined using the `kapacitor` command
line client:
```bash
$ kapacitor define-topic-handler slack_cpu_handler.yaml
```
Handler bindings can also be created over the HTTP API. See the
[Create a Handler](/kapacitor/v1.5/working/api/#creating-handlers) section of
the HTTP API document.
For a walkthrough on defining and using alert topics, see the
[Using Alert Topics](/kapacitor/v1.5/working/using_alert_topics) guide.
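On the TICKscript side, alert events are published to a topic with the `topic` property of the alert node. The measurement and threshold below are illustrative:

```js
stream
    |from()
        .measurement('cpu')
    |alert()
        .crit(lambda: "usage_idle" < 10)
        // Publish alert events to the 'cpu' topic instead of
        // pushing them directly to a handler
        .topic('cpu')
```

Any handler bound to the `cpu` topic, such as the Slack binding in Example 1, then receives these events.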
## Handlers
A handler takes action on incoming alert events for a specific topic.
Each handler operates on exactly one topic.
### List of handlers
The following is a list of available alert event handlers:
| Handler | Description |
| ------- | ----------- |
| [Alerta](/kapacitor/v1.5/event_handlers/alerta) | Post alert message to Alerta. |
| [email](/kapacitor/v1.5/event_handlers/email) | Send an email with alert data. |
| [exec](/kapacitor/v1.5/event_handlers/exec) | Execute a command passing alert data over STDIN. |
| [HipChat](/kapacitor/v1.5/event_handlers/hipchat) | Post alert message to HipChat room. |
| [Kafka](/kapacitor/v1.5/event_handlers/kafka) | Send alert to an Apache Kafka cluster. |
| [log](/kapacitor/v1.5/event_handlers/log) | Log alert data to file. |
| [MQTT](/kapacitor/v1.5/event_handlers/mqtt) | Post alert message to MQTT. |
| [OpsGenie v1](/kapacitor/v1.5/event_handlers/opsgenie/v1) | Send alert to OpsGenie using their v1 API. <em style="opacity: .5">(Deprecated)</em> |
| [OpsGenie v2](/kapacitor/v1.5/event_handlers/opsgenie/v2) | Send alert to OpsGenie using their v2 API. |
| [PagerDuty v1](/kapacitor/v1.5/event_handlers/pagerduty/v1) | Send alert to PagerDuty using their v1 API. <em style="opacity: .5">(Deprecated)</em> |
| [PagerDuty v2](/kapacitor/v1.5/event_handlers/pagerduty/v2) | Send alert to PagerDuty using their v2 API. |
| [post](/kapacitor/v1.5/event_handlers/post) | HTTP POST data to a specified URL. |
| [Pushover](/kapacitor/v1.5/event_handlers/pushover) | Send alert to Pushover. |
| [Sensu](/kapacitor/v1.5/event_handlers/sensu) | Post alert message to Sensu client. |
| [Slack](/kapacitor/v1.5/event_handlers/slack) | Post alert message to Slack channel. |
| [SNMPTrap](/kapacitor/v1.5/event_handlers/snmptrap) | Trigger SNMP traps. |
| [Talk](/kapacitor/v1.5/event_handlers/talk) | Post alert message to Talk client. |
| [tcp](/kapacitor/v1.5/event_handlers/tcp) | Send data to a specified address via raw TCP. |
| [Telegram](/kapacitor/v1.5/event_handlers/telegram) | Post alert message to Telegram client. |
| [VictorOps](/kapacitor/v1.5/event_handlers/victorops) | Send alert to VictorOps. |
## Match expressions
Alert handlers support match expressions that filter which alert events the handler processes.
A match expression is a TICKscript lambda expression.
The data that triggered the alert is available to the match expression, including all fields and tags.
In addition to the data that triggered the alert, metadata about the alert itself
is available via the following functions.
| Name | Type | Description |
| ---- | ---- | ----------- |
| level | int | The alert level of the event, one of '0', '1', '2', or '3' corresponding to 'OK', 'INFO', 'WARNING', and 'CRITICAL'. |
| changed | bool | Indicates whether the alert level changed with this event. |
| name | string | Returns the measurement name of the triggering data. |
| taskName | string | Returns the task name that generated the alert event. |
| duration | duration | Returns the duration of the event in a non OK state. |
Additionally, the variables `OK`, `INFO`, `WARNING`, and `CRITICAL` are defined to correspond with the return value of the `level` function.
For example, to send only critical alerts to a handler, use this match expression:
```yaml
match: level() == CRITICAL
```
### Examples
Send only changed events to the handler:
```yaml
match: changed() == TRUE
```
Send only WARNING and CRITICAL events to the handler:
```yaml
match: level() >= WARNING
```
Send events with the tag "host" equal to `s001.example.com` to the handler:
```yaml
match: "\"host\" == 's001.example.com'"
```
#### Alert event data
Each alert event that gets sent to a handler contains the following alert data:
| Name | Description |
| ---- | ----------- |
| **ID** | The ID of the alert, user defined. |
| **Message** | The alert message, user defined. |
| **Details** | The alert details, user defined HTML content. |
| **Time** | The time the alert occurred. |
| **Duration** | The duration of the alert in nanoseconds. |
| **Level** | One of OK, INFO, WARNING or CRITICAL. |
| **Data** | influxql.Result containing the data that triggered the alert. |
| **Recoverable** | Indicates whether the alert is auto-recoverable. Determined by the [`.noRecoveries()`](/kapacitor/v1.5/nodes/alert_node/#norecoveries) property. |
This data is used by [event handlers](/kapacitor/v1.5/event_handlers) in their
handling of alert events.
Alert messages are rendered with [Go templates](https://golang.org/pkg/text/template/) and
have access to the alert data.
```js
|alert()
// ...
.message('{{ .ID }} is {{ .Level }} value:{{ index .Fields "value" }}, {{ if not .Recoverable }}non-recoverable{{ end }}')
```
---
title: How to contribute a new alert integration to Kapacitor
aliases:
- kapacitor/v1.5/contributing/custom_alert/
- kapacitor/v1.5/about_the_project/custom_alert/
menu:
kapacitor_1_5:
name: Writing your own alert integration
identifier: custom_alert
weight: 4
parent: work-w-kapacitor
url: https://github.com/influxdata/kapacitor/blob/master/alert/HANDLERS.md
---