Merge pull request #2072 from influxdata/kapa-admin-privileges

Add note about InfluxDB user admin privileges
pull/2075/head
Tara Planas 2021-01-15 14:58:38 -08:00 committed by GitHub
commit d97f1e4c79
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
9 changed files with 161 additions and 162 deletions

View File

@ -334,10 +334,11 @@ Multiple InfluxDB table array configurations can be specified,
but one InfluxDB table array configuration must be flagged as the `default`.
**Example: An InfluxDB connection grouping**
{{% note %}}
#### InfluxDB user must have admin privileges
To use Kapacitor with an InfluxDB instance that requires authentication,
the InfluxDB user must have [admin privileges](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#admin-users).
{{% /note %}}
{{< keep-url >}}

View File

@ -7,14 +7,12 @@ menu:
parent: Administration
---
* [Overview](#overview)
* [Secure InfluxDB and Kapacitor](#secure-influxdb-and-kapacitor)
* [Kapacitor Security](#kapacitor-security)
* [Secure Kapacitor and Chronograf](#secure-kapacitor-and-chronograf)
## Overview
This document covers the basics of securing the open-source distribution of
Kapacitor. For information about security with Enterprise Kapacitor see the

View File

@ -20,28 +20,38 @@ all data is copied to your Kapacitor server or cluster through an InfluxDB subsc
This reduces the query load on InfluxDB and isolates overhead associated with data
manipulation to your Kapacitor server or cluster.
On startup, Kapacitor checks for a subscription in InfluxDB with a name matching the Kapacitor server or cluster ID.
This ID is stored inside of `/var/lib/kapacitor/`.
If the ID file doesn't exist on startup, Kapacitor creates it.
If a subscription matching the Kapacitor ID doesn't exist in InfluxDB, Kapacitor
creates a new subscription in InfluxDB.
This process ensures that when Kapacitor stops, it reconnects to the same subscription
on restart as long as the contents of `/var/lib/kapacitor/` remain intact.
{{% note %}}
#### InfluxDB user must have admin privileges
The InfluxDB user used to create subscriptions for Kapacitor must have
[admin privileges](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#admin-users).
Configure which InfluxDB user to use with the [`[influxdb].username` setting](/kapacitor/v1.5/administration/configuration/#influxdb)
in your Kapacitor configuration file.
{{% /note %}}
_The directory in which Kapacitor stores its ID can be configured with the
[`data-dir` root configuration option](/kapacitor/v1.5/administration/configuration/#organization)
in the `kapacitor.conf`._
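Putting these settings together, the relevant portions of `kapacitor.conf` might look like the following sketch (the URL, username, and password are placeholder values to adapt to your deployment):

```toml
# Root-level option: the directory in which Kapacitor stores its ID
data_dir = "/var/lib/kapacitor"

[[influxdb]]
  enabled = true
  name = "localhost"
  urls = ["http://localhost:8086"]
  # This user must have admin privileges in InfluxDB
  username = "kapacitor_user"
  password = "changeit"
```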
{{% note %}}
#### Kapacitor IDs in containerized or ephemeral filesystems
In containerized environments, filesystems are considered ephemeral and typically
do not persist between container stops and restarts.
If `/var/lib/kapacitor/` is not persisted, Kapacitor will create a new InfluxDB subscription
on startup, resulting in unnecessary "duplicate" subscriptions.
You will then need to manually [drop the unnecessary subscriptions](/{{< latest "influxdb" "v1" >}}/administration/subscription-management/#remove-subscriptions).
To avoid this, persist the `/var/lib/kapacitor` directory.
Many persistence strategies are available and which to use depends on your
specific architecture and containerization technology.
{{% /note %}}
## Configure Kapacitor subscriptions

View File

@ -106,7 +106,7 @@ options:
Use the Kapacitor CLI to define a new handler with a handler file:
```bash
# Syntax
kapacitor define-topic-handler <handler-file-name>
# Example

View File

@ -12,8 +12,6 @@ menu:
parent: guides
---
The load directory service enables file-based definitions of Kapacitor tasks, templates, and topic handlers that are loaded on startup or when a SIGHUP signal is sent to the process.
## Configuration
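A minimal sketch of the corresponding section in `kapacitor.conf` (the directory path shown is a conventional default; adjust it for your installation):

```toml
[load]
  enabled = true
  dir = "/etc/kapacitor/load"
```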

View File

@ -10,8 +10,6 @@ menu:
parent: tick
---
TICKscript uses lambda expressions to define transformations on data points as
well as define Boolean conditions that act as filters. Lambda expressions wrap
mathematical operations, Boolean operations, internal function calls or a
@ -102,7 +100,7 @@ of the same and the desired type.
In short, to ensure that the type of a field value is correct, use the built-in
type conversion functions (see [below](#above-header-type-conversion)).
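For example, a minimal sketch of a type conversion in an `eval` node (the measurement and field names are hypothetical):

```js
stream
    |from()
        .measurement('cpu')
    // Convert the integer field to a float before dividing,
    // so the result is not truncated to an integer
    |eval(lambda: float("total") / 100.0)
        .as('ratio')
```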
## Built-in functions
### Stateful functions

View File

@ -8,14 +8,14 @@ menu:
parent: tick
weight: 1
---
# Contents
* [Overview](#overview)
* [Nodes](#nodes)
* [Pipelines](#pipelines)
* [Basic examples](#basic-examples)
* [Where to next](#where-to-next)
## Overview
Kapacitor uses a Domain Specific Language (DSL) named **TICKscript** to define **tasks** involving the extraction, transformation, and loading of data, as well as the tracking of arbitrary changes and the detection of events within that data. One common task is defining alerts. TICKscript is used in `.tick` files to define **pipelines** for processing data. The TICKscript language is designed to chain together the invocation of data processing operations defined in **nodes**. The Kapacitor [Getting Started](/kapacitor/v1.5/introduction/getting-started/) guide introduces TICKscript basics in the context of that product. For a better understanding of what follows, it is recommended that the reader review that document first.
@ -26,7 +26,7 @@ These methods come in two forms.
* **Property methods** &ndash; A property method modifies the internal properties of a node and returns a reference to the same node. Property methods are called using dot ('.') notation.
* **Chaining methods** &ndash; A chaining method creates a new child node and returns a reference to it. Chaining methods are called using pipe ('|') notation.
## Nodes
In TICKscript the fundamental type is the **node**. A node has **properties** and, as mentioned, chaining methods. A new node can be created from a parent or sibling node using a chaining method of that parent or sibling node. For each **node type** the signature of this method will be the same, regardless of the parent or sibling node type. The chaining method can accept zero or more arguments used to initialize internal properties of the new node instance. Common node types are `batch`, `query`, `stream`, `from`, `eval` and `alert`, though there are dozens of others.
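A minimal sketch illustrating the two method forms (the measurement and field names are hypothetical):

```js
stream
    |from()            // chaining method: creates a child node, pipe notation
        .measurement('cpu')   // property method: modifies the from node, dot notation
    |alert()           // chaining method: creates an alert node
        .crit(lambda: "usage_idle" < 10.0)  // property method on the alert node
```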
@ -41,7 +41,7 @@ Each node type **wants** data in either batch or stream mode. Some can handle b
The [node reference documentation](/kapacitor/v1.5/nodes/) lists the property and chaining methods of each node along with examples and descriptions.
## Pipelines
Every TICKscript is broken into one or more **pipelines**. Pipelines are chains of nodes logically organized along edges that cannot cycle back to earlier nodes in the chain. The nodes within a pipeline can be assigned to variables. This allows the results of different pipelines to be combined using, for example, a `join` or a `union` node. It also allows for sections of the pipeline to be broken into reasonably understandable self-descriptive functional units. In a simple TICKscript there may be no need to assign pipeline nodes to variables. The initial node in the pipeline sets the processing type for the Kapacitor task they define. These can be either `stream` or `batch`. These two types of pipelines cannot be combined.
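A sketch of assigning pipeline nodes to variables and combining the results with a `join` node (the measurement and field names are hypothetical):

```js
var errors = stream
    |from()
        .measurement('errors')

var requests = stream
    |from()
        .measurement('requests')

// Join the two pipelines and compute an error rate
errors
    |join(requests)
        .as('errors', 'requests')
    |eval(lambda: "errors.value" / "requests.value")
        .as('error_rate')
```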
@ -74,7 +74,7 @@ When connecting nodes and then creating a new Kapacitor task, Kapacitor will che
```
Example 1 shows a runtime error that is thrown because a field value has gone missing from the pipeline. This can often happen following an `eval` node when the property `keep()` of the `eval` node has not been set. In general Kapacitor cannot anticipate all the modalities of the data that the task will encounter at runtime. Some tasks may not be written to handle all deviations or exceptions from the norm, such as when fields or tags go missing. In these cases Kapacitor will log an error.
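A minimal sketch of avoiding this class of runtime error by setting the `keep()` property of the `eval` node (the field names are hypothetical):

```js
stream
    |from()
        .measurement('cpu')
    |eval(lambda: 100.0 - "usage_idle")
        .as('usage_busy')
        // Without keep(), only the eval results continue down the pipeline;
        // later references to other fields would go missing at runtime
        .keep('usage_busy', 'usage_idle')
```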
## Basic examples
**Example 2 &ndash; An elementary stream &rarr; from() pipeline**
```js
@ -143,7 +143,7 @@ It contains two property methods, which are called from the `query()` node.
* `period()`&ndash; sets the period in time which the batch of data will cover.
* `every()`&ndash; sets the frequency at which the batch of data will be processed.
## Where to next?
For basic examples of working with TICKscript see the latest examples in the code base on [GitHub](https://github.com/influxdata/kapacitor/tree/master/examples).

View File

@ -11,20 +11,18 @@ menu:
parent: tick
---
* [Concepts](#concepts)
* [TICKscript syntax](#tickscript-syntax)
* [Code representation](#code-representation)
* [Variables and literals](#variables-and-literals)
* [Statements](#statements)
* [Taxonomy of node types](#taxonomy-of-node-types)
* [InfluxQL in TICKscript](#influxql-in-tickscript)
* [Lambda expressions](#lambda-expressions)
* [Summary of variable use between syntax sub-spaces](#summary-of-variable-use-between-syntax-sub-spaces)
* [Gotchas](#gotchas)
## Concepts
The sections [Introduction](/kapacitor/v1.5/tick/introduction/) and [Getting Started](/kapacitor/v1.5/introduction/getting-started/) present the key concepts of **nodes** and **pipelines**. Nodes represent process invocation units, that either take data as a batch, or in a point by point stream, and then alter that data, store that data, or based on changes in that data trigger some other activity such as an alert. Pipelines are simply logically organized chains of nodes.
@ -47,11 +45,11 @@ To summarize, the two syntax subspaces to be aware of in TICKscript are:
As mentioned in Getting Started, a pipeline is a Directed Acyclic Graph (DAG). (For more information see [Wolfram](http://mathworld.wolfram.com/AcyclicDigraph.html) or [Wikipedia](https://en.wikipedia.org/wiki/Directed_acyclic_graph)). It contains a finite number of nodes (a.k.a. vertices) and edges. Each edge is directed from one node to another. No edge path can lead back to an earlier node in the path, which would result in a cycle or loop. TICKscript paths (a.k.a. pipelines and chains) typically begin with a data source definition node with an edge to a data set definition node and then pass their results down to data manipulation and processing nodes.
## TICKscript syntax
TICKscript is case sensitive and uses Unicode. The TICKscript parser scans TICKscript code from top to bottom and left to right instantiating variables and nodes and then chaining or linking them together into pipelines as they are encountered. When loading a TICKscript the parser checks that a chaining method called on a node is valid. If an invalid chaining method is encountered, the parser will throw an error with the message "no method or property &lt;identifier&gt; on &lt;node type&gt;".
### Code representation
Source files should be encoded using **UTF-8**. A script is broken into **declarations** and **expressions**. Declarations result in the creation of a variable and occur on one line. Expressions can cover more than one line and result in the creation of an entire pipeline, a pipeline **chain** or a pipeline **branch**.
@ -59,7 +57,7 @@ Source files should be encoded using **UTF-8**. A script is broken into **decla
**Comments** can be created on a single line by using a pair of forward slashes "//" before the text. Comment forward slashes can be preceded by whitespace and need not be the first characters of a newline.
#### Keywords
Keywords are tokens that have special meaning within a language and therefore cannot be used as identifiers for functions or variables. TICKscript is compact and contains only a small set of keywords.
@ -78,7 +76,7 @@ Keywords are tokens that have special meaning within a language and therefore ca
Since the set of native node types available in TICKscript is limited, each node type, such as `batch` or `stream`, could be considered key. Node types and their taxonomy are discussed in detail in the section [Taxonomy of node types](#taxonomy-of-node-types) below.
#### Operators
TICKscript has support for traditional mathematical operators as well as a few which make sense in its data processing domain.
@ -114,11 +112,11 @@ Standard operators are used in TICKscript and in Lambda expressions.
Chaining operators are used within expressions to define pipelines or pipeline segments.
### Variables and literals
Variables in TICKscript are useful for storing and reusing values and for providing a friendly mnemonic for quickly understanding what a variable represents. They are typically declared along with the assignment of a literal value. In a TICKscript intended to be used as a [template task](/kapacitor/v1.5/guides/template_tasks/) they can also be declared with only a type identifier.
#### Variables
Variables are declared using the keyword `var` at the start of a declaration.
Variables are immutable and cannot be reassigned new values later on in the script, though they can be used in other declarations and can be passed into methods.
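For example (the database and measurement names are illustrative):

```js
var db = 'telegraf'
var measurement = 'cpu'

// db = 'noisy'  // invalid: variables are immutable and cannot be reassigned

stream
    |from()
        .database(db)
        .measurement(measurement)
```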
@ -127,7 +125,7 @@ Variables are also used in template tasks as placeholders to be filled when the
For a detailed presentation on working with **template tasks** see the guide [Template tasks](/kapacitor/v1.5/guides/template_tasks/).
If a TICKscript proves useful, it may be desirable to reuse it as a template task in order to quickly create other similar tasks. For this reason it is recommended to use variables as much as possible.
##### Naming variables
Variable identifiers must begin with a standard ASCII letter and can be followed by any number of letters, digits and underscores. Both upper and lower case can be used. In a TICKscript to be used to define a task directly, the type the variable holds depends on the literal value it is assigned. In a TICKscript written for a task template, the type can also be set using the keyword for the type the variable will hold. In a TICKscript to be used to define a task directly, using the type identifier will result in a compile time error `invalid TICKscript: missing value for var "<VARNAME>".`.
@ -150,11 +148,11 @@ var period = 12h
var critical = 3.0
```
#### Literal values
Literal values are parsed into instances of the types available in TICKscript. They can be declared directly in method arguments or can be assigned to variables. The parser interprets types based on context and creates instances of the following primitives: Boolean, string, float, integer. Regular expressions, lists, lambda expressions, duration structures and nodes are also recognized. The rules the parser uses to recognize a type are discussed in the following Types section.
##### Types
TICKscript recognizes five type identifiers. These identifiers can be used directly in TICKscripts intended for template tasks. Otherwise, the type of the literal will be interpreted from its declaration.
@ -168,7 +166,7 @@ TICKscript recognizes five type identifiers. These identifiers can be used dire
| **float** | In a template task, declare a variable as type `float64`. | `var my_ratio float` |
| **lambda** | In a template task, declare a variable as a Lambda expression type. | `var crit lambda` |
###### Booleans
Boolean values are generated using the Boolean keywords: `TRUE` and `FALSE`. Note that these keywords use all upper case letters. The parser will throw an error when using lower case characters, e.g. `True` or `true`.
**Example 3 &ndash; Boolean literals**
@ -183,7 +181,7 @@ var true_bool = TRUE
In Example 3 above the first line shows a simple assignment using a Boolean literal. The second example shows using the Boolean literal `FALSE` in a method call.
###### Numerical types
Any literal token containing only digits and optionally a decimal will lead to the generation of an instance of a numerical type. TICKscript understands two numerical types based on Go: `int64` and `float64`. Any numerical token containing a decimal point will result in the creation of a `float64` value. Any numerical token that ends without containing a decimal point will result in the creation of an `int64` value. If an integer is prefixed with the zero character, `0`, it is interpreted as an octal.
@ -196,7 +194,7 @@ var my_octal = 0400
```
In Example 4 above `my_int` is of type `int64`, `my_float` is of type `float64` and `my_octal` is of type `int64` octal.
###### Duration literals
Duration literals define a span of time. Their syntax follows the same syntax present in [InfluxQL](/influxdb/v1.4/query_language/spec/#literals). A duration literal is comprised of two parts: an integer and a duration unit. It is essentially an integer terminated by one or a pair of reserved characters, which represent a unit of time.
@ -230,7 +228,7 @@ var views = batch
In Example 5 above the first two lines show the declaration of Duration types. The first represents a time span of 10 seconds and the second a time frame of 10 seconds. The final example shows declaring duration literals directly in method calls.
###### Strings
Strings begin with either one or three single quotation marks: `'` or `'''`. Strings can be concatenated using the addition `+` operator. To escape quotation marks within a string delimited by a single quotation mark use the backslash character. If it is to be anticipated that many single quotation marks will be encountered inside the string, delimit it using triple single quotation marks instead. A string delimited by triple quotation marks requires no escape sequences. In both string demarcation cases, the double quotation mark, which is used to access field and tag values, can be used without an escape.
**Example 6 &ndash; Basic strings**
@ -260,7 +258,7 @@ batch
```
In Example 7 above the string is broken up to make the query more easily understood.
###### String templates
String templates allow node properties, tags and fields to be added to a string. The format follows the same format provided by the Go [text.template](https://golang.org/pkg/text/template/) package. This is useful when writing alert messages. To add a property, tag or field value to a string template, it needs to be wrapped inside of double curly braces: "{{}}".
@ -279,7 +277,7 @@ String templates can also include flow statements such as `if...else` as well as
```
.message('{{ .ID }} is {{ if eq .Level "OK" }}alive{{ else }}dead{{ end }}: {{ index .Fields "emitted" | printf "%0.3f" }} points/10s.')
```
###### String lists
A string list is a collection of strings declared between two brackets. They can be declared with literals, identifiers for other variables, or with the asterisk wildcard, "\*". They can be passed into methods that take multiple string parameters. They are especially useful in template tasks. Note that when a list is used in a function call, its contents are exploded and the elements are used as all the arguments to the function.
@ -329,7 +327,7 @@ stream
Example 10, taken from the examples in the [code repository](https://github.com/influxdata/kapacitor/blob/1de435db363fa8ece4b50e26d618fc225b38c70f/examples/load/templates/implicit_template.tick), defines `implicit_template.tick`. It uses the `groups` list to hold a variable number of arguments to be passed to the `from.groupBy()` method. The contents of the `groups` list will be determined when the template is used to create a new task.
###### Regular expressions
Regular expressions begin and end with a forward slash: `/`. The regular expression syntax is the same as Perl, Python and other languages. For details on the syntax see the Go [regular expression library](https://golang.org/pkg/regexp/syntax/).
@ -351,7 +349,7 @@ var south_afr = stream
```
In Example 11 the first three lines show the assignment of regular expressions to variables. The `locals` stream uses the regular expression assigned to the variable `local_ips`. The `south_afr` stream uses a regular expression comparison with the regular expression declared literally as a part of the lambda expression.
###### Lambda expressions as literals
A lambda expression is a parameter representing a short easily understood function to be passed into a method call or held in a variable. It can wrap a Boolean expression, a mathematical expression, a call to an internal function or a combination of these three. Lambda expressions always operate on point data. They are generally compact and as such are used as literals, which eventually get passed into node methods. Internal functions that can be used in Lambda expressions are discussed in the sections [Type conversion](#type-conversion) and [Lambda expressions](#lambda-expressions) below. Lambda expressions are presented in detail in the topic [Lambda Expressions](/kapacitor/v1.5/tick/expr/).
@ -380,7 +378,7 @@ var alert = data
Example 12 above shows that a lambda expression can be directly assigned to a variable. In the eval node a lambda statement is used which calls the sigma function. The alert node uses lambda expressions to define the log levels of given events.
###### Nodes
Like the simpler types, node types are declared and can be assigned to variables.
@ -411,11 +409,11 @@ var alert = data
```
In Example 13 above, in the first section, five nodes are created. The top level node `stream` is assigned to the variable `data`. The `stream` node is then used as the root of the pipeline to which the nodes `from`, `eval`, `window` and `mean` are chained in order. In the second section the pipeline is then extended using assignment to the variable `alert`, so that a second `eval` node can be applied to the data.
##### Working with tags, fields and variables
In any script it is not enough to simply declare variables. The values they hold must also be accessed. In TICKscript it is also necessary to work with values held in tags and fields drawn from an InfluxDB data series. This is most evident in the examples presented so far. In addition, values generated by lambda expressions can be added as new fields to the data set in the pipeline and then accessed as named results of those expressions. The following section explores working not only with variables, but also with tag and field values that can be extracted from the data, as well as with named results.
###### Accessing values
Accessing data tags and fields, using string literals and accessing TICKscript variables each involves a different syntax. Additionally it is possible to access the results of lambda expressions used with certain nodes.
@ -780,7 +778,7 @@ alert
```
Example 29 shows a `batch`&rarr;`query` pipeline broken into three expressions using two variables. The first expression declares the data frame, the second expression the alert thresholds and the final expression sets the `log` property of the `alert` node. The entire pipeline begins with the declaration of the `batch` node and ends with the call to the property method `log()`.
## Taxonomy of node types
To aid in understanding the roles that different nodes play in a pipeline, a short taxonomy has been defined. For complete documentation on each node type see the topic [TICKscript Nodes](/kapacitor/v1.5/nodes/).
@ -858,7 +856,7 @@ User defined functions are nodes that implement functionality defined by user pr
* [`noOp`](/kapacitor/v1.5/nodes/no_op_node/) - a helper node that performs no operations. Do not use it!
## InfluxQL in TICKscript
InfluxQL occurs in a TICKscript primarily in a `query` node, whose chaining method takes an InfluxQL query string. This will nearly always be a `SELECT` statement.
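A minimal sketch of a `query` node (the database, retention policy, and field names are illustrative):

```js
batch
    |query('SELECT mean(usage_idle) AS stat FROM "telegraf"."autogen"."cpu"')
        .period(10m)
        .every(1m)
```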
@ -912,7 +910,7 @@ Note that the select statement gets passed directly to the InfluxDB API. Within
See the [InfluxQL](/influxdb/v1.3/query_language/) documentation for a complete introduction to working with the query language.
## Lambda expressions
Lambda expressions occur in a number of chaining and property methods. Two of the most common usages are in the creation of an `eval` node and in defining threshold properties on an `alert` node. They are declared with the keyword "lambda" followed by a colon: `lambda:`. They can contain mathematical and Boolean operations as well as calls to a large library of internal functions. With many nodes, their results can be captured by setting an `as` property on the node.
@ -965,13 +963,10 @@ alert
Example 33 contains four lambda expressions. The first expression is passed to the `eval` node. It calls the internal stateful function `sigma`, into which it passes the named result `stat`, which is set using the `AS` clause in the query string of the `query` node. Through the `.as()` setter of the `eval` node its result is named `sigma`. Three other lambda expressions occur inside the threshold determining property methods of the `alert` node. These lambda expressions also access the named results `stat` and `sigma` as well as variables declared at the start of the script. They each define a series of Boolean operations, which set the level of the alert message.
## Summary of variable use between syntax sub-spaces
The following section summarizes how to access variables and data series tags and fields in TICKscript and the different syntax sub-spaces.
<!-- see defect 1238 -->
### TICKscript variable
Declaration examples:
@ -982,41 +977,41 @@ var my_field = `usage_idle`
var my_num = 2.71
```
**Accessing...**
* In **TICKscript** simply use the identifier.
```js
var my_other_num = my_num + 3.14
...
|default()
.tag('bar', my_var)
...
```
* In a **query string** simply use the identifier with string concatenation.
```js
...
|query('SELECT ' + my_field + ' FROM "telegraf"."autogen".cpu WHERE host = \'' + my_var + '\'' )
...
```
* In a **lambda expression** simply use the identifier.
```js
...
.info(lambda: "stat" > my_num )
...
```
* In an **InfluxQL node** use the identifier. Note that in most cases strings will be used as field or tag names.
```js
...
|mean(my_var)
...
```
### Tag, Field or Named Result
@ -1034,50 +1029,50 @@ Examples
**Accessing...**
* In a **TICKscript** method call use single quotes.
```js
...
|derivative('mean')
...
```
* In a **query string** use the identifier directly in the string.
```js
...
|query('SELECT cpu, usage_idle FROM "telegraf"."autogen".cpu')
...
```
* In a **lambda expression** use double quotes.
```js
...
|eval(lambda: 100.0 - "usage_idle")
...
|alert
.info(lambda: "sigma" > 2 )
...
```
* In an **InfluxQL node** use single quotes.
```js
...
|mean('used')
...
```
## Gotchas
### Literals versus field values
Please keep in mind that literal string values are declared using single quotes. Double quotes are used only in lambda expressions to access the values of tags and fields. In most instances using double quotes in place of single quotes will be caught as an error: `unsupported literal type`. On the other hand, using single quotes when double quotes were intended, i.e. accessing a field value, will not be caught and, if this occurs in a lambda expression, the literal value may be used instead of the desired value of a tag, or a field.
As of Kapacitor 1.3 it is possible to declare a variable using double quotes, which is invalid, and the parser will not flag it as an error. For example `var my_var = "foo"` will pass so long as it is not used. However, when this variable is used in a Lambda expression or other method call, it will trigger a compilation error: `unsupported literal type *ast.ReferenceNode`.
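A sketch contrasting the two quoting forms (the field name is illustrative):

```js
// Single quotes declare the literal string 'usage_idle'
var field = 'usage_idle'
...
// Double quotes in a lambda expression access the usage_idle field value
|eval(lambda: 100.0 - "usage_idle")
...
// Single quotes in a lambda expression use the literal string,
// not the field value, which is almost certainly not what was intended
.crit(lambda: 'usage_idle' < 10.0)
```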
### Circular rewrites
When using the InfluxDBOut node, be careful not to create circular rewrites to the same database and the same measurement from which data is being read.
@ -1103,10 +1098,10 @@ In such a case, the above script will loop infinitely adding a new data point wi
<!-- defect 589 -->
### Alerts and ids
When using the `deadman` method along with one or more `alert` nodes or when using more than one `alert` node in a pipeline, be sure to set the ID property with the property method `id()`. The value of ID must be unique on each node. Failure to do so will lead Kapacitor to assume that they are all the same group of alerts, and so some alerts may not appear as expected.
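A minimal sketch with a unique ID on each alerting node (the IDs and thresholds are illustrative):

```js
var data = stream
    |from()
        .measurement('cpu')

// Alert if no data is received for 10 seconds
data
    |deadman(0.0, 10s)
        .id('cpu_deadman')

// A second alert node in the same pipeline needs its own ID
data
    |alert()
        .id('cpu_usage_alert')
        .crit(lambda: "usage_idle" < 10.0)
```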
## Where to next?
See the [examples](https://github.com/influxdata/kapacitor/tree/master/examples) in the code base on Github. See also the detailed use case solutions in the section [Guides](/kapacitor/v1.5/guides).

View File

@ -9,7 +9,6 @@ menu:
parent: work-w-kapacitor
---
* [General options](#general-options)
* [Core commands](#core-commands)
* [Server management](#server-management)
This can be used to run `kapacitor` commands on a remote Kapacitor server.
> To authenticate, include the username and password as query parameters, `u` and `p` respectively, in the Kapacitor URL.
> For both convenience and security, InfluxData recommends storing these credentials as
> part of the Kapacitor URL in the `KAPACITOR_URL` environment variable.

```sh
export KAPACITOR_URL="https://192.168.67.88:9092?u=username&p=password"
# When KAPACITOR_URL is defined, the -url flag isn't necessary.
```
The `backup` command creates a backup of the Kapacitor database at a specified filepath.
```bash
# Syntax
kapacitor backup [PATH_TO_BACKUP_FILE]
# Example
```

The `stats` command displays statistics about the Kapacitor server.
It requires either the `general` or `ingress` argument.
```bash
# Syntax
kapacitor stats <general|ingress>
```
The `list service-tests` command lists all service tests currently available on the server.
```bash
# Syntax
kapacitor list service-tests [ <SERVICE_NAME> | <PATTERN> ]
# Example
```

The `logs` command outputs the entire Kapacitor log stream or the log stream of a specific service.
Log streams can be filtered by log level.
```bash
# Syntax
kapacitor logs [service=<SERVICE_ID>] [lvl=<LEVEL>]
```
The `watch` command follows logs associated with a **task**.
> This is different from the `logs` command, which allows tracking logs associated with a service.
```bash
# Syntax
kapacitor watch <TASK_ID> [<TAGS> ...]
```
The optional argument `-no-wait` will spawn the replay into a separate process and exit, leaving it to run in the background.
```bash
# Syntax
kapacitor record batch (-no-wait) [-past <WINDOW_IN_PAST> | -start <START_TIME> -stop <STOP_TIME>] [-recording-id <ID>] -task <TASK_ID>
# Example
```

> The recording runs until the specified duration has expired. It returns the recording ID in the console.
```bash
# Syntax
kapacitor record stream -duration <DURATION> (-no-wait) (-recording-id <ID> ) -task <TASK_ID>
# Example
```

The optional boolean argument `-no-wait` will spawn the replay into a separate process and exit, leaving it to run in the background.
```bash
# Syntax
kapacitor record query [-cluster <INFLUXDB_CLUSTER_NAME>] [-no-wait] -query <QUERY> [-recording-id <RECORDING_ID>] -type <stream|batch>
# Example
```

If a recording ID is not provided, one will be generated. The optional Boolean argument `-no-wait` will spawn the replay into a separate process and exit, leaving it to run in the background.
```bash
# Syntax
kapacitor replay [-no-wait] [-real-clock] [-rec-time] -recording <ID> [-replay-id <REPLAY_ID>] -task <TASK_ID>
# Example
```

With the query argument, the replay executes an InfluxDB query against the task.
The query should include the database, retention policy and measurement string.
```bash
# Syntax
kapacitor replay-live query [-cluster <CLUSTER_URL>] [-no-wait] -query <QUERY> [-real-clock] [-rec-time] [-replay-id <REPLAY_ID>] -task <TASK_ID>
# Example
```

Use of present times is the default behavior.

With the batch argument, the replay executes the task against batch data already stored in InfluxDB. It takes the following form:
```bash
# Syntax
kapacitor replay-live batch [-no-wait] [ -past <TIME_WINDOW> | -start <START_TIME> -stop <STOP_TIME> ] [-real-clock] [-rec-time] [-replay-id <REPLAY_ID>] -task <TASK_ID>
# Example
```
The `delete recordings` command deletes one or more recordings.
```bash
# Syntax
kapacitor delete recordings <Recording-ID | Pattern>
# Examples
```

To verify results, use the `list recordings` command.
The `delete replays` command deletes one or more replays.
```bash
# Syntax
kapacitor delete replays <Replay-ID | Pattern>
# Examples
```

The `define-topic-handler` command defines or redefines a topic handler based on the contents of a topic handler script.
```bash
# Syntax
kapacitor define-topic-handler <PATH_TO_HANDLER_SCRIPT>
# Example
```
Use the `show-topic` command to see the details of a topic.
```bash
# Syntax
kapacitor show-topic [TOPIC_ID]
# Example
```
The `show-topic-handler` command outputs the topic-handler's contents to the console.
```bash
# Syntax
kapacitor show-topic-handler [TOPIC_ID] [HANDLER_ID]
# Example
```
Use the `delete topics` command to remove one or more topics.
```bash
# Syntax
kapacitor delete topics <Topic-ID | Pattern>
# Examples
```

To verify the results, use the `list topics` command.
The `delete topic-handlers` command removes a topic handler.
```bash
# Syntax
kapacitor delete topic-handlers [TOPIC_ID] [HANDLER_ID]
# Example
```

It takes one of the following three forms:
#### Define a straightforward task
```bash
# Syntax
kapacitor define <TASK_ID> -tick <PATH_TO_TICKSCRIPT> -type <stream|batch> [-no-reload] -dbrp <DATABASE>.<RETENTION_POLICY>
# Example
```

To verify the results, use the `list tasks` command.
#### Define a task from a template
```bash
# Syntax
kapacitor define <TASK_ID> -template <TEMPLATE_ID> -vars <PATH_TO_VARS_FILE> [-no-reload] -dbrp <DATABASE>.<RETENTION_POLICY>
# Example
```

To verify the results, use the `list tasks` command.
#### Define a task from a template with a descriptor file
```bash
# Syntax
kapacitor define <TASK_ID> -file <PATH_TO_TEMPLATE_FILE> [-no-reload]
# Example
```

To verify the results, use the `list tasks` command.
Use this command to load a task template into Kapacitor. It takes the following form:
```bash
# Syntax
kapacitor define-template <TEMPLATE_ID> -tick <PATH_TO_TICKSCRIPT> -type <stream|batch>
# Example
```

The `enable` command enables one or more tasks.
When tasks are first created, they are in a `disabled` state.
```bash
# Syntax
kapacitor enable <TASK_ID>
# Example
```

To verify the results, use the `list tasks` command.
The `disable` command disables one or more active tasks.
```bash
# Syntax
kapacitor disable <TASK_ID>...
# Examples
```

The `reload` command disables and then reenables one or more tasks. When troubleshooting a task, it's useful for stopping and starting it again.
```bash
# Syntax
kapacitor reload <TASK_ID>
kapacitor reload cpu_alert
```
The `show` command outputs the details of a task.
```bash
# Syntax
kapacitor show [-replay <REPLAY_ID>] <TASK_ID>
# Example
```
The `show-template` command outputs the details of a task template.
```bash
# Syntax
kapacitor show-template <TEMPLATE_ID>
# Example
```
The `delete tasks` command removes one or more tasks.
```bash
# Syntax
kapacitor delete tasks <Task-IDs | Pattern>
# Example
```

To verify the results, use the `list tasks` command.
The `delete templates` command removes one or more templates.
```bash
# Syntax
kapacitor delete templates <Template-IDs | Pattern>
# Example
```