Fixing typos (#5242)

* Fixing typos

* Update content/enterprise_influxdb/v1/flux/guides/rate.md

---------

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

pull/5241/head
parent 2c5559b27d
commit c9488e3465

@@ -5013,7 +5013,7 @@ components:
 readOnly: true
 links:
 example:
-lables: /api/v2/telegrafs/1/labels
+labels: /api/v2/telegrafs/1/labels
 members: /api/v2/telegrafs/1/members
 owners: /api/v2/telegrafs/1/owners
 self: /api/v2/telegrafs/1
@@ -8366,7 +8366,7 @@ paths:
 - Bucket Schemas
 post:
 description: |
-Creates an _explict_ measurement [schema](/influxdb/cloud-serverless/reference/glossary/#schema)
+Creates an _explicit_ measurement [schema](/influxdb/cloud-serverless/reference/glossary/#schema)
 for a bucket.

 _Explicit_ schemas are used to enforce column names, tags, fields, and data
@@ -8379,7 +8379,7 @@ paths:

 #### Limitations

-- Buckets must be created with the "explict" `schemaType` in order to use
+- Buckets must be created with the "explicit" `schemaType` in order to use
 schemas.

 <!-- TSM-ONLY -->
@@ -5029,7 +5029,7 @@ components:
 readOnly: true
 links:
 example:
-lables: /api/v2/telegrafs/1/labels
+labels: /api/v2/telegrafs/1/labels
 members: /api/v2/telegrafs/1/members
 owners: /api/v2/telegrafs/1/owners
 self: /api/v2/telegrafs/1
@@ -8434,7 +8434,7 @@ paths:
 - Bucket Schemas
 post:
 description: |
-Creates an _explict_ measurement [schema](/influxdb/cloud/reference/glossary/#schema)
+Creates an _explicit_ measurement [schema](/influxdb/cloud/reference/glossary/#schema)
 for a bucket.

 _Explicit_ schemas are used to enforce column names, tags, fields, and data
@@ -8447,7 +8447,7 @@ paths:

 #### Limitations

-- Buckets must be created with the "explict" `schemaType` in order to use
+- Buckets must be created with the "explicit" `schemaType` in order to use
 schemas.

 #### Related guides
@@ -5043,7 +5043,7 @@ components:
 readOnly: true
 links:
 example:
-lables: /api/v2/telegrafs/1/labels
+labels: /api/v2/telegrafs/1/labels
 members: /api/v2/telegrafs/1/members
 owners: /api/v2/telegrafs/1/owners
 self: /api/v2/telegrafs/1
@@ -495,7 +495,7 @@ TLS1.2 is now the default minimum required TLS version. If you have clients that
 - Fix alert rule message text template parsing.
 - Fix erroneous query manipulation.
 - Fix group by database for numSeries and numMeasurement queries in canned dashboards.
-- Update `axios` and `lodash` dependenies with known vulnerabilities.
+- Update `axios` and `lodash` dependencies with known vulnerabilities.
 - Fix dashboard typos in protoboard queries.
 - Fix repeating last command in Data Explore window when multiple tabs are open.

@@ -236,7 +236,7 @@ List of etcd endpoints.
 ## Single parameter
 --etcd-endpoints=localhost:2379

-## Mutiple parameters
+## Multiple parameters
 --etcd-endpoints=localhost:2379 \
 --etcd-endpoints=192.168.1.61:2379 \
 --etcd-endpoints=192.192.168.1.100:2379
@@ -250,7 +250,7 @@ Environment variable: `$ETCD_ENDPOINTS`
 ## Single parameter
 ETCD_ENDPOINTS=localhost:2379

-## Mutiple parameters
+## Multiple parameters
 ETCD_ENDPOINTS=localhost:2379,192.168.1.61:2379,192.192.168.1.100:2379
 ```

@@ -419,7 +419,7 @@ Environment variable: `$GH_CLIENT_SECRET`
 ## Single parameter
 --github-organization=org1

-## Mutiple parameters
+## Multiple parameters
 --github-organization=org1 \
 --github-organization=org2 \
 --github-organization=org3
@@ -433,7 +433,7 @@ Environment variable: `$GH_ORGS`
 ## Single parameter
 GH_ORGS=org1

-## Mutiple parameters
+## Multiple parameters
 GH_ORGS=org1,org2,org3
 ```

@@ -463,7 +463,7 @@ Environment variable: `$GOOGLE_CLIENT_SECRET`
 ## Single parameter
 --google-domains=delorean.com

-## Mutiple parameters
+## Multiple parameters
 --google-domains=delorean.com \
 --google-domains=savetheclocktower.com
 ```
@@ -476,7 +476,7 @@ Environment variable: `$GOOGLE_DOMAINS`
 ## Single parameter
 GOOGLE_DOMAINS=delorean.com

-## Mutiple parameters
+## Multiple parameters
 GOOGLE_DOMAINS=delorean.com,savetheclocktower.com
 ```

@@ -516,7 +516,7 @@ Lists are comma-separated and are only available when using environment variable
 ## Single parameter
 --auth0-organizations=org1

-## Mutiple parameters
+## Multiple parameters
 --auth0-organizations=org1 \
 --auth0-organizations=org2 \
 --auth0-organizations=org3
@@ -530,7 +530,7 @@ Environment variable: `$AUTH0_ORGS`
 ## Single parameter
 AUTH0_ORGS=org1

-## Mutiple parameters
+## Multiple parameters
 AUTH0_ORGS=org1,org2,org3
 ```

@@ -557,7 +557,7 @@ The Heroku organization memberships required for access to Chronograf.
 ## Single parameter
 --heroku-organization=org1

-## Mutiple parameters
+## Multiple parameters
 --heroku-organization=org1 \
 --heroku-organization=org2 \
 --heroku-organization=org3
@@ -571,7 +571,7 @@ The Heroku organization memberships required for access to Chronograf.
 ## Single parameter
 HEROKU_ORGS=org1

-## Mutiple parameters
+## Multiple parameters
 HEROKU_ORGS=org1,org2,org3
 ```

@@ -610,7 +610,7 @@ Default value: `user:email`
 ## Single parameter
 --generic-scopes=api

-## Mutiple parameters
+## Multiple parameters
 --generic-scopes=api \
 --generic-scopes=openid \
 --generic-scopes=read_user
@@ -624,7 +624,7 @@ Environment variable: `$GENERIC_SCOPES`
 ## Single parameter
 GENERIC_SCOPES=api

-## Mutiple parameters
+## Multiple parameters
 GENERIC_SCOPES=api,openid,read_user
 ```

@@ -640,7 +640,7 @@ Example: `--generic-domains=example.com`
 ## Single parameter
 --generic-domains=delorean.com

-## Mutiple parameters
+## Multiple parameters
 --generic-domains=delorean.com \
 --generic-domains=savetheclocktower.com
 ```
@@ -653,7 +653,7 @@ Environment variable: `$GENERIC_DOMAINS`
 ## Single parameter
 GENERIC_DOMAINS=delorean.com

-## Mutiple parameters
+## Multiple parameters
 GENERIC_DOMAINS=delorean.com,savetheclocktower.com
 ```

@@ -33,5 +33,5 @@ The first column lists all the databases in your Influx instance and the queries
 2. Click on **InfluxDB**.
 3. Click the **Queries** tab.
 4. Click the **CSV** button in the upper-righthand corner.
-5. The CSV file is downloaded to your Downlaods folder.
+5. The CSV file is downloaded to your Downloads folder.

@@ -80,9 +80,9 @@ Below are the options and how they appear in the log table:

 | Severity Format | Display |
 | --------------- |:------- |
-| Dot | <img src="/img/chronograf/1-6-logs-serverity-fmt-dot.png" alt="Log serverity format 'Dot'" style="display:inline;max-height:24px;"/> |
-| Dot + Text | <img src="/img/chronograf/1-6-logs-serverity-fmt-dot-text.png" alt="Log serverity format 'Dot + Text'" style="display:inline;max-height:24px;"/> |
-| Text | <img src="/img/chronograf/1-6-logs-serverity-fmt-text.png" alt="Log serverity format 'Text'" style="display:inline;max-height:24px;"/> |
+| Dot | <img src="/img/chronograf/1-6-logs-serverity-fmt-dot.png" alt="Log severity format 'Dot'" style="display:inline;max-height:24px;"/> |
+| Dot + Text | <img src="/img/chronograf/1-6-logs-serverity-fmt-dot-text.png" alt="Log severity format 'Dot + Text'" style="display:inline;max-height:24px;"/> |
+| Text | <img src="/img/chronograf/1-6-logs-serverity-fmt-text.png" alt="Log severity format 'Text'" style="display:inline;max-height:24px;"/> |

 ### Truncate or wrap log messages
 By default, text in Log Viewer columns is truncated if it exceeds the column width. You can choose to wrap the text instead to display the full content of each cell.
@@ -32,7 +32,7 @@ First, we need to configure Kapacitor to receive the stream of scores.
 In this example, the scores update too frequently to store all of the score data in a InfluxDB database, so the score data will be semt directly to Kapacitor.
 Like InfluxDB, you can configure a UDP listener.

-Add the following settings the `[[udp]]` secton in your Kapacitor configuration file (`kapacitor.conf`).
+Add the following settings the `[[udp]]` section in your Kapacitor configuration file (`kapacitor.conf`).

 ```
 [[udp]]
@@ -52,7 +52,7 @@ messing with the real game servers.
 ```bash
 #!/bin/bash

-# default options: can be overriden with corresponding arguments.
+# default options: can be overridden with corresponding arguments.
 host=${1-localhost}
 port=${2-9100}
 games=${3-10}
@@ -160,7 +160,7 @@ Perform a full backup into a specific directory with the command below.
 The directory must already exist.

 ```bash
-# Sytnax
+# Syntax
 influxd-ctl backup -full <path-to-backup-directory>

 # Example
@@ -43,7 +43,7 @@ _See also: [Back up and restore](/enterprise_influxdb/v1/administration/backup-a

 ### 8089

-Used for communcation between meta nodes.
+Used for communication between meta nodes.
 It is used by the Raft consensus protocol.
 The only clients using `8089` should be the other meta nodes in the cluster.

@@ -78,7 +78,7 @@ All previous writes are now stored in cold shards.
 influxd-ctl truncate-shards
 ```

-The expected ouput of this command is:
+The expected output of this command is:

 ```
 Truncated shards.
@@ -306,7 +306,7 @@ All previous writes are now stored in cold shards.
 influxd-ctl truncate-shards
 ```

-The expected ouput of this command is:
+The expected output of this command is:

 ```
 Truncated shards.
@@ -47,7 +47,7 @@ have options that facilitate the use of TLS.

 #### `influxd-ctl -bind-tls`

-To manage your cluster over TLS, pass the `-bind-tls` flag with any `influxd-ctl` commmand.
+To manage your cluster over TLS, pass the `-bind-tls` flag with any `influxd-ctl` command.

 {{% note %}}
 If using a self-signed certificate, pass the `-k` flag to skip certificate verification.
@@ -21,12 +21,12 @@ InfluxDB Enterprise runs on a network of independent servers, a *cluster*,
 to provide fault tolerance, availability, and horizontal scalability of the database.

 While many InfluxDB Enterprise features are available
-when run with a single meta node and a single data node, this configuration does not take advantage of the clustering capablity
-or ensure high availablity.
+when run with a single meta node and a single data node, this configuration does not take advantage of the clustering capability
+or ensure high availability.

 Nodes can be added to an existing cluster to improve database performance for querying and writing data.
 Certain configurations (e.g., 3 meta and 2 data node) provide high-availability assurances
-while making certain tradeoffs in query peformance when compared to a single node.
+while making certain tradeoffs in query performance when compared to a single node.

 Further increasing the number of nodes can improve performance in both respects.
 For example, a cluster with 4 data nodes and a [replication factor](https://docs.influxdata.com/enterprise_influxdb/v1/concepts/glossary/#replication-factor)
@@ -52,7 +52,7 @@ for particular data is also available.

 InfluxDB Enterprise can also use [LDAP for managing authentication](/enterprise_influxdb/v1/administration/manage/security/ldap/).

-For FIPS compliance, InfluxDB Enterprise password hashing alogrithms are configurable.
+For FIPS compliance, InfluxDB Enterprise password hashing algorithms are configurable.

 {{% note %}}
 Kapacitor OSS can also delegate its LDAP and security setup to InfluxDB Enterprise.
@@ -145,7 +145,7 @@ Best practices when using an active-passive node setup:
 - Keep the ratio of active to passive nodes between 1:1 and 2:1.
 - Passive nodes should receive all writes.

-For more inforrmation, see how to [add a passive node to a cluster](/enterprise_influxdb/v1/tools/influxd-ctl/add-data/#add-a-passive-data-node-to-a-cluster).
+For more information, see how to [add a passive node to a cluster](/enterprise_influxdb/v1/tools/influxd-ctl/add-data/#add-a-passive-data-node-to-a-cluster).

 {{% note %}}
 **Note:** This feature is experimental and available only in InfluxDB Enterprise.
@@ -51,7 +51,7 @@ linearBins(start: 0.0, width: 10.0, count: 10)
 The [`logarithmicBins()` function](/flux/v0/stdlib/built-in/misc/logarithmicbins) generates a list of exponentially separated floats.

 ```js
-logarithmicBins(start: 1.0, factor: 2.0, count: 10, infinty: true)
+logarithmicBins(start: 1.0, factor: 2.0, count: 10, infinity: true)

 // Generated list: [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, +Inf]
 ```
@@ -38,7 +38,7 @@ data
 ```

 By default, `derivative()` returns only positive derivative values and replaces negative values with _null_.
-Cacluated values are returned as [floats](/flux/v0/language/types/#numeric-types).
+Calculated values are returned as [floats](/flux/v0/language/types/#numeric-types).


 {{< flex >}}
@@ -589,7 +589,7 @@ Since InfluxQL does not support joins, the cost of a InfluxQL query is typically
 The elements of `EXPLAIN` query plan include:

 - expression
-- auxillary fields
+- auxiliary fields
 - number of shards
 - number of series
 - cached values
@@ -34,7 +34,7 @@ Output includes the following:
 - Operation start time

 {{< expand-wrapper >}}
-{{% expand "View example ouput" %}}
+{{% expand "View example output" %}}
 ```sh
 Source Dest Database Policy ShardID TotalSize CurrentSize StartedAt
 cluster-data-node-02:8088 cluster-data-node-03:8088 telegraf autogen 34 119624324 119624324 2023-06-22 23:45:09.470696179 +0000 UTC
@@ -84,7 +84,7 @@ Processing data can take on many forms, and includes the following types of oper
 For example, return the first or last row, the row with the highest or lowest value, and more.
 For information, see [Function types and categories - Selectors](/flux/v0/function-types/#selectors).
 - **Rewrite rows**: use [`map()`](/flux/v0/stdlib/universe/map/) to rewrite each input row.
-Tranform values with mathematic operations, process strings, dynamically add new columns, and more.
+Transform values with mathematic operations, process strings, dynamically add new columns, and more.
 - **Send notifications**: evaluate data and use Flux notification endpoint functions
 to send notifications to external services.
 For information, see [Function types and categories- Notification endpoints](/flux/v0/function-types/#notification-endpoints).
@@ -67,7 +67,7 @@ _For information about operator precedence, see
 [Flux Operators – Operator precedence](/flux/v0/spec/operators/#operator-precedence)._

 ## Predicate expressions
-A predicate expression compares values using [comparison operators](/flux/v0/spec/operators/#comparison-operators), [logical operators](/flux/v0/spec/operators/#logical-operators), or both, and evalutes as `true` or `false`.
+A predicate expression compares values using [comparison operators](/flux/v0/spec/operators/#comparison-operators), [logical operators](/flux/v0/spec/operators/#logical-operators), or both, and evaluates as `true` or `false`.
 For example:

 ```js
@@ -3557,7 +3557,7 @@ In Flux 0.39.0, `holtWinters()` can cause the query engine to panic.
 - Suppress group push down for \_time and \_value.
 - Terminal output functions must produce results.
 - Fix race in interpreter.doCall.
-- Fix ast.Walk for Assignemnt rename.
+- Fix ast.Walk for Assignment rename.
 - Improve error message for missing object properties.
 - Add unary logical expression to the parser.
 - Variable declarator node needs to duplicate the location information.
@@ -55,7 +55,7 @@ A stream is grouped into individual tables using their respective group keys.
 Tables within a stream each have a unique group key value.

 A stream is represented using the stream type `stream[A] where A: Record`.
-The group key is not explicity modeled in the Flux type system.
+The group key is not explicitly modeled in the Flux type system.

 ## Missing values (null)

@@ -52,7 +52,7 @@ a
 A _testcase_ statement defines a test case.

 {{% note %}}
-Testcase statements only work within the context of a Flux developement environment.
+Testcase statements only work within the context of a Flux development environment.
 {{% /note %}}

 ```js
@@ -81,7 +81,7 @@ testcase addition {
 }
 ```

-##### Example testcase extension to prevent feature regession
+##### Example testcase extension to prevent feature regression

 ```js
 @feature({vectorization: true})
@@ -10,7 +10,7 @@ menu:
 parent: contrib/chobbs/discord
 identifier: contrib/chobbs/discord/endpoint
 weight: 301
-flux/v0/tags: [notifcation endpoints, transformations]
+flux/v0/tags: [notification endpoints, transformations]
 ---

 <!------------------------------------------------------------------------------
@@ -33,7 +33,7 @@ Fluxdoc syntax: https://github.com/influxdata/flux/blob/master/docs/fluxdoc.md

 {{% warn %}}
 #### Deprecated
-Experimetnal `array.concat()` is deprecated in favor of
+Experimental `array.concat()` is deprecated in favor of
 [`array.concat()`](/flux/v0/stdlib/array/concat).
 {{% /warn %}}

@@ -39,7 +39,7 @@ has on the results of the first query are met.
 ##### Applicable use cases
 - Write to an InfluxDB bucket and query the written data in a single Flux script.

-_**Note:** `experimental.chain()` does not gaurantee that data written to
+_**Note:** `experimental.chain()` does not guarantee that data written to
 InfluxDB is immediately queryable. A delay between when data is written and
 when it is queryable may cause a query using `experimental.chain()` to fail.

@@ -214,7 +214,7 @@ option geo.units = {distance: "km"}

 ### units

-`units` defines the unit of measurment used in geotemporal operations.
+`units` defines the unit of measurement used in geotemporal operations.



@@ -81,7 +81,7 @@ Input data. Default is piped-forward data (`<-`).

 ## Examples

-### Create a histgram from input data
+### Create a histogram from input data

 ```js
 import "experimental"
@@ -41,7 +41,7 @@ import "experimental/http/requests"
 #### Deprecated
 This package is deprecated in favor of [`requests`](/flux/v0/stdlib/http/requests/).
 Do not mix usage of this experimental package with the `requests` package as the `defaultConfig` is not shared between the two packages.
-This experimental package is completely superceded by the `requests` package so there should be no need to mix them.
+This experimental package is completely superseded by the `requests` package so there should be no need to mix them.
 {{% /warn %}}

 ## Options
@@ -33,7 +33,7 @@ Fluxdoc syntax: https://github.com/influxdata/flux/blob/master/docs/fluxdoc.md
 column for each input table.

 ## Standard deviation modes
-The following modes are avaialable when calculating the standard deviation of data.
+The following modes are available when calculating the standard deviation of data.

 ##### sample
 Calculate the sample standard deviation where the data is considered to be
@@ -36,7 +36,7 @@ Fluxdoc syntax: https://github.com/influxdata/flux/blob/master/docs/fluxdoc.md
 - Outputs a single table for each input table.
 - Outputs a single record for each unique value in an input table.
 - Leaves group keys, columns, and values unmodified.
-- Drops emtpy tables.
+- Drops empty tables.

 ##### Function type signature

@@ -57,7 +57,7 @@ _For information about properties, see `http.post`._

 ### url
 ({{< req >}})
-URL to send the POST reqeust to.
+URL to send the POST request to.



@@ -49,10 +49,10 @@ Secret key to retrieve.

 ## Examples

-- [Retrive a key from the InfluxDB secret store](#retrive-a-key-from-the-influxdb-secret-store)
+- [Retrieve a key from the InfluxDB secret store](#retrieve-a-key-from-the-influxdb-secret-store)
 - [Populate sensitive credentials with secrets//](#populate-sensitive-credentials-with-secrets)

-### Retrive a key from the InfluxDB secret store
+### Retrieve a key from the InfluxDB secret store

 ```js
 import "influxdata/influxdb/secrets"
@@ -51,7 +51,7 @@ Default time value returned if the task has never successfully run.

 ## Examples

-### Return the time an InfluxDB task last succesfully ran
+### Return the time an InfluxDB task last successfully ran

 ```js
 import "influxdata/influxdb/tasks"
@@ -55,10 +55,10 @@ y-value to use in the operation.

 ## Examples

-- [Return the maximum difference betwee two values](#return-the-maximum-difference-betwee-two-values)
+- [Return the maximum difference between two values](#return-the-maximum-difference-betwee-two-values)
 - [Use math.dim in map](#use-mathdim-in-map)

-### Return the maximum difference betwee two values
+### Return the maximum difference between two values

 ```js
 import "math"
@@ -1,7 +1,7 @@
 ---
 title: math.j1() function
 description: >
-`math.j1()` is a funciton that returns the order-one Bessel function for the first kind.
+`math.j1()` is a function that returns the order-one Bessel function for the first kind.
 menu:
 flux_v0_ref:
 name: math.j1
@@ -26,7 +26,7 @@ Fluxdoc syntax: https://github.com/influxdata/flux/blob/master/docs/fluxdoc.md

 ------------------------------------------------------------------------------->

-`math.j1()` is a funciton that returns the order-one Bessel function for the first kind.
+`math.j1()` is a function that returns the order-one Bessel function for the first kind.



@@ -1,7 +1,7 @@
 ---
 title: math.jn() function
 description: >
-`math.jn()` returns the order-n Bessel funciton of the first kind.
+`math.jn()` returns the order-n Bessel function of the first kind.
 menu:
 flux_v0_ref:
 name: math.jn
@@ -26,7 +26,7 @@ Fluxdoc syntax: https://github.com/influxdata/flux/blob/master/docs/fluxdoc.md

 ------------------------------------------------------------------------------->

-`math.jn()` returns the order-n Bessel funciton of the first kind.
+`math.jn()` returns the order-n Bessel function of the first kind.



@@ -1,7 +1,7 @@
 ---
 title: math.mMin() function
 description: >
-`math.mMin()` is a function taht returns the lessser of `x` or `y`.
+`math.mMin()` is a function that returns the lesser of `x` or `y`.
 menu:
 flux_v0_ref:
 name: math.mMin
@@ -26,7 +26,7 @@ Fluxdoc syntax: https://github.com/influxdata/flux/blob/master/docs/fluxdoc.md

 ------------------------------------------------------------------------------->

-`math.mMin()` is a function taht returns the lessser of `x` or `y`.
+`math.mMin()` is a function that returns the lesser of `x` or `y`.



@@ -54,7 +54,7 @@ InfluxDB status level to convert to a PagerDuty severity.

 ## Examples

-### Convert a status level to a PagerDuty serverity
+### Convert a status level to a PagerDuty severity

 ```js
 import "pagerduty"
@@ -96,7 +96,7 @@ The Flux BigQuery implementation uses the Google Cloud Go SDK. Provide your
 authentication credentials using one of the following methods:

 - The `GOOGLE_APPLICATION_CREDENTIALS` environment variable that identifies the
-location of yur credential JSON file.
+location of your credential JSON file.
 - Provide your BigQuery credentials using the `credentials` URL parameters in your BigQuery DSN.

 #### BigQuery credential URL parameter
@@ -48,7 +48,7 @@ String value to search.

 ### substr
 ({{< req >}})
-Substring to count occurences of.
+Substring to count occurrences of.

 The function counts only non-overlapping instances of `substr`.

@@ -30,7 +30,7 @@ Fluxdoc syntax: https://github.com/influxdata/flux/blob/master/docs/fluxdoc.md



-When start or end are past the bounds of the string, respecitvely the start or end of the string
+When start or end are past the bounds of the string, respectively the start or end of the string
 is assumed. When end is less than or equal to start an empty string is returned.

 ##### Function type signature
@@ -31,7 +31,7 @@ Fluxdoc syntax: https://github.com/influxdata/flux/blob/master/docs/fluxdoc.md
 `types.isNumeric()` tests if a value is a numeric type (int, uint, or float).

 This is a helper function to test or filter for values that can be used in
-arithmatic operations or aggregations.
+arithmetic operations or aggregations.

 ##### Function type signature

@@ -95,7 +95,7 @@ bool(v: uint(v: 0))// Returns false

 If converting the `_value` column to boolean types, use `toBool()`.
 If converting columns other than `_value`, use `map()` to iterate over each
-row and `bool()` to covert a column value to a boolean type.
+row and `bool()` to convert a column value to a boolean type.

 ```js
 data
@@ -87,7 +87,7 @@ float(v: "10")// Returns 10.0

 If converting the `_value` column to float types, use `toFloat()`.
 If converting columns other than `_value`, use `map()` to iterate over each
-row and `float()` to covert a column value to a float type.
+row and `float()` to convert a column value to a float type.

 ```js
 data
@@ -30,7 +30,7 @@ Fluxdoc syntax: https://github.com/influxdata/flux/blob/master/docs/fluxdoc.md

 `group()` regroups input data by modifying group key of input tables.

-**Note**: Group does not gaurantee sort order.
+**Note**: Group does not guarantee sort order.
 To ensure data is sorted correctly, use `sort()` after `group()`.

 ##### Function type signature
@@ -54,7 +54,7 @@ all data merges it into a single output table.

 Grouping mode. Default is `by`.

-**Avaliable modes**:
+**Available modes**:
 - **by**: Group by columns defined in the `columns` parameter.
 - **except**: Group by all columns _except_ those in defined in the
 `columns` parameter.
@@ -88,7 +88,7 @@ int(v: 2022-01-01T00:00:00Z)// Returns 1640995200000000000

 If converting the `_value` column to integer types, use `toInt()`.
 If converting columns other than `_value`, use `map()` to iterate over each
-row and `int()` to covert a column value to a integer type.
+row and `int()` to convert a column value to a integer type.

 ```js
 data
@@ -66,7 +66,7 @@ Input data. Default is piped-forward data (`<-`).

 ## Examples

-### Caclulate Kaufman's Adaptive Moving Average for input data
+### Calculate Kaufman's Adaptive Moving Average for input data

 ```js
 import "sampledata"
@@ -30,7 +30,7 @@ Fluxdoc syntax: https://github.com/influxdata/flux/blob/master/docs/fluxdoc.md

 `keep()` returns a stream of tables containing only the specified columns.

-Columns in the group key that are not specifed in the `columns` parameter or
+Columns in the group key that are not specified in the `columns` parameter or
 identified by the `fn` parameter are removed from the group key and dropped
 from output tables. `keep()` is the inverse of `drop()`.

@@ -51,7 +51,7 @@ If the output record drops a group key column, that column is removed from
 the group key.

 #### Preserve columns
-`map()` drops any columns that are not mapped explictly by column label or
+`map()` drops any columns that are not mapped explicitly by column label or
 implicitly using the `with` operator in the `fn` function.
 The `with` operator updates a record property if it already exists, creates
 a new record property if it doesn’t exist, and includes all existing
@@ -63,7 +63,7 @@ Column to use to compute the median. Default is `_value`.

 Computation method. Default is `estimate_tdigest`.

-**Avaialable methods**:
+**Available methods**:
 - **estimate_tdigest**: Aggregate method that uses a
 [t-digest data structure](https://github.com/tdunning/t-digest) to
 compute an accurate median estimate on large data sources.
@@ -79,7 +79,7 @@ Quantile to compute. Must be between `0.0` and `1.0`.

 Computation method. Default is `estimate_tdigest`.

-**Avaialable methods**:
+**Available methods**:
 - **estimate_tdigest**: Aggregate method that uses a
 [t-digest data structure](https://github.com/tdunning/t-digest) to
 compute an accurate quantile estimate on large data sources.
@@ -28,7 +28,7 @@ Fluxdoc syntax: https://github.com/influxdata/flux/blob/master/docs/fluxdoc.md

 ------------------------------------------------------------------------------->

-`sort()` orders rows in each intput table based on values in specified columns.
+`sort()` orders rows in each input table based on values in specified columns.

 #### Output data
 One output table is produced for each input table.
@@ -53,7 +53,7 @@ Column to operate on. Default is `_value`.
 Standard deviation mode or type of standard deviation to calculate.
 Default is `sample`.

-**Availble modes:**
+**Available modes:**
 - **sample**: Calculate the sample standard deviation where the data is
 considered part of a larger population.
 - **population**: Calculate the population standard deviation where the
@@ -75,7 +75,7 @@ string(v: 10.12)

 If converting the `_value` column to string types, use `toString()`.
 If converting columns other than `_value`, use `map()` to iterate over each
-row and `string()` to covert a column value to a string type.
+row and `string()` to convert a column value to a string type.

 ```js
 data
@@ -78,7 +78,7 @@ time(v: 1640995200000000000)// Returns 2022-01-01T00:00:00Z

 If converting the `_value` column to time types, use `toTime()`.
 If converting columns other than `_value`, use `map()` to iterate over each
-row and `time()` to covert a column value to a time type.
+row and `time()` to convert a column value to a time type.

 ```js
 data
@@ -88,7 +88,7 @@ uint(v: -100)// Returns 18446744073709551516

 If converting the `_value` column to uint types, use `toUInt()`.
 If converting columns other than `_value`, use `map()` to iterate over each
-row and `uint()` to covert a column value to a uint type.
+row and `uint()` to convert a column value to a uint type.

 ```js
 data
@@ -2,7 +2,7 @@
 title: /write 1.x compatibility API
 list_title: /write
 description: >
-The `/write` 1.x compatibilty endpoint writes data to InfluxDB Cloud using patterns from the
+The `/write` 1.x compatibility endpoint writes data to InfluxDB Cloud using patterns from the
 InfluxDB 1.x `/write` API endpoint.
 menu:
 influxdb_cloud:
@@ -46,7 +46,7 @@ import "influxdata/influxdb/v1"

 v1.measurementTagValues(
 bucket: "bucket-name",
-measurement: "measurment-name",
+measurement: "measurement-name",
 tag: "_field",
 )
 ```
@@ -183,7 +183,7 @@ query {{% product-name %}}:

 - **Database**: Provide a default database name to query.
 - **User**: Provide an arbitrary string.
-_This credential is ingored when querying {{% product-name %}}, but it cannot be empty._
+_This credential is ignored when querying {{% product-name %}}, but it cannot be empty._
 - **Password**: Provide an InfluxDB [database token](/influxdb/clustered/admin/tokens/)
 with read access to the databases you want to query.

@@ -309,7 +309,7 @@ Since InfluxQL doesn't support joins, the cost of an InfluxQL query is typically
 A query plan generated by `EXPLAIN` contains the following elements:

 - expression
-- auxillary fields
+- auxiliary fields
 - number of shards
 - number of series
 - cached values
@@ -738,7 +738,7 @@ will find the shards refuse to open and will most likely see the following error
 - Add `EXPLAIN ANALYZE` command, which produces a detailed execution plan of a `SELECT` statement.
 - Improved compaction scheduling.
 - Support Ctrl+C to cancel a running query in the Influx CLI.
-- Allow human-readable byte sizes in configuation file.
+- Allow human-readable byte sizes in configuration file.
 - Respect X-Request-Id/Request-Id headers.
 - Add 'X-Influxdb-Build' to http response headers so users can identify if a response is from an OSS or Enterprise service.
 - All errors from queries or writes are available via X-InfluxDB-Error header, and 5xx error messages will be written
@@ -1332,7 +1332,7 @@ All Changes:
 ## v1.0.0 {date="2016-09-08"}

 ### Release Notes
-Inital release of InfluxDB.
+Initial release of InfluxDB.

 ### Breaking changes

@@ -57,7 +57,7 @@ For more information, see the [`journald.conf` manual page](https://www.freedesk
 {{% tab-content %}}
 #### sysvinit

-On Linux sytems not using systemd, InfluxDB writes all log data and `stderr` to `/var/log/influxdb/influxd.log`.
+On Linux systems not using systemd, InfluxDB writes all log data and `stderr` to `/var/log/influxdb/influxd.log`.
 You can override this location by setting the environment variable `STDERR` in a start-up script at `/etc/default/influxdb`.
 (If this file doesn't exist, you need to create it.)

@@ -127,7 +127,7 @@ Note: One or more password `p` values are replaced by a single `[REDACTED]`.
 |User agent |`Baz Service` |
 |Request ID |`d4ca9a10-ab63-11e9-8942-000000000000` |
 |Response time in microseconds |`9357049` |
-* This field shows the database being acessed and the query being run. For more details, see [InfluxDB API reference](/influxdb/v1/tools/api/). Note that this field is URL-encoded.
+* This field shows the database being accessed and the query being run. For more details, see [InfluxDB API reference](/influxdb/v1/tools/api/). Note that this field is URL-encoded.

 ### Redirecting HTTP access logging

@@ -50,7 +50,7 @@ To find the number of points per second being written to the instance. Must have
 $ influx -execute 'select derivative(pointReq, 1s) from "write" where time > now() - 5m' -database '_internal' -precision 'rfc3339'
 ```

-To find the number of writes separated by database since the beginnning of the log file:
+To find the number of writes separated by database since the beginning of the log file:

 ```bash
 grep 'POST' /var/log/influxdb/influxd.log | awk '{ print $10 }' | sort | uniq -c
@@ -54,7 +54,7 @@ linearBins(start: 0.0, width: 10.0, count: 10)
 The [`logarithmicBins()` function](/flux/v0/stdlib/universe/logarithmicbins) generates a list of exponentially separated floats.

 ```js
-logarithmicBins(start: 1.0, factor: 2.0, count: 10, infinty: true)
+logarithmicBins(start: 1.0, factor: 2.0, count: 10, infinity: true)

 // Generated list: [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, +Inf]
 ```
@@ -38,7 +38,7 @@ data
 ```

 By default, `derivative()` returns only positive derivative values and replaces negative values with _null_.
-Cacluated values are returned as [floats](/flux/v0/language/types/#numeric-types).
+Calculated values are returned as [floats](/flux/v0/language/types/#numeric-types).


 {{< flex >}}
@@ -591,7 +591,7 @@ Since InfluxQL does not support joins, the cost of a InfluxQL query is typically
 The elements of `EXPLAIN` query plan include:

 - expression
-- auxillary fields
+- auxiliary fields
 - number of shards
 - number of series
 - cached values
@@ -793,7 +793,7 @@ InfluxData also makes [Helm charts](https://github.com/influxdata/helm-charts) a

 {{% /tab-content %}}
 <!--------------------------------- END kubernetes ---------------------------->
-<!--------------------------------- BEGIN Rasberry Pi ------------------------->
+<!--------------------------------- BEGIN Raspberry Pi ------------------------->
 {{% tab-content %}}

 ## Install InfluxDB v2 on Raspberry Pi
@@ -827,7 +827,7 @@ to collect and send data to:
 - InfluxDB Cloud with a paid [**Usage-Based**](/influxdb/cloud/account-management/pricing-plans/#usage-based-plan) account with relaxed resource restrictions.

 {{% /tab-content %}}
-<!--------------------------------- END Rasberry Pi --------------------------->
+<!--------------------------------- END Raspberry Pi --------------------------->

 {{< /tabs-wrapper >}}

@@ -268,7 +268,7 @@ Provide the following:
 {{< code-tabs-wrapper >}}
 {{% code-tabs %}}
 [Single bucket](#)
-[Mutiple buckets](#)
+[Multiple buckets](#)
 {{% /code-tabs %}}
 {{% code-tab-content %}}
 ```sh
@@ -123,7 +123,7 @@ SHOW MEASUREMENTS [ON <database_name>] [WITH MEASUREMENT <operator> ['<measureme
 [InfluxDB API](/influxdb/v2/reference/api/influxdb-1x/) request.

 - The `WITH`, `WHERE`, `LIMIT` and `OFFSET` clauses are optional.
-- The `WHERE` in `SHOW MEASURMENTS` supports tag comparisons, but not field comparisons.
+- The `WHERE` in `SHOW MEASUREMENTS` supports tag comparisons, but not field comparisons.

 **Supported operators in the `WHERE` clause:**

@@ -668,9 +668,8 @@ Return the mode of the values associated with the `water_level` field key in the
 between `2019-08-18T00:00:00Z` and `2019-08-18T00:30:00Z` and
 [group](/influxdb/v2/query-data/influxql/explore-data/group-by/)
 results into 12-minute time intervals and per tag.
-Then [limis](/influxdb/influxdb/v2/query-data/influxql/explore-data/limit-and-slimit/)
-the number of points and series retu
-ned tothree and one, and it [offsets](/influxdb/v2/query-data/influxql/explore-data
+Then [limits](/influxdb/influxdb/v2/query-data/influxql/explore-data/limit-and-slimit/)
+the number of points and series returned to three and one, and it [offsets](/influxdb/v2/query-data/influxql/explore-data
 #the-offset-and-soffset-clauses) the series returned by one.

 ```sql
@@ -145,5 +145,5 @@ DROP MEASUREMENT "h2o_feet"
 A successful `DROP MEASUREMENT` query returns an empty result.

 {{% warn %}}
-The DROP MEASURMENT command is very resource intensive. We do not recommend this command for bulk data deletion. Use the DELETE FROM command instead, which is less resource intensive.
+The DROP MEASUREMENT command is very resource intensive. We do not recommend this command for bulk data deletion. Use the DELETE FROM command instead, which is less resource intensive.
 {{% /warn %}}
@@ -21,7 +21,7 @@ Remote connections are used to replicate data on write at the bucket level.

 ## Usage
 ```
-influx remote [commond options] [arguments...]
+influx remote [command options] [arguments...]
 ```

 ## Subcommands
@@ -19,7 +19,7 @@ The `influx remote create` command creates a new remote InfluxDB connection for

 ## Usage
 ```
-influx remote create [commond options] [arguments...]
+influx remote create [command options] [arguments...]
 ```

 ## Flags
@@ -17,7 +17,7 @@ The `influx replication` command and its subcommands manage InfluxDB Edge Data R

 ## Usage
 ```
-influx replication [commond options] [arguments...]
+influx replication [command options] [arguments...]
 ```

 ## Subcommands
@@ -673,7 +673,7 @@ from(bucket: "example-bucket")
 {{% /oss-only %}}

 Using InfluxQL with InfluxDB {{< current-version >}} is made possible by the
-[1.x compatiblity API](/influxdb/v2/reference/api/influxdb-1x/) which replicates
+[1.x compatibility API](/influxdb/v2/reference/api/influxdb-1x/) which replicates
 the `/query` endpoint from InfluxDB 1.x. This allows all InfluxDB 1.x-compatible
 clients to work with InfluxDB {{< current-version >}}. However, InfluxQL relies
 on a database and retention policy data model doesn't exist in InfluxDB
@@ -473,7 +473,7 @@ In Flux, an implicit block is a possibly empty sequence of statements within mat
 - File: Each file has a file block containing Flux source text in the file.
 - Function: Each function literal has a function block with Flux source text (even if not explicitly declared).

-Related entries: [explict block](#explicit-block), [block](#block)
+Related entries: [explicit block](#explicit-block), [block](#block)

 ### influx

@@ -822,7 +822,7 @@ The startup process automatically generates replacement `tsi1` indexes for shard

 - Standardize binary naming conventions.
 - Fix configuration loading issue.
-- Add Flux dictionary expressions to Swagger documetnation.
+- Add Flux dictionary expressions to Swagger documentation.
 - Ensure `influxdb` service sees default environment variables when running under `init.d`.
 - Remove upgrade notice from new installs.
 - Ensure `config.toml` is initialized on new installs.
@@ -1357,7 +1357,7 @@ The beta 11 version was **not released**. Changes below are included in the beta
 - Clicking on bucket name takes user to Data Explorer with bucket selected.
 - Extend pkger (InfluxDB Templates) dashboards with table view support.
 - Allow for retention to be provided to `influx setup` command as a duration.
-- Extend `influx pkg export all` capabilities to support filtering by lable name and resource type.
+- Extend `influx pkg export all` capabilities to support filtering by label name and resource type.
 - Added new login and sign-up screen for InfluxDB Cloud users that allows direct login from their region.
 - Added new `influx config` CLI for managing multiple configurations.

@@ -301,7 +301,7 @@ Since InfluxQL does not support joins, the cost of a InfluxQL query is typically
 The elements of `EXPLAIN` query plan include:

 - expression
-- auxillary fields
+- auxiliary fields
 - number of shards
 - number of series
 - cached values
@@ -63,4 +63,4 @@ from(bucket: "example-bucket")
 ```

 ###### Visualization options for pressure gauge
-{{< img-hd src="/img/influxdb/2-0-visualizations-gauge-pressure-8.png" alt="Pressure guage example" />}}
+{{< img-hd src="/img/influxdb/2-0-visualizations-gauge-pressure-8.png" alt="Pressure gauge example" />}}
@@ -312,7 +312,7 @@ override certain values through the HTTP API. It is enabled by default.
 # ...

 [config-override]
-# Enable/Disable the service for overridding configuration via the HTTP API.
+# Enable/Disable the service for overriding configuration via the HTTP API.
 enabled = true

 #...
@@ -365,7 +365,7 @@ Use the `[replay]` group specify the path to the directory where the replay file
 # ...

 [replay]
-# Where to store replay filess.
+# Where to store replay files.
 dir = "/var/lib/kapacitor/replay"

 # ...
@@ -512,7 +512,7 @@ the InfluxDB user must have [admin privileges](/influxdb/v1/administration/authe
 # then the UDP port will be bound to `hostname_or_ip:1234`
 # The default empty value will bind to all addresses.
 udp-bind = ""
-# Subscriptions use the UDP network protocl.
+# Subscriptions use the UDP network protocol.
 # The following options of for the created UDP listeners for each subscription.
 # Number of packets to buffer when reading packets off the socket.
 udp-buffer = 1000
@@ -260,7 +260,7 @@ The relevant parameters in Example 4 are `username` and `password`.

 These can also be set as environment variables.

-**Example 5 – InfluxDB Authentication Paramenters – ENVARS**
+**Example 5 – InfluxDB Authentication Parameters – ENVARS**

 ```
 KAPACITOR_INFLUXDB_0_USERNAME="admin"
@@ -356,7 +356,7 @@ chronograf-v1-3586109e-8b7d-437a-80eb-a9c50d00ad53 stream enabled true

 To ensure Kapacitor requires a username and password to connect, enable basic authentication.
 To do this, set up the `username`, `password`, and `auth-enabled`
-paramenters in the `[http]` section of `kapacitor.conf`.
+parameters in the `[http]` section of `kapacitor.conf`.

 Kapacitor also supports using InfluxDB Enterprise
 to manage authentication and authorization for interactions with the Kapacitor API.
@@ -22,7 +22,7 @@ It uses as its example a hypothetical high-volume website for which two measurem
 are taken:

 * `errors` -- the number of page views that had an error.
-* `views` -- the number of page views that had no errror.
+* `views` -- the number of page views that had no error.

 ### The Data generator

@@ -53,7 +53,7 @@ messing with the real game servers.
 ```bash
 #!/bin/bash

-# default options: can be overriden with corresponding arguments.
+# default options: can be overridden with corresponding arguments.
 host=${1-localhost}
 port=${2-9100}
 games=${3-10}
@@ -206,7 +206,7 @@ func (*mirrorHandler) Info() (*agent.InfoResponse, error) {
 return info, nil
 }

-// Initialze the handler based of the provided options.
+// Initialize the handler based of the provided options.
 func (*mirrorHandler) Init(r *agent.InitRequest) (*agent.InitResponse, error) {
 // Since we expected no options this method is trivial
 // and we return success.
@@ -393,7 +393,7 @@ func (*mirrorHandler) Info() (*agent.InfoResponse, error) {
 return info, nil
 }

-// Initialze the handler based of the provided options.
+// Initialize the handler based of the provided options.
 func (*mirrorHandler) Init(r *agent.InitRequest) (*agent.InitResponse, error) {
 init := &agent.InitResponse{
 Success: true,
@@ -580,7 +580,7 @@ func (*mirrorHandler) Info() (*agent.InfoResponse, error) {
 return info, nil
 }

-// Initialze the handler based of the provided options.
+// Initialize the handler based of the provided options.
 func (h *mirrorHandler) Init(r *agent.InitRequest) (*agent.InitResponse, error) {
 init := &agent.InitResponse{
 Success: true,
@@ -653,7 +653,7 @@ For more details on the alerting system, see the full documentation [here](/kapa
 Also empty string on a tag value is now a sufficient condition for the default conditions to be applied.
 See [#1233](https://github.com/influxdata/kapacitor/pull/1233) for more information.
 - Fixed dot view syntax to use xlabels and not create invalid quotes.
-- Fixed curruption of recordings list after deleting all recordings.
+- Fixed corruption of recordings list after deleting all recordings.
 - Fixed missing "vars" key when listing tasks.
 - Fixed bug where aggregates would not be able to change type.
 - Fixed panic when the process cannot stat the data dir.
@@ -2,7 +2,7 @@
 title: kapacitor level
 description: >
 The `kapacitor level` command sets the log level on the Kapacitor server
-([`kapacitord`](/kapacitor/v1/referene/cli/kapacitord/)).
+([`kapacitord`](/kapacitor/v1/reference/cli/kapacitord/)).
 menu:
 kapacitor_v1:
 name: kapacitor level
@@ -11,7 +11,7 @@ weight: 301
 ---

 The `kapacitor level` command sets the log level on the Kapacitor server
-([`kapacitord`](/kapacitor/v1/referene/cli/kapacitord/)).
+([`kapacitord`](/kapacitor/v1/reference/cli/kapacitord/)).

 ## Usage

@@ -269,7 +269,7 @@ stream

 Error rates are also stored in the same InfluxDB instance and we want to
 send daily reports of `500` errors to the `error-reports` Discord workspace.
-The following TICKscript collects `500` error occurances and publishes them to
+The following TICKscript collects `500` error occurrences and publishes them to
 the `500-errors` topic.

 _**500_errors.tick**_
@@ -51,12 +51,12 @@ the default.
 URL of the MQTT broker.
 Possible protocols include:

-**tcp** - Raw TCP network connection
+**tcp** - Raw TCP network connection
 **ssl** - TLS protected TCP network connection
 **ws** - Websocket network connection

 #### `ssl-ca`
-Absolute path to certificate autority (CA) file.
+Absolute path to certificate authority (CA) file.
 _A CA can be provided without a key/certificate pair._

 #### `ssl-cert`
@@ -2,7 +2,7 @@
 title: Publish event handler
 list_title: Publish
 description: >
-The "publish" event handler allows you to publish Kapacitor alerts messages to mulitple Kapacitor topics. This page includes options and usage examples.
+The "publish" event handler allows you to publish Kapacitor alerts messages to multiple Kapacitor topics. This page includes options and usage examples.
 menu:
 kapacitor_v1:
 name: Publish
@@ -287,7 +287,7 @@ stream

 Error rates are also being stored in the same InfluxDB instance and we want to
 send daily reports of `500` errors to the `error-reports` Slack workspace.
-The following TICKscript collects `500` error occurances and publishes them to
+The following TICKscript collects `500` error occurrences and publishes them to
 the `500-errors` topic.

 ###### 500_errors.tick
@@ -228,7 +228,7 @@ Group the data by a set of dimensions.
 Can specify one time dimension.

 This property adds a `GROUP BY` clause to the query
-so all the normal behaviors when quering InfluxDB with a `GROUP BY` apply.
+so all the normal behaviors when querying InfluxDB with a `GROUP BY` apply.

 Use group by time when your period is longer than your group by time interval.

@@ -272,7 +272,7 @@ In Example 8 three values are added to two string templates. In the call to the

 String templates are currently applicable with the [Alert](/kapacitor/v1/reference/nodes/alert_node/) node and are discussed further in the section [Accessing values in string templates](#accessing-values-in-string-templates) below.

-String templates can also include flow statements such as `if...else` as well as calls to internal formating methods.
+String templates can also include flow statements such as `if...else` as well as calls to internal formatting methods.

 ```
 .message('{{ .ID }} is {{ if eq .Level "OK" }}alive{{ else }}dead{{ end }}: {{ index .Fields "emitted" | printf "%0.3f" }} points/10s.')
@@ -689,7 +689,7 @@ A template has these read only properties in addition to the properties listed [
 | Property | Description |
 | -------- | ----------- |
 | vars | Set of named vars from the TICKscript with their type, default values and description. |
-| dot | [GraphViz DOT](https://en.wikipedia.org/wiki/DOT_(graph_description_language)) syntax formatted representation of the template DAG. NOTE: lables vs attributes does not matter since a template is never executing. |
+| dot | [GraphViz DOT](https://en.wikipedia.org/wiki/DOT_(graph_description_language)) syntax formatted representation of the template DAG. NOTE: labels vs attributes does not matter since a template is never executing. |
 | error | Any error encountered when reading the template. |
 | created | Date the template was first created |
 | modified | Date the template was last modified |
@@ -1307,7 +1307,7 @@ A replay has these read only properties in addition to the properties listed [ab
 | -------- | ----------- |
 | status | One of `replaying` or `finished`. |
 | progress | Number between 0 and 1 indicating the approximate progress of the replay. |
-| error | Any error that occured while perfoming the replay |
+| error | Any error that occurred while performing the replay |


 #### Example
@@ -1079,7 +1079,7 @@ The `delete templates` command removes one or more templates.

 ```bash
 # Syntax
-kapacitor delete templates <Tempalte-IDs | Pattern>
+kapacitor delete templates <Template-IDs | Pattern>

 # Example
 kapacitor delete templates generic_mean_alert
@@ -477,7 +477,7 @@ func (et *ExecutingTask) createNode(p pipeline.Node, d NodeDiagnostic) (n Node,

 ### Documenting your new node

-Since TICKscript is its own language we have built a small utility similiar to [godoc](https://godoc.org/golang.org/x/tools/cmd/godoc) named [tickdoc](https://github.com/influxdb/kapacitor/tree/master/tick/cmd/tickdoc).
+Since TICKscript is its own language we have built a small utility similar to [godoc](https://godoc.org/golang.org/x/tools/cmd/godoc) named [tickdoc](https://github.com/influxdb/kapacitor/tree/master/tick/cmd/tickdoc).
 `tickdoc` generates documentation from comments in the code.
 The `tickdoc` utility understands two special comments to help it generate clean documentation.

@@ -485,7 +485,7 @@ The `tickdoc` utility understands two special comments to help it generate clean
 generate any documentation for it. This is most useful to ignore fields that are set via property methods.
 2. `tick:property`: only added to methods. Informs `tickdoc` that the method is a `property method` not a `chaining method`.

-Place one of these comments on a line all by itself and `tickdoc` will find it and behave accordingly. Otherwise document your code normaly and `tickdoc` will do the rest.
+Place one of these comments on a line all by itself and `tickdoc` will find it and behave accordingly. Otherwise document your code normally and `tickdoc` will do the rest.

 ### Contributing non output node.

@@ -57,7 +57,7 @@ To add a Kapacitor instance to Chronograf:
 <img src="/img/kapacitor/1-4-chrono-configuration01.png" alt="Configuration open" style="max-width: 225px;" />

 2. Locate the InfluxDB source in the list and in the right most column under the
-"Acitve Kapacitor" heading, click **Add Config**.
+"Active Kapacitor" heading, click **Add Config**.
 The Configure Kapacitor page loads with default settings.

 <img src="/img/kapacitor/1-4-chrono-configuration02.png" alt="conifguration-new" style="max-width: 100%;"/>
@@ -87,7 +87,7 @@ To add a Kapacitor instance to Chronograf:
 One of key set of Kapacitor features that can be modified through Chronograf are
 third party alert handlers.

-##### To modify a thrid party alert handler:
+##### To modify a third party alert handler:

 1. In the Configuration table locate the Influxdata instance and its associated
 Kapacitor instance, click the Kapacitor drop down menu and then the **edit icon**.
@@ -217,7 +217,7 @@ Match conditions can be applied to handlers.
 Only alerts matching the conditions will be handled by that handler.

 For example it is typical to only send Slack messages when alerts change state instead of every time an alert is evaluated.
-Modifing the slack handler definition from the first example results in the following:
+Modifying the slack handler definition from the first example results in the following:

 ```yaml
 topic: cpu
@@ -35,7 +35,7 @@ In order to follow this guide you’ll need to create the following resources:
 - An API-invokable script:
 - `water_level_process.flux`: This script computes the minute water level averages and counts the number of points that were used in water level average calculation. The average and count is written to the **water_level_mean** and **water_level_checksum** buckets respectively.
 - A Task:
-- `water_level_checksum.flux`: This task triggers the `water_level_process.flux` script. This task also recomputes a count of the number of points used to calculagte the most recent water level average value. It compares the most recent count from **water_level_checksum** bucket against this new count and triggers a recaclulation of the water level average to accomodate an increase in the count from late arriving data.
+- `water_level_checksum.flux`: This task triggers the `water_level_process.flux` script. This task also recomputes a count of the number of points used to calculate the most recent water level average value. It compares the most recent count from **water_level_checksum** bucket against this new count and triggers a recalculation of the water level average to accommodate an increase in the count from late arriving data.

 In this process, you compute the average water level at each location over one minute windows.
 It's designed to handle data arriving up to one hour late.
@@ -11,7 +11,7 @@ weight: 105
 ## Send data in JSON body with `http.post()`
 Use the [reduce()](/flux/v0.x/stdlib/universe/reduce/) function to create a JSON object to include as the body with `http.post()`.

-1. Import both the [array](/flux/v0.x/stdlib/array/) package to query data and contruct table(s), and the [http package](/flux/v0.x/stdlib/http/) to transfer JSON over http.
+1. Import both the [array](/flux/v0.x/stdlib/array/) package to query data and construct table(s), and the [http package](/flux/v0.x/stdlib/http/) to transfer JSON over http.
 2. Use `array.from()` to query data and construct a table. Or, use another method [to query data with Flux](/influxdb/v2/query-data/flux/).
 3. Use the `reduce()` function to construct a JSON object, and then use `yield()` to store the output of reduce. This table looks like:

Some files were not shown because too many files have changed in this diff.