base update to 2.2 (#3744)

* base update to 2.2

* draft release notes

* draft InfluxDB 2.2 release notes

* move release notes to new PR

* update api for 2.2

* fix menu items

* Update data/products.yml

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update data/products.yml

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* frontmatter fixes

* Feat/oss v2 2 metrics (#3762)

* feat: copy v2.1 metrics to v2.2.

* feat: new storage metrics in v2.2

* feat: v2.2 storage metrics.

* fix: revert 2.1 paths.

* Update "Get started with tasks" (#3763)

* update get started with tasks, closes influxdata/DAR#272

* Apply suggestions from code review

Co-authored-by: Jason Stirnaman <jstirnaman@influxdata.com>

Co-authored-by: Jason Stirnaman <jstirnaman@influxdata.com>

* update nats config options, update influxd-flags logic, related to #3766 (#3771)

* updated token create API examples, closes influxdata/DAR#267 (#3773)

* ported scalar values update to 2.2

* Chore/move openapicli process (#3801)

* Internal measurements for monitoring Enterprise (#3698)

* Add 1.21.4 (#3789)

* Add 1.21.4

* Update content/telegraf/v1.21/about_the_project/release-notes-changelog.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Add apt and knot plugins

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* InfluxDB Enterprise 1.9.6 (#3650)

* Document meta-node HTTP access logging (#3486)
* Document `influxd-ctl backup -estimate` flag (#3484)
  Closes #3480
* add new option for SIGTERM (#3496)
* Document `-meta-only-overwrite-force` restore flag for Enterprise (#3487)
* Document `max-concurrent-deletes` option (#3697)
* Update Enterprise cluster metrics: add `openConnections` (#3703)
  Closes #3653
* Remove marketplace offerings from Enterprise
* Use bytes in certain Enterprise config examples (#3743)
* InfluxDB Enterprise 1.9: remove default values from configuration headings (#3759)

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>
Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>
Co-authored-by: Sam Arnold <sarnold@influxdata.com>

* fix: bad link. (#3793)

* Update products.yml

* Feat/3411 link to cloud limits (#3795)

* feat: link from write error responses to related Cloud limits topics.

* feat: link Troubleshooting writes to Cloud limits.

* fix: grammar.

* Update content/influxdb/cloud/write-data/troubleshoot.md

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

* First updates for the new brand (#3796)

* first pass with new branding, logos, fonts

* updated table font sizes

* removed unnecessary file

* Update assets/styles/layouts/_sidebar.scss

Co-authored-by: Jason Stirnaman <jstirnaman@influxdata.com>

* a few last style updates

Co-authored-by: Jason Stirnaman <jstirnaman@influxdata.com>

* chore: move openapi-cli postprocessing to the getSwagger step.

* chore: convert the content generators to functions so they're not executed on import (which broke the linter). Move the generators into the decorators and remove options args to simplify. Update and simplify the README. Update the oss v2.1 contract.

* README

Co-authored-by: pierwill <19642016+pierwill@users.noreply.github.com>
Co-authored-by: noramullen1 <42354779+noramullen1@users.noreply.github.com>
Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>
Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>
Co-authored-by: Sam Arnold <sarnold@influxdata.com>

* port menu url fix from master to 2.2

* Feat/oss 2 2 config (#3774)

* wip: add new server-config CLI command and /api/v2/config path. Fix Operator token doc. (#3523)

* feat: document OSS-only  and . Deprecate assets-path: ""
(closes #3523)

* chore: add toc

* Replace run-time with runtime. (#3774)

* Update content/influxdb/v2.2/reference/cli/influx/server-config/_index.md

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

* formatted all flux examples in influxdb 2.2

* Fix/set tag groups (#3823)

* fix: replacement of Info: in contracts.
fix: add missing const declarations.
fix: add a decorator to remove redundant servers with empty URLs
  -- all servers should be root now since paths are explicit.

* fix: invalid mix of tags as strings and objects in root.tags. Convert all tags to objects with a default description.

* fix: use tag objects instead of mixing strings and objects in x-tagGroups -- fixes validation errors.

* port data retention and deletion updates to 2.2

* ported updated current version shortcode to 2.2

* Feat/3414 v2.2 debug pprof (#3844)

* feat: update API reference for OSS v2.2. Adds /debug/pprof.

* feat: add v2.2 runtime reference.

* fix: update URLs.

* port hardening-enabled config info, closes #3867, closes #3866 (#3873)

* Fix/3855 oss2.2 (#3857)

* fix: update npm example in v2.2. (#3855)

* fix: cardinality for ossv2.2 (#3850)

* updating placeholders in 2.2

* Document `influx remote` and `influx replication` (#3469)

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* ported migration docs into 2.2

* updated edge.js

* porting various minor changes into 2.2

* InfluxDB 2.2 release notes (#3745)

* draft 2.2 release notes
* remove flux lines; regroup other items
* update link to Flux RNs
* update release notes
* Update content/influxdb/v2.2/reference/release-notes/influxdb.md
* fix frontmatter
* fix link; update version
* edit
* revert for review clarity
* Update influxdb.md
* note about technical preview & feature overview
* edit from Jason
* Update content/influxdb/v2.2/reference/release-notes/influxdb.md
* Update content/influxdb/v2.2/reference/release-notes/influxdb.md
Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>
* Updates per Tim H and Sam feedback
* Update content/influxdb/v2.2/reference/release-notes/influxdb.md
Co-authored-by: pierwill <19642016+pierwill@users.noreply.github.com>
* Update content/influxdb/v2.2/reference/release-notes/influxdb.md
Co-authored-by: pierwill <19642016+pierwill@users.noreply.github.com>
* Update content/influxdb/v2.2/reference/release-notes/influxdb.md
Co-authored-by: pierwill <19642016+pierwill@users.noreply.github.com>
* changes made from feedback
* changed technical preview notes
* Update content/influxdb/v2.2/reference/release-notes/influxdb.md
Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>
* Update content/influxdb/v2.2/reference/release-notes/influxdb.md
Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>
* add link to OSS metrics page
* Update content/influxdb/v2.2/reference/release-notes/influxdb.md
Co-authored-by: Jason Stirnaman <jstirnaman@influxdata.com>
* Add security section to influxdb/v2.2/reference/release-notes/influxdb.md (#3898)
* edits from Sam
* edit from sam
* Update content/influxdb/v2.2/reference/release-notes/influxdb.md
Co-authored-by: pierwill <19642016+pierwill@users.noreply.github.com>
* Update content/influxdb/v2.2/reference/release-notes/influxdb.md
Co-authored-by: pierwill <19642016+pierwill@users.noreply.github.com>
* Update content/influxdb/v2.2/reference/release-notes/influxdb.m
Co-authored-by: pierwill <19642016+pierwill@users.noreply.github.com>
* Update content/influxdb/v2.2/reference/release-notes/influxdb.md
Co-authored-by: pierwill <19642016+pierwill@users.noreply.github.com>
* edits
* Update content/influxdb/v2.2/reference/release-notes/influxdb.md
Co-authored-by: Sam Dillard <sam@influxdata.com>
* edit from Sam
* edit from Sam
* edits from Tim
Co-authored-by: lwandzura <51929958+lwandzura@users.noreply.github.com>
Co-authored-by: pierwill <19642016+pierwill@users.noreply.github.com>
Co-authored-by: Jason Stirnaman <jstirnaman@influxdata.com>
Co-authored-by: Jamie Strandboge <jamie@strandboge.com>
Co-authored-by: Sam Dillard <sam@influxdata.com>

* Recover credentials (#3915)

* draft recover creds
* add relate
* add new option to index
* Update content/influxdb/v2.2/users/recover-credentials.md
Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>
* Update content/influxdb/v2.2/users/recover-credentials.md
* edit from SamA; add note
* sync headers
* add overview bullets; update links
* context from SamA
* Update content/influxdb/v2.2/users/recover-credentials.md
Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>
* Update content/influxdb/v2.2/users/recover-credentials.md
Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>
* Update content/influxdb/v2.2/users/recover-credentials.md
Co-authored-by: Sam Dillard <sam@influxdata.com>
Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>
Co-authored-by: Sam Dillard <sam@influxdata.com>

* update couple stray versions

* add technical preview to the glossary

* update release date

* fix link

* Add documentation for replications max-age (#3936)

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>
Co-authored-by: pierwill <19642016+pierwill@users.noreply.github.com>
Co-authored-by: Scott Anderson <scott@influxdata.com>
Co-authored-by: Jason Stirnaman <jstirnaman@influxdata.com>
Co-authored-by: noramullen1 <42354779+noramullen1@users.noreply.github.com>
Co-authored-by: Sam Arnold <sarnold@influxdata.com>
Co-authored-by: Sunbrye Ly <sunbryely@Sunbryes-MacBook-Pro.local>
Co-authored-by: sunbryely-influxdata <101659702+sunbryely-influxdata@users.noreply.github.com>
Co-authored-by: Dane Strandboge <dane@strandboge.com>
pull/3933/head^2
kelseiv 2022-04-06 15:07:21 -07:00 committed by GitHub
parent 56bdc46d88
commit d674758ec0
478 changed files with 60634 additions and 446 deletions


@@ -1,41 +1,41 @@
## Generate InfluxDB API docs
InfluxDB uses [Redoc](https://github.com/Redocly/redoc/),
InfluxData uses [Redoc](https://github.com/Redocly/redoc/),
[redoc-cli](https://github.com/Redocly/redoc/blob/master/cli/README.md),
and Redocly's [OpenApi CLI](https://redoc.ly/docs/cli/) to generate
API documentation from the InfluxDB openapi contracts.
API documentation from the [InfluxDB OpenAPI (aka Swagger) contracts](https://github.com/influxdata/openapi).
To minimize repo size, the generated API documentation HTML is gitignored, therefore
not committed directly to the docs repo.
The InfluxDB docs deployment process uses swagger files in the `api-docs` directory
to generate version-specific API documentation.
To minimize the size of the `docs-v2` repository, the generated API documentation HTML is gitignored, therefore
not committed to the docs repo.
The InfluxDB docs deployment process uses OpenAPI specification files in the `api-docs` directory
to generate version-specific (Cloud, OSS v2.1, OSS v2.0, etc.) API documentation.
### Versioned swagger files
The structure versions swagger files using the following pattern:
### Versioned OpenAPI files
The `api-docs` directory structure versions OpenAPI files using the following pattern:
```
api-docs/
├── v2.0/
│ └── ref.yml
│ └── swaggerV1Compat.yml
├── v2.1/
│ └── ref.yml
│ └── swaggerV1Compat.yml
├── v2.2/
│ └── ref.yml
│ └── swaggerV1Compat.yml
└── etc...
```
### Configure OpenAPI CLI linting and bundling
`.redoc.yaml` sets linting and bundling options for `openapi` CLI.
`./openapi/plugins` contains custom OpenAPI CLI plugins composed of *rules* (for linting) and *decorators* (for bundle customization).
### Fetch and process openapi contracts
Update the contracts in `api-docs` to the latest from `influxdata/openapi`.
```sh
# In your terminal, go to the `docs-v2/api-docs` directory:
cd api-docs
# Fetch the contracts and run @redocly/openapi-cli to customize and bundle them.
sh getswagger.sh oss; sh getswagger.sh cloud
```
### Custom content
`./openapi/content` contains custom OAS (OpenAPI Spec) content in YAML files. The content structure and Markdown must be valid OAS.
`./openapi/plugins` use `./openapi/plugins/decorators` to apply the content to the contracts.
`.yml` files in `./openapi/content/` set content for sections (nodes) in the contract. To update the content for those nodes, you only need to update the YAML files.
To add new YAML files for other nodes in the openapi contracts, configure the new content YAML file in `./openapi/content/content.js`. Then, write a decorator module for the node and configure the decorator in the plugin, e.g. `./openapi/plugins/docs-plugin.js`. See the [complete list of OAS v3.0 nodes](https://github.com/Redocly/openapi-cli/blob/master/packages/core/src/types/oas3.ts#L529).
`openapi` CLI requires that modules use CommonJS `require` syntax for imports.
### Generate API docs locally
Because the API documentation HTML is gitignored, you must manually generate it
@@ -50,11 +50,44 @@ npx --version
If `npx` returns errors, [download](https://nodejs.org/en/) and run a recent version of the Node.js installer for your OS.
In your terminal, from the root of the docs repo, run:
```sh
# In your terminal, go to the `docs-v2/api-docs` directory:
cd api-docs
# Generate the API docs
# Generate the API docs with Redocly
sh generate-api-docs.sh
```
### Test your openapi spec edits
You can use `getswagger.sh` to fetch contracts from any URL.
For example, if you've made changes to spec files and generated new contracts in your local `openapi` repo, you can use `getswagger.sh` to fetch and process them.
To fetch contracts from your own `openapi` repo, pass the
`-b` `base_url` option and the full path to your `openapi` directory.
```sh
# Use the file:/// protocol to pass your openapi directory.
sh getswagger.sh oss -b file:///Users/me/github/openapi
```
After you fetch them, run the linter or generate HTML to test your changes before you commit them to `influxdata/openapi`.
By default, `getswagger.sh` doesn't run the linter when bundling
the specs. Manually run the [linter rules](https://redoc.ly/docs/cli/resources/built-in-rules/) to get a report of errors and warnings.
```sh
npx @redocly/openapi-cli lint v2.1/ref.yml
```
### Configure OpenAPI CLI linting and bundling
The `.redoc.yaml` configuration file sets options for the `@redocly/openapi-cli` [`lint`](https://redoc.ly/docs/cli/commands/lint/) and [`bundle`](https://redoc.ly/docs/cli/commands/bundle/) commands.
`./openapi/plugins` contains custom InfluxData Docs plugins composed of *rules* (for validating and linting) and *decorators* (for customizing). For more configuration options, see `@redocly/openapi-cli` [configuration file documentation](https://redoc.ly/docs/cli/configuration/configuration-file/).
### Custom content
`./openapi/content` contains custom OAS (OpenAPI Spec) content in YAML files. The content structure and Markdown must be valid OAS.
`./openapi/plugins` use `./openapi/plugins/decorators` to apply the content to the contracts.
`.yml` files in `./openapi/content/` set content for sections (nodes) in the contract. To update the content for those nodes, you only need to update the YAML files.
To add new YAML files for other nodes in the openapi contracts, configure the new content YAML file in `./openapi/content/content.js`. Then, write a decorator module for the node and configure the decorator in the plugin, e.g. `./openapi/plugins/docs-plugin.js`. See the [complete list of OAS v3.0 nodes](https://github.com/Redocly/openapi-cli/blob/master/packages/core/src/types/oas3.ts#L529).
`@redocly/openapi-cli` requires that modules use CommonJS `require` syntax for imports.


@@ -2,8 +2,8 @@ components:
parameters:
After:
description: >
The last resource ID from which to seek from (but not including). This
is to be used instead of `offset`.
Resource ID to seek from. Results are not inclusive of this ID. Use
`after` instead of `offset`.
in: query
name: after
required: false
@@ -169,8 +169,8 @@ components:
status:
default: active
description: >-
If inactive the token is inactive and requests using the token will
be rejected.
Status of the token. If `inactive`, requests using the token will be
rejected.
enum:
- active
- inactive
@@ -197,10 +197,10 @@ components:
- 'y'
type: object
Axis:
description: The description of a particular axis for a visualization.
description: Axis used in a visualization.
properties:
base:
description: Base represents the radix for formatting axis values.
description: Radix for formatting axis values.
enum:
- ''
- '2'
@@ -208,23 +208,23 @@ components:
type: string
bounds:
description: >-
The extents of an axis in the form [lower, upper]. Clients determine
whether bounds are to be inclusive or exclusive of their limits
The extents of the axis in the form [lower, upper]. Clients
determine whether bounds are inclusive or exclusive of their limits.
items:
type: string
maxItems: 2
minItems: 0
type: array
label:
description: Label is a description of this Axis
description: Description of the axis.
type: string
prefix:
description: Prefix represents a label prefix for formatting axis values.
description: Label prefix for formatting axis values.
type: string
scale:
$ref: '#/components/schemas/AxisScale'
suffix:
description: Suffix represents a label suffix for formatting axis values.
description: Label suffix for formatting axis values.
type: string
type: object
AxisScale:
@@ -391,22 +391,22 @@ components:
properties:
labels:
$ref: '#/components/schemas/Link'
description: URL to retrieve labels for this bucket
description: URL to retrieve labels for this bucket.
members:
$ref: '#/components/schemas/Link'
description: URL to retrieve members that can read this bucket
description: URL to retrieve members that can read this bucket.
org:
$ref: '#/components/schemas/Link'
description: URL to retrieve parent organization for this bucket
description: URL to retrieve parent organization for this bucket.
owners:
$ref: '#/components/schemas/Link'
description: URL to retrieve owners that can read and write to this bucket.
self:
$ref: '#/components/schemas/Link'
description: URL for this bucket
description: URL for this bucket.
write:
$ref: '#/components/schemas/Link'
description: URL to write line protocol for this bucket
description: URL to write line protocol to this bucket.
readOnly: true
type: object
name:
@@ -596,10 +596,12 @@ components:
readOnly: true
type: string
latestCompleted:
description: Timestamp of latest scheduled, completed run, RFC3339.
description: >-
Timestamp (in RFC3339 date/time
format](https://datatracker.ietf.org/doc/html/rfc3339)) of the
latest scheduled and completed run.
format: date-time
readOnly: true
type: string
links:
example:
labels: /api/v2/checks/1/labels
@@ -828,24 +830,24 @@ components:
DBRP:
properties:
bucketID:
description: the bucket ID used as target for the translation.
description: ID of the bucket used as the target for the translation.
type: string
database:
description: InfluxDB v1 database
type: string
default:
description: >-
Specify if this mapping represents the default retention policy for
the database specificed.
Mapping represents the default retention policy for the database
specified.
type: boolean
id:
description: the mapping identifier
description: ID of the DBRP mapping.
readOnly: true
type: string
links:
$ref: '#/components/schemas/Links'
orgID:
description: the organization ID that owns this mapping.
description: ID of the organization that owns this mapping.
type: string
retention_policy:
description: InfluxDB v1 retention policy
@@ -861,21 +863,21 @@ components:
DBRPCreate:
properties:
bucketID:
description: the bucket ID used as target for the translation.
description: ID of the bucket used as the target for the translation.
type: string
database:
description: InfluxDB v1 database
type: string
default:
description: >-
Specify if this mapping represents the default retention policy for
the database specificed.
Mapping represents the default retention policy for the database
specified.
type: boolean
org:
description: the organization that owns this mapping.
description: Name of the organization that owns this mapping.
type: string
orgID:
description: the organization ID that owns this mapping.
description: ID of the organization that owns this mapping.
type: string
retention_policy:
description: InfluxDB v1 retention policy
@@ -1134,80 +1136,6 @@ components:
- start
- stop
type: object
DemoDataBucket:
properties:
createdAt:
format: date-time
readOnly: true
type: string
description:
type: string
id:
readOnly: true
type: string
labels:
$ref: '#/components/schemas/Labels'
links:
example:
labels: /api/v2/buckets/1/labels
members: /api/v2/buckets/1/members
org: /api/v2/orgs/2
owners: /api/v2/buckets/1/owners
self: /api/v2/buckets/1
write: /api/v2/write?org=2&bucket=1
properties:
labels:
$ref: '#/components/schemas/Link'
description: URL to retrieve labels for this bucket
members:
$ref: '#/components/schemas/Link'
description: URL to retrieve members that can read this bucket
org:
$ref: '#/components/schemas/Link'
description: URL to retrieve parent organization for this bucket
owners:
$ref: '#/components/schemas/Link'
description: URL to retrieve owners that can read and write to this bucket.
self:
$ref: '#/components/schemas/Link'
description: URL for this bucket
write:
$ref: '#/components/schemas/Link'
description: URL to write line protocol for this bucket
readOnly: true
type: object
name:
type: string
orgID:
type: string
retentionRules:
$ref: '#/components/schemas/RetentionRules'
rp:
type: string
schemaType:
$ref: '#/components/schemas/SchemaType'
default: implicit
type:
default: demodata
readOnly: true
type: string
updatedAt:
format: date-time
readOnly: true
type: string
required:
- name
- retentionRules
DemoDataBuckets:
properties:
buckets:
items:
$ref: '#/components/schemas/DemoDataBucket'
type: array
links:
$ref: '#/components/schemas/Links'
readOnly: true
type: object
Dialect:
description: >-
Dialect are options to change the default CSV output format;
@@ -1315,23 +1243,22 @@ components:
type: string
err:
description: >-
err is a stack of errors that occurred during processing of the
request. Useful for debugging.
Stack of errors that occurred during processing of the request.
Useful for debugging.
readOnly: true
type: string
message:
description: message is a human-readable message.
description: Human-readable message.
readOnly: true
type: string
op:
description: >-
op describes the logical code operation during error. Useful for
debugging.
Describes the logical code operation when the error occurred. Useful
for debugging.
readOnly: true
type: string
required:
- code
- message
Expression:
oneOf:
- $ref: '#/components/schemas/ArrayExpression'
@@ -2232,6 +2159,9 @@ components:
deleteRequestsPerSecond:
description: Allowed organization delete request rate.
type: integer
queryTime:
description: Query Time in nanoseconds
type: integer
readKBs:
description: Query limit in kb/sec. 0 is unlimited.
type: integer
@@ -2240,6 +2170,7 @@ components:
type: integer
required:
- readKBs
- queryTime
- concurrentReadRequests
- writeKBs
- concurrentWriteRequests
@@ -2376,30 +2307,27 @@ components:
type: string
err:
description: >-
Err is a stack of errors that occurred during processing of the
request. Useful for debugging.
Stack of errors that occurred during processing of the request.
Useful for debugging.
readOnly: true
type: string
line:
description: First line within sent body containing malformed data
description: First line in the request body that contains malformed data.
format: int32
readOnly: true
type: integer
message:
description: Message is a human-readable message.
description: Human-readable message.
readOnly: true
type: string
op:
description: >-
Op describes the logical code operation during error. Useful for
debugging.
Describes the logical code operation when the error occurred. Useful
for debugging.
readOnly: true
type: string
required:
- code
- message
- op
- err
LineProtocolLengthError:
properties:
code:
@@ -2409,7 +2337,7 @@ components:
readOnly: true
type: string
message:
description: Message is a human-readable message.
description: Human-readable message.
readOnly: true
type: string
required:
@@ -2498,12 +2426,13 @@ components:
- note
type: object
MeasurementSchema:
description: The schema definition for a single measurement
description: Definition of a measurement schema.
example:
bucketID: ba3c5e7f9b0a0010
columns:
- name: time
type: timestamp
- format: unix timestamp
name: time
type: integer
- name: host
type: tag
- name: region
@@ -2514,17 +2443,17 @@ components:
- dataType: float
name: usage_user
type: field
createdAt: 2021-01-21T00:48:40.993Z
createdAt: '2021-01-21T00:48:40.993Z'
id: 1a3c5e7f9b0a8642
name: cpu
orgID: 0a3c5e7f9b0a0001
updatedAt: 2021-01-21T00:48:40.993Z
updatedAt: '2021-01-21T00:48:40.993Z'
properties:
bucketID:
description: ID of the bucket that the measurement schema is associated with.
type: string
columns:
description: An ordered collection of column definitions
description: Ordered collection of column definitions.
items:
$ref: '#/components/schemas/MeasurementSchemaColumn'
type: array
@@ -2539,7 +2468,9 @@ components:
nullable: false
type: string
orgID:
description: ID of organization that the measurement schema is associated with.
description: >-
ID of the organization that the measurement schema is associated
with.
type: string
updatedAt:
format: date-time
@@ -2553,10 +2484,11 @@ components:
- updatedAt
type: object
MeasurementSchemaColumn:
description: Definition of a measurement column
description: Definition of a measurement schema column.
example:
format: unix timestamp
name: time
type: timestamp
type: integer
properties:
dataType:
$ref: '#/components/schemas/ColumnDataType'
@ -2569,11 +2501,12 @@ components:
- type
type: object
MeasurementSchemaCreateRequest:
description: Create a new measurement schema
description: Create a new measurement schema.
example:
columns:
- name: time
type: timestamp
- format: unix timestamp
name: time
type: integer
- name: host
type: tag
- name: region
@@ -2587,7 +2520,7 @@ components:
name: cpu
properties:
columns:
description: An ordered collection of column definitions
description: Ordered collection of column definitions.
items:
$ref: '#/components/schemas/MeasurementSchemaColumn'
type: array
@@ -2602,23 +2535,23 @@ components:
example:
measurementSchemas:
- bucketID: ba3c5e7f9b0a0010
createdAt: 2021-01-21T00:48:40.993Z
createdAt: '2021-01-21T00:48:40.993Z'
id: 1a3c5e7f9b0a8642
name: cpu
orgID: 0a3c5e7f9b0a0001
updatedAt: 2021-01-21T00:48:40.993Z
updatedAt: '2021-01-21T00:48:40.993Z'
- bucketID: ba3c5e7f9b0a0010
createdAt: 2021-01-21T00:48:40.993Z
createdAt: '2021-01-21T00:48:40.993Z'
id: 1a3c5e7f9b0a8643
name: memory
orgID: 0a3c5e7f9b0a0001
updatedAt: 2021-01-21T00:48:40.993Z
updatedAt: '2021-01-21T00:48:40.993Z'
- bucketID: ba3c5e7f9b0a0010
createdAt: 2021-01-21T00:48:40.993Z
createdAt: '2021-01-21T00:48:40.993Z'
id: 1a3c5e7f9b0a8644
name: disk
orgID: 0a3c5e7f9b0a0001
updatedAt: 2021-01-21T00:48:40.993Z
updatedAt: '2021-01-21T00:48:40.993Z'
properties:
measurementSchemas:
items:
@@ -2631,8 +2564,9 @@ components:
description: Update an existing measurement schema
example:
columns:
- name: time
type: timestamp
- format: unix timestamp
name: time
type: integer
- name: host
type: tag
- name: region
@@ -2923,7 +2857,10 @@ components:
readOnly: true
type: string
latestCompleted:
description: Timestamp of latest scheduled, completed run, RFC3339.
description: >-
Timestamp (in RFC3339 date/time
format](https://datatracker.ietf.org/doc/html/rfc3339)) of the
latest scheduled and completed run.
format: date-time
readOnly: true
type: string
@@ -3428,7 +3365,7 @@ components:
Ready:
properties:
started:
example: 2019-03-13T10:09:33.891196-04:00
example: '2019-03-13T10:09:33.891196-04:00'
format: date-time
type: string
status:
@@ -4319,8 +4256,8 @@ components:
properties:
authorizationID:
description: >-
The ID of the authorization used when this task communicates with
the query engine.
ID of the authorization used when the task communicates with the
query engine.
type: string
createdAt:
format: date-time
@@ -4328,17 +4265,27 @@ components:
type: string
cron:
description: >-
A task repetition schedule in the form '* * * * * *'; parsed from
Flux.
[Cron expression](https://en.wikipedia.org/wiki/Cron#Overview) that
defines the schedule on which the task runs. Cron scheduling is
based on system time.
Value is a [Cron
expression](https://en.wikipedia.org/wiki/Cron#Overview).
type: string
description:
description: An optional description of the task.
description: Description of the task.
type: string
every:
description: A simple task repetition schedule; parsed from Flux.
description: >-
Interval at which the task runs. `every` also determines when the
task first runs, depending on the specified time.
Value is a [duration
literal](https://docs.influxdata.com/flux/v0.x/spec/lexical-elements/#duration-literals)).
format: duration
type: string
flux:
description: The Flux script to run for this task.
description: Flux script to run for this task.
type: string
id:
readOnly: true
@@ -4356,7 +4303,11 @@ components:
readOnly: true
type: string
latestCompleted:
description: Timestamp of latest scheduled, completed run, RFC3339.
description: >-
Timestamp of the latest scheduled and completed run.
Value is a timestamp in [RFC3339 date/time
format](https://docs.influxdata.com/flux/v0.x/data-types/basic/time/#time-syntax).
format: date-time
readOnly: true
type: string
@@ -4384,29 +4335,31 @@ components:
readOnly: true
type: object
name:
description: The name of the task.
description: Name of the task.
type: string
offset:
description: >-
Duration to delay after the schedule, before executing the task;
parsed from flux, if set to zero it will remove this option and use
0 as the default.
[Duration](https://docs.influxdata.com/flux/v0.x/spec/lexical-elements/#duration-literals)
to delay execution of the task after the scheduled time has elapsed.
`0` removes the offset.
The value is a [duration
literal](https://docs.influxdata.com/flux/v0.x/spec/lexical-elements/#duration-literals).
format: duration
type: string
org:
description: The name of the organization that owns this Task.
description: Name of the organization that owns the task.
type: string
orgID:
description: The ID of the organization that owns this Task.
description: ID of the organization that owns the task.
type: string
ownerID:
description: The ID of the user who owns this Task.
description: ID of the user who owns this Task.
type: string
status:
$ref: '#/components/schemas/TaskStatusType'
type:
description: >-
The type of task, this can be used for filtering tasks on list
actions.
description: Type of the task, useful for filtering a task list.
type: string
updatedAt:
format: date-time
@@ -5992,7 +5945,7 @@ info:
with InfluxDB. Access the InfluxDB API using the `/api/v2/` endpoint.
openapi: 3.0.0
paths:
/api/v2/:
/api/v2:
get:
operationId: GetRoutes
parameters:
@@ -8181,69 +8134,6 @@ paths:
summary: Delete data
tags:
- Delete
/api/v2/experimental/sampledata/buckets:
get:
operationId: GetDemoDataBuckets
responses:
'200':
content:
application/json:
schema:
$ref: '#/components/schemas/DemoDataBuckets'
description: A list of demo data buckets
default:
$ref: '#/components/responses/ServerError'
description: Unexpected error
summary: List of Demo Data Buckets
tags:
- DemoDataBuckets
/api/v2/experimental/sampledata/buckets/{bucketID}/members:
delete:
operationId: DeleteDemoDataBucketMembers
parameters:
- description: bucket id
in: path
name: bucketID
required: true
schema:
type: string
responses:
'200':
description: if sampledata route is not available gateway responds with 200
'204':
description: A list of demo data buckets
default:
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
description: Unexpected error
summary: List of Demo Data Buckets
tags:
- DemoDataBuckets
post:
operationId: GetDemoDataBucketMembers
parameters:
- description: bucket id
in: path
name: bucketID
required: true
schema:
type: string
responses:
'200':
description: if sampledata route is not available gateway responds with 200
'204':
description: A list of demo data buckets
default:
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
description: Unexpected error
summary: List of Demo Data Buckets
tags:
- DemoDataBuckets
/api/v2/flags:
get:
operationId: GetFlags
@ -8419,13 +8309,14 @@ paths:
application/json:
schema:
$ref: '#/components/schemas/Token'
description: A temp token for Mapbox
description: Temporary token for Mapbox.
'401':
$ref: '#/components/responses/ServerError'
'500':
$ref: '#/components/responses/ServerError'
default:
$ref: '#/components/responses/ServerError'
summary: Get a mapbox token
/api/v2/me:
get:
operationId: GetMe
@ -9250,8 +9141,9 @@ paths:
- Organizations
/api/v2/orgs/{orgID}/limits:
get:
operationId: GetOrgLimitsID
parameters:
- description: The identifier of the organization.
- description: ID of the organization.
in: path
name: orgID
required: true
@ -9269,7 +9161,7 @@ paths:
links:
$ref: '#/components/schemas/Links'
type: object
description: The Limits defined for the organization.
description: Limits defined for the organization.
default:
$ref: '#/components/responses/ServerError'
description: unexpected error
@ -9581,25 +9473,38 @@ paths:
- Secrets
/api/v2/orgs/{orgID}/usage:
get:
operationId: GetOrgUsageID
parameters:
- description: The identifier of the organization.
- description: ID of the organization.
in: path
name: orgID
required: true
schema:
type: string
- description: start time
- description: >
Earliest time to include in results.
For more information about timestamps, see [Manipulate timestamps
with
Flux](https://docs.influxdata.com/influxdb/cloud/query-data/flux/manipulate-timestamps/).
in: query
name: start
required: true
schema:
type: timestamp
- description: stop time
format: unix timestamp
type: integer
- description: >
Latest time to include in results.
For more information about timestamps, see [Manipulate timestamps
with
Flux](https://docs.influxdata.com/influxdb/cloud/query-data/flux/manipulate-timestamps/).
in: query
name: stop
required: false
schema:
type: timestamp
format: unix timestamp
type: integer
- description: return raw usage data
in: query
name: raw
@ -11653,6 +11558,22 @@ paths:
summary of the run. The summary contains newly created resources.
The diff compares the initial state to the state after the package
is applied. This corresponds to `"dryRun": true`.
'422':
content:
application/json:
schema:
allOf:
- $ref: '#/components/schemas/TemplateSummary'
- properties:
code:
type: string
message:
type: string
required:
- message
- code
type: object
description: Template failed validation
default:
content:
application/json:
@ -12402,7 +12323,6 @@ tags:
- Dashboards
- DBRPs
- Delete
- DemoDataBuckets
- Invocable Scripts
- Labels
- Limits
@ -12531,7 +12451,6 @@ x-tagGroups:
- Dashboards
- DBRPs
- Delete
- DemoDataBuckets
- Invocable Scripts
- Labels
- Limits


@ -36,7 +36,7 @@ paths:
type: string
required: true
description: >-
Bucket to write to. If none exists, a bucket will be created with a
Bucket to write to. If none exists, InfluxDB creates a bucket with a
default 3-day retention policy.
- in: query
name: rp


@ -46,15 +46,9 @@ weight: 304
# npm_config_yes=true npx overrides the prompt
# and (vs. npx --yes) is compatible with npm@6 and npm@7.
openapiCLI="@redocly/openapi-cli"
redocCLI="redoc-cli@0.12.3"
npm --version
# Use Redoc's openapi-cli to regenerate the spec with custom decorations.
INFLUXDB_VERSION=$version npm_config_yes=true npx $openapiCLI bundle $version/ref.yml \
--config=./.redocly.yaml \
-o $version/ref.yml
npx --version
# Use Redoc to generate the v2 API html
npm_config_yes=true npx $redocCLI bundle $version/ref.yml \
@ -67,11 +61,6 @@ weight: 304
--templateOptions.version="$version" \
--templateOptions.titleVersion="$titleVersion" \
# Use Redoc's openapi-cli to regenerate the v1-compat spec with custom decorations.
INFLUXDB_API_VERSION=v1compat INFLUXDB_VERSION=$version npm_config_yes=true npx $openapiCLI bundle $version/swaggerV1Compat.yml \
--config=./.redocly.yaml \
-o $version/swaggerV1Compat.yml
# Use Redoc to generate the v1 compatibility API html
npm_config_yes=true npx $redocCLI bundle $version/swaggerV1Compat.yml \
-t template.hbs \


@ -87,20 +87,43 @@ function showArgs {
echo "ossVersion: $ossVersion";
}
function postProcess() {
# Use npx to install and run the specified version of openapi-cli.
# npm_config_yes=true npx overrides the prompt
# and (vs. npx --yes) is compatible with npm@6 and npm@7.
specPath=$1
version="$2"
apiVersion="$3"
openapiCLI="@redocly/openapi-cli"
npx --version
# Use Redoc's openapi-cli to regenerate the spec with custom decorations.
INFLUXDB_API_VERSION=$apiVersion INFLUXDB_VERSION=$version npm_config_yes=true npx $openapiCLI bundle $specPath \
--config=./.redocly.yaml \
-o $specPath
}
function updateCloud {
echo "Updating Cloud openapi..."
curl ${verbose} ${baseUrl}/contracts/ref/cloud.yml -s -o cloud/ref.yml
postProcess $_ cloud
}
function updateOSS {
echo "Updating OSS ${ossVersion} openapi..."
mkdir -p ${ossVersion} && curl ${verbose} ${baseUrl}/contracts/ref/oss.yml -s -o $_/ref.yml
postProcess $_ $ossVersion
}
function updateV1Compat {
echo "Updating Cloud and ${ossVersion} v1 compatibility openapi..."
curl ${verbose} ${baseUrl}/contracts/swaggerV1Compat.yml -s -o cloud/swaggerV1Compat.yml
postProcess $_ cloud v1compat
mkdir -p ${ossVersion} && cp cloud/swaggerV1Compat.yml $_/swaggerV1Compat.yml
postProcess $_ $ossVersion v1compat
}
if [ ! -z ${verbose} ];


@ -5,15 +5,15 @@ function getVersion(filename) {
return path.join(__dirname, process.env.INFLUXDB_VERSION, (process.env.INFLUXDB_API_VERSION || ''), filename);
}
const info = toJSON(getVersion('info.yml'));
const info = () => toJSON(getVersion('info.yml'));
const securitySchemes = toJSON(getVersion('security-schemes.yml'));
const securitySchemes = () => toJSON(getVersion('security-schemes.yml'));
const servers = toJSON(path.join(__dirname, 'servers.yml'));
const servers = () => toJSON(path.join(__dirname, 'servers.yml'));
const tags = toJSON(getVersion('tags.yml'));
const tags = () => toJSON(getVersion('tags.yml'));
const tagGroups = toJSON(getVersion('tag-groups.yml'));
const tagGroups = () => toJSON(getVersion('tag-groups.yml'));
module.exports = {
info,
@ -22,4 +22,3 @@ module.exports = {
tagGroups,
tags,
}
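In the updated `content.js` above, each value is wrapped in a function so that `toJSON` runs when a decorator calls it — after `INFLUXDB_VERSION` and `INFLUXDB_API_VERSION` are set by the build script — rather than once at `require()` time. A minimal sketch of the difference (variable names are illustrative):

```javascript
// Illustrative sketch: a module-level constant captures the environment
// at load time, while a thunk re-reads it on every call.
const eagerVersion = process.env.INFLUXDB_VERSION; // frozen at load time
const lazyVersion = () => process.env.INFLUXDB_VERSION; // read per call

process.env.INFLUXDB_VERSION = 'v2.2'; // set later, e.g. by the build script
console.log(lazyVersion()); // 'v2.2'
```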


@ -1,3 +1,4 @@
title: InfluxDB OSS API Service
version: 2.0.0
description: |
The InfluxDB v2 API provides a programmatic interface for all interactions with InfluxDB. Access the InfluxDB API using the `/api/v2/` endpoint.


@ -1,3 +1,4 @@
title: InfluxDB OSS API Service
version: 2.0.0
description: |
The InfluxDB v2 API provides a programmatic interface for all interactions with InfluxDB. Access the InfluxDB API using the `/api/v2/` endpoint.


@ -20,7 +20,10 @@
- Users
- name: System information endpoints
tags:
- Config
- Debug
- Health
- Metrics
- Ping
- Ready
- Routes


@ -0,0 +1,4 @@
title: InfluxDB OSS API Service
version: 2.0.0
description: |
The InfluxDB v2 API provides a programmatic interface for all interactions with InfluxDB. Access the InfluxDB API using the `/api/v2/` endpoint.


@ -0,0 +1,58 @@
TokenAuthentication:
type: http
scheme: token
bearerFormat: InfluxDB Token String
description: |
### Token authentication scheme
InfluxDB API tokens ensure secure interaction between users and data. A token belongs to an organization and identifies InfluxDB permissions within the organization.
Include your API token in an `Authorization: Token YOUR_API_TOKEN` HTTP header with each request.
### Example
`curl http://localhost:8086/ping
--header "Authorization: Token YOUR_API_TOKEN"`
For more information and examples, see the following:
- [Use tokens in API requests](https://docs.influxdata.com/influxdb/v2.1/api-guide/api_intro/#authentication).
- [Manage API tokens](https://docs.influxdata.com/influxdb/v2.1/security/tokens).
- [`/authorizations`](#tag/Authorizations) endpoint.
BasicAuthentication:
type: http
scheme: basic
description: |
### Basic authentication scheme
Use HTTP Basic Auth with clients that support the InfluxDB 1.x convention of username and password (that don't support the `Authorization: Token` scheme).
Username and password schemes require the following credentials:
- **username**: 1.x username (this is separate from the UI login username)
- **password**: 1.x password or InfluxDB API token
### Example
`curl --get "http://localhost:8086/query"
--user "YOUR_1.x_USERNAME":"YOUR_TOKEN_OR_PASSWORD"`
For more information and examples, see how to [authenticate with a username and password scheme](https://docs.influxdata.com/influxdb/v2.1/reference/api/influxdb-1x/)
QuerystringAuthentication:
type: apiKey
in: query
name: u=&p=
description: |
### Querystring authentication scheme
Use InfluxDB 1.x API parameters to provide credentials through the query string.
Username and password schemes require the following credentials:
- **username**: 1.x username (this is separate from the UI login username)
- **password**: 1.x password or InfluxDB API token
### Example
`curl --get "http://localhost:8086/query"
--data-urlencode "u=YOUR_1.x_USERNAME"
--data-urlencode "p=YOUR_TOKEN_OR_PASSWORD"`
For more information and examples, see how to [authenticate with a username and password scheme](https://docs.influxdata.com/influxdb/v2.1/reference/api/influxdb-1x/)
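The querystring scheme above can be sketched in code as well (credential values are placeholders; `URLSearchParams` applies the same form-encoding that curl's `--data-urlencode` does):

```javascript
// Illustrative sketch: build a 1.x-compatible /query URL that passes
// credentials through the query string, as described above.
const params = new URLSearchParams({
  u: 'YOUR_1.x_USERNAME',      // placeholder 1.x username
  p: 'YOUR_TOKEN_OR_PASSWORD', // placeholder 1.x password or API token
  q: 'SHOW DATABASES',
});
const url = `http://localhost:8086/query?${params.toString()}`;
console.log(url);
```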


@ -0,0 +1,31 @@
- name: Overview
tags:
- Quick start
- Authentication
- Response codes
- name: Data I/O endpoints
tags:
- Write
- Query
- name: Resource endpoints
tags:
- Buckets
- Dashboards
- Tasks
- Resources
- name: Security and access endpoints
tags:
- Authorizations
- Organizations
- Users
- name: System information endpoints
tags:
- Config
- Debug
- Health
- Metrics
- Ping
- Ready
- Routes
- name: All endpoints
tags: []


@ -0,0 +1,43 @@
- name: Authentication
description: |
Use one of the following schemes to authenticate to the InfluxDB API:
- [Token authentication](#section/Authentication/TokenAuthentication)
- [Basic authentication](#section/Authentication/BasicAuthentication)
- [Querystring authentication](#section/Authentication/QuerystringAuthentication)
<!-- ReDoc-Inject: <security-definitions> -->
x-traitTag: true
- name: Quick start
x-traitTag: true
description: |
See the [**API Quick Start**](https://docs.influxdata.com/influxdb/v2.1/api-guide/api_intro/) to get up and running authenticating with tokens, writing to buckets, and querying data.
[**InfluxDB API client libraries**](https://docs.influxdata.com/influxdb/v2.1/api-guide/client-libraries/) are available for popular languages and ready to import into your application.
- name: Response codes
x-traitTag: true
description: |
The InfluxDB API uses standard HTTP status codes for success and failure responses.
The response body may include additional details. For details about a specific operation's response, see **Responses** and **Response Samples** for that operation.
API operations may return the following HTTP status codes:
| &nbsp;Code&nbsp; | Status | Description |
|:-----------:|:------------------------ |:--------------------- |
| `200` | Success | |
| `204` | No content | For a `POST` request, `204` indicates that InfluxDB accepted the request and request data is valid. Asynchronous operations, such as `write`, might not have completed yet. |
| `400` | Bad request | `Authorization` header is missing or malformed or the API token does not have permission for the operation. |
| `401` | Unauthorized | May indicate one of the following: <li>`Authorization: Token` header is missing or malformed</li><li>API token value is missing from the header</li><li>API token does not have permission. For more information about token types and permissions, see [Manage API tokens](https://docs.influxdata.com/influxdb/v2.1/security/tokens/)</li> |
| `404` | Not found | Requested resource was not found. `message` in the response body provides details about the requested resource. |
| `413` | Request entity too large | Request payload exceeds the size limit. |
| `422` | Unprocessable entity | Request data is invalid. `code` and `message` in the response body provide details about the problem. |
| `429` | Too many requests | API token is temporarily over the request quota. The `Retry-After` header describes when to try the request again. |
| `500` | Internal server error | |
| `503` | Service unavailable | Server is temporarily unavailable to process the request. The `Retry-After` header describes when to try the request again. |
- name: Query
description: |
Retrieve data, analyze queries, and get query suggestions.
- name: Write
description: |
Write time series data to buckets.
- name: Authorizations
description: |
Create and manage API tokens. An **authorization** associates a list of permissions to an **organization** and provides a token for API access. To assign a token to a specific user, scope the authorization to the user ID.
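The `429` and `503` rows in the response-codes table above specify a `Retry-After` header holding a non-negative number of seconds. A minimal client-side sketch of honoring it (the helper name `retryDelayMs` is hypothetical):

```javascript
// Hypothetical helper: compute how long to wait before retrying a
// request, based on the Retry-After header described above.
function retryDelayMs(status, headers) {
  if (status !== 429 && status !== 503) return 0; // only retryable statuses
  const seconds = parseInt(headers['retry-after'], 10);
  return Number.isNaN(seconds) ? 0 : seconds * 1000;
}
console.log(retryDelayMs(429, { 'retry-after': '3' })); // 3000
```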


@ -0,0 +1 @@
title: InfluxDB OSS v1 compatibility API documentation


@ -0,0 +1,9 @@
- name: Overview
tags:
- Authentication
- name: Data I/O endpoints
tags:
- Write
- Query
- name: All endpoints
tags: []


@ -1,17 +1,21 @@
module.exports = SetSecuritySchemes;
const { securitySchemes } = require('../../../content/content')
/** @type {import('@redocly/openapi-cli').OasDecorator} */
function SetSecuritySchemes(options) {
function SetSecuritySchemes() {
const data = securitySchemes();
return {
Components: {
leave(comps, ctx) {
if(options.data) {
comps.securitySchemes = comps.securitySchemes || {};
Object.keys(options.data).forEach(
function(scheme) {
comps.securitySchemes[scheme] = options.data[scheme];
})
}
if(data) {
comps.securitySchemes = comps.securitySchemes || {};
Object.keys(data).forEach(
function(scheme) {
comps.securitySchemes[scheme] = data[scheme];
}
)
}
}
}
}
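The merge performed inside `SetSecuritySchemes` above can be exercised in isolation — keys from the loaded data overwrite or extend the spec's existing `components.securitySchemes` (the sample scheme objects here are illustrative):

```javascript
// Illustrative data: an existing scheme in the spec plus one loaded
// from YAML; the loop copies each loaded key onto the components map.
const comps = { securitySchemes: { TokenAuthentication: { type: 'apiKey' } } };
const data = { BasicAuthentication: { type: 'http', scheme: 'basic' } };
if (data) {
  comps.securitySchemes = comps.securitySchemes || {};
  Object.keys(data).forEach(function (scheme) {
    comps.securitySchemes[scheme] = data[scheme];
  });
}
console.log(Object.keys(comps.securitySchemes)); // [ 'TokenAuthentication', 'BasicAuthentication' ]
```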


@ -0,0 +1,21 @@
module.exports = DeleteServers;
/** @type {import('@redocly/openapi-cli').OasDecorator} */
/**
* Returns an object with keys in [node type, any, ref].
* The key instructs openapi when to invoke the key's Visitor object.
* Object key "Server" is an OAS 3.0 node type.
*/
function DeleteServers() {
return {
Operation: {
leave(op) {
/** Delete servers with empty url. **/
if(Array.isArray(op.servers)) {
op.servers = op.servers.filter(server => server.url);
}
}
}
}
};
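The filter in `DeleteServers` can be run on its own: entries with an empty or missing `url` are dropped from the operation's server list (the sample server objects are illustrative):

```javascript
// Illustrative operation node; only the server with a non-empty url survives.
const op = {
  servers: [
    { url: 'https://example.com/api/v2' }, // kept
    { url: '' },                           // dropped: empty url
    { description: 'no url field' },       // dropped: url is undefined
  ],
};
if (Array.isArray(op.servers)) {
  op.servers = op.servers.filter(server => server.url);
}
console.log(op.servers); // [ { url: 'https://example.com/api/v2' } ]
```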


@ -1,17 +1,20 @@
module.exports = SetServers;
const { servers } = require('../../../content/content')
/** @type {import('@redocly/openapi-cli').OasDecorator} */
/**
* Returns an object with keys in [node type, any, ref].
* The key instructs openapi when to invoke the key's Visitor object.
* The key instructs openapi when to invoke the key's Visitor object.
* Object key "Server" is an OAS 3.0 node type.
*/
function SetServers(options) {
function SetServers() {
const data = servers();
return {
DefinitionRoot: {
leave(root) {
root.servers = options.data;
root.servers = data;
}
},
}


@ -1,16 +1,24 @@
module.exports = SetInfo;
const { info } = require('../../content/content')
/** @type {import('@redocly/openapi-cli').OasDecorator} */
function SetInfo(options) {
function SetInfo() {
const data = info();
return {
Info: {
leave(info, ctx) {
if(options.data) {
if(options.data.hasOwnProperty('title')) {
info.title = options.data.title;
if(data) {
if(data.hasOwnProperty('title')) {
info.title = data.title;
}
if(options.data.hasOwnProperty('description')) {
info.description = options.data.description;
if(data.hasOwnProperty('version')) {
info.version = data.version;
}
if(data.hasOwnProperty('description')) {
info.description = data.description;
}
}
}


@ -1,12 +1,13 @@
module.exports = SetTagGroups;
const { tagGroups } = require('../../../content/content')
const { collect, getName, sortName } = require('../../helpers/content-helper.js')
/**
* Returns an object that defines handler functions for:
* - Operation nodes
* - DefinitionRoot (the root openapi) node
* The order of the two functions is significant.
* The Operation handler collects tags from the
* The Operation handler collects tags from the
* operation ('get', 'post', etc.) in every path.
* The DefinitionRoot handler, executed when
* the parser is leaving the root node,
@ -14,12 +15,19 @@ const { collect, getName, sortName } = require('../../helpers/content-helper.js'
* and sets the value of `All Endpoints` to the collected tags.
*/
/** @type {import('@redocly/openapi-cli').OasDecorator} */
function SetTagGroups(options) {
function SetTagGroups() {
const data = tagGroups();
let tags = [];
/** Collect tags for each operation and convert string tags to object tags. **/
return {
Operation: {
leave(op, ctx, parents) {
tags = collect(tags, op.tags);
let opTags = op.tags?.map(
function(t) {
return typeof t === 'string' ? { name: t, description: '' } : t;
}
) || [];
tags = collect(tags, opTags);
}
},
DefinitionRoot: {
@ -28,12 +36,12 @@ function SetTagGroups(options) {
root.tags = collect(root.tags, tags)
.sort((a, b) => sortName(a, b));
if(!options.data) { return; }
if(!data) { return; }
endpointTags = root.tags
.filter(t => !t['x-traitTag'])
.map(t => getName(t));
root['x-tagGroups'] = options.data
root['x-tagGroups'] = data
.map(function(grp) {
grp.tags = grp.name === 'All endpoints' ? endpointTags : grp.tags;
return grp;
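The normalization step added in `SetTagGroups` wraps string tags as objects before collecting them, since OpenAPI allows an operation's `tags` to be plain strings (the sample tags are illustrative):

```javascript
// Illustrative operation tags: strings are wrapped as
// { name, description } objects; object tags pass through unchanged.
const opTags = ['Write', { name: 'Query', description: 'Run queries.' }];
const normalized = (opTags || []).map(
  t => (typeof t === 'string' ? { name: t, description: '' } : t)
);
console.log(normalized[0]); // { name: 'Write', description: '' }
```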


@ -1,21 +1,23 @@
module.exports = SetTags;
const { tags } = require('../../../content/content')
/**
* Returns an object that defines handler functions for:
* - DefinitionRoot (the root openapi) node
* The DefinitionRoot handler, executed when
* the parser is leaving the root node,
* sets the root `tags` list to the provided `data`.
* sets the root `tags` list to the provided `data`.
*/
/** @type {import('@redocly/openapi-cli').OasDecorator} */
function SetTags(options) {
function SetTags() {
const data = tags();
let tags = [];
return {
DefinitionRoot: {
leave(root) {
if(options.data) {
root.tags = options.data;
}
if(data) {
root.tags = data;
}
}
}
}


@ -3,12 +3,12 @@ const ValidateServersUrl = require('./rules/validate-servers-url');
const RemovePrivatePaths = require('./decorators/paths/remove-private-paths');
const ReplaceShortcodes = require('./decorators/replace-shortcodes');
const SetInfo = require('./decorators/set-info');
const DeleteServers = require('./decorators/servers/delete-servers');
const SetServers = require('./decorators/servers/set-servers');
const SetSecuritySchemes = require('./decorators/security/set-security-schemes');
const SetTags = require('./decorators/tags/set-tags');
const SetTagGroups = require('./decorators/tags/set-tag-groups');
const StripVersionPrefix = require('./decorators/paths/strip-version-prefix');
const {info, securitySchemes, servers, tags, tagGroups } = require('../content/content')
const id = 'docs';
@ -23,15 +23,16 @@ const rules = {
/** @type {import('@redocly/openapi-cli').CustomRulesConfig} */
const decorators = {
oas3: {
'set-servers': () => SetServers({data: servers}),
'set-servers': SetServers,
'delete-servers': DeleteServers,
'remove-private-paths': RemovePrivatePaths,
'replace-docs-url-shortcode': ReplaceShortcodes().docsUrl,
'strip-version-prefix': StripVersionPrefix,
'set-info': () => SetInfo({data: info}),
'set-security': () => SetSecurity({data: security}),
'set-security-schemes': () => SetSecuritySchemes({data: securitySchemes}),
'set-tags': () => SetTags({data: tags}),
'set-tag-groups': () => SetTagGroups({data: tagGroups}),
'set-info': SetInfo,
// 'set-security': SetSecurity,
'set-security-schemes': SetSecuritySchemes,
'set-tags': SetTags,
'set-tag-groups': SetTagGroups,
}
};
@ -40,16 +41,17 @@ module.exports = {
configs: {
all: {
rules: {
'no-server-trailing-slash': 'off',
'no-server-trailing-slash': 'off',
'docs/validate-servers-url': 'error',
},
decorators: {
'docs/set-servers': 'error',
'docs/remove-private-paths': 'error',
'docs/replace-docs-url-shortcode': 'error',
'docs/strip-version-prefix': 'error',
'docs/set-info': 'error',
'docs/set-tag-groups': 'error',
'docs/delete-servers': 'error',
'docs/remove-private-paths': 'error',
'docs/replace-docs-url-shortcode': 'error',
'docs/strip-version-prefix': 'error',
'docs/set-info': 'error',
'docs/set-tag-groups': 'error',
},
},
},


@ -3188,7 +3188,7 @@ components:
Ready:
properties:
started:
example: 2019-03-13T10:09:33.891196-04:00
example: '2019-03-13T10:09:33.891196-04:00'
format: date-time
type: string
status:

File diff suppressed because it is too large.


@ -3,7 +3,7 @@ info:
title: InfluxDB OSS v1 compatibility API documentation
version: 0.1.0
description: |
The InfluxDB 1.x compatibility /write and /query endpoints work with
The InfluxDB 1.x compatibility `/write` and `/query` endpoints work with
InfluxDB 1.x client libraries and third-party integrations like Grafana
and others.

api-docs/v2.2/ref.yml

File diff suppressed because it is too large.


@ -0,0 +1,502 @@
openapi: 3.0.0
info:
title: InfluxDB OSS v1 compatibility API documentation
version: 0.1.0
description: |
The InfluxDB 1.x compatibility `/write` and `/query` endpoints work with
InfluxDB 1.x client libraries and third-party integrations like Grafana
and others.
If you want to use the latest InfluxDB `/api/v2` API instead,
see the [InfluxDB v2 API documentation](/influxdb/v2.1/api/).
servers:
- url: /
paths:
/write:
post:
operationId: PostWriteV1
tags:
- Write
summary: Write time series data into InfluxDB in a V1-compatible format
requestBody:
description: Line protocol body
required: true
content:
text/plain:
schema:
type: string
parameters:
- $ref: '#/components/parameters/TraceSpan'
- $ref: '#/components/parameters/AuthUserV1'
- $ref: '#/components/parameters/AuthPassV1'
- in: query
name: db
schema:
type: string
required: true
description: >-
Bucket to write to. If none exists, InfluxDB creates a bucket with a
default 3-day retention policy.
- in: query
name: rp
schema:
type: string
description: Retention policy name.
- in: query
name: precision
schema:
type: string
description: Write precision.
- in: header
name: Content-Encoding
description: >-
When present, its value indicates to the database that compression
is applied to the line protocol body.
schema:
type: string
description: >-
Specifies that the line protocol in the body is encoded with gzip
or not encoded with identity.
default: identity
enum:
- gzip
- identity
responses:
'204':
description: >-
Write data is correctly formatted and accepted for writing to the
bucket.
'400':
description: >-
Line protocol is poorly formed and no points were written. Use the
response to determine the first malformed line in the request body.
All data in the body was rejected and not written.
content:
application/json:
schema:
$ref: '#/components/schemas/LineProtocolError'
'401':
description: >-
Token does not have sufficient permissions to write to this
organization and bucket or the organization and bucket do not exist.
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
'403':
description: No token was sent, but one is required.
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
'413':
description: >-
Write has been rejected because the payload is too large. Error
message returns max size supported. All data in body was rejected
and not written.
content:
application/json:
schema:
$ref: '#/components/schemas/LineProtocolLengthError'
'429':
description: >-
Token is temporarily over quota. The Retry-After header describes
when to try the write again.
headers:
Retry-After:
description: >-
A non-negative decimal integer indicating the seconds to delay
after the response is received.
schema:
type: integer
format: int32
'503':
description: >-
Server is temporarily unavailable to accept writes. The Retry-After
header describes when to try the write again.
headers:
Retry-After:
description: >-
A non-negative decimal integer indicating the seconds to delay
after the response is received.
schema:
type: integer
format: int32
default:
description: Internal server error
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
/query:
post:
operationId: PostQueryV1
tags:
- Query
summary: Query InfluxDB in a V1-compatible format
requestBody:
description: InfluxQL query to execute.
content:
text/plain:
schema:
type: string
parameters:
- $ref: '#/components/parameters/TraceSpan'
- $ref: '#/components/parameters/AuthUserV1'
- $ref: '#/components/parameters/AuthPassV1'
- in: header
name: Accept
schema:
type: string
description: >-
Specifies how query results should be encoded in the response.
**Note:** With `application/csv`, query results include epoch
timestamps instead of RFC3339 timestamps.
default: application/json
enum:
- application/json
- application/csv
- text/csv
- application/x-msgpack
- in: header
name: Accept-Encoding
description: >-
The Accept-Encoding request HTTP header advertises which content
encoding, usually a compression algorithm, the client is able to
understand.
schema:
type: string
description: >-
Specifies that the query response in the body should be encoded
with gzip or not encoded with identity.
default: identity
enum:
- gzip
- identity
- in: header
name: Content-Type
schema:
type: string
enum:
- application/vnd.influxql
- in: query
name: db
schema:
type: string
required: true
description: Bucket to query.
- in: query
name: rp
schema:
type: string
description: Retention policy name.
- in: query
name: q
description: Defines the InfluxQL query to run.
schema:
type: string
responses:
'200':
description: Query results
headers:
Content-Encoding:
description: >-
The Content-Encoding entity header is used to compress the
media-type. When present, its value indicates which encodings
were applied to the entity-body
schema:
type: string
description: >-
Specifies that the response in the body is encoded with gzip
or not encoded with identity.
default: identity
enum:
- gzip
- identity
Trace-Id:
description: >-
The Trace-Id header reports the request's trace ID, if one was
generated.
schema:
type: string
description: Specifies the request's trace ID.
content:
application/csv:
schema:
$ref: '#/components/schemas/InfluxQLCSVResponse'
text/csv:
schema:
$ref: '#/components/schemas/InfluxQLCSVResponse'
application/json:
schema:
$ref: '#/components/schemas/InfluxQLResponse'
application/x-msgpack:
schema:
type: string
format: binary
'429':
description: >-
Token is temporarily over quota. The Retry-After header describes
when to try the read again.
headers:
Retry-After:
description: >-
A non-negative decimal integer indicating the seconds to delay
after the response is received.
schema:
type: integer
format: int32
default:
description: Error processing query
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
components:
parameters:
TraceSpan:
in: header
name: Zap-Trace-Span
description: OpenTracing span context
example:
trace_id: '1'
span_id: '1'
baggage:
key: value
required: false
schema:
type: string
AuthUserV1:
in: query
name: u
required: false
schema:
type: string
description: Username.
AuthPassV1:
in: query
name: p
required: false
schema:
type: string
description: User token.
schemas:
InfluxQLResponse:
properties:
results:
type: array
items:
type: object
properties:
statement_id:
type: integer
series:
type: array
items:
type: object
properties:
name:
type: string
columns:
type: array
items:
type: integer
values:
type: array
items:
type: array
items: {}
InfluxQLCSVResponse:
type: string
example: >
name,tags,time,test_field,test_tag
test_measurement,,1603740794286107366,1,tag_value
test_measurement,,1603740870053205649,2,tag_value
test_measurement,,1603741221085428881,3,tag_value
Error:
properties:
code:
description: Code is the machine-readable error code.
readOnly: true
type: string
enum:
- internal error
- not found
- conflict
- invalid
- unprocessable entity
- empty value
- unavailable
- forbidden
- too many requests
- unauthorized
- method not allowed
message:
readOnly: true
description: Message is a human-readable message.
type: string
required:
- code
- message
LineProtocolError:
properties:
code:
description: Code is the machine-readable error code.
readOnly: true
type: string
enum:
- internal error
- not found
- conflict
- invalid
- empty value
- unavailable
message:
readOnly: true
description: Message is a human-readable message.
type: string
op:
readOnly: true
description: >-
Op describes the logical code operation during error. Useful for
debugging.
type: string
err:
readOnly: true
description: >-
Err is a stack of errors that occurred during processing of the
request. Useful for debugging.
type: string
line:
readOnly: true
description: First line within sent body containing malformed data
type: integer
format: int32
required:
- code
- message
- op
- err
LineProtocolLengthError:
properties:
code:
description: Code is the machine-readable error code.
readOnly: true
type: string
enum:
- invalid
message:
readOnly: true
description: Message is a human-readable message.
type: string
maxLength:
readOnly: true
description: Max length in bytes for a body of line-protocol.
type: integer
format: int32
required:
- code
- message
- maxLength
securitySchemes:
TokenAuthentication:
type: apiKey
name: Authorization
in: header
description: >
Use the [Token
authentication](#section/Authentication/TokenAuthentication)
scheme to authenticate to the InfluxDB API.
In your API requests, send an `Authorization` header.
For the header value, provide the word `Token` followed by a space and
an InfluxDB API token.
The word `Token` is case-sensitive.
### Syntax
`Authorization: Token YOUR_INFLUX_TOKEN`
For examples and more information, see the following:
- [`/authorizations`](#tag/Authorizations) endpoint.
- [Authorize API requests](/influxdb/v2.1/api-guide/api_intro/#authentication).
- [Manage API tokens](/influxdb/v2.1/security/tokens/).
BasicAuthentication:
type: http
scheme: basic
description: >
Use the HTTP [Basic
authentication](#section/Authentication/BasicAuthentication)
scheme with clients that support the InfluxDB 1.x convention of username
and password (that don't support the `Authorization: Token` scheme):
For examples and more information, see how to [authenticate with a
username and password](/influxdb/v2.1/reference/api/influxdb-1x/).
QuerystringAuthentication:
type: apiKey
in: query
name: u=&p=
description: >
Use the [Querystring
authentication](#section/Authentication/QuerystringAuthentication)
scheme with InfluxDB 1.x API parameters to provide credentials through
the query string.
For examples and more information, see how to [authenticate with a
username and password](/influxdb/v2.1/reference/api/influxdb-1x/).
security:
- TokenAuthentication: []
- BasicAuthentication: []
- QuerystringAuthentication: []
tags:
- name: Authentication
description: >
The InfluxDB 1.x API requires authentication for all requests.
InfluxDB Cloud uses InfluxDB API tokens to authenticate requests.
For more information, see the following:
- [Token authentication](#section/Authentication/TokenAuthentication)
- [Basic authentication](#section/Authentication/BasicAuthentication)
- [Querystring
authentication](#section/Authentication/QuerystringAuthentication)
<!-- ReDoc-Inject: <security-definitions> -->
x-traitTag: true
- Query
- Write
x-tagGroups:
- name: Overview
tags:
- Authentication
- name: Data I/O endpoints
tags:
- Write
- Query
- name: All endpoints
tags:
- Query
- Write


@ -0,0 +1,15 @@
---
title: influx remote
description: Manage remote InfluxDB connections for replicating data.
menu:
influxdb_cloud_ref:
name: influx remote
parent: influx
weight: 101
influxdb/cloud/tags: [write, replication]
related:
- /influxdb/cloud/reference/cli/influx/replication
- /influxdb/cloud/write-data/replication
---
{{< duplicate-oss >}}


@ -0,0 +1,14 @@
---
title: influx remote create
description: Create a new remote InfluxDB connection for replicating data.
menu:
influxdb_cloud_ref:
name: influx remote create
parent: influx remote
weight: 101
influxdb/cloud/tags: [write, replication]
related:
- /influxdb/cloud/reference/cli/influx/replication
---
{{< duplicate-oss >}}


@ -0,0 +1,14 @@
---
title: influx remote delete
description: Delete remote InfluxDB connections used for replicating data.
menu:
influxdb_cloud_ref:
name: influx remote delete
parent: influx remote
weight: 102
influxdb/cloud/tags: [write, replication]
related:
- /influxdb/cloud/reference/cli/influx/replication
---
{{< duplicate-oss >}}


@ -0,0 +1,14 @@
---
title: influx remote list
description: List remote InfluxDB connections used for replicating data.
menu:
influxdb_cloud_ref:
name: influx remote list
parent: influx remote
weight: 102
influxdb/cloud/tags: [write, replication]
related:
- /influxdb/cloud/reference/cli/influx/replication
---
{{< duplicate-oss >}}


@ -0,0 +1,14 @@
---
title: influx remote update
description: Update remote InfluxDB connections used for replicating data.
menu:
influxdb_cloud_ref:
name: influx remote update
parent: influx remote
weight: 102
influxdb/cloud/tags: [write, replication]
related:
- /influxdb/cloud/reference/cli/influx/replication
---
{{< duplicate-oss >}}


@ -0,0 +1,14 @@
---
title: influx replication
description: Use the `influx` CLI to manage InfluxDB replication streams.
menu:
influxdb_cloud_ref:
name: influx replication
parent: influx
weight: 101
influxdb/cloud/tags: [write, replication]
related:
- /influxdb/cloud/reference/cli/influx/remote
---
{{< duplicate-oss >}}


@ -0,0 +1,14 @@
---
title: influx replication create
description: Create a new InfluxDB replication stream.
menu:
influxdb_cloud_ref:
name: influx replication create
parent: influx replication
weight: 101
influxdb/cloud/tags: [write]
related:
- /influxdb/cloud/reference/cli/influx/replication
---
{{< duplicate-oss >}}


@ -0,0 +1,14 @@
---
title: influx replication delete
description: Delete an InfluxDB replication stream.
menu:
influxdb_cloud_ref:
name: influx replication delete
parent: influx replication
weight: 102
influxdb/cloud/tags: [write, replication]
related:
- /influxdb/cloud/reference/cli/influx/replication
---
{{< duplicate-oss >}}


@ -0,0 +1,14 @@
---
title: influx replication list
description: List InfluxDB replication streams and corresponding metrics.
menu:
influxdb_cloud_ref:
name: influx replication list
parent: influx replication
weight: 102
influxdb/cloud/tags: [write, replication]
related:
- /influxdb/cloud/reference/cli/influx/replication
---
{{< duplicate-oss >}}


@ -0,0 +1,14 @@
---
title: influx replication update
description: Update InfluxDB replication streams.
menu:
influxdb_cloud_ref:
name: influx replication update
parent: influx replication
weight: 102
influxdb/cloud/tags: [write, replication]
related:
- /influxdb/cloud/reference/cli/influx/replication
---
{{< duplicate-oss >}}


@ -0,0 +1,17 @@
---
title: Replicate data from InfluxDB OSS to InfluxDB Cloud
weight: 106
description: >
Use InfluxDB replication streams to replicate all data written to an InfluxDB OSS
instance to InfluxDB Cloud.
menu:
influxdb_cloud:
name: Replicate data
parent: Write data
influxdb/cloud/tags: [write, replication]
related:
- /influxdb/cloud/reference/cli/influx/remote
- /influxdb/cloud/reference/cli/influx/replication
---
{{< duplicate-oss >}}


@ -18,15 +18,16 @@ related:
- /resources/videos/influxdb-tasks/
---
An **InfluxDB task** is a scheduled Flux script that takes a stream of input data, modifies or analyzes
it in some way, then stores the modified data in a new bucket or performs other actions.
An **InfluxDB task** is a scheduled Flux script that takes a stream of input data,
modifies or analyzes it in some way, then writes the modified data back to InfluxDB
or performs other actions.
This article walks through writing a basic InfluxDB task that downsamples
data and stores it in a new bucket.
## Components of a task
Every InfluxDB task needs the following four components.
Every InfluxDB task needs the following components.
Their form and order can vary, but they are all essential parts of a task.
- [Task options](#define-task-options)
@ -42,28 +43,27 @@ Task options define specific information about the task.
The example below illustrates how task options are defined in your Flux script:
```js
option task = {
name: "cqinterval15m",
every: 1h,
offset: 0m,
concurrency: 1,
}
option task = {name: "downsample_5m_precision", every: 1h, offset: 0m}
```
_See [Task configuration options](/influxdb/v2.1/process-data/task-options) for detailed information
_See [Task configuration options](/influxdb/v2.2/process-data/task-options) for detailed information
about each option._
{{% note %}}
When creating a task in the InfluxDB user interface (UI), task options are defined in form fields.
The InfluxDB UI provides a form for defining task options.
{{% /note %}}
## Define a data source
Define a data source using Flux's [`from()` function](/{{< latest "flux" >}}/stdlib/influxdata/influxdb/from/)
or any other [Flux input functions](/{{< latest "flux" >}}/function-types/#inputs).
For convenience, consider creating a variable that includes the sourced data with
the required time range and any relevant filters.
1. Use [`from()`](/{{< latest "flux" >}}/stdlib/influxdata/influxdb/from/)
to query data from InfluxDB {{% cloud-only %}}Cloud{{% /cloud-only %}}.
Use
other [Flux input functions](/{{< latest "flux" >}}/function-types/#inputs)
to retrieve data from other sources.
2. Use [`range()`](/{{< latest "flux" >}}/stdlib/universe/range/) to define the time
range to return data from.
3. Use [`filter()`](/{{< latest "flux" >}}/stdlib/universe/filter/) to filter
data based on column values.
```js
data = from(bucket: "example-bucket")
@ -72,9 +72,10 @@ data = from(bucket: "example-bucket")
```
{{% note %}}
#### Using task options in your Flux script
Task options are passed as part of a `task` option record and can be referenced in your Flux script.
#### Use task options in your Flux script
Task options are defined in a `task` option record and can be referenced in your Flux script.
In the example above, the time range is defined as `-task.every`.
`task.every` is dot notation that references the `every` property of the `task` option record.
@ -85,45 +86,58 @@ Using task options to define values in your Flux script can make reusing your ta
## Process or transform your data
The purpose of tasks is to process or transform data in some way.
What exactly happens and what form the output data takes is up to you and your
specific use case.
Tasks automatically process or transform data in some way
at regular intervals.
Data processing can include operations such as downsampling data, detecting
anomalies, sending notifications, and more.
{{% note %}}
#### Account for latent data with an offset
To account for latent data (like data streaming from your edge devices), use an offset in your task. For example, if you set a task interval on the hour with the options `every: 1h` and `offset: 5m`, a task executes 5 minutes after the task interval but the query [`now()`](/{{< latest "flux" >}}/stdlib/universe/now/) time is on the exact hour.
#### Use offset to account for latent data
Use the `offset` task option to account for potentially latent data (like data from edge devices).
A task that runs at one hour intervals (`every: 1h`) with an offset of five minutes (`offset: 5m`)
executes 5 minutes after the hour, but queries data from the original one hour interval.
{{% /note %}}
The example below illustrates a task that downsamples data by calculating the average of set intervals.
It uses the `data` variable defined [above](#define-a-data-source) as the data source.
It then windows the data into 5 minute intervals and calculates the average of each
window using the [`aggregateWindow()` function](/{{< latest "flux" >}}/stdlib/universe/aggregatewindow/).
The task example below downsamples data by calculating the average of set intervals.
It uses [`aggregateWindow()`](/{{< latest "flux" >}}/stdlib/universe/aggregatewindow/)
to group points into 5 minute windows and calculate the average of each
window with [`mean()`](/{{< latest "flux" >}}/stdlib/universe/mean/).
```js
data
option task = {name: "downsample_5m_precision", every: 1h, offset: 5m}
from(bucket: "example-bucket")
|> range(start: -task.every)
|> filter(fn: (r) => r._measurement == "mem" and r.host == "myHost")
|> aggregateWindow(every: 5m, fn: mean)
```
_See [Common tasks](/influxdb/v2.1/process-data/common-tasks) for examples of tasks commonly used with InfluxDB._
_See [Common tasks](/influxdb/v2.2/process-data/common-tasks) for examples of tasks commonly used with InfluxDB._
## Define a destination
In the vast majority of task use cases, once data is transformed, it needs to be sent and stored somewhere.
This could be a separate bucket or another measurement.
In most cases, you'll want to send and store data after the task has transformed it.
The destination could be a separate InfluxDB measurement or bucket.
The example below uses Flux's [`to()` function](/{{< latest "flux" >}}/stdlib/universe/to)
to send the transformed data to another bucket:
The example below uses [`to()`](/{{< latest "flux" >}}/stdlib/universe/to)
to write the transformed data back to another InfluxDB bucket:
```js
// ...
|> to(bucket: "example-downsampled", org: "my-org")
```
{{% note %}}
In order to write data into InfluxDB, you must have `_time`, `_measurement`, `_field`, and `_value` columns.
{{% /note %}}
To write data into InfluxDB, `to()` requires the following columns:
- `_time`
- `_measurement`
- `_field`
- `_value`
You can also write data to other destinations using
[Flux output functions](/{{< latest "flux" >}}/function-types/#outputs).
## Full example task script
@ -131,15 +145,13 @@ Below is a task script that combines all of the components described above:
```js
// Task options
option task = {name: "cqinterval15m", every: 1h, offset: 0m, concurrency: 1}
option task = {name: "downsample_5m_precision", every: 1h, offset: 0m}
// Data source
data = from(bucket: "example-bucket")
from(bucket: "example-bucket")
|> range(start: -task.every)
|> filter(fn: (r) => r._measurement == "mem" and r.host == "myHost")
data
// Data transformation
// Data processing
|> aggregateWindow(every: 5m, fn: mean)
// Data destination
|> to(bucket: "example-downsampled")
@ -148,4 +160,3 @@ data
To learn more about InfluxDB tasks and how they work, watch the following video:
{{< youtube zgCmdtZaH9M >}}


@ -61,7 +61,7 @@ influx [command]
## Commands
| Command | Description |
| :------------------------------------------------------------------ | :------------------------------------------------------------------------- |
|:--------------------------------------------------------------------|:---------------------------------------------------------------------------|
| [apply](/influxdb/v2.1/reference/cli/influx/apply/) | Apply an InfluxDB template |
| [auth](/influxdb/v2.1/reference/cli/influx/auth/) | API token management commands |
| [backup](/influxdb/v2.1/reference/cli/influx/backup/) | Back up data _(InfluxDB OSS only)_ |
@ -76,6 +76,8 @@ influx [command]
| [org](/influxdb/v2.1/reference/cli/influx/org/) | Organization management commands |
| [ping](/influxdb/v2.1/reference/cli/influx/ping/) | Check the InfluxDB `/health` endpoint |
| [query](/influxdb/v2.1/reference/cli/influx/query/) | Execute a Flux query |
| [remote](/influxdb/v2.1/reference/cli/influx/remote/) | Manage remote InfluxDB connections |
| [replication](/influxdb/v2.1/reference/cli/influx/replication/) | Manage InfluxDB replication streams |
| [restore](/influxdb/v2.1/reference/cli/influx/restore/) | Restore backup data _(InfluxDB OSS only)_ |
| [secret](/influxdb/v2.1/reference/cli/influx/secret/) | Manage secrets |
| [setup](/influxdb/v2.1/reference/cli/influx/setup/) | Create default username, password, org, bucket, etc. _(InfluxDB OSS only)_ |


@ -41,7 +41,7 @@ InfluxDB stores data on disk in [shards](/influxdb/v2.1/reference/glossary/#shar
Each shard belongs to a shard group and each shard group has a shard group duration.
The **shard group duration** defines the duration of time that each
shard in the shard group covers.
Each shard contains only points with timestamps in specific time range defined
Each shard contains only points with timestamps in a specific time range defined
by the shard group duration.
By default, shard group durations are set automatically based on the bucket retention
@ -78,4 +78,3 @@ deletes all shard groups with data that is **three to four days old** the next
time the service runs.
{{< html-diagram/data-retention >}}


@ -72,7 +72,10 @@ The cache:
- Stores uncompressed data.
- Gets updates from the WAL each time the storage engine restarts.
The cache is queried at runtime and merged with the data stored in TSM files.
- Uses a maximum of `maxSize` bytes of memory.
Cache snapshots are cache objects currently being written to TSM files.
They're kept in memory while flushing so they can be queried along with the cache.
Queries to the storage engine merge data from the cache with data from the TSM files.
Queries execute on a copy of the data that is made from the cache at query processing time.
This way writes that come in while a query is running do not affect the result.


@ -0,0 +1,19 @@
---
title: InfluxDB OSS 2.2 documentation
description: >
InfluxDB OSS is an open source time series database designed to handle high write and query loads.
Learn how to use and leverage InfluxDB in use cases such as monitoring metrics, IoT data, and events.
layout: landing-influxdb
menu:
influxdb_2_2:
name: InfluxDB OSS 2.2
weight: 1
---
#### Welcome
Welcome to the InfluxDB v2.2 documentation!
InfluxDB is an open source time series database designed to handle high write and query workloads.
This documentation is meant to help you learn how to use and leverage InfluxDB to meet your needs.
Common use cases include infrastructure monitoring, IoT data collection, events handling, and more.
If your use case involves time series data, InfluxDB is purpose-built to handle it.


@ -0,0 +1,33 @@
---
title: Develop with the InfluxDB API
seotitle: Use the InfluxDB API
description: Interact with InfluxDB 2.2 using a rich API for writing and querying data and more.
weight: 4
menu:
influxdb_2_2:
name: Develop with the API
influxdb/v2.2/tags: [api]
---
The InfluxDB v2 API provides a programmatic interface for interactions with InfluxDB.
Access the InfluxDB API using the `/api/v2/` endpoint.
## InfluxDB client libraries
InfluxDB client libraries are language-specific packages that integrate with the InfluxDB v2 API.
For information about supported client libraries, see [InfluxDB client libraries](/{{< latest "influxdb" >}}/api-guide/client-libraries/).
## InfluxDB v2 API documentation
<a class="btn" href="/influxdb/v2.2/api/">InfluxDB OSS {{< current-version >}} API documentation</a>
#### View InfluxDB API documentation locally
InfluxDB API documentation is built into the `influxd` service and represents
the API specific to the current version of InfluxDB.
To view the API documentation locally, [start InfluxDB](/influxdb/v2.2/get-started/#start-influxdb)
and visit the `/docs` endpoint in a browser ([localhost:8086/docs](http://localhost:8086/docs)).
## InfluxDB v1 compatibility API documentation
The InfluxDB v2 API includes [InfluxDB 1.x compatibility endpoints](/influxdb/v2.2/reference/api/influxdb-1x/)
that work with InfluxDB 1.x client libraries and third-party integrations like
[Grafana](https://grafana.com) and others.
<a class="btn" href="/influxdb/v2.2/api/v1-compatibility/">View full v1 compatibility API documentation</a>


@ -0,0 +1,75 @@
---
title: API Quick Start
seotitle: Use the InfluxDB API
description: Interact with InfluxDB using a rich API for writing and querying data and more.
weight: 3
menu:
influxdb_2_2:
name: Quick Start
parent: Develop with the API
aliases:
- /influxdb/v2.2/tools/api/
influxdb/v2.2/tags: [api]
---
InfluxDB offers a rich API and [client libraries](/influxdb/v2.2/api-guide/client-libraries) ready to integrate with your application. Use popular tools like Curl and [Postman](/influxdb/v2.2/api-guide/postman) for rapidly testing API requests.
This section will guide you through the most commonly used API methods.
For detailed documentation on the entire API, see the [InfluxDB v2 API Reference](/influxdb/v2.2/reference/api/#influxdb-v2-api-documentation).
{{% note %}}
If you need to use InfluxDB {{< current-version >}} with **InfluxDB 1.x** API clients and integrations, see the [1.x compatibility API](/influxdb/v2.2/reference/api/influxdb-1x/).
{{% /note %}}
## Bootstrap your application
With most API requests, you'll need to provide at minimum your InfluxDB URL, organization, and API token.
[Install InfluxDB OSS v2.x](/influxdb/v2.2/install/) or upgrade to
an [InfluxDB Cloud account](/influxdb/cloud/sign-up).
### Authentication
InfluxDB uses [API tokens](/influxdb/v2.2/security/tokens/) to authorize API requests.
1. Before exploring the API, use the InfluxDB UI to
[create an initial API token](/influxdb/v2.2/security/tokens/create-token/) for your application.
2. Include your API token in an `Authorization: Token YOUR_API_TOKEN` HTTP header with each request.
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[curl](#curl)
[Node.js](#nodejs)
{{% /code-tabs %}}
{{% code-tab-content %}}
```sh
{{% get-shared-text "api/v2.0/auth/oss/token-auth.sh" %}}
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```js
{{% get-shared-text "api/v2.0/auth/oss/token-auth.js" %}}
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
Postman is another popular tool for exploring APIs. See how to [send authenticated requests with Postman](/{{< latest "influxdb" >}}/api-guide/postman/#send-authenticated-api-requests-with-postman).
## Buckets API
Before writing data, you'll need to create a bucket in InfluxDB.
[Create a bucket](/influxdb/v2.2/organizations/buckets/create-bucket/#create-a-bucket-using-the-influxdb-api) using an HTTP request to the InfluxDB API `/buckets` endpoint.
```sh
{{% get-shared-text "api/v2.0/buckets/oss/create.sh" %}}
```
## Write API
[Write data to InfluxDB](/influxdb/v2.2/write-data/developer-tools/api/) using an HTTP request to the InfluxDB API `/write` endpoint.
## Query API
[Query from InfluxDB](/influxdb/v2.2/query-data/execute-queries/influx-api/) using an HTTP request to the `/query` endpoint.


@ -0,0 +1,26 @@
---
title: Use InfluxDB client libraries
description: >
InfluxDB client libraries are language-specific tools that integrate with the InfluxDB v2 API.
View the list of available client libraries.
weight: 101
aliases:
- /influxdb/v2.2/reference/client-libraries/
- /influxdb/v2.2/reference/api/client-libraries/
- /influxdb/v2.2/tools/client-libraries/
menu:
influxdb_2_2:
name: Client libraries
parent: Develop with the API
influxdb/v2.2/tags: [client libraries]
---
InfluxDB client libraries are language-specific packages that integrate with the InfluxDB v2 API.
The following **InfluxDB v2** client libraries are available:
{{% note %}}
These client libraries are in active development and may not be feature-complete.
This list will continue to grow as more client libraries are released.
{{% /note %}}
{{< children type="list" >}}


@ -0,0 +1,15 @@
---
title: Arduino client library
seotitle: Use the InfluxDB Arduino client library
list_title: Arduino
description: Use the InfluxDB Arduino client library to interact with InfluxDB.
external_url: https://github.com/tobiasschuerg/InfluxDB-Client-for-Arduino
list_note: _ contributed by [tobiasschuerg](https://github.com/tobiasschuerg)_
menu:
influxdb_2_2:
name: Arduino
parent: Client libraries
params:
url: https://github.com/tobiasschuerg/InfluxDB-Client-for-Arduino
weight: 201
---


@ -0,0 +1,117 @@
---
title: JavaScript client library for web browsers
seotitle: Use the InfluxDB JavaScript client library for web browsers
list_title: JavaScript for browsers
description: >
Use the InfluxDB JavaScript client library to interact with InfluxDB in web clients.
menu:
influxdb_2_2:
name: JavaScript for browsers
identifier: client_js_browsers
parent: Client libraries
influxdb/v2.2/tags: [client libraries, JavaScript]
weight: 201
aliases:
- /influxdb/v2.2/reference/api/client-libraries/browserjs/
- /influxdb/v2.2/api-guide/client-libraries/browserjs/write
- /influxdb/v2.2/api-guide/client-libraries/browserjs/query
related:
- /influxdb/v2.2/api-guide/client-libraries/nodejs/write/
- /influxdb/v2.2/api-guide/client-libraries/nodejs/query/
---
Use the [InfluxDB JavaScript client library](https://github.com/influxdata/influxdb-client-js) to interact with the InfluxDB API in browsers and front-end clients. This library supports both front-end and server-side environments and provides the following distributions:
* ECMAScript modules (ESM) and CommonJS modules (CJS)
* Bundled ESM
* Bundled UMD
This guide presumes some familiarity with JavaScript, browser environments, and InfluxDB.
If you're just getting started with InfluxDB, see [Get started with InfluxDB](/{{% latest "influxdb" %}}/get-started/).
{{% warn %}}
### Tokens in production applications
{{% api/browser-token-warning %}}
{{% /warn %}}
* [Before you begin](#before-you-begin)
* [Use with module bundlers](#use-with-module-bundlers)
* [Use bundled distributions with browsers and module loaders](#use-bundled-distributions-with-browsers-and-module-loaders)
* [Get started with the example app](#get-started-with-the-example-app)
## Before you begin
1. Install [Node.js](https://nodejs.org/en/download/package-manager/) to serve your front-end app.
2. Ensure that InfluxDB is running and you can connect to it.
For information about what URL to use to connect to InfluxDB OSS or InfluxDB Cloud, see [InfluxDB URLs](/{{% latest "influxdb" %}}/reference/urls/).
## Use with module bundlers
If you use a module bundler like Webpack or Parcel, install `@influxdata/influxdb-client-browser`.
For more information and examples, see [Node.js](/{{% latest "influxdb" %}}/api-guide/client-libraries/nodejs/).
## Use bundled distributions with browsers and module loaders
1. Configure InfluxDB properties for your script.
```html
<script>
window.INFLUX_ENV = {
url: 'http://localhost:8086',
token: 'YOUR_AUTH_TOKEN'
}
</script>
```
2. Import modules from the latest client library browser distribution.
`@influxdata/influxdb-client-browser` exports bundled ESM and UMD syntaxes.
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[ESM](#import-esm)
[UMD](#import-umd)
{{% /code-tabs %}}
{{% code-tab-content %}}
```html
<script type="module">
import {InfluxDB, Point} from 'https://unpkg.com/@influxdata/influxdb-client-browser/dist/index.browser.mjs'
const influxDB = new InfluxDB({url: INFLUX_ENV.url, token: INFLUX_ENV.token})
</script>
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```html
<script src="https://unpkg.com/@influxdata/influxdb-client-browser"></script>
<script>
const Influx = window['@influxdata/influxdb-client']
const InfluxDB = Influx.InfluxDB
const influxDB = new InfluxDB({url: INFLUX_ENV.url, token: INFLUX_ENV.token})
</script>
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
After you've imported the client library, you're ready to [write data](/{{% latest "influxdb" %}}/api-guide/client-libraries/nodejs/write/?t=nodejs) to InfluxDB.
## Get started with the example app
This library includes an example browser app that queries from and writes to your InfluxDB instance.
1. Clone the [influxdb-client-js](https://github.com/influxdata/influxdb-client-js) repo.
2. Navigate to the `examples` directory:
```sh
cd examples
```
3. Update `./env_browser.js` with your InfluxDB [url](/{{% latest "influxdb" %}}/reference/urls/), [bucket](/{{% latest "influxdb" %}}/organizations/buckets/), [organization](/{{% latest "influxdb" %}}/organizations/), and [token](/{{% latest "influxdb" %}}/security/tokens/).
4. Run the following command to start the application at <http://localhost:3001/examples/index.html>:
```sh
npm run browser
```
`index.html` loads the `env_browser.js` configuration, the client library ESM modules, and the application in your browser.


@ -0,0 +1,14 @@
---
title: C# client library
list_title: C#
seotitle: Use the InfluxDB C# client library
description: Use the InfluxDB C# client library to interact with InfluxDB.
external_url: https://github.com/influxdata/influxdb-client-csharp
menu:
influxdb_2_2:
name: C#
parent: Client libraries
params:
url: https://github.com/influxdata/influxdb-client-csharp
weight: 201
---


@ -0,0 +1,14 @@
---
title: Dart client library
list_title: Dart
seotitle: Use the InfluxDB Dart client library
description: Use the InfluxDB Dart client library to interact with InfluxDB.
external_url: https://github.com/influxdata/influxdb-client-dart
menu:
influxdb_2_2:
name: Dart
parent: Client libraries
params:
url: https://github.com/influxdata/influxdb-client-dart
weight: 201
---


@ -0,0 +1,206 @@
---
title: Go client library
seotitle: Use the InfluxDB Go client library
list_title: Go
description: >
Use the InfluxDB Go client library to interact with InfluxDB.
menu:
influxdb_2_2:
name: Go
parent: Client libraries
influxdb/v2.2/tags: [client libraries, Go]
weight: 201
aliases:
- /influxdb/v2.2/reference/api/client-libraries/go/
- /influxdb/v2.2/tools/client-libraries/go/
---
Use the [InfluxDB Go client library](https://github.com/influxdata/influxdb-client-go) to integrate InfluxDB into Go scripts and applications.
This guide presumes some familiarity with Go and InfluxDB.
If you're just getting started, see [Get started with InfluxDB](/influxdb/v2.2/get-started/).
## Before you begin
1. [Install Go 1.13 or later](https://golang.org/doc/install).
2. Add the client package to your project dependencies.
```sh
# Add InfluxDB Go client package to your project go.mod
go get github.com/influxdata/influxdb-client-go/v2
```
3. Ensure that InfluxDB is running and you can connect to it.
For information about what URL to use to connect to InfluxDB OSS or InfluxDB Cloud, see [InfluxDB URLs](/influxdb/v2.2/reference/urls/).
## Boilerplate for the InfluxDB Go Client Library
Use the Go library to write and query data from InfluxDB.
1. In your Go program, import the necessary packages and specify the entry point of your executable program.
```go
package main
import (
"context"
"fmt"
"time"
"github.com/influxdata/influxdb-client-go/v2"
)
```
2. Define variables for your InfluxDB [bucket](/influxdb/v2.2/organizations/buckets/), [organization](/influxdb/v2.2/organizations/), and [token](/influxdb/v2.2/security/tokens/).
```go
bucket := "example-bucket"
org := "example-org"
token := "example-token"
// Store the URL of your InfluxDB instance
url := "http://localhost:8086"
```
3. Create the InfluxDB Go client and pass in the `url` and `token` parameters.
```go
client := influxdb2.NewClient(url, token)
```
4. Create a **write client** with the `WriteAPIBlocking` method and pass in the `org` and `bucket` parameters.
```go
writeAPI := client.WriteAPIBlocking(org, bucket)
```
5. To query data, create an InfluxDB **query client** and pass in your InfluxDB `org`.
```go
queryAPI := client.QueryAPI(org)
```
## Write data to InfluxDB with Go
Use the Go library to write data to InfluxDB.
1. Create a [point](/influxdb/v2.2/reference/glossary/#point) and write it to InfluxDB using the `WritePoint` method of the API writer struct.
2. Close the client to flush all pending writes and finish.
```go
p := influxdb2.NewPoint("stat",
map[string]string{"unit": "temperature"},
map[string]interface{}{"avg": 24.5, "max": 45},
time.Now())
writeAPI.WritePoint(context.Background(), p)
client.Close()
```
### Complete example write script
```go
func main() {
bucket := "example-bucket"
org := "example-org"
token := "example-token"
// Store the URL of your InfluxDB instance
url := "http://localhost:8086"
// Create a new client using the InfluxDB server URL and API token
client := influxdb2.NewClient(url, token)
// Use the blocking write client to write to the desired bucket
writeAPI := client.WriteAPIBlocking(org, bucket)
// Create point using full params constructor
p := influxdb2.NewPoint("stat",
map[string]string{"unit": "temperature"},
map[string]interface{}{"avg": 24.5, "max": 45},
time.Now())
// Write point immediately
writeAPI.WritePoint(context.Background(), p)
// Ensure background processes finish
client.Close()
}
```
## Query data from InfluxDB with Go
Use the Go library to query data from InfluxDB.
1. Create a Flux query and supply your `bucket` parameter.
```js
from(bucket:"<bucket>")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "stat")
```
The query client sends the Flux query to InfluxDB and returns the results as a FluxRecord object with a table structure.
**The query client includes the following methods:**
- `Query`: Sends the Flux query to InfluxDB.
- `Next`: Iterates over the query response.
- `TableChanged`: Identifies when the group key changes.
- `Record`: Returns the last parsed FluxRecord and gives access to value and row properties.
- `Value`: Returns the actual field value.
```go
result, err := queryAPI.Query(context.Background(), `from(bucket:"<bucket>")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "stat")`)
if err == nil {
for result.Next() {
if result.TableChanged() {
fmt.Printf("table: %s\n", result.TableMetadata().String())
}
fmt.Printf("value: %v\n", result.Record().Value())
}
if result.Err() != nil {
fmt.Printf("query parsing error: %s\n", result.Err().Error())
}
} else {
panic(err)
}
```
**The FluxRecord object includes the following methods for accessing your data:**
- `Table()`: Returns the index of the table the record belongs to.
- `Start()`: Returns the inclusive lower time bound of all records in the current table.
- `Stop()`: Returns the exclusive upper time bound of all records in the current table.
- `Time()`: Returns the time of the record.
- `Value()`: Returns the actual field value.
- `Field()`: Returns the field name.
- `Measurement()`: Returns the measurement name of the record.
- `Values()`: Returns a map of column values.
- `ValueByKey(<your_tags>)`: Returns a value from the record for a given column key.
### Complete example query script
```go
func main() {
// Create client
client := influxdb2.NewClient(url, token)
// Get query client
queryAPI := client.QueryAPI(org)
// Get QueryTableResult
result, err := queryAPI.Query(context.Background(), `from(bucket:"my-bucket")|> range(start: -1h) |> filter(fn: (r) => r._measurement == "stat")`)
if err == nil {
// Iterate over query response
for result.Next() {
// Notice when group key has changed
if result.TableChanged() {
fmt.Printf("table: %s\n", result.TableMetadata().String())
}
// Access data
fmt.Printf("value: %v\n", result.Record().Value())
}
// Check for an error
if result.Err() != nil {
fmt.Printf("query parsing error: %s\n", result.Err().Error())
}
} else {
panic(err)
}
// Ensure background processes finish
client.Close()
}
```
For more information, see the [Go client README on GitHub](https://github.com/influxdata/influxdb-client-go).


@ -0,0 +1,14 @@
---
title: Java client library
seotitle: Use the InfluxDB Java client library
list_title: Java
description: Use the Java client library to interact with InfluxDB.
external_url: https://github.com/influxdata/influxdb-client-java
menu:
influxdb_2_2:
name: Java
parent: Client libraries
params:
url: https://github.com/influxdata/influxdb-client-java
weight: 201
---


@ -0,0 +1,14 @@
---
title: Kotlin client library
seotitle: Use the Kotlin client library
list_title: Kotlin
description: Use the InfluxDB Kotlin client library to interact with InfluxDB.
external_url: https://github.com/influxdata/influxdb-client-java/tree/master/client-kotlin
menu:
influxdb_2_2:
name: Kotlin
parent: Client libraries
params:
url: https://github.com/influxdata/influxdb-client-java/tree/master/client-kotlin
weight: 201
---

---
title: Node.js JavaScript client library
seotitle: Use the InfluxDB JavaScript client library
list_title: Node.js
description: >
Use the InfluxDB Node.js JavaScript client library to interact with InfluxDB.
menu:
influxdb_2_2:
name: Node.js
parent: Client libraries
influxdb/v2.2/tags: [client libraries, JavaScript]
weight: 201
aliases:
- /influxdb/v2.2/reference/api/client-libraries/nodejs/
- /influxdb/v2.2/reference/api/client-libraries/js/
---
Use the [InfluxDB JavaScript client library](https://github.com/influxdata/influxdb-client-js) to integrate InfluxDB into your Node.js application.
In this guide, you'll start a Node.js project from scratch and code some simple API operations.
{{< children >}}
{{% api/v2dot0/nodejs/learn-more %}}

---
title: Install the InfluxDB JavaScript client library
seotitle: Install the InfluxDB Node.js JavaScript client library
description: >
Install the JavaScript client library to interact with the InfluxDB API in Node.js.
menu:
influxdb_2_2:
name: Install
parent: Node.js
influxdb/v2.2/tags: [client libraries, JavaScript]
weight: 100
aliases:
- /influxdb/v2.2/reference/api/client-libraries/nodejs/install
---
## Install Node.js
1. Install [Node.js](https://nodejs.org/en/download/package-manager/).
2. Ensure that InfluxDB is running and you can connect to it.
For information about what URL to use to connect to InfluxDB OSS or InfluxDB Cloud, see [InfluxDB URLs](/influxdb/v2.2/reference/urls/).
3. Start a new Node.js project.
The `npm` package manager is included with Node.js.
```sh
mkdir influx-node-app && cd influx-node-app
npm init -y
```
## Install TypeScript
Many of the client library examples use [TypeScript](https://www.typescriptlang.org/). Follow these steps to initialize the TypeScript project.
1. Install TypeScript and type definitions for Node.js.
```sh
npm i -g typescript && npm i --save-dev @types/node
```
2. Create a TypeScript configuration with default values.
```sh
tsc --init
```
3. Run the TypeScript compiler. To recompile your code automatically as you make changes, pass the `watch` flag to the compiler.
```sh
tsc --watch
```
## Install dependencies
The JavaScript client library contains two packages: `@influxdata/influxdb-client` and `@influxdata/influxdb-client-apis`.
Add both as dependencies of your project.
1. Open a new terminal window and install `@influxdata/influxdb-client` for querying and writing data:
```sh
npm install --save @influxdata/influxdb-client
```
2. Install `@influxdata/influxdb-client-apis` for access to the InfluxDB management APIs:
```sh
npm install --save @influxdata/influxdb-client-apis
```
## Next steps
Once you've installed the JavaScript client library, you're ready to [write data](/influxdb/v2.2/api-guide/client-libraries/nodejs/write/) to InfluxDB or [get started](#get-started-with-examples) with other examples from the client library.
## Get started with examples
{{% note %}}
The client examples include an [`env`](https://github.com/influxdata/influxdb-client-js/blob/master/examples/env.js) module for accessing your InfluxDB properties from environment variables or from `env.js`.
The examples use these properties to interact with the InfluxDB API.
{{% /note %}}
1. Set environment variables or update `env.js` with your InfluxDB [bucket](/influxdb/v2.2/organizations/buckets/), [organization](/influxdb/v2.2/organizations/), [token](/influxdb/v2.2/security/tokens/), and [url](/influxdb/v2.2/reference/urls/).
```sh
export INFLUX_URL=http://localhost:8086
export INFLUX_TOKEN=YOUR_API_TOKEN
export INFLUX_ORG=YOUR_ORG
export INFLUX_BUCKET=YOUR_BUCKET
```
Replace the following:
- *`YOUR_API_TOKEN`*: InfluxDB API token
- *`YOUR_ORG`*: InfluxDB organization ID
- *`YOUR_BUCKET`*: InfluxDB bucket name
2. Run an example script.
```sh
query.ts
```
{{% api/v2dot0/nodejs/learn-more %}}

---
title: Query data with the InfluxDB JavaScript client library
description: >
Use the JavaScript client library to query data with the InfluxDB API in Node.js.
menu:
influxdb_2_2:
name: Query
parent: Node.js
influxdb/v2.2/tags: [client libraries, JavaScript]
weight: 201
aliases:
- /influxdb/v2.2/reference/api/client-libraries/nodejs/query
---
Use the [InfluxDB JavaScript client library](https://github.com/influxdata/influxdb-client-js) in a Node.js environment to query InfluxDB.
The following example sends a Flux query to an InfluxDB bucket and outputs rows from an observable table.
## Before you begin
- [Install the client library and other dependencies](/influxdb/v2.2/api-guide/client-libraries/nodejs/install/).
## Query InfluxDB
1. Change to your new project directory and create a file for your query module.
```sh
cd influx-node-app && touch query.js
```
2. Instantiate an `InfluxDB` client. Provide your InfluxDB URL and API token.
Use the `getQueryApi()` method of the client.
Provide your InfluxDB organization ID to create a configured **query client**.
```js
import {InfluxDB} from '@influxdata/influxdb-client'

const queryApi = new InfluxDB({url: YOUR_URL, token: YOUR_API_TOKEN}).getQueryApi(YOUR_ORG)
```
Replace the following:
- *`YOUR_URL`*: InfluxDB URL
- *`YOUR_API_TOKEN`*: InfluxDB API token
- *`YOUR_ORG`*: InfluxDB organization ID
3. Create a Flux query for your InfluxDB bucket. Store the query as a string variable.
{{% warn %}}
To prevent injection attacks, avoid concatenating unsafe user input with queries.
{{% /warn %}}
```js
const fluxQuery = `from(bucket: "YOUR_BUCKET")
  |> range(start: 0)
  |> filter(fn: (r) => r._measurement == "temperature")`
```
Replace *`YOUR_BUCKET`* with the name of your InfluxDB bucket.
4. Use the `queryRows()` method of the query client to query InfluxDB.
`queryRows()` takes a Flux query and an [RxJS **Observer**](http://reactivex.io/rxjs/manual/overview.html#observer) object.
The client returns [table](/{{% latest "influxdb" %}}/reference/syntax/annotated-csv/#tables) metadata and rows as an [RxJS **Observable**](http://reactivex.io/rxjs/manual/overview.html#observable).
`queryRows()` subscribes your observer to the observable.
Finally, the observer logs the rows from the response to the terminal.
```js
const observer = {
next(row, tableMeta) {
const o = tableMeta.toObject(row)
console.log(
`${o._time} ${o._measurement} in '${o.location}' (${o.sensor_id}): ${o._field}=${o._value}`
)
}
}
queryApi.queryRows(fluxQuery, observer)
```
### Complete example
```js
{{% get-shared-text "api/v2.0/query/query.mjs" %}}
```
To run the example from a file, set your InfluxDB environment variables and use `node` to execute the JavaScript file.
```sh
export INFLUX_URL=http://localhost:8086 && \
export INFLUX_TOKEN=YOUR_API_TOKEN && \
export INFLUX_ORG=YOUR_ORG && \
node query.js
```
{{% api/v2dot0/nodejs/learn-more %}}

---
title: Write data with the InfluxDB JavaScript client library
description: >
Use the JavaScript client library to write data with the InfluxDB API in Node.js.
menu:
influxdb_2_2:
name: Write
parent: Node.js
influxdb/v2.2/tags: [client libraries, JavaScript]
weight: 101
aliases:
- /influxdb/v2.2/reference/api/client-libraries/nodejs/write
related:
- /influxdb/v2.2/write-data/troubleshoot/
---
Use the [InfluxDB JavaScript client library](https://github.com/influxdata/influxdb-client-js) to write data from a Node.js environment to InfluxDB.
The JavaScript client library includes the following convenient features for writing data to InfluxDB:
- Apply default tags to data points.
- Buffer points into batches to optimize data transfer.
- Automatically retry requests on failure.
- Set an optional HTTP proxy address for your network.
### Before you begin
- [Install the client library and other dependencies](/influxdb/v2.2/api-guide/client-libraries/nodejs/install/).
### Write data with the client library
1. Instantiate an `InfluxDB` client. Provide your InfluxDB URL and API token.
```js
import {InfluxDB, Point} from '@influxdata/influxdb-client'
const influxDB = new InfluxDB({url: YOUR_URL, token: YOUR_API_TOKEN})
```
Replace the following:
- *`YOUR_URL`*: InfluxDB URL
- *`YOUR_API_TOKEN`*: InfluxDB API token
2. Use the `getWriteApi()` method of the client to create a **write client**.
Provide your InfluxDB organization ID and bucket name.
```js
const writeApi = influxDB.getWriteApi(YOUR_ORG, YOUR_BUCKET)
```
Replace the following:
- *`YOUR_ORG`*: InfluxDB organization ID
- *`YOUR_BUCKET`*: InfluxDB bucket name
3. To apply one or more [tags](/influxdb/v2.2/reference/glossary/#tag) to all points, use the `useDefaultTags()` method.
Provide tags as an object of key/value pairs.
```js
writeApi.useDefaultTags({region: 'west'})
```
4. Use the `Point()` constructor to create a [point](/influxdb/v2.2/reference/glossary/#point).
1. Call the constructor and provide a [measurement](/influxdb/v2.2/reference/glossary/#measurement).
2. To add one or more tags, chain the `tag()` method to the constructor.
Provide a `name` and `value`.
3. To add a field of type `float`, chain the `floatField()` method to the constructor.
Provide a `name` and `value`.
```js
const point1 = new Point('temperature')
.tag('sensor_id', 'TLM010')
.floatField('value', 24)
```
5. Use the `writePoint()` method to write the point to your InfluxDB bucket.
Finally, use the `close()` method to flush all pending writes.
The example logs the new data point followed by "WRITE FINISHED" to stdout.
```js
writeApi.writePoint(point1)
writeApi.close().then(() => {
console.log('WRITE FINISHED')
})
```
### Complete example
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[Curl](#curl)
[Node.js](#nodejs)
{{% /code-tabs %}}
{{% code-tab-content %}}
```sh
{{< get-shared-text "api/v2.0/write/write.sh" >}}
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```js
{{< get-shared-text "api/v2.0/write/write.mjs" >}}
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
To run the example from a file, set your InfluxDB environment variables and use `node` to execute the JavaScript file.
```sh
export INFLUX_URL=http://localhost:8086 && \
export INFLUX_TOKEN=YOUR_API_TOKEN && \
export INFLUX_ORG=YOUR_ORG && \
export INFLUX_BUCKET=YOUR_BUCKET && \
node write.js
```
### Response codes
_For information about **InfluxDB API response codes**, see
[InfluxDB API Write documentation](/influxdb/cloud/api/#operation/PostWrite)._

---
title: PHP client library
seotitle: Use the InfluxDB PHP client library
list_title: PHP
description: Use the InfluxDB PHP client library to interact with InfluxDB.
external_url: https://github.com/influxdata/influxdb-client-php
menu:
influxdb_2_2:
name: PHP
parent: Client libraries
params:
url: https://github.com/influxdata/influxdb-client-php
weight: 201
---

---
title: Python client library
seotitle: Use the InfluxDB Python client library
list_title: Python
description: >
Use the InfluxDB Python client library to interact with InfluxDB.
menu:
influxdb_2_2:
name: Python
parent: Client libraries
influxdb/v2.2/tags: [client libraries, python]
aliases:
- /influxdb/v2.2/reference/api/client-libraries/python/
- /influxdb/v2.2/reference/api/client-libraries/python-cl-guide/
- /influxdb/v2.2/tools/client-libraries/python/
weight: 201
---
Use the [InfluxDB Python client library](https://github.com/influxdata/influxdb-client-python) to integrate InfluxDB into Python scripts and applications.
This guide presumes some familiarity with Python and InfluxDB.
If just getting started, see [Get started with InfluxDB](/influxdb/v2.2/get-started/).
## Before you begin
1. Install the InfluxDB Python library:
```sh
pip install influxdb-client
```
2. Ensure that InfluxDB is running.
If running InfluxDB locally, visit http://localhost:8086.
(If using InfluxDB Cloud, visit the URL of your InfluxDB Cloud UI.
For example: https://us-west-2-1.aws.cloud2.influxdata.com.)
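The reachability check in step 2 can also be done from the command line; InfluxDB OSS 2.x exposes a `/health` endpoint:

```sh
# Check that a local InfluxDB instance is up and responding
curl -s http://localhost:8086/health
```

A running instance responds with a JSON body whose `status` field is `pass`.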
## Write data to InfluxDB with Python
The following steps use the Python client library to write data in [line protocol](/influxdb/v2.2/reference/syntax/line-protocol/) to InfluxDB.
1. In your Python program, import the InfluxDB client library and use it to write data to InfluxDB.
```python
import influxdb_client
from influxdb_client.client.write_api import SYNCHRONOUS
```
2. Define a few variables with the name of your [bucket](/influxdb/v2.2/organizations/buckets/), [organization](/influxdb/v2.2/organizations/), and [token](/influxdb/v2.2/security/tokens/).
```python
bucket = "<my-bucket>"
org = "<my-org>"
token = "<my-token>"
# Store the URL of your InfluxDB instance
url="http://localhost:8086"
```
3. Instantiate the client. The `InfluxDBClient` object takes three named parameters: `url`, `org`, and `token`. Pass in the named parameters.
```python
client = influxdb_client.InfluxDBClient(
url=url,
token=token,
org=org
)
```
4. Instantiate a **write client** using the `client` object's `write_api` method. Pass `write_options` to configure how the client writes data (synchronously in this example).
```python
write_api = client.write_api(write_options=SYNCHRONOUS)
```
5. Create a [point](/influxdb/v2.2/reference/glossary/#point) object and write it to InfluxDB using the `write` method of the API writer object. The write method requires three parameters: `bucket`, `org`, and `record`.
```python
p = influxdb_client.Point("my_measurement").tag("location", "Prague").field("temperature", 25.3)
write_api.write(bucket=bucket, org=org, record=p)
```
### Complete example write script
```python
import influxdb_client
from influxdb_client.client.write_api import SYNCHRONOUS
bucket = "<my-bucket>"
org = "<my-org>"
token = "<my-token>"
# Store the URL of your InfluxDB instance
url="http://localhost:8086"
client = influxdb_client.InfluxDBClient(
url=url,
token=token,
org=org
)
write_api = client.write_api(write_options=SYNCHRONOUS)
p = influxdb_client.Point("my_measurement").tag("location", "Prague").field("temperature", 25.3)
write_api.write(bucket=bucket, org=org, record=p)
```
## Query data from InfluxDB with Python
1. Instantiate the **query client**.
```python
query_api = client.query_api()
```
2. Create a Flux query, and then format it as a Python string.
```python
query = ' from(bucket:"my-bucket")\
|> range(start: -10m)\
|> filter(fn:(r) => r._measurement == "my_measurement")\
|> filter(fn: (r) => r.location == "Prague")\
|> filter(fn:(r) => r._field == "temperature" ) '
```
The query client sends the Flux query to InfluxDB and returns a Flux object with a table structure.
3. Pass the `query()` method two named parameters: `org` and `query`.
```python
result = query_api.query(org=org, query=query)
```
4. Iterate through the tables and records in the Flux object.
- Use the `get_value()` method to return values.
- Use the `get_field()` method to return fields.
```python
results = []
for table in result:
    for record in table.records:
        results.append((record.get_field(), record.get_value()))
print(results)
# Output: [('temperature', 25.3)]
```
**The Flux object provides the following methods for accessing your data:**
- `get_measurement()`: Returns the measurement name of the record.
- `get_field()`: Returns the field name.
- `get_value()`: Returns the actual field value.
- `values`: Returns a map of column values.
- `values.get("<your tag>")`: Returns a value from the record for a given column.
- `get_time()`: Returns the time of the record.
- `get_start()`: Returns the inclusive lower time bound of all records in the current table.
- `get_stop()`: Returns the exclusive upper time bound of all records in the current table.
### Complete example query script
```python
query_api = client.query_api()
query = ' from(bucket:"my-bucket")\
  |> range(start: -10m)\
  |> filter(fn:(r) => r._measurement == "my_measurement")\
  |> filter(fn: (r) => r.location == "Prague")\
  |> filter(fn:(r) => r._field == "temperature" ) '
result = query_api.query(org=org, query=query)
results = []
for table in result:
    for record in table.records:
        results.append((record.get_field(), record.get_value()))
print(results)
# Output: [('temperature', 25.3)]
```
For more information, see the [Python client README on GitHub](https://github.com/influxdata/influxdb-client-python).

---
title: R package client library
list_title: R
seotitle: Use the InfluxDB client R package
description: Use the InfluxDB client R package to interact with InfluxDB.
external_url: https://github.com/influxdata/influxdb-client-r
menu:
influxdb_2_2:
name: R
parent: Client libraries
params:
url: https://github.com/influxdata/influxdb-client-r
weight: 201
---

---
title: Ruby client library
seotitle: Use the InfluxDB Ruby client library
list_title: Ruby
description: Use the InfluxDB Ruby client library to interact with InfluxDB.
external_url: https://github.com/influxdata/influxdb-client-ruby
menu:
influxdb_2_2:
name: Ruby
parent: Client libraries
params:
url: https://github.com/influxdata/influxdb-client-ruby
weight: 201
---

---
title: Scala client library
seotitle: Use the InfluxDB Scala client library
list_title: Scala
description: Use the InfluxDB Scala client library to interact with InfluxDB.
external_url: https://github.com/influxdata/influxdb-client-java/tree/master/client-scala
menu:
influxdb_2_2:
name: Scala
parent: Client libraries
params:
url: https://github.com/influxdata/influxdb-client-java/tree/master/client-scala
weight: 201
---

---
title: Swift client library
seotitle: Use the InfluxDB Swift client library
list_title: Swift
description: Use the InfluxDB Swift client library to interact with InfluxDB.
external_url: https://github.com/influxdata/influxdb-client-swift
menu:
influxdb_2_2:
name: Swift
parent: Client libraries
params:
url: https://github.com/influxdata/influxdb-client-swift
weight: 201
---

---
title: Use Postman with the InfluxDB API
description: >
Use [Postman](https://www.postman.com/), a popular tool for exploring APIs,
to interact with the [InfluxDB API](/influxdb/v2.2/api-guide/).
menu:
influxdb_2_2:
parent: Tools & integrations
name: Use Postman
weight: 105
influxdb/v2.2/tags: [api, authentication]
aliases:
- /influxdb/v2.2/reference/api/postman/
---
Use [Postman](https://www.postman.com/), a popular tool for exploring APIs,
to interact with the [InfluxDB API](/influxdb/v2.2/api-guide/).
## Install Postman
Download Postman from the [official downloads page](https://www.postman.com/downloads/).
Or to install with Homebrew on macOS, run the following command:
```sh
brew install --cask postman
```
## Send authenticated API requests with Postman
All requests to the [InfluxDB v2 API](/influxdb/v2.2/api-guide/) must include an [InfluxDB API token](/influxdb/v2.2/security/tokens/).
{{% note %}}
#### Authenticate with a username and password
If you need to send a username and password (`Authorization: Basic`) to the [InfluxDB 1.x compatibility API](/influxdb/v2.2/reference/api/influxdb-1x/), see how to [authenticate with a username and password scheme](/influxdb/v2.2/reference/api/influxdb-1x/#authenticate-with-the-token-scheme).
{{% /note %}}
To configure Postman to send an [InfluxDB API token](/influxdb/v2.2/security/tokens/) with the `Authorization: Token` HTTP header, do the following:
1. If you have not already, [create a token](/influxdb/v2.2/security/tokens/create-token/).
2. In the Postman **Authorization** tab, select **API Key** in the **Type** dropdown.
3. For **Key**, enter `Authorization`.
4. For **Value**, enter `Token INFLUX_API_TOKEN`, replacing *`INFLUX_API_TOKEN`* with the token generated in step 1.
5. Ensure that the **Add to** option is set to **Header**.
#### Test authentication credentials
To test your authentication credentials, enter your InfluxDB API `/api/v2/` root endpoint URL in Postman, and then click **Send**.
###### InfluxDB v2 API root endpoint
```sh
http://localhost:8086/api/v2
```

---
title: Back up and restore data
seotitle: Backup and restore data with InfluxDB
description: >
InfluxDB provides tools that let you back up and restore data and metadata stored
in InfluxDB.
influxdb/v2.2/tags: [backup, restore]
menu:
influxdb_2_2:
name: Back up & restore data
weight: 9
products: [oss]
---
InfluxDB provides tools to back up and restore data and metadata stored in InfluxDB.
{{< children >}}

---
title: Back up data
seotitle: Back up data in InfluxDB
description: >
Use the `influx backup` command to back up data and metadata stored in InfluxDB.
menu:
influxdb_2_2:
parent: Back up & restore data
weight: 101
related:
- /influxdb/v2.2/backup-restore/restore/
- /influxdb/v2.2/reference/cli/influx/backup/
products: [oss]
---
Use the [`influx backup` command](/influxdb/v2.2/reference/cli/influx/backup/) to back up
data and metadata stored in InfluxDB.
InfluxDB copies all data and metadata to a set of files stored in a specified directory
on your local filesystem.
{{% note %}}
#### InfluxDB 1.x/2.x compatibility
The InfluxDB {{< current-version >}} `influx backup` command is not compatible with versions of InfluxDB prior to 2.0.0.
**For information about migrating data between InfluxDB 1.x and {{< current-version >}}, see:**
- [Automatically upgrade from InfluxDB 1.x to {{< current-version >}}](/influxdb/v2.2/upgrade/v1-to-v2/automatic-upgrade/)
- [Manually upgrade from InfluxDB 1.x to {{< current-version >}}](/influxdb/v2.2/upgrade/v1-to-v2/manual-upgrade/)
{{% /note %}}
{{% cloud %}}
The `influx backup` command **cannot** back up data stored in **{{< cloud-name "short" >}}**.
{{% /cloud %}}
The `influx backup` command requires:
- The directory path for where to store the backup file set
- The **root authorization token** (the token created for the first user in the
[InfluxDB setup process](/influxdb/v2.2/get-started/)).
##### Back up data with the influx CLI
```sh
# Syntax
influx backup <backup-path> -t <root-token>
# Example
influx backup \
path/to/backup_$(date '+%Y-%m-%d_%H-%M') \
-t xXXXX0xXX0xxX0xx_x0XxXxXXXxxXX0XXX0XXxXxX0XxxxXX0Xx0xx==
```

---
title: Restore data
seotitle: Restore data in InfluxDB
description: >
Use the `influx restore` command to restore backup data and metadata from InfluxDB.
menu:
influxdb_2_2:
parent: Back up & restore data
weight: 101
influxdb/v2.2/tags: [restore]
related:
- /influxdb/v2.2/backup-restore/backup/
- /influxdb/v2.2/reference/cli/influxd/restore/
products: [oss]
---
{{% cloud %}}
Restores are **not supported in {{< cloud-name "short" >}}**.
{{% /cloud %}}
Use the `influx restore` command to restore backup data and metadata from InfluxDB OSS.
- [Restore data with the influx CLI](#restore-data-with-the-influx-cli)
- [Recover from a failed restore](#recover-from-a-failed-restore)
During a restore, InfluxDB moves existing data and metadata to a temporary location.
If the restore fails, InfluxDB preserves the temporary data for recovery;
otherwise, this data is deleted.
_See [Recover from a failed restore](#recover-from-a-failed-restore)._
{{% note %}}
#### Cannot restore to existing buckets
The `influx restore` command cannot restore data to existing buckets.
Use the `--new-bucket` flag to create a new bucket to restore data to.
To restore data and retain bucket names, [delete existing buckets](/influxdb/v2.2/organizations/buckets/delete-bucket/)
and then begin the restore process.
{{% /note %}}
## Restore data with the influx CLI
Use the `influx restore` command and specify the path to the backup directory.
_For more information about restore options and flags, see the
[`influx restore` documentation](/influxdb/v2.2/reference/cli/influx/restore/)._
- [Restore all time series data](#restore-all-time-series-data)
- [Restore data from a specific bucket](#restore-data-from-a-specific-bucket)
- [Restore and replace all InfluxDB data](#restore-and-replace-all-influxdb-data)
### Restore all time series data
To restore all time series data from a backup directory, provide the following:
- backup directory path
```sh
influx restore /backups/2020-01-20_12-00/
```
### Restore data from a specific bucket
To restore data from a specific backup bucket, provide the following:
- backup directory path
- bucket name or ID
```sh
influx restore \
/backups/2020-01-20_12-00/ \
--bucket example-bucket
# OR
influx restore \
/backups/2020-01-20_12-00/ \
--bucket-id 000000000000
```
If a bucket with the same name as the backed up bucket already exists in InfluxDB,
use the `--new-bucket` flag to create a new bucket with a different name and
restore data into it.
```sh
influx restore \
/backups/2020-01-20_12-00/ \
--bucket example-bucket \
--new-bucket new-example-bucket
```
### Restore and replace all InfluxDB data
To restore and replace all time series data _and_ InfluxDB key-value data such as
tokens, users, dashboards, etc., include the following:
- `--full` flag
- backup directory path
```sh
influx restore \
/backups/2020-01-20_12-00/ \
--full
```
{{% note %}}
#### Restore to a new InfluxDB server
If using a backup to populate a new InfluxDB server:
1. Retrieve the [admin token](/influxdb/v2.2/security/tokens/#admin-token) from your source InfluxDB instance.
2. Set up your new InfluxDB instance, but use the `-t`, `--token` flag to set the
   **admin token** from your source instance as the admin token on your new instance.
```sh
influx setup --token My5uP3rSecR37t0keN
```
3. Restore the backup to the new server.
```sh
influx restore \
/backups/2020-01-20_12-00/ \
--full
```
If you do not provide the admin token from your source InfluxDB instance as the
admin token in your new instance, the restore process and all subsequent attempts
to authenticate with the new server will fail.
1. The first restore API call uses the auto-generated token to authenticate with
the new server and overwrites the entire key-value store in the new server, including
the auto-generated token.
2. The second restore API call attempts to upload time series data, but uses the
auto-generated token to authenticate with the new server.
Because that token was overwritten by the first restore API call, the process fails to authenticate.
{{% /note %}}
## Recover from a failed restore
If the restoration process fails, InfluxDB preserves existing data in a `tmp`
directory in the [target engine path](/influxdb/v2.2/reference/cli/influx/restore/#flags)
(default is `~/.influxdbv2/engine`).
To recover from a failed restore:
1. Copy the temporary files back into the `engine` directory.
2. Remove the `.tmp` extensions from each of the copied files.
3. Restart the `influxd` server.
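Step 2 can be scripted. The sketch below demonstrates the rename on a scratch directory with hypothetical file names; in practice, run the same `for` loop inside your engine directory:

```sh
# Demonstrate stripping .tmp extensions in a scratch directory.
# In practice, run the for loop inside your engine directory instead.
demo_dir=$(mktemp -d)
cd "$demo_dir"
touch 000000001-000000002.tsm.tmp fields.idx.tmp  # hypothetical recovered files
# Remove the .tmp extension from each copied file
for f in ./*.tmp; do
  mv -- "$f" "${f%.tmp}"
done
ls "$demo_dir"
```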

---
title: Get started with InfluxDB
description: >
Start collecting, processing, and visualizing data in InfluxDB OSS.
menu:
influxdb_2_2:
name: Get started
weight: 3
influxdb/v2.2/tags: [get-started]
aliases:
- /influxdb/v2.2/introduction/get-started/
---
After you've [installed InfluxDB OSS](/influxdb/v2.2/install/), you're ready to get started. Explore the following ways to work with your data:
- [Collect and write data](#collect-and-write-data)
- [Query data](#query-data)
- [Process data](#process-data)
- [Visualize data](#visualize-data)
- [Monitor and alert](#monitor-and-alert)
### Collect and write data
Collect and write data to InfluxDB using the Telegraf plugins, the InfluxDB v2 API, the `influx` command line interface (CLI), the InfluxDB UI (the user interface for InfluxDB 2.2), or the InfluxDB v2 API client libraries.
#### Use Telegraf
Use Telegraf to quickly write data to InfluxDB.
Create new Telegraf configurations automatically in the InfluxDB UI, or manually update an existing Telegraf configuration to send data to your InfluxDB instance.
For details, see [Automatically configure Telegraf](/influxdb/v2.2/write-data/no-code/use-telegraf/auto-config/)
and [Manually update Telegraf configurations](/influxdb/v2.2/write-data/no-code/use-telegraf/manual-config/).
#### Scrape data
**InfluxDB OSS** lets you scrape Prometheus-formatted metrics from HTTP endpoints. For details, see [Scrape data](/influxdb/v2.2/write-data/no-code/scrape-data/).
#### API, CLI, and client libraries
For information about using the InfluxDB v2 API, `influx` CLI, and client libraries to write data, see [Write data to InfluxDB](/influxdb/v2.2/write-data/).
### Query data
Query data using Flux, the UI, and the `influx` command line interface.
See [Query data](/influxdb/v2.2/query-data/).
### Process data
Use InfluxDB tasks to process and downsample data. See [Process data](/influxdb/v2.2/process-data/).
### Visualize data
Build custom dashboards to visualize your data.
See [Visualize data](/influxdb/v2.2/visualize-data/).
### Monitor and alert
Monitor your data and send alerts based on specified logic.
See [Monitor and alert](/influxdb/v2.2/monitor-alert/).

---
title: InfluxDB templates
description: >
InfluxDB templates are prepackaged InfluxDB configurations that contain everything
from dashboards and Telegraf configurations to notifications and alerts.
menu: influxdb_2_2
weight: 10
influxdb/v2.2/tags: [templates]
---
InfluxDB templates are prepackaged InfluxDB configurations that contain everything
from dashboards and Telegraf configurations to notifications and alerts.
Use templates to monitor your technology stack,
set up a fresh instance of InfluxDB, back up your dashboard configuration, or
[share your configuration](https://github.com/influxdata/community-templates/) with the InfluxData community.
**InfluxDB templates do the following:**
- Reduce setup time by giving you resources that are already configured for your use-case.
- Facilitate secure, portable, and source-controlled InfluxDB resource states.
- Simplify sharing and using pre-built InfluxDB solutions.
{{< youtube 2JjW4Rym9XE >}}
<a class="btn github" href="https://github.com/influxdata/community-templates/" target="_blank">View InfluxDB community templates</a>
## Template manifests
A template **manifest** is a file that defines
InfluxDB [resources](#template-resources).
Template manifests support the following formats:
- [YAML](https://yaml.org/)
- [JSON](https://www.json.org/)
- [Jsonnet](https://jsonnet.org/)
{{% note %}}
Template manifests are compatible with
[Kubernetes Custom Resource Definitions (CRD)](https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/).
{{% /note %}}
The `metadata.name` field in manifests uniquely identifies each resource in the template.
`metadata.name` values must be [DNS-1123](https://tools.ietf.org/html/rfc1123) compliant.
The `spec` object contains the resource configuration.
#### Example
```yaml
# bucket-template.yml
# Template manifest that defines two buckets.
apiVersion: influxdata.com/v2alpha1
kind: Bucket
metadata:
  name: thirsty-shaw-91b005
spec:
  description: My IoT Center Bucket
  name: iot-center
  retentionRules:
    - everySeconds: 86400
      type: expire
---
apiVersion: influxdata.com/v2alpha1
kind: Bucket
metadata:
  name: upbeat-fermat-91b001
spec:
  name: air_sensor
---
```
_See [Create an InfluxDB template](/influxdb/v2.2/influxdb-templates/create/) for information about
generating template manifests._
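Assuming the manifest above is saved as `bucket-template.yml` and the `influx` CLI is configured with your host, token, and organization, you can apply it with the `influx apply` command:

```sh
# Apply the template manifest to your organization
influx apply --file bucket-template.yml
```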
### Template resources
Templates may contain the following InfluxDB resources:
- [buckets](/influxdb/v2.2/organizations/buckets/create-bucket/)
- [checks](/influxdb/v2.2/monitor-alert/checks/create/)
- [dashboards](/influxdb/v2.2/visualize-data/dashboards/create-dashboard/)
- [dashboard variables](/influxdb/v2.2/visualize-data/variables/create-variable/)
- [labels](/influxdb/v2.2/visualize-data/labels/)
- [notification endpoints](/influxdb/v2.2/monitor-alert/notification-endpoints/create/)
- [notification rules](/influxdb/v2.2/monitor-alert/notification-rules/create/)
- [tasks](/influxdb/v2.2/process-data/manage-tasks/create-task/)
- [Telegraf configurations](/influxdb/v2.2/write-data/no-code/use-telegraf/)
## Stacks
Use **InfluxDB stacks** to manage InfluxDB templates.
When you apply a template, InfluxDB associates resources in the template with a stack.
Use stacks to add, update, or remove InfluxDB templates over time.
For more information, see [InfluxDB stacks](#influxdb-stacks) below.
---
{{< children >}}

---
title: Create an InfluxDB template
description: >
  Use the InfluxDB UI and the `influx export` command to create InfluxDB templates.
menu:
  influxdb_2_2:
    parent: InfluxDB templates
    name: Create a template
    identifier: Create an InfluxDB template
weight: 103
influxdb/v2.2/tags: [templates]
related:
  - /influxdb/v2.2/reference/cli/influx/export/
  - /influxdb/v2.2/reference/cli/influx/export/all/
---
Use the InfluxDB user interface (UI) and the [`influx export` command](/influxdb/v2.2/reference/cli/influx/export/) to
create InfluxDB templates from [resources](/influxdb/v2.2/influxdb-templates/#template-resources) in an organization.
Add buckets, Telegraf configurations, tasks, and more in the InfluxDB
UI and then export those resources as a template.
{{< youtube 714uHkxKM6U >}}
- [Create a template](#create-a-template)
- [Export resources to a template](#export-resources-to-a-template)
- [Include user-definable resource names](#include-user-definable-resource-names)
- [Troubleshoot template results and permissions](#troubleshoot-template-results-and-permissions)
- [Share your InfluxDB templates](#share-your-influxdb-templates)
## Create a template
Creating a new organization that contains only your template resources is an easy way
to ensure you export only the resources you want.
Follow these steps to create a template from a new organization.
1. [Start InfluxDB](/influxdb/v2.2/get-started/).
2. [Create a new organization](/influxdb/v2.2/organizations/create-org/).
3. In the InfluxDB UI, add one or more [resources](/influxdb/v2.2/influxdb-templates/#template-resources).
4. [Create an **All-Access** API token](/influxdb/v2.2/security/tokens/create-token/) (or a token that has **read** access to the organization).
5. Use the API token from **Step 4** with the [`influx export all` subcommand](/influxdb/v2.2/reference/cli/influx/export/all/) to export all resources in the organization to a template file.
```sh
influx export all \
-o YOUR_INFLUX_ORG \
-t YOUR_ALL_ACCESS_TOKEN \
-f ~/templates/template.yml
```
## Export resources to a template
The [`influx export` command](/influxdb/v2.2/reference/cli/influx/export/) and subcommands let you
export [resources](#template-resources) from an organization to a template manifest.
Your [API token](/influxdb/v2.2/security/tokens/) must have **read** access to resources that you want to export.
If you want to export resources that depend on other resources, be sure to export the dependencies.
{{< cli/influx-creds-note >}}
To create a template that **adds, modifies, and deletes resources** when applied to an organization, use [InfluxDB stacks](/influxdb/v2.2/influxdb-templates/stacks/).
First, [initialize the stack](/influxdb/v2.2/influxdb-templates/stacks/init/)
and then [export the stack](#export-a-stack).
To create a template that only **adds resources** when applied to an organization (and doesn't modify existing resources there), choose one of the following:
- [Export all resources](#export-all-resources) to export all resources or a filtered
subset of resources to a template.
- [Export specific resources](#export-specific-resources) by name or ID to a template.
### Export all resources
To export all [resources](/influxdb/v2.2/influxdb-templates/#template-resources)
within an organization to a template manifest file, use the
[`influx export all` subcommand](/influxdb/v2.2/reference/cli/influx/export/all/)
with the `--file` (`-f`) option.
Provide the following:
- **Destination path and filename** for the template manifest.
The filename extension determines the output format:
- `your-template.yml`: [YAML](https://yaml.org/) format
- `your-template.json`: [JSON](https://json.org/) format
```sh
# Syntax
influx export all -f <FILE_PATH>
```
#### Export resources filtered by labelName or resourceKind
The [`influx export all` subcommand](/influxdb/v2.2/reference/cli/influx/export/all/)
accepts a `--filter` option that exports
only resources that match specified label names or resource kinds.
To filter on label name *and* resource kind, provide a `--filter` for each.
#### Export only dashboards and buckets with specific labels
The following example exports resources that match this predicate logic:
```js
(resourceKind == "Bucket" or resourceKind == "Dashboard")
and
(labelName == "Example1" or labelName == "Example2")
```
```sh
influx export all \
-f ~/templates/template.yml \
--filter=resourceKind=Bucket \
--filter=resourceKind=Dashboard \
--filter=labelName=Example1 \
--filter=labelName=Example2
```
For more options and examples, see the
[`influx export all` subcommand](/influxdb/v2.2/reference/cli/influx/export/all/).
### Export specific resources
To export specific [resources](/influxdb/v2.2/influxdb-templates/#template-resources) by name or ID, use the **[`influx export` command](/influxdb/v2.2/reference/cli/influx/export/)** with one or more lists of resources to include.
Provide the following:
- **Destination path and filename** for the template manifest.
The filename extension determines the output format:
- `your-template.yml`: [YAML](https://yaml.org/) format
- `your-template.json`: [JSON](https://json.org/) format
- **Resource options** with corresponding lists of resource IDs or resource names to include in the template.
For information about what resource options are available, see the
[`influx export` command](/influxdb/v2.2/reference/cli/influx/export/).
```sh
# Syntax
influx export -f <file-path> [resource-flags]
```
#### Export specific resources by ID
```sh
influx export \
--org-id ed32b47572a0137b \
-f ~/templates/template.yml \
-t $INFLUX_TOKEN \
--buckets=00x000ooo0xx0xx,o0xx0xx00x000oo \
--dashboards=00000xX0x0X00x000 \
--telegraf-configs=00000x0x000X0x0X0
```
#### Export specific resources by name
```sh
influx export \
--org-id ed32b47572a0137b \
-f ~/templates/template.yml \
--bucket-names=bucket1,bucket2 \
--dashboard-names=dashboard1,dashboard2 \
--telegraf-config-names=telegrafconfig1,telegrafconfig2
```
### Export a stack
To export an InfluxDB [stack](/influxdb/v2.2/influxdb-templates/stacks/) and all its associated resources as a template, use the
`influx export stack` command.
Provide the following:
- **Organization name** or **ID**
- **API token** with read access to the organization
- **Destination path and filename** for the template manifest.
The filename extension determines the output format:
- `your-template.yml`: [YAML](https://yaml.org/) format
- `your-template.json`: [JSON](https://json.org/) format
- **Stack ID**
#### Export a stack as a template
```sh
# Syntax
influx export stack \
-o <INFLUX_ORG> \
-t <INFLUX_TOKEN> \
-f <FILE_PATH> \
<STACK_ID>
# Example
influx export stack \
-o my-org \
-t mYSuP3RS3CreTt0K3n \
-f ~/templates/awesome-template.yml \
05dbb791a4324000
```
## Include user-definable resource names
After exporting a template manifest, replace resource names with **environment references**
to let users customize resource names when installing your template.
1. [Export a template](#export-resources-to-a-template).
2. Select any of the following resource fields to update:
- `metadata.name`
- `associations[].name`
- `endpointName` _(unique to `NotificationRule` resources)_
3. Replace the resource field value with an `envRef` object with a `key` property
that references the key of a key-value pair the user provides when installing the template.
During installation, the `envRef` object is replaced by the value of the
referenced key-value pair.
If the user does not provide the environment reference key-value pair, InfluxDB
uses the `key` string as the default value.
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[YAML](#)
[JSON](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```yml
apiVersion: influxdata.com/v2alpha1
kind: Bucket
metadata:
  name:
    envRef:
      key: bucket-name-1
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```json
{
  "apiVersion": "influxdata.com/v2alpha1",
  "kind": "Bucket",
  "metadata": {
    "name": {
      "envRef": {
        "key": "bucket-name-1"
      }
    }
  }
}
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
Using the example above, users are prompted to provide a value for `bucket-name-1`
when [applying the template](/influxdb/v2.2/influxdb-templates/use/#apply-templates).
Users can also include the `--env-ref` flag with the appropriate key-value pair
when installing the template.
```sh
# Set bucket-name-1 to "myBucket"
influx apply \
-f /path/to/template.yml \
--env-ref=bucket-name-1=myBucket
```
_If sharing your template, we recommend documenting what environment references
exist in the template and what keys to use to replace them._
{{% note %}}
#### Resource fields that support environment references
Only the following fields support environment references:
- `metadata.name`
- `spec.endpointName`
- `spec.associations.name`
{{% /note %}}
## Troubleshoot template results and permissions
If you get unexpected results, missing resources, or errors when exporting
templates, check the following:
- [Ensure `read` access](#ensure-read-access)
- [Use Organization ID](#use-organization-id)
- [Check for resource dependencies](#check-for-resource-dependencies)
### Ensure read access
The [API token](/influxdb/v2.2/security/tokens/) must have **read** access to resources that you want to export. The `influx export all` command only exports resources that the API token can read. For example, to export all resources in an organization that has ID `abc123`, the API token must have the `read:/orgs/abc123` permission.
To learn more about permissions, see [how to view authorizations](/influxdb/v2.2/security/tokens/view-tokens/) and [how to create a token](/influxdb/v2.2/security/tokens/create-token/) with specific permissions.
### Use Organization ID
If your token doesn't have **read** access to the organization and you want to [export specific resources](#export-specific-resources), use the `--org-id <org-id>` flag (instead of `-o <org-name>` or `--org <org-name>`) to provide the organization.
### Check for resource dependencies
If you want to export resources that depend on other resources, be sure to export the dependencies as well. Otherwise, the resources may not be usable.
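For example, a dashboard that queries a specific bucket isn't usable without that bucket. The following sketch exports both together (the file name and resource IDs are hypothetical placeholders reused from the examples above):

```sh
# Export a dashboard together with the bucket it queries so the
# resulting template is self-contained when applied elsewhere.
# Replace the IDs below with your own dashboard and bucket IDs.
influx export \
  --org-id ed32b47572a0137b \
  -f ~/templates/dashboard-with-bucket.yml \
  --dashboards=00000xX0x0X00x000 \
  --buckets=00x000ooo0xx0xx
```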
## Share your InfluxDB templates
Share your InfluxDB templates with the entire InfluxData community.
Contribute your template to the [InfluxDB Community Templates](https://github.com/influxdata/community-templates/) repository on GitHub.
<a class="btn" href="https://github.com/influxdata/community-templates/" target="\_blank">View InfluxDB Community Templates</a>

---
title: InfluxDB stacks
description: >
  Use an InfluxDB stack to manage your InfluxDB templates—add, update, or remove templates over time.
menu:
  influxdb_2_2:
    parent: InfluxDB templates
weight: 105
related:
  - /influxdb/v2.2/reference/cli/influx/pkg/stack/
---
Use InfluxDB stacks to manage [InfluxDB templates](/influxdb/v2.2/influxdb-templates).
When you apply a template, InfluxDB associates resources in the template with a stack. Use the stack to add, update, or remove InfluxDB templates over time.
{{< children type="anchored-list" >}}
{{< children readmore=true >}}
{{% note %}}
**Key differences between stacks and templates**:
- A template defines a set of resources in a text file outside of InfluxDB. When you apply a template, a stack is automatically created to manage the applied template.
- Stacks add, modify, or delete resources in an instance.
- Templates alone do not recognize existing resources in an instance. All resources in the template are added, creating duplicate resources if a resource already exists.
{{% /note %}}

---
title: Initialize an InfluxDB stack
list_title: Initialize a stack
description: >
  InfluxDB automatically creates a new stack each time you [apply an InfluxDB template](/influxdb/v2.2/influxdb-templates/use/)
  **without providing a stack ID**.
  To manually create or initialize a new stack, use the [`influx stacks init` command](/influxdb/v2.2/reference/cli/influx/stacks/init/).
menu:
  influxdb_2_2:
    parent: InfluxDB stacks
    name: Initialize a stack
weight: 202
related:
  - /influxdb/v2.2/reference/cli/influx/stacks/init/
list_code_example: |
  ```sh
  influx apply \
    -o example-org \
    -f path/to/template.yml
  ```
  ```sh
  influx stacks init \
    -o example-org \
    -n "Example Stack" \
    -d "InfluxDB stack for monitoring some awesome stuff" \
    -u https://example.com/template-1.yml \
    -u https://example.com/template-2.yml
  ```
---
InfluxDB automatically creates a new stack each time you [apply an InfluxDB template](/influxdb/v2.2/influxdb-templates/use/)
**without providing a stack ID**.
To manually create or initialize a new stack, use the [`influx stacks init` command](/influxdb/v2.2/reference/cli/influx/stacks/init/).
## Initialize a stack when applying a template
To automatically create a new stack when [applying an InfluxDB template](/influxdb/v2.2/influxdb-templates/use/),
**don't provide a stack ID**.
InfluxDB applies the resources in the template to a new stack and provides the **stack ID** in the output.
```sh
influx apply \
-o example-org \
-f path/to/template.yml
```
## Manually initialize a new stack
Use the [`influx stacks init` command](/influxdb/v2.2/reference/cli/influx/stacks/init/)
to create or initialize a new InfluxDB stack.
**Provide the following:**
- Organization name or ID
- Stack name
- Stack description
- InfluxDB template URLs
<!-- -->
```sh
# Syntax
influx stacks init \
-o <org-name> \
-n <stack-name> \
-d <stack-description> \
-u <package-url>
# Example
influx stacks init \
-o example-org \
-n "Example Stack" \
-d "InfluxDB stack for monitoring some awesome stuff" \
-u https://example.com/template-1.yml \
-u https://example.com/template-2.yml
```

---
title: Remove an InfluxDB stack
list_title: Remove a stack
description: >
  Use the [`influx stacks remove` command](/influxdb/v2.2/reference/cli/influx/stacks/remove/)
  to remove an InfluxDB stack and all its associated resources.
menu:
  influxdb_2_2:
    parent: InfluxDB stacks
    name: Remove a stack
weight: 205
related:
  - /influxdb/v2.2/reference/cli/influx/stacks/remove/
list_code_example: |
  ```sh
  influx stacks remove \
    -o example-org \
    --stack-id=12ab34cd56ef
  ```
---
Use the [`influx stacks remove` command](/influxdb/v2.2/reference/cli/influx/stacks/remove/)
to remove an InfluxDB stack and all its associated resources.
**Provide the following:**
- Organization name or ID
- Stack ID
<!-- -->
```sh
# Syntax
influx stacks remove -o <org-name> --stack-id=<stack-id>
# Example
influx stacks remove \
-o example-org \
--stack-id=12ab34cd56ef
```

---
title: Save time with InfluxDB stacks
list_title: Save time with stacks
description: >
  Discover how to use InfluxDB stacks to save time.
menu:
  influxdb_2_2:
    parent: InfluxDB stacks
    name: Save time with stacks
weight: 201
related:
  - /influxdb/v2.2/reference/cli/influx/stacks/
---
Save time and money using InfluxDB stacks. Here are a few ideal use cases:
- [Automate deployments with GitOps and stacks](#automate-deployments-with-gitops-and-stacks)
- [Apply updates from source-controlled templates](#apply-updates-from-source-controlled-templates)
- [Apply template updates across multiple InfluxDB instances](#apply-template-updates-across-multiple-influxdb-instances)
- [Develop templates](#develop-templates)
### Automate deployments with GitOps and stacks
GitOps is a popular way to configure and automate deployments. Use InfluxDB stacks in a GitOps workflow
to automatically update distributed instances of InfluxDB OSS or InfluxDB Cloud.
To automate an InfluxDB deployment with GitOps and stacks, complete the following steps:
1. [Set up a GitHub repository](#set-up-a-github-repository)
2. [Add existing resources to the GitHub repository](#add-existing-resources-to-the-github-repository)
3. [Automate the creation of a stack for each folder](#automate-the-creation-of-a-stack-for-each-folder)
4. [Set up GitHub Actions or CircleCI](#set-up-github-actions-or-circleci)
#### Set up a GitHub repository
Set up a GitHub repository to back your InfluxDB instance. Determine how you want to organize the resources in your stacks within your GitHub repository. For example, organize resources under folders for specific teams or functions.
We recommend storing all resources for one stack in the same folder. For example, if you monitor Redis, create a `redis` stack and put your Redis monitoring resources (a Telegraf configuration, four dashboards, a label, and two alert checks) into one Redis folder, each resource in a separate file. Then, when you need to update a Redis resource, it's easy to find and make changes in one location.
{{% note %}}
Typically, we **do not recommend** using the same resource in multiple stacks. If your organization uses the same resource in multiple stacks, before you delete a stack, verify the stack does not include resources that another stack depends on. Stacks with buckets often contain data used by many different templates. Because of this, we recommend keeping buckets separate from the other stacks.
{{% /note %}}
#### Add existing resources to the GitHub repository
Skip this section if you're starting from scratch or don't have existing resources to add to your stack.
Use the `influx export` command to quickly export resources. Keep all your resources in a single file or create a separate file for each resource. You can always split or combine them later.
For example, if you export resources for three stacks: `buckets`, `redis`, and `mysql`, your folder structure might look something like this when you are done:
```sh
influxdb-assets/
├── buckets/
│ ├── telegraf_bucket.yml
├── redis/
│ ├── redis_overview_dashboard.yml
│ ├── redis_label.yml
│ ├── redis_cpu_check.yml
│ └── redis_mem_check.yml
├── mysql/
│ ├── mysql_assets.yml
└── README.md
```
{{% note %}}
When you export a resource, InfluxDB creates a `metadata.name` for that resource. These resource names must be unique within your InfluxDB instance. Use a clear naming convention to prevent duplicate `metadata.name` values. Changing the `metadata.name` of an InfluxDB resource causes the stack to orphan the resource with the previous name and create a new resource with the updated name.
{{% /note %}}
Add the exported resources to your new GitHub repository.
#### Automate the creation of a stack for each folder
To automatically create a stack from each folder in your GitHub repository, create a shell script that checks for an existing stack and, if the stack isn't found, uses the `influx stacks init` command to create a new stack. The following sample script creates a `redis` stack and automatically applies those changes to your instance:
```sh
echo "Checking for existing redis stack..."
REDIS_STACK_ID=$(influx stacks --stack-name redis --json | jq -r '.[0].ID')

if [ "$REDIS_STACK_ID" == "null" ]; then
  echo "No stack found. Initializing our stack..."
  REDIS_STACK_ID=$(influx stacks init -n redis --json | jq -r '.ID')
fi

# Setting the base path
BASE_PATH="$(pwd)"

echo "Applying our redis stack..."
cat $BASE_PATH/redis/*.yml | \
  influx apply --force true --stack-id $REDIS_STACK_ID -q
```
{{% note %}}
The InfluxDB CLI's `--json` flag is useful when scripting against the CLI. This flag lets you extract important information easily using [`jq`](https://stedolan.github.io/jq/manual/v1.6/).
{{% /note %}}
Repeat this step for each of the stacks in your repository. When a resource in your stack changes, re-run this script to apply updated resources to your InfluxDB instance. Re-applying a stack with an updated resource won't add, delete, or duplicate resources.
#### Set up GitHub Actions or CircleCI
Once you have a script that applies changes to your local instance, automate the deployment to other environments as needed. Use the InfluxDB CLI to maintain multiple configuration profiles so you can easily switch profiles and issue commands against other InfluxDB instances. To apply the same script to a different InfluxDB instance, change your active configuration profile using the `influx config set` command, or set the desired profile dynamically using the `-c`, `--active-config` flag.
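For example, assuming a second instance reachable at a hypothetical URL, the workflow might look like the following sketch (the profile name, URL, token variable, and `setup.sh` script are assumptions for illustration):

```sh
# One-time setup: create a configuration profile for the production instance.
influx config create -n prod \
  -u https://prod.example.com:8086 \
  -t $PROD_INFLUX_TOKEN \
  -o example-org

# Switch the active profile, then re-run the stack setup script against prod.
influx config set -n prod --active
./setup.sh

# Or target a profile for a single command without switching the active one:
influx stacks -c prod -o example-org
```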
{{% note %}}
Before you run automation scripts against shared environments, we recommend manually running the steps in your script.
{{% /note %}}
Verify your deployment automation software lets you run a custom script, and then set up the custom script you've built locally in another environment. For example, here's a custom GitHub Action that automates deployment:
```yml
name: deploy-influxdb-resources
on:
  push:
    branches: [ master ]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          ref: ${{ github.ref }}
      - name: Deploys repo to cloud
        env:
          # These secrets can be configured in the GitHub repo to connect to
          # your InfluxDB instance.
          INFLUX_TOKEN: ${{ secrets.INFLUX_TOKEN }}
          INFLUX_ORG: ${{ secrets.INFLUX_ORG }}
          INFLUX_URL: ${{ secrets.INFLUX_URL }}
          GITHUB_REPO: ${{ github.repository }}
          GITHUB_BRANCH: ${{ github.ref }}
        run: |
          cd /tmp
          wget https://dl.influxdata.com/platform/nightlies/influx_nightly_linux_amd64.tar.gz
          tar xvfz influx_nightly_linux_amd64.tar.gz
          sudo cp influx_nightly_linux_amd64/influx /usr/local/bin/
          cd $GITHUB_WORKSPACE
          # This runs the script to set up your stacks
          chmod +x ./setup.sh
          ./setup.sh prod
```
For more information about using GitHub Actions in your project, check out the complete [GitHub Actions documentation](https://github.com/features/actions).
### Apply updates from source-controlled templates
You can use a variety of InfluxDB templates from many different sources including
[Community Templates](https://github.com/influxdata/community-templates/) or
self-built custom templates.
As templates are updated over time, stacks let you gracefully
apply updates without creating duplicate resources.
### Apply template updates across multiple InfluxDB instances
In many cases, you may have more than one instance of InfluxDB running and want to apply
the same template to each separate instance.
Using stacks, you can make changes to a stack on one instance,
[export the stack as a template](/influxdb/v2.2/influxdb-templates/create/#export-a-stack)
and then apply the changes to your other InfluxDB instances.
### Develop templates
InfluxDB stacks aid in developing and maintaining InfluxDB templates.
Stacks let you modify and update template manifests and apply those changes in
any stack that uses the template.

---
title: Update an InfluxDB stack
list_title: Update a stack
description: >
  Use the [`influx apply` command](/influxdb/v2.2/reference/cli/influx/apply/)
  to update a stack with a modified template.
  When applying a template to an existing stack, InfluxDB checks to see if the
  resources in the template match existing resources.
  InfluxDB updates, adds, and removes resources to resolve differences between
  the current state of the stack and the newly applied template.
menu:
  influxdb_2_2:
    parent: InfluxDB stacks
    name: Update a stack
weight: 203
related:
  - /influxdb/v2.2/reference/cli/influx/apply
  - /influxdb/v2.2/reference/cli/influx/stacks/update/
list_code_example: |
  ```sh
  influx apply \
    -o example-org \
    -u http://example.com/template-1.yml \
    -u http://example.com/template-2.yml \
    --stack-id=12ab34cd56ef
  ```
---
Use the [`influx apply` command](/influxdb/v2.2/reference/cli/influx/apply/)
to update a stack with a modified template.
When applying a template to an existing stack, InfluxDB checks to see if the
resources in the template match existing resources.
InfluxDB updates, adds, and removes resources to resolve differences between
the current state of the stack and the newly applied template.
Each stack is uniquely identified by a **stack ID**.
For information about retrieving your stack ID, see [View stacks](/influxdb/v2.2/influxdb-templates/stacks/view/).
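In a script, you can also capture a stack ID with [`jq`](https://stedolan.github.io/jq/) using the CLI's `--json` flag. A sketch, assuming a stack named `stack1` already exists in `example-org`:

```sh
# Look up the ID of the stack named "stack1" and store it for later use.
STACK_ID=$(influx stacks -o example-org --stack-name stack1 --json | jq -r '.[0].ID')
```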
**Provide the following:**
- Organization name or ID
- Stack ID
- InfluxDB template URLs to apply
<!-- -->
```sh
influx apply \
-o example-org \
-u http://example.com/template-1.yml \
-u http://example.com/template-2.yml \
--stack-id=12ab34cd56ef
```
Template resources are uniquely identified by their `metadata.name` field.
If errors occur when applying changes to a stack, all applied changes are
reversed and the stack is returned to its previous state.

---
title: View InfluxDB stacks
list_title: View stacks
description: >
  Use the [`influx stacks` command](/influxdb/v2.2/reference/cli/influx/stacks/)
  to view installed InfluxDB stacks and their associated resources.
menu:
  influxdb_2_2:
    parent: InfluxDB stacks
    name: View stacks
weight: 204
related:
  - /influxdb/v2.2/reference/cli/influx/stacks/
list_code_example: |
  ```sh
  influx stacks -o example-org
  ```
---
Use the [`influx stacks` command](/influxdb/v2.2/reference/cli/influx/stacks/)
to view installed InfluxDB stacks and their associated resources.
**Provide the following:**
- Organization name or ID
<!-- -->
```sh
# Syntax
influx stacks -o <org-name>
# Example
influx stacks -o example-org
```
### Filter stacks
To output information about specific stacks, use the `--stack-name` or `--stack-id`
flags to filter output by stack names or stack IDs.
#### Filter by stack name
```sh
# Syntax
influx stacks \
-o <org-name> \
--stack-name=<stack-name>
# Example
influx stacks \
-o example-org \
--stack-name=stack1 \
--stack-name=stack2
```
#### Filter by stack ID
```sh
# Syntax
influx stacks \
-o <org-name> \
--stack-id=<stack-id>
# Example
influx stacks \
-o example-org \
--stack-id=12ab34cd56ef \
--stack-id=78gh910i11jk
```

---
title: Use InfluxDB templates
description: >
  Use the `influx` command line interface (CLI) to summarize, validate, and apply
  templates from your local filesystem and from URLs.
menu:
  influxdb_2_2:
    parent: InfluxDB templates
    name: Use templates
weight: 102
influxdb/v2.2/tags: [templates]
related:
  - /influxdb/v2.2/reference/cli/influx/apply/
  - /influxdb/v2.2/reference/cli/influx/template/
  - /influxdb/v2.2/reference/cli/influx/template/validate/
---
Use the `influx` command line interface (CLI) to summarize, validate, and apply
templates from your local filesystem and from URLs.
- [Use InfluxDB community templates](#use-influxdb-community-templates)
- [View a template summary](#view-a-template-summary)
- [Validate a template](#validate-a-template)
- [Apply templates](#apply-templates)
## Use InfluxDB community templates
The [InfluxDB community templates repository](https://github.com/influxdata/community-templates/)
is home to a growing number of InfluxDB templates developed and maintained by
others in the InfluxData community.
Apply community templates directly from GitHub using a template's download URL
or download the template.
{{< youtube 2JjW4Rym9XE >}}
{{% note %}}
To access community templates directly by URL, use the following as the root of the URL:
```sh
https://raw.githubusercontent.com/influxdata/community-templates/master/
```
For example, access the Docker community template at:
```sh
https://raw.githubusercontent.com/influxdata/community-templates/master/docker/docker.yml
```
{{% /note %}}
<a class="btn" href="https://github.com/influxdata/community-templates/" target="\_blank">View InfluxDB Community Templates</a>
## View a template summary
To view a summary of what's included in a template before applying the template,
use the [`influx template` command](/influxdb/v2.2/reference/cli/influx/template/).
View a summary of a template stored in your local filesystem or from a URL.
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[From a file](#)
[From a URL](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```sh
# Syntax
influx template -f <FILE_PATH>
# Example
influx template -f /path/to/template.yml
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```sh
# Syntax
influx template -u <FILE_URL>
# Example
influx template -u https://raw.githubusercontent.com/influxdata/community-templates/master/linux_system/linux_system.yml
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
## Validate a template
To validate a template before you install it or troubleshoot a template, use
the [`influx template validate` command](/influxdb/v2.2/reference/cli/influx/template/validate/).
Validate a template stored in your local filesystem or from a URL.
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[From a file](#)
[From a URL](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```sh
# Syntax
influx template validate -f <FILE_PATH>
# Example
influx template validate -f /path/to/template.yml
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```sh
# Syntax
influx template validate -u <FILE_URL>
# Example
influx template validate -u https://raw.githubusercontent.com/influxdata/community-templates/master/linux_system/linux_system.yml
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
## Apply templates
Use the [`influx apply` command](/influxdb/v2.2/reference/cli/influx/apply/) to install templates
from your local filesystem or from URLs.
- [Apply a template from a file](#apply-a-template-from-a-file)
- [Apply all templates in a directory](#apply-all-templates-in-a-directory)
- [Apply a template from a URL](#apply-a-template-from-a-url)
- [Apply templates from both files and URLs](#apply-templates-from-both-files-and-urls)
- [Define environment references](#define-environment-references)
- [Include a secret when installing a template](#include-a-secret-when-installing-a-template)
{{% note %}}
#### Apply templates to an existing stack
To apply a template to an existing stack, include the stack ID when applying the template.
Any time you apply a template without a stack ID, InfluxDB initializes a new stack
and creates all resources as new.
For more information, see [InfluxDB stacks](/influxdb/v2.2/influxdb-templates/stacks/).
{{% /note %}}
### Apply a template from a file
To install templates stored on your local machine, use the `-f` or `--file` flag
to provide the **file path** of the template manifest.
```sh
# Syntax
influx apply -o <INFLUX_ORG> -f <FILE_PATH>
# Examples
# Apply a single template
influx apply -o example-org -f /path/to/template.yml
# Apply multiple templates
influx apply -o example-org \
-f /path/to/this/template.yml \
-f /path/to/that/template.yml
```
### Apply all templates in a directory
To apply all templates in a directory, use the `-f` or `--file` flag to provide
the **directory path** of the directory where template manifests are stored.
By default, this only applies templates stored in the specified directory.
To apply all templates stored in the specified directory and its subdirectories,
include the `-R`, `--recurse` flag.
```sh
# Syntax
influx apply -o <INFLUX_ORG> -f <DIRECTORY_PATH>
# Examples
# Apply all templates in a directory
influx apply -o example-org -f /path/to/template/dir/
# Apply all templates in a directory and its subdirectories
influx apply -o example-org -f /path/to/template/dir/ -R
```
### Apply a template from a URL
To apply templates from a URL, use the `-u` or `--template-url` flag to provide the URL
of the template manifest.
```sh
# Syntax
influx apply -o <INFLUX_ORG> -u <FILE_URL>
# Examples
# Apply a single template from a URL
influx apply -o example-org -u https://example.com/templates/template.yml
# Apply multiple templates from URLs
influx apply -o example-org \
-u https://example.com/templates/template1.yml \
-u https://example.com/templates/template2.yml
```
### Apply templates from both files and URLs
To apply templates from both files and URLs in a single command, include multiple
file or directory paths and URLs, each with the appropriate `-f` or `-u` flag.
```sh
# Syntax
influx apply -o <INFLUX_ORG> -u <FILE_URL> -f <FILE_PATH>
# Example
influx apply -o example-org \
-u https://example.com/templates/template1.yml \
-u https://example.com/templates/template2.yml \
-f ~/templates/custom-template.yml \
-f ~/templates/iot/home/ \
--recurse
```
### Define environment references
Some templates include [environment references](/influxdb/v2.2/influxdb-templates/create/#include-user-definable-resource-names) that let you provide custom resource names.
The `influx apply` command prompts you to provide a value for each environment
reference in the template.
You can also provide values for environment references by including an `--env-ref`
flag with a key-value pair composed of the environment reference key and the
value to replace it.
```sh
influx apply -o example-org -f /path/to/template.yml \
--env-ref=bucket-name-1=myBucket \
--env-ref=label-name-1=Label1 \
--env-ref=label-name-2=Label2
```
### Include a secret when installing a template
Some templates use [secrets](/influxdb/v2.2/security/secrets/) in queries.
Secret values are not included in templates.
To define secret values when installing a template, include the `--secret` flag
with the secret key-value pair.
```sh
# Syntax
influx apply -o <INFLUX_ORG> -f <FILE_PATH> \
--secret=<secret-key>=<secret-value>
# Examples
# Define a single secret when applying a template
influx apply -o example-org -f /path/to/template.yml \
--secret=FOO=BAR
# Define multiple secrets when applying a template
influx apply -o example-org -f /path/to/template.yml \
--secret=FOO=bar \
--secret=BAZ=quz
```
_To add a secret after applying a template, see [Add secrets](/influxdb/v2.2/security/secrets/manage-secrets/add/)._

---
title: Install InfluxDB
description: Download, install, and set up InfluxDB OSS.
menu: influxdb_2_2
weight: 2
influxdb/v2.2/tags: [install]
---
The InfluxDB {{< current-version >}} time series platform is purpose-built to collect, store,
process and visualize metrics and events.
Download, install, and set up InfluxDB OSS.
{{< tabs-wrapper >}}
{{% tabs %}}
[macOS](#)
[Linux](#)
[Windows](#)
[Docker](#)
[Kubernetes](#)
[Raspberry Pi](#)
{{% /tabs %}}
<!-------------------------------- BEGIN macOS -------------------------------->
{{% tab-content %}}
## Install InfluxDB v{{< current-version >}}
Do one of the following:
- [Use Homebrew](#use-homebrew)
- [Manually download and install](#manually-download-and-install)
{{% note %}}
#### InfluxDB and the influx CLI are separate packages
The InfluxDB server ([`influxd`](/influxdb/v2.2/reference/cli/influxd/)) and the
[`influx` CLI](/influxdb/v2.2/reference/cli/influx/) are packaged and
versioned separately.
For information about installing the `influx` CLI, see
[Install and use the influx CLI](/influxdb/v2.2/tools/influx-cli/).
{{% /note %}}
### Use Homebrew
We recommend using [Homebrew](https://brew.sh/) to install InfluxDB v{{< current-version >}} on macOS:
```sh
brew update
brew install influxdb
```
{{% note %}}
Homebrew also installs `influx-cli` as a dependency.
For information about using the `influx` CLI, see the
[`influx` CLI reference documentation](/influxdb/v2.2/reference/cli/influx/).
{{% /note %}}
### Manually download and install
To download the InfluxDB v{{< current-version >}} binaries for macOS directly,
do the following:
1. **Download the InfluxDB package.**
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb2-{{< latest-patch >}}-darwin-amd64.tar.gz" download>InfluxDB v{{< current-version >}} (macOS)</a>
2. **Unpackage the InfluxDB binary.**
Do one of the following:
- Double-click the downloaded package file in **Finder**.
- Run the following command in a macOS command prompt application such as
**Terminal** or **[iTerm2](https://www.iterm2.com/)**:
```sh
# Unpackage contents to the current working directory
tar zxvf ~/Downloads/influxdb2-{{< latest-patch >}}-darwin-amd64.tar.gz
```
3. **(Optional) Place the binary in your `$PATH`**
```sh
# (Optional) Copy the influxd binary to your $PATH
sudo cp influxdb2-{{< latest-patch >}}-darwin-amd64/influxd /usr/local/bin/
```
If you do not move the `influxd` binary into your `$PATH`, prefix the executable
with `./` to run it in place.
{{< expand-wrapper >}}
{{% expand "<span class='req'>Recommended</span> Verify the authenticity of downloaded binary" %}}
For added security, use `gpg` to verify the signature of your download.
(Most operating systems include the `gpg` command by default.
If `gpg` is not available, see the [GnuPG homepage](https://gnupg.org/download/) for installation instructions.)
1. Download and import InfluxData's public key:
```
curl -s https://repos.influxdata.com/influxdb2.key | gpg --import -
```
2. Download the signature file for the release by adding `.asc` to the download URL.
For example:
```
wget https://dl.influxdata.com/influxdb/releases/influxdb2-{{< latest-patch >}}-darwin-amd64.tar.gz.asc
```
3. Verify the signature with `gpg --verify`:
```
gpg --verify influxdb2-{{< latest-patch >}}-darwin-amd64.tar.gz.asc influxdb2-{{< latest-patch >}}-darwin-amd64.tar.gz
```
The output from this command should include the following:
```
gpg: Good signature from "InfluxData <support@influxdata.com>" [unknown]
```
{{% /expand %}}
{{< /expand-wrapper >}}
{{% note %}}
Both InfluxDB 1.x and 2.x have associated `influxd` and `influx` binaries.
If InfluxDB 1.x binaries are already in your `$PATH`, run the {{< current-version >}} binaries in place
or rename them before putting them in your `$PATH`.
If you rename the binaries, all references to `influxd` and `influx` in this documentation refer to your renamed binaries.
{{% /note %}}
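For example, a rename like the following avoids the clash (the extracted directory name below is a placeholder for your actual download):

```sh
# Hypothetical example: rename the 2.x server binary before adding it to your
# $PATH so it doesn't shadow an existing 1.x influxd. Adjust the directory
# name to match your extracted download.
cd influxdb2-extracted-directory
mv influxd influxd2
sudo cp influxd2 /usr/local/bin/
```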
#### Networking ports
By default, InfluxDB uses TCP port `8086` for client-server communication over
the [InfluxDB HTTP API](/influxdb/v2.2/reference/api/).
### Start InfluxDB
Start InfluxDB by running the `influxd` daemon:
```bash
influxd
```
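Once `influxd` is running, you can optionally confirm it is ready to accept requests by querying the InfluxDB `/health` endpoint from another terminal, for example with `curl`:

```sh
# Returns a JSON body with "status": "pass" when the server is ready
curl http://localhost:8086/health
```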
{{% note %}}
#### Run InfluxDB on macOS Catalina
macOS Catalina requires downloaded binaries to be signed by registered Apple developers.
Currently, when you first attempt to run `influxd`, macOS will prevent it from running.
To manually authorize the `influxd` binary:
1. Attempt to run `influxd`.
2. Open **System Preferences** and click **Security & Privacy**.
3. Under the **General** tab, there is a message about `influxd` being blocked.
Click **Open Anyway**.
We are in the process of updating our build process to ensure released binaries are signed by InfluxData.
{{% /note %}}
{{% warn %}}
#### "too many open files" errors
After running `influxd`, you might see an error in the log output like the
following:
```sh
too many open files
```
To resolve this error, follow the
[recommended steps](https://unix.stackexchange.com/a/221988/471569) to increase
file and process limits for your operating system version then restart `influxd`.
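As a temporary, session-only workaround, you can also raise the soft open-file limit in the shell before starting `influxd` (the value below is only an example; your system's hard limit may cap it):

```sh
# Raise the open-file soft limit for the current shell session only,
# then start the server in the same session
ulimit -S -n 4096
influxd
```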
{{% /warn %}}
_See the [`influxd` documentation](/influxdb/v2.2/reference/cli/influxd) for information about
available flags and options._
{{% note %}}
#### InfluxDB "phone home"
By default, InfluxDB sends telemetry data back to InfluxData.
The [InfluxData telemetry](https://www.influxdata.com/telemetry) page provides
information about what data is collected and how it is used.
To opt-out of sending telemetry data back to InfluxData, include the
`--reporting-disabled` flag when starting `influxd`.
```bash
influxd --reporting-disabled
```
{{% /note %}}
{{% /tab-content %}}
<!--------------------------------- END macOS --------------------------------->
<!-------------------------------- BEGIN Linux -------------------------------->
{{% tab-content %}}
## Download and install InfluxDB v{{< current-version >}}
Do one of the following:
- [Install InfluxDB as a service with systemd](#install-influxdb-as-a-service-with-systemd)
- [Manually download and install the influxd binary](#manually-download-and-install-the-influxd-binary)
{{% note %}}
#### InfluxDB and the influx CLI are separate packages
The InfluxDB server ([`influxd`](/influxdb/v2.2/reference/cli/influxd/)) and the
[`influx` CLI](/influxdb/v2.2/reference/cli/influx/) are packaged and
versioned separately.
For information about installing the `influx` CLI, see
[Install and use the influx CLI](/influxdb/v2.2/tools/influx-cli/).
{{% /note %}}
### Install InfluxDB as a service with systemd
1. Download and install the appropriate `.deb` or `.rpm` file using a URL from the
[InfluxData downloads page](https://portal.influxdata.com/downloads/)
with the following commands:
```sh
# Ubuntu/Debian
wget https://dl.influxdata.com/influxdb/releases/influxdb2-{{< latest-patch >}}-xxx.deb
sudo dpkg -i influxdb2-{{< latest-patch >}}-xxx.deb
# Red Hat/CentOS/Fedora
wget https://dl.influxdata.com/influxdb/releases/influxdb2-{{< latest-patch >}}-xxx.rpm
sudo yum localinstall influxdb2-{{< latest-patch >}}-xxx.rpm
```
_Use the exact filename of the downloaded `.deb` or `.rpm` package (for example, `influxdb2-{{< latest-patch >}}-amd64.rpm`)._
2. Start the InfluxDB service:
```sh
sudo service influxdb start
```
Installing the InfluxDB package creates a service file at `/lib/systemd/system/influxdb.service`
to start InfluxDB as a background service on startup.
3. Restart your system and verify that the service is running correctly:
```
$ sudo service influxdb status
● influxdb.service - InfluxDB is an open-source, distributed, time series database
Loaded: loaded (/lib/systemd/system/influxdb.service; enabled; vendor preset: enable>
Active: active (running)
```
For information about where InfluxDB stores data on disk when running as a service,
see [File system layout](/influxdb/v2.2/reference/internals/file-system-layout/?t=Linux#installed-as-a-package).
To customize your InfluxDB configuration, use either
[command line flags (arguments)](#pass-arguments-to-systemd), environment variables, or an InfluxDB configuration file.
See InfluxDB [configuration options](/influxdb/v2.2/reference/config-options/) for more information.
#### Pass arguments to systemd
1. Add one or more lines like the following containing arguments for `influxd` to `/etc/default/influxdb2`:
```sh
ARG1="--http-bind-address :8087"
ARG2="<another argument here>"
```
2. Edit the `/lib/systemd/system/influxdb.service` file as follows:
```sh
ExecStart=/usr/bin/influxd $ARG1 $ARG2
```
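After editing the unit file, reload systemd's unit definitions and restart the service so the new arguments take effect:

```sh
# Reload systemd unit files and restart InfluxDB to apply the new arguments
sudo systemctl daemon-reload
sudo systemctl restart influxdb
```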
### Manually download and install the influxd binary
1. **Download the InfluxDB binary.**
Download the InfluxDB binary [from your browser](#download-from-your-browser)
or [from the command line](#download-from-the-command-line).
#### Download from your browser
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb2-{{< latest-patch >}}-linux-amd64.tar.gz" download >InfluxDB v{{< current-version >}} (amd64)</a>
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb2-{{< latest-patch >}}-linux-arm64.tar.gz" download >InfluxDB v{{< current-version >}} (arm)</a>
#### Download from the command line
```sh
# amd64
wget https://dl.influxdata.com/influxdb/releases/influxdb2-{{< latest-patch >}}-linux-amd64.tar.gz
# arm
wget https://dl.influxdata.com/influxdb/releases/influxdb2-{{< latest-patch >}}-linux-arm64.tar.gz
```
2. **Extract the downloaded binary.**
_**Note:** The following commands are examples. Adjust the filenames, paths, and utilities if necessary._
```sh
# amd64
tar xvzf path/to/influxdb2-{{< latest-patch >}}-linux-amd64.tar.gz
# arm
tar xvzf path/to/influxdb2-{{< latest-patch >}}-linux-arm64.tar.gz
```
3. **(Optional) Place the extracted `influxd` executable binary in your system `$PATH`.**
```sh
# amd64
sudo cp influxdb2-{{< latest-patch >}}-linux-amd64/influxd /usr/local/bin/
# arm
sudo cp influxdb2-{{< latest-patch >}}-linux-arm64/influxd /usr/local/bin/
```
If you do not move the `influxd` binary into your `$PATH`, prefix the executable
with `./` to run it in place.
{{< expand-wrapper >}}
{{% expand "<span class='req'>Recommended</span> Verify the authenticity of downloaded binary" %}}
For added security, use `gpg` to verify the signature of your download.
(Most operating systems include the `gpg` command by default.
If `gpg` is not available, see the [GnuPG homepage](https://gnupg.org/download/) for installation instructions.)
1. Download and import InfluxData's public key:
```
curl -s https://repos.influxdata.com/influxdb2.key | gpg --import -
```
2. Download the signature file for the release by adding `.asc` to the download URL.
For example:
```
wget https://dl.influxdata.com/influxdb/releases/influxdb2-{{< latest-patch >}}-linux-amd64.tar.gz.asc
```
3. Verify the signature with `gpg --verify`:
```
gpg --verify influxdb2-{{< latest-patch >}}-linux-amd64.tar.gz.asc influxdb2-{{< latest-patch >}}-linux-amd64.tar.gz
```
The output from this command should include the following:
```
gpg: Good signature from "InfluxData <support@influxdata.com>" [unknown]
```
{{% /expand %}}
{{< /expand-wrapper >}}
## Start InfluxDB
If InfluxDB was installed as a systemd service, systemd manages the `influxd` daemon and no further action is required.
If the binary was manually downloaded and added to the system `$PATH`, start the `influxd` daemon with the following command:
```bash
influxd
```
_See the [`influxd` documentation](/influxdb/v2.2/reference/cli/influxd) for information about
available flags and options._
### Networking ports
By default, InfluxDB uses TCP port `8086` for client-server communication over
the [InfluxDB HTTP API](/influxdb/v2.2/reference/api/).
{{% note %}}
#### InfluxDB "phone home"
By default, InfluxDB sends telemetry data back to InfluxData.
The [InfluxData telemetry](https://www.influxdata.com/telemetry) page provides
information about what data is collected and how it is used.
To opt-out of sending telemetry data back to InfluxData, include the
`--reporting-disabled` flag when starting `influxd`.
```bash
influxd --reporting-disabled
```
{{% /note %}}
{{% /tab-content %}}
<!--------------------------------- END Linux --------------------------------->
<!------------------------------- BEGIN Windows ------------------------------->
{{% tab-content %}}
{{% note %}}
#### System requirements
- Windows 10
- 64-bit AMD architecture
- [Powershell](https://docs.microsoft.com/powershell/) or
[Windows Subsystem for Linux (WSL)](https://docs.microsoft.com/en-us/windows/wsl/)
#### Command line examples
Use **Powershell** or **WSL** to execute `influx` and `influxd` commands.
The command line examples in this documentation use `influx` and `influxd` as if
installed on the system `PATH`.
If these binaries are not installed on your `PATH`, replace `influx` and `influxd`
in the provided examples with `./influx` and `./influxd` respectively.
{{% /note %}}
## Download and install InfluxDB v{{< current-version >}}
{{% note %}}
#### InfluxDB and the influx CLI are separate packages
The InfluxDB server ([`influxd`](/influxdb/v2.2/reference/cli/influxd/)) and the
[`influx` CLI](/influxdb/v2.2/reference/cli/influx/) are packaged and
versioned separately.
For information about installing the `influx` CLI, see
[Install and use the influx CLI](/influxdb/v2.2/tools/influx-cli/).
{{% /note %}}
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb2-{{< latest-patch >}}-windows-amd64.zip" download >InfluxDB v{{< current-version >}} (Windows)</a>
Expand the downloaded archive into `C:\Program Files\InfluxData\` and rename the files if desired.
```powershell
> Expand-Archive .\influxdb2-{{< latest-patch >}}-windows-amd64.zip -DestinationPath 'C:\Program Files\InfluxData\'
> mv 'C:\Program Files\InfluxData\influxdb2-{{< latest-patch >}}-windows-amd64' 'C:\Program Files\InfluxData\influxdb'
```
## Networking ports
By default, InfluxDB uses TCP port `8086` for client-server communication over
the [InfluxDB HTTP API](/influxdb/v2.2/reference/api/).
## Start InfluxDB
In **Powershell**, navigate into `C:\Program Files\InfluxData\influxdb` and start
InfluxDB by running the `influxd` daemon:
```powershell
> cd -Path 'C:\Program Files\InfluxData\influxdb'
> ./influxd
```
_See the [`influxd` documentation](/influxdb/v2.2/reference/cli/influxd) for information about
available flags and options._
{{% note %}}
#### Grant network access
When starting InfluxDB for the first time, **Windows Defender** will appear with
the following message:
> Windows Defender Firewall has blocked some features of this app.
1. Select **Private networks, such as my home or work network**.
2. Click **Allow access**.
{{% /note %}}
{{% note %}}
#### InfluxDB "phone home"
By default, InfluxDB sends telemetry data back to InfluxData.
The [InfluxData telemetry](https://www.influxdata.com/telemetry) page provides
information about what data is collected and how it is used.
To opt-out of sending telemetry data back to InfluxData, include the
`--reporting-disabled` flag when starting `influxd`.
```bash
./influxd --reporting-disabled
```
{{% /note %}}
{{% /tab-content %}}
<!-------------------------------- END Windows -------------------------------->
<!-------------------------------- BEGIN Docker ------------------------------->
{{% tab-content %}}
## Download and run InfluxDB v{{< current-version >}}
Use `docker run` to download and run the InfluxDB v{{< current-version >}} Docker image.
Expose port `8086`, which InfluxDB uses for client-server communication over
the [InfluxDB HTTP API](/influxdb/v2.2/reference/api/).
```sh
docker run --name influxdb -p 8086:8086 influxdb:{{< latest-patch >}}
```
_To run InfluxDB in [detached mode](https://docs.docker.com/engine/reference/run/#detached-vs-foreground), include the `-d` flag in the `docker run` command._
## Persist data outside the InfluxDB container
1. Create a new directory to store your data in and navigate into the directory.
```sh
mkdir path/to/influxdb-docker-data-volume && cd $_
```
2. From within your new directory, run the InfluxDB Docker container with the `--volume` flag to
persist data from `/var/lib/influxdb2` _inside_ the container to the current working directory in
the host file system.
```sh
docker run \
--name influxdb \
-p 8086:8086 \
--volume $PWD:/var/lib/influxdb2 \
influxdb:{{< latest-patch >}}
```
## Configure InfluxDB with Docker
To mount an InfluxDB configuration file and use it from within Docker:
1. [Persist data outside the InfluxDB container](#persist-data-outside-the-influxdb-container).
2. Use the command below to generate the default configuration file on the host file system:
```sh
docker run \
--rm influxdb:{{< latest-patch >}} \
influxd print-config > config.yml
```
3. Modify the default configuration, which will now be available under `$PWD`.
4. Start the InfluxDB container:
```sh
docker run -p 8086:8086 \
-v $PWD/config.yml:/etc/influxdb2/config.yml \
influxdb:{{< latest-patch >}}
```
For more information, see [InfluxDB configuration options](/influxdb/v2.2/reference/config-options/).
## Open a shell in the InfluxDB container
To use the `influx` command line interface, open a shell in the `influxdb` Docker container:
```sh
docker exec -it influxdb /bin/bash
```
{{% note %}}
#### InfluxDB "phone home"
By default, InfluxDB sends telemetry data back to InfluxData.
The [InfluxData telemetry](https://www.influxdata.com/telemetry) page provides
information about what data is collected and how it is used.
To opt-out of sending telemetry data back to InfluxData, include the
`--reporting-disabled` flag when starting the InfluxDB container.
```sh
docker run -p 8086:8086 influxdb:{{< latest-patch >}} --reporting-disabled
```
{{% /note %}}
{{% /tab-content %}}
<!--------------------------------- END Docker -------------------------------->
<!-------------------------------- BEGIN kubernetes---------------------------->
{{% tab-content %}}
## Install InfluxDB in a Kubernetes cluster
The instructions below use **minikube** or **kind**, but the steps should be similar in any Kubernetes cluster.
InfluxData also makes [Helm charts](https://github.com/influxdata/helm-charts) available.
1. Install [minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/) or
[kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installation).
2. Start a local cluster:
```sh
# with minikube
minikube start
# with kind
kind create cluster
```
3. Apply the [sample InfluxDB configuration](https://github.com/influxdata/docs-v2/blob/master/static/downloads/influxdb-k8-minikube.yaml) by running:
```sh
kubectl apply -f https://raw.githubusercontent.com/influxdata/docs-v2/master/static/downloads/influxdb-k8-minikube.yaml
```
This creates an `influxdb` Namespace, Service, and StatefulSet.
A PersistentVolumeClaim is also created to store data written to InfluxDB.
**Important**: Always inspect YAML manifests before running `kubectl apply -f <url>`!
4. Ensure the Pod is running:
```sh
kubectl get pods -n influxdb
```
5. Ensure the Service is available:
```sh
kubectl describe service -n influxdb influxdb
```
You should see an IP address after `Endpoints` in the command's output.
6. Forward port 8086 from inside the cluster to localhost:
```sh
kubectl port-forward -n influxdb service/influxdb 8086:8086
```
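With the port forward running, you can optionally verify connectivity from another terminal by querying the InfluxDB `/health` endpoint through the forwarded port:

```sh
# Check the forwarded InfluxDB service; reports "status": "pass" when ready
curl http://localhost:8086/health
```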
{{% /tab-content %}}
<!--------------------------------- END kubernetes ---------------------------->
<!--------------------------------- BEGIN Rasberry Pi ------------------------->
{{% tab-content %}}
## Install InfluxDB v{{< current-version >}} on Raspberry Pi
{{% note %}}
#### Requirements
To run InfluxDB on Raspberry Pi, you need:
- a Raspberry Pi 4+ or 400
- a 64-bit operating system.
We recommend a [64-bit version of Ubuntu](https://ubuntu.com/download/raspberry-pi)
Desktop or Server compatible with 64-bit Raspberry Pi.
{{% /note %}}
### Install Linux binaries
Follow the [Linux installation instructions](/influxdb/v2.2/install/?t=Linux)
to install InfluxDB on a Raspberry Pi.
### Monitor your Raspberry Pi
Use the [InfluxDB Raspberry Pi template](/influxdb/cloud/monitor-alert/templates/infrastructure/raspberry-pi/)
to easily configure collecting and visualizing system metrics for the Raspberry Pi.
#### Monitor 32-bit Raspberry Pi systems
If you have a 32-bit Raspberry Pi, [use Telegraf](/{{< latest "telegraf" >}}/)
to collect and send data to:
- [InfluxDB OSS](/influxdb/v2.2/), running on a 64-bit system
- InfluxDB Cloud with a [**Free Tier**](/influxdb/cloud/account-management/pricing-plans/#free-plan) account
- InfluxDB Cloud with a paid [**Usage-Based**](/influxdb/cloud/account-management/pricing-plans/#usage-based-plan) account with relaxed resource restrictions.
{{% /tab-content %}}
<!--------------------------------- END Rasberry Pi --------------------------->
{{< /tabs-wrapper >}}
## Download and install the influx CLI
The [`influx` CLI](/influxdb/v2.2/reference/cli/influx/) lets you manage InfluxDB
from your command line.
<a class="btn" href="/influxdb/v2.2/tools/influx-cli/" target="_blank">Download and install the influx CLI</a>
## Set up InfluxDB
The initial setup process for InfluxDB walks through creating a default organization,
user, bucket, and Operator API token.
The setup process is available in both the InfluxDB user interface (UI) and in
the `influx` command line interface (CLI).
{{% note %}}
#### Operator token permissions
The **Operator token** created in the InfluxDB setup process has
**full read and write access to all organizations** in the database.
To prevent accidental interactions across organizations, we recommend
[creating an All-Access token](/influxdb/v2.2/security/tokens/create-token/)
for each organization and using those to manage InfluxDB.
{{% /note %}}
{{< tabs-wrapper >}}
{{% tabs %}}
[UI Setup](#)
[CLI Setup](#)
{{% /tabs %}}
<!------------------------------- BEGIN UI Setup ------------------------------>
{{% tab-content %}}
### Set up InfluxDB through the UI
1. With InfluxDB running, visit [localhost:8086](http://localhost:8086).
2. Click **Get Started**.
#### Set up your initial user
1. Enter a **Username** for your initial user.
2. Enter a **Password** and **Confirm Password** for your user.
3. Enter your initial **Organization Name**.
4. Enter your initial **Bucket Name**.
5. Click **Continue**.
InfluxDB is now initialized with a primary user, organization, and bucket.
You are ready to [write or collect data](/influxdb/v2.2/write-data).
### (Optional) Set up and use the influx CLI
To avoid having to pass your InfluxDB
[API token](/influxdb/v2.2/security/tokens/) with each `influx` command, set up a configuration profile to store your credentials. To do this, complete the following steps:
1. In a terminal, run the following command:
```sh
# Set up a configuration profile
influx config create -n default \
-u http://localhost:8086 \
-o example-org \
-t mySuP3rS3cr3tT0keN \
-a
```
This configures a new profile named `default` and makes the profile active
so your `influx` CLI commands run against the specified InfluxDB instance.
For more detail, see [`influx config`](/influxdb/v2.2/reference/cli/influx/config/).
2. Learn `influx` CLI commands. To see all available `influx` commands, type
`influx -h` or check out [influx - InfluxDB command line interface](/influxdb/v2.2/reference/cli/influx/).
{{% /tab-content %}}
<!-------------------------------- END UI Setup ------------------------------->
<!------------------------------ BEGIN CLI Setup ------------------------------>
{{% tab-content %}}
### Set up InfluxDB through the influx CLI
Begin the InfluxDB setup process via the [`influx` CLI](/influxdb/v2.2/reference/cli/influx/) by running:
```bash
influx setup
```
1. Enter a **primary username**.
2. Enter a **password** for your user.
3. **Confirm your password** by entering it again.
4. Enter a name for your **primary organization**.
5. Enter a name for your **primary bucket**.
6. Enter a **retention period** for your primary bucket—valid units are
nanoseconds (`ns`), microseconds (`us` or `µs`), milliseconds (`ms`),
seconds (`s`), minutes (`m`), hours (`h`), days (`d`), and weeks (`w`).
Enter nothing for an infinite retention period.
7. Confirm the details for your primary user, organization, and bucket.
InfluxDB is now initialized with a primary user, organization, bucket, and API token.
InfluxDB also creates a configuration profile for you so that you don't have to
add your InfluxDB host, organization, and token to every command.
To view that config profile, use the [`influx config list`](/influxdb/v2.2/reference/cli/influx/config) command.
To continue to use InfluxDB via the CLI, you need the API token created during setup.
To view the token, log into the UI with the credentials created above.
(For instructions, see [View tokens in the InfluxDB UI](/influxdb/v2.2/security/tokens/view-tokens/#view-tokens-in-the-influxdb-ui).)
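Because setup activates a CLI configuration profile, you can also list your tokens from the command line using the `influx auth list` command:

```sh
# List API tokens for the active configuration profile
influx auth list
```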
You are ready to [write or collect data](/influxdb/v2.2/write-data).
{{% note %}}
To automate the setup process, use [flags](/influxdb/v2.2/reference/cli/influx/setup/#flags)
to provide the required information.
{{% /note %}}
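For example, a non-interactive setup might look like the following sketch; every value shown is a placeholder:

```sh
# Non-interactive setup sketch; replace each value with your own.
# --force skips the confirmation prompt.
influx setup \
  --username example-user \
  --password ExAmPl3PA55W0rD \
  --org example-org \
  --bucket example-bucket \
  --retention 1w \
  --force
```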
{{% /tab-content %}}
<!------------------------------ END CLI Setup -------------------------------->
{{< /tabs-wrapper >}}
After you've installed InfluxDB, you're ready to [get started working with your data in InfluxDB](/influxdb/v2.2/get-started/).

---
title: Migrate data to InfluxDB
description: >
Migrate data to InfluxDB from other InfluxDB instances, including InfluxDB OSS
and InfluxDB Cloud.
menu:
influxdb_2_2:
name: Migrate data
weight: 9
---
Migrate data to InfluxDB from other InfluxDB instances, including InfluxDB OSS
and InfluxDB Cloud.
{{< children >}}

---
title: Migrate data from InfluxDB Cloud to InfluxDB OSS
description: >
To migrate data from InfluxDB Cloud to InfluxDB OSS, query the data from
InfluxDB Cloud in time-based batches and write the data to InfluxDB OSS.
menu:
influxdb_2_2:
name: Migrate from Cloud to OSS
parent: Migrate data
weight: 102
---
To migrate data from InfluxDB Cloud to InfluxDB OSS, query the data
from InfluxDB Cloud and write the data to InfluxDB OSS.
Because full data migrations will likely exceed your organization's limits and
adjustable quotas, migrate your data in batches.
The following guide provides instructions for setting up an InfluxDB OSS task
that queries data from an InfluxDB Cloud bucket in time-based batches and writes
each batch to an InfluxDB OSS bucket.
{{% cloud %}}
All queries against data in InfluxDB Cloud are subject to your organization's
[rate limits and adjustable quotas](/influxdb/cloud/account-management/limits/).
{{% /cloud %}}
- [Set up the migration](#set-up-the-migration)
- [Migration task](#migration-task)
- [Configure the migration](#configure-the-migration)
- [Migration Flux script](#migration-flux-script)
- [Configuration help](#configuration-help)
- [Monitor the migration progress](#monitor-the-migration-progress)
- [Troubleshoot migration task failures](#troubleshoot-migration-task-failures)
## Set up the migration
1. [Install and set up InfluxDB OSS](/influxdb/{{< current-version-link >}}/install/).
2. **In InfluxDB Cloud**, [create an API token](/influxdb/cloud/security/tokens/create-token/)
with **read access** to the bucket you want to migrate.
3. **In InfluxDB OSS**:
1. Add your **InfluxDB Cloud API token** as a secret using the key,
`INFLUXDB_CLOUD_TOKEN`.
_See [Add secrets](/influxdb/{{< current-version-link >}}/security/secrets/add/) for more information._
2. [Create a bucket](/influxdb/{{< current-version-link >}}/organizations/buckets/create-bucket/)
**to migrate data to**.
3. [Create a bucket](/influxdb/{{< current-version-link >}}/organizations/buckets/create-bucket/)
**to store temporary migration metadata**.
4. [Create a new task](/influxdb/{{< current-version-link >}}/process-data/manage-tasks/create-task/)
using the provided [migration task](#migration-task).
Update the necessary [migration configuration options](#configure-the-migration).
5. _(Optional)_ Set up [migration monitoring](#monitor-the-migration-progress).
6. Save the task.
{{% note %}}
Newly-created tasks are enabled by default, so the data migration begins when you save the task.
{{% /note %}}
**After the migration is complete**, each subsequent migration task execution
will fail with the following error:
```
error exhausting result iterator: error calling function "die" @41:9-41:86:
Batch range is beyond the migration range. Migration is complete.
```
## Migration task
### Configure the migration
1. Specify how often you want the task to run using the `task.every` option.
_See [Determine your task interval](#determine-your-task-interval)._
2. Define the following properties in the `migration`
[record](/{{< latest "flux" >}}/data-types/composite/record/):
##### migration
- **start**: Earliest time to include in the migration.
_See [Determine your migration start time](#determine-your-migration-start-time)._
- **stop**: Latest time to include in the migration.
- **batchInterval**: Duration of each time-based batch.
_See [Determine your batch interval](#determine-your-batch-interval)._
- **batchBucket**: InfluxDB OSS bucket to store migration batch metadata in.
- **sourceHost**: [InfluxDB Cloud region URL](/influxdb/cloud/reference/regions)
to migrate data from.
- **sourceOrg**: InfluxDB Cloud organization to migrate data from.
- **sourceToken**: InfluxDB Cloud API token. To keep the API token secure, store
it as a secret in InfluxDB OSS.
- **sourceBucket**: InfluxDB Cloud bucket to migrate data from.
- **destinationBucket**: InfluxDB OSS bucket to migrate data to.
### Migration Flux script
```js
import "array"
import "experimental"
import "influxdata/influxdb/secrets"
// Configure the task
option task = {every: 5m, name: "Migrate data from InfluxDB Cloud"}
// Configure the migration
migration = {
start: 2022-01-01T00:00:00Z,
stop: 2022-02-01T00:00:00Z,
batchInterval: 1h,
batchBucket: "migration",
sourceHost: "https://cloud2.influxdata.com",
sourceOrg: "example-cloud-org",
sourceToken: secrets.get(key: "INFLUXDB_CLOUD_TOKEN"),
sourceBucket: "example-cloud-bucket",
destinationBucket: "example-oss-bucket",
}
// batchRange dynamically returns a record with start and stop properties for
// the current batch. It queries migration metadata stored in the
// `migration.batchBucket` to determine the stop time of the previous batch.
// It uses the previous stop time as the new start time for the current batch
// and adds the `migration.batchInterval` to determine the current batch stop time.
batchRange = () => {
_lastBatchStop =
(from(bucket: migration.batchBucket)
|> range(start: migration.start)
|> filter(fn: (r) => r._field == "batch_stop")
|> filter(fn: (r) => r.srcOrg == migration.sourceOrg)
|> filter(fn: (r) => r.srcBucket == migration.sourceBucket)
|> last()
|> findRecord(fn: (key) => true, idx: 0))._value
_batchStart =
if exists _lastBatchStop then
time(v: _lastBatchStop)
else
migration.start
return {start: _batchStart, stop: experimental.addDuration(d: migration.batchInterval, to: _batchStart)}
}
// Define a static record with batch start and stop time properties
batch = {start: batchRange().start, stop: batchRange().stop}
// Check to see if the current batch start time is beyond the migration.stop
// time and exit with an error if it is.
finished =
if batch.start >= migration.stop then
die(msg: "Batch range is beyond the migration range. Migration is complete.")
else
"Migration in progress"
// Query all data from the specified source bucket within the batch-defined time
// range. To limit migrated data by measurement, tag, or field, add a `filter()`
// function after `range()` with the appropriate predicate fn.
data = () =>
from(host: migration.sourceHost, org: migration.sourceOrg, token: migration.sourceToken, bucket: migration.sourceBucket)
|> range(start: batch.start, stop: batch.stop)
// rowCount is a stream of tables that contains the number of rows returned in
// the batch and is used to generate batch metadata.
rowCount =
data()
|> group(columns: ["_start", "_stop"])
|> count()
// emptyRange is a stream of tables that acts as filler data if the batch is
// empty. This is used to generate batch metadata for empty batches and is
// necessary to correctly increment the time range for the next batch.
emptyRange = array.from(rows: [{_start: batch.start, _stop: batch.stop, _value: 0}])
// metadata returns a stream of tables representing batch metadata.
metadata = () => {
_input =
if exists (rowCount |> findRecord(fn: (key) => true, idx: 0))._value then
rowCount
else
emptyRange
return
_input
|> map(
fn: (r) =>
({
_time: now(),
_measurement: "batches",
srcOrg: migration.sourceOrg,
srcBucket: migration.sourceBucket,
dstBucket: migration.destinationBucket,
batch_start: string(v: batch.start),
batch_stop: string(v: batch.stop),
rows: r._value,
percent_complete:
float(v: int(v: r._stop) - int(v: migration.start)) / float(
v: int(v: migration.stop) - int(v: migration.start),
) * 100.0,
}),
)
|> group(columns: ["_measurement", "srcOrg", "srcBucket", "dstBucket"])
}
// Write the queried data to the specified InfluxDB OSS bucket.
data()
|> to(bucket: migration.destinationBucket)
// Generate and store batch metadata in the migration.batchBucket.
metadata()
|> experimental.to(bucket: migration.batchBucket)
```
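To make the batching logic in the Flux script above concrete, here is an illustrative Python sketch (not part of the task itself) of how each run advances the batch window: the previous batch's stop time becomes the next batch's start time, and the migration is finished once a batch would start at or after `migration.stop`. All values are the example settings from the `migration` record above.

```python
from datetime import datetime, timedelta

start = datetime(2022, 1, 1)          # migration.start
stop = datetime(2022, 2, 1)           # migration.stop
batch_interval = timedelta(hours=1)   # migration.batchInterval

def batch_range(last_batch_stop=None):
    """Return the (start, stop) window for the next batch.

    If no previous batch metadata exists, start at migration.start.
    """
    batch_start = last_batch_stop if last_batch_stop is not None else start
    return batch_start, batch_start + batch_interval

first = batch_range()            # first run: Jan 1 00:00 -> Jan 1 01:00
second = batch_range(first[1])   # second run picks up where the first stopped
finished = second[0] >= stop     # False: migration still in progress
```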
### Configuration help
{{< expand-wrapper >}}
<!----------------------- BEGIN Determine task interval ----------------------->
{{% expand "Determine your task interval" %}}
The task interval determines how often the migration task runs and is defined by
the [`task.every` option](/influxdb/v2.2/process-data/task-options/#every).
InfluxDB Cloud rate limits and quotas reset every five minutes, so
**we recommend a `5m` task interval**.
You can use a shorter task interval to execute the migration task more often,
but you need to balance the task interval with your [batch interval](#determine-your-batch-interval)
and the amount of data returned in each batch.
If the total amount of data queried in each five-minute interval exceeds your
InfluxDB Cloud organization's [rate limits and quotas](/influxdb/cloud/account-management/limits/),
the batch will fail until rate limits and quotas reset.
{{% /expand %}}
<!------------------------ END Determine task interval ------------------------>
<!---------------------- BEGIN Determine migration start ---------------------->
{{% expand "Determine your migration start time" %}}
The `migration.start` time should be at or near the same time as the earliest
data point you want to migrate.
All migration batches are determined using the `migration.start` time and
`migration.batchInterval` settings.
To find the time of the earliest point in your bucket, run the following query:
```js
from(bucket: "example-cloud-bucket")
|> range(start: 0)
|> group()
|> first()
|> keep(columns: ["_time"])
```
{{% /expand %}}
<!----------------------- END Determine migration start ----------------------->
<!----------------------- BEGIN Determine batch interval ---------------------->
{{% expand "Determine your batch interval" %}}
The `migration.batchInterval` setting controls the time range queried by each batch.
The "density" of the data in your InfluxDB Cloud bucket and your InfluxDB Cloud
organization's [rate limits and quotas](/influxdb/cloud/account-management/limits/)
determine what your batch interval should be.
For example, if you're migrating data collected from hundreds of sensors with
points recorded every second, your batch interval will need to be shorter.
If you're migrating data collected from five sensors with points recorded every
minute, your batch interval can be longer.
It all depends on how much data gets returned in a single batch.
If points occur at regular intervals, you can get a fairly accurate estimate of
how much data will be returned in a given time range by using the `/api/v2/query`
endpoint to execute a query for the time range duration and then measuring the
size of the response body.
The following `curl` command queries an InfluxDB Cloud bucket for the last day
and returns the size of the response body in bytes.
You can customize the range duration to match your specific use case and
data density.
```sh
INFLUXDB_CLOUD_ORG=<your_influxdb_cloud_org>
INFLUXDB_CLOUD_TOKEN=<your_influxdb_cloud_token>
INFLUXDB_CLOUD_BUCKET=<your_influxdb_cloud_bucket>
curl -so /dev/null --request POST \
https://cloud2.influxdata.com/api/v2/query?org=$INFLUXDB_CLOUD_ORG \
--header "Authorization: Token $INFLUXDB_CLOUD_TOKEN" \
--header "Accept: application/csv" \
--header "Content-type: application/vnd.flux" \
--data "from(bucket:\"$INFLUXDB_CLOUD_BUCKET\") |> range(start: -1d, stop: now())" \
--write-out '%{size_download}'
```
{{% note %}}
You can also use other HTTP API tools like [Postman](https://www.postman.com/)
that provide the size of the response body.
{{% /note %}}
Divide the output of this command by 1000000 to convert it to megabytes (MB).
```
batchInterval = (read-rate-limit-mb / response-body-size-mb) * range-duration
```
For example, if the response body of your query that returns data from one day
is 8 MB and you're using the InfluxDB Cloud Free Plan with a read limit of
300 MB per five minutes:
```js
batchInterval = (300 / 8) * 1d
// batchInterval = 37d
```
You could query 37 days of data before hitting your read limit, but this is just an estimate.
We recommend setting the `batchInterval` slightly lower than the calculated interval
to allow for variation between batches.
So in this example, **it would be best to set your `batchInterval` to `35d`**.
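The arithmetic above can be sketched as a quick calculation, using the hypothetical numbers from this example (an 8 MB response for one day of data, and the Free Plan read limit of 300 MB per five minutes):

```python
# Hypothetical inputs from the example above.
read_limit_mb = 300.0        # Free Plan read limit per 5-minute window
response_size_mb = 8.0       # size of the response body for a 1-day query
range_duration_days = 1.0    # duration the sample query covered

# batchInterval = (read-rate-limit-mb / response-body-size-mb) * range-duration
batch_interval_days = (read_limit_mb / response_size_mb) * range_duration_days
print(int(batch_interval_days))  # 37 -- round down, then leave headroom (e.g. 35d)
```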
##### Important things to note
- This assumes no other queries are running in your InfluxDB Cloud organization.
- You should also consider your network speeds and whether a batch can be fully
downloaded within the [task interval](#determine-your-task-interval).
{{% /expand %}}
<!------------------------ END Determine batch interval ----------------------->
{{< /expand-wrapper >}}
## Monitor the migration progress
The [InfluxDB Cloud Migration Community template](https://github.com/influxdata/community-templates/tree/master/influxdb-cloud-oss-migration/)
installs the migration task outlined in this guide as well as a dashboard
for monitoring running data migrations.
{{< img-hd src="/img/influxdb/2-1-migration-dashboard.png" alt="InfluxDB Cloud migration dashboard" />}}
<a class="btn" href="https://github.com/influxdata/community-templates/tree/master/influxdb-cloud-oss-migration/#quick-install">Install the InfluxDB Cloud Migration template</a>
## Troubleshoot migration task failures
If the migration task fails, [view your task logs](/influxdb/v2.2/process-data/manage-tasks/task-run-history/)
to identify the specific error. Below are common causes of migration task failures.
- [Exceeded rate limits](#exceeded-rate-limits)
- [Invalid API token](#invalid-api-token)
- [Query timeout](#query-timeout)
### Exceeded rate limits
If your data migration causes you to exceed your InfluxDB Cloud organization's
limits and quotas, the task will return an error similar to:
```
too many requests
```
**Possible solutions**:
- Update the `migration.batchInterval` setting in your migration task to use
a smaller interval. Each batch will then query less data.
### Invalid API token
If the API token stored as the `INFLUXDB_CLOUD_TOKEN` secret doesn't have read access to
your InfluxDB Cloud bucket, the task will return an error similar to:
```
unauthorized access
```
**Possible solutions**:
- Ensure the API token has read access to your InfluxDB Cloud bucket.
- Generate a new InfluxDB Cloud API token with read access to the bucket you
want to migrate. Then, update the `INFLUXDB_CLOUD_TOKEN` secret in your
InfluxDB OSS instance with the new token.
### Query timeout
The InfluxDB Cloud query timeout is 90 seconds. If it takes longer than this to
return the data from the batch interval, the query will time out and the
task will fail.
**Possible solutions**:
- Update the `migration.batchInterval` setting in your migration task to use
a smaller interval. Each batch will then query less data and take less time
to return results.

---
title: Migrate data from InfluxDB OSS to other InfluxDB instances
description: >
To migrate data from an InfluxDB OSS bucket to another InfluxDB OSS or InfluxDB
Cloud bucket, export your data as line protocol and write it to your other
InfluxDB bucket.
menu:
influxdb_2_2:
name: Migrate data from OSS
parent: Migrate data
weight: 101
---
To migrate data from an InfluxDB OSS bucket to another InfluxDB OSS or InfluxDB
Cloud bucket, export your data as line protocol and write it to your other
InfluxDB bucket.
{{% cloud %}}
#### InfluxDB Cloud write limits
If migrating data from InfluxDB OSS to InfluxDB Cloud, you are subject to your
[InfluxDB Cloud organization's rate limits and adjustable quotas](/influxdb/cloud/account-management/limits/).
Consider exporting your data in time-based batches to limit the file size
of exported line protocol to match your InfluxDB Cloud organization's limits.
{{% /cloud %}}
1. [Find the InfluxDB OSS bucket ID](/influxdb/{{< current-version-link >}}/organizations/buckets/view-buckets/)
that contains data you want to migrate.
2. Use the `influxd inspect export-lp` command to export data in your bucket as
[line protocol](/influxdb/v2.2/reference/syntax/line-protocol/).
Provide the following:
- **bucket ID**: ({{< req >}}) ID of the bucket to migrate.
- **engine path**: ({{< req >}}) Path to the TSM storage files on disk.
The default engine path [depends on your operating system](/influxdb/{{< current-version-link >}}/reference/internals/file-system-layout/#file-system-layout).
If using a [custom engine path](/influxdb/{{< current-version-link >}}/reference/config-options/#engine-path),
provide your custom path.
- **output path**: ({{< req >}}) File path to output line protocol to.
- **start time**: Earliest time to export.
- **end time**: Latest time to export.
- **measurement**: Export a specific measurement. By default, the command
exports all measurements.
- **compression**: ({{< req text="Recommended" color="magenta" >}})
Use Gzip compression to compress the output line protocol file.
```sh
influxd inspect export-lp \
--bucket-id 12ab34cd56ef \
--engine-path ~/.influxdbv2/engine \
--output-path path/to/export.lp \
--start 2022-01-01T00:00:00Z \
--end 2022-01-31T23:59:59Z \
--compress
```
3. Write the exported line protocol to your InfluxDB OSS or InfluxDB Cloud instance.
Do any of the following:
- Write line protocol in the **InfluxDB UI**:
- [InfluxDB Cloud UI](/influxdb/cloud/write-data/no-code/load-data/#load-csv-or-line-protocol-in-ui)
- [InfluxDB OSS {{< current-version >}} UI](/influxdb/{{< current-version-link >}}/write-data/no-code/load-data/#load-csv-or-line-protocol-in-ui)
- [Write line protocol using the `influx write` command](/influxdb/{{< current-version-link >}}/reference/cli/influx/write/)
- [Write line protocol using the InfluxDB API](/influxdb/{{< current-version-link >}}/write-data/developer-tools/api/)
- [Bulk ingest data (InfluxDB Cloud)](/influxdb/cloud/write-data/bulk-ingest-cloud/)
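The time-based batching suggested in the InfluxDB Cloud write-limits note above can be scripted. The following is a hypothetical Python sketch that only builds one `influxd inspect export-lp` command string per daily batch (it does not run them); the bucket ID, engine path, and output filenames are example values:

```python
from datetime import datetime, timedelta

def export_commands(bucket_id, start, end, step=timedelta(days=1)):
    """Build one export-lp command per time-based batch.

    Smaller batches keep each compressed line protocol file within
    InfluxDB Cloud write limits when you later write the data back.
    """
    cmds = []
    t = start
    while t < end:
        t2 = min(t + step, end)
        cmds.append(
            f"influxd inspect export-lp --bucket-id {bucket_id} "
            f"--engine-path ~/.influxdbv2/engine "
            f"--output-path export_{t:%Y%m%d}.lp "
            f"--start {t:%Y-%m-%dT%H:%M:%SZ} --end {t2:%Y-%m-%dT%H:%M:%SZ} "
            f"--compress"
        )
        t = t2
    return cmds

cmds = export_commands("12ab34cd56ef", datetime(2022, 1, 1), datetime(2022, 1, 3))
```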

---
title: Monitor data and send alerts
seotitle: Monitor data and send alerts
description: >
Monitor your time series data and send alerts by creating checks, notification
rules, and notification endpoints. Or use community templates to monitor supported environments.
menu:
influxdb_2_2:
name: Monitor & alert
weight: 7
influxdb/v2.2/tags: [monitor, alert, checks, notification, endpoints]
---
Monitor your time series data and send alerts by creating checks, notification
rules, and notification endpoints. Or use [community templates to monitor](/influxdb/v2.2/monitor-alert/templates/) supported environments.
## Overview
1. A [check](/influxdb/v2.2/reference/glossary/#check) in InfluxDB queries data and assigns a status with a `_level` based on specific conditions.
2. InfluxDB stores the output of a check in the `statuses` measurement in the `_monitoring` system bucket.
3. [Notification rules](/influxdb/v2.2/reference/glossary/#notification-rule) check data in the `statuses`
measurement and, based on conditions set in the notification rule, send a message
to a [notification endpoint](/influxdb/v2.2/reference/glossary/#notification-endpoint).
4. InfluxDB stores notifications in the `notifications` measurement in the `_monitoring` system bucket.
## Create an alert
To get started, do the following:
1. [Create checks](/influxdb/v2.2/monitor-alert/checks/create/) to monitor data and assign a status.
2. [Add notification endpoints](/influxdb/v2.2/monitor-alert/notification-endpoints/create/)
to send notifications to third parties.
3. [Create notification rules](/influxdb/v2.2/monitor-alert/notification-rules/create) to check
statuses and send notifications to your notifications endpoints.
## Manage your monitoring and alerting pipeline
{{< children >}}

---
title: Manage checks
seotitle: Manage monitoring checks in InfluxDB
description: >
Checks in InfluxDB query data and apply a status or level to each data point based on specified conditions.
menu:
influxdb_2_2:
parent: Monitor & alert
weight: 101
influxdb/v2.2/tags: [monitor, checks, notifications, alert]
related:
- /influxdb/v2.2/monitor-alert/notification-rules/
- /influxdb/v2.2/monitor-alert/notification-endpoints/
---
Checks in InfluxDB query data and apply a status or level to each data point based on specified conditions.
Learn how to create and manage checks:
{{< children >}}

---
title: Create checks
seotitle: Create monitoring checks in InfluxDB
description: >
Create a check in the InfluxDB UI.
menu:
influxdb_2_2:
parent: Manage checks
weight: 201
related:
- /influxdb/v2.2/monitor-alert/notification-rules/
- /influxdb/v2.2/monitor-alert/notification-endpoints/
---
Create a check in the InfluxDB user interface (UI).
Checks query data and apply a status to each point based on specified conditions.
## Parts of a check
A check consists of two parts: a query and a check configuration.
#### Check query
- Specifies the dataset to monitor.
- May include tags to narrow results.
#### Check configuration
- Defines check properties, including the check interval and status message.
- Evaluates specified conditions and applies a status (if applicable) to each data point:
- `crit`
- `warn`
- `info`
- `ok`
- Stores status in the `_level` column.
## Check types
There are two types of checks:
- [threshold](#threshold-check)
- [deadman](#deadman-check)
#### Threshold check
A threshold check assigns a status based on a value being above, below,
inside, or outside of defined thresholds.
#### Deadman check
A deadman check assigns a status to data when a series or group doesn't report
in a specified amount of time.
## Create a check
1. In the navigation menu on the left, select **Alerts > Alerts**.
{{< nav-icon "alerts" >}}
2. Click **{{< caps >}}{{< icon "plus" >}} Create{{< /caps >}}** and select the [type of check](#check-types) to create.
3. Click **Name this check** in the top left corner and provide a unique name for the check, and then do the following:
   - [Configure the check query](#configure-the-check-query)
   - [Configure the check](#configure-the-check)
#### Configure the check query
1. Select the **bucket**, **measurement**, **field** and **tag sets** to query.
2. If creating a threshold check, select an **aggregate function**.
Aggregate functions aggregate data between the specified check intervals and
return a single value for the check to process.
In the **Aggregate functions** column, select an interval from the interval drop-down list
(for example, "Every 5 minutes") and an aggregate function from the list of functions.
3. Click **{{< caps >}}Submit{{< /caps >}}** to run the query and preview the results.
To see the raw query results, click the **View Raw Data {{< icon "toggle" >}}** toggle.
#### Configure the check
1. Click **{{< caps >}}2. Configure Check{{< /caps >}}** near the top of the window.
2. In the **{{< caps >}}Properties{{< /caps >}}** column, configure the following:
##### Schedule Every
Select the interval to run the check (for example, "Every 5 minutes").
This interval matches the aggregate function interval for the check query.
_Changing the interval here will update the aggregate function interval._
##### Offset
Delay the execution of a task to account for any late data.
Offset queries do not change the queried time range.
{{% note %}}Your offset must be shorter than your [check interval](#schedule-every).
{{% /note %}}
##### Tags
Add custom tags to the query output.
Each custom tag appends a new column to each row in the query output.
The column label is the tag key and the column value is the tag value.
Use custom tags to associate additional metadata with the check.
Common metadata tags across different checks lets you easily group and organize checks.
You can also use custom tags in [notification rules](/influxdb/v2.2/monitor-alert/notification-rules/create/).
3. In the **{{< caps >}}Status Message Template{{< /caps >}}** column, enter
the status message template for the check.
Use [Flux string interpolation](/{{< latest "flux" >}}/data-types/basic/string/#interpolate-strings)
to populate the message with data from the query.
Check data is represented as a record, `r`.
Access specific column values using dot notation: `r.columnName`.
Use data from the following columns:
- columns included in the query output
- [custom tags](#tags) added to the query output
- `_check_id`
- `_check_name`
- `_level`
- `_source_measurement`
- `_type`
###### Example status message template
```
From ${r._check_name}:
${r._field} is ${r._level}.
Its value is ${string(v: r.field_name)}.
```
When a check generates a status, it stores the message in the `_message` column.
4. Define check conditions that assign statuses to points.
Condition options depend on your check type.
##### Configure a threshold check
1. In the **{{< caps >}}Thresholds{{< /caps >}}** column, click the status name (CRIT, WARN, INFO, or OK)
to define conditions for that specific status.
2. From the **When value** drop-down list, select a threshold: is above, is below,
is inside of, is outside of.
3. Enter a value or values for the threshold.
You can also use the threshold sliders in the data visualization to define threshold values.
##### Configure a deadman check
1. In the **{{< caps >}}Deadman{{< /caps >}}** column, enter a duration for the deadman check in the **for** field.
For example, `90s`, `5m`, `2h30m`, etc.
2. Use the **set status to** drop-down list to select a status to set on a dead series.
3. In the **And stop checking after** field, enter the time to stop monitoring the series.
For example, `30m`, `2h`, `3h15m`, etc.
5. Click the green **{{< icon "check" >}}** in the top right corner to save the check.
## Clone a check
Create a new check by cloning an existing check.
1. Go to **Alerts > Alerts** in the navigation on the left.
{{< nav-icon "alerts" >}}
2. Click the **{{< icon "gear" >}}** icon next to the check you want to clone
and then click **Clone**.

---
title: Delete checks
seotitle: Delete monitoring checks in InfluxDB
description: >
Delete checks in the InfluxDB UI.
menu:
influxdb_2_2:
parent: Manage checks
weight: 204
related:
- /influxdb/v2.2/monitor-alert/notification-rules/
- /influxdb/v2.2/monitor-alert/notification-endpoints/
---
If you no longer need a check, use the InfluxDB user interface (UI) to delete it.
{{% warn %}}
Deleting a check cannot be undone.
{{% /warn %}}
1. In the navigation menu on the left, select **Alerts > Alerts**.
{{< nav-icon "alerts" >}}
2. Click the **{{< icon "delete" >}}** icon on the check you want to delete, and then click **{{< caps >}}Confirm{{< /caps >}}**.
After a check is deleted, all statuses generated by the check remain in the `_monitoring`
bucket until the retention period for the bucket expires.
{{% note %}}
You can also [disable a check](/influxdb/v2.2/monitor-alert/checks/update/#enable-or-disable-a-check)
without having to delete it.
{{% /note %}}

---
title: Update checks
seotitle: Update monitoring checks in InfluxDB
description: >
Update, rename, enable or disable checks in the InfluxDB UI.
menu:
influxdb_2_2:
parent: Manage checks
weight: 203
related:
- /influxdb/v2.2/monitor-alert/notification-rules/
- /influxdb/v2.2/monitor-alert/notification-endpoints/
---
Update checks in the InfluxDB user interface (UI).
Common updates include:
- [Update check queries and logic](#update-check-queries-and-logic)
- [Enable or disable a check](#enable-or-disable-a-check)
- [Rename a check](#rename-a-check)
- [Add or update a check description](#add-or-update-a-check-description)
- [Add a label to a check](#add-a-label-to-a-check)
To update checks, select **Alerts > Alerts** in the navigation menu on the left.
{{< nav-icon "alerts" >}}
## Update check queries and logic
1. Click the name of the check you want to update. The check builder appears.
2. To edit the check query, click **{{< caps >}}1. Define Query{{< /caps >}}** at the top of the check builder window.
3. To edit the check logic, click **{{< caps >}}2. Configure Check{{< /caps >}}** at the top of the check builder window.
_For details about using the check builder, see [Create checks](/influxdb/v2.2/monitor-alert/checks/create/)._
## Enable or disable a check
Click the {{< icon "toggle" >}} toggle next to a check to enable or disable it.
## Rename a check
1. Hover over the name of the check you want to update.
2. Click the **{{< icon "edit" >}}** icon that appears next to the check name.
3. Enter a new name and click out of the name field or press enter to save.
_You can also rename a check in the [check builder](#update-check-queries-and-logic)._
## Add or update a check description
1. Hover over the check description you want to update.
2. Click the **{{< icon "edit" >}}** icon that appears next to the description.
3. Enter a new description and click out of the description field or press enter to save.
## Add a label to a check
1. Click **{{< icon "add-label" >}} Add a label** next to the check you want to add a label to.
The **Add Labels** box appears.
2. To add an existing label, select the label from the list.
3. To create and add a new label:
- In the search field, enter the name of the new label. The **Create Label** box opens.
- In the **Description** field, enter an optional description for the label.
- Select a color for the label.
- Click **{{< caps >}}Create Label{{< /caps >}}**.
4. To remove a label, click **{{< icon "x" >}}** on the label.

---
title: View checks
seotitle: View monitoring checks in InfluxDB
description: >
View check details and statuses and notifications generated by checks in the InfluxDB UI.
menu:
influxdb_2_2:
parent: Manage checks
weight: 202
related:
- /influxdb/v2.2/monitor-alert/notification-rules/
- /influxdb/v2.2/monitor-alert/notification-endpoints/
---
View check details, as well as statuses and notifications generated by checks, in the InfluxDB user interface (UI).
- [View a list of all checks](#view-a-list-of-all-checks)
- [View check details](#view-check-details)
- [View statuses generated by a check](#view-statuses-generated-by-a-check)
- [View notifications triggered by a check](#view-notifications-triggered-by-a-check)
To view checks, click **Alerts > Alerts** in the navigation menu on the left.
{{< nav-icon "alerts" >}}
## View a list of all checks
The **{{< caps >}}Checks{{< /caps >}}** section of the Alerts landing page displays all existing checks.
## View check details
Click the name of the check you want to view.
The check builder appears.
Here you can view the check query and logic.
## View statuses generated by a check
1. Click the **{{< icon "view" >}}** icon on the check.
2. Click **View History**.
The Statuses History page displays statuses generated by the selected check.

---
title: Create custom checks
seotitle: Custom checks
description: >
Create custom checks with a Flux task.
menu:
influxdb_2_2:
parent: Monitor & alert
weight: 201
influxdb/v2.2/tags: [alerts, checks, tasks, Flux]
---
In the UI, you can create two kinds of [checks](/influxdb/v2.2/reference/glossary/#check):
[`threshold`](/influxdb/v2.2/monitor-alert/checks/create/#threshold-check) and
[`deadman`](/influxdb/v2.2/monitor-alert/checks/create/#deadman-check).
Using a Flux task, you can create a custom check that provides a couple of advantages:
- Customize and transform the data you would like to use for the check.
- Set up custom criteria for your alert (other than `threshold` and `deadman`).
## Create a task
1. In the InfluxDB UI, select **Tasks** in the navigation menu on the left.
{{< nav-icon "tasks" >}}
2. Click **{{< caps >}}{{< icon "plus" >}} Create Task{{< /caps >}}**.
3. In the **Name** field, enter a descriptive name,
and then enter how often to run the task in the **Every** field (for example, `10m`).
For more detail, such as using cron syntax or including an offset, see [Task configuration options](/influxdb/v2.2/process-data/task-options/).
4. Enter the Flux script for your custom check, including the [`monitor.check`](/{{< latest "flux" >}}/stdlib/influxdata/influxdb/monitor/check/) function.
{{% note %}}
Use the [`/api/v2/checks/{checkID}/query` API endpoint](/influxdb/v2.2/api/#operation/GetChecksIDQuery)
to see the Flux code for a check built in the UI.
This can be useful for constructing custom checks.
{{% /note %}}
### Example: Monitor failed tasks
The script below is fairly complex, and can be used as a framework for similar tasks.
It does the following:
- Imports the necessary `influxdata/influxdb/monitor` package and other packages for data processing.
- Queries the `_tasks` bucket to retrieve all statuses generated by your check.
- Sets the `_level` to alert on, for example, `crit`, `warn`, `info`, or `ok`.
- Creates a `check` object that specifies an ID, name, and type for the check.
- Defines the `ok` and `crit` statuses.
- Executes the `monitor` function on the `check` using the `task_data`.
#### Example alert task script
```js
import "strings"
import "regexp"
import "influxdata/influxdb/monitor"
import "influxdata/influxdb/schema"
option task = {name: "Failed Tasks Check", every: 1h, offset: 4m}
task_data = from(bucket: "_tasks")
|> range(start: -task.every)
|> filter(fn: (r) => r["_measurement"] == "runs")
|> filter(fn: (r) => r["_field"] == "logs")
|> map(fn: (r) => ({r with name: strings.split(v: regexp.findString(r: /option task = \{([^\}]+)/, v: r._value), t: "\\\\\\\"")[1]}))
|> drop(columns: ["_value", "_start", "_stop"])
|> group(columns: ["name", "taskID", "status", "_measurement"])
|> map(fn: (r) => ({r with _value: if r.status == "failed" then 1 else 0}))
|> last()
check = {
// 16 characters, alphanumeric
_check_id: "0000000000000001",
// Name string
_check_name: "Failed Tasks Check",
// Check type (threshold, deadman, or custom)
_type: "custom",
tags: {},
}
ok = (r) => r["logs"] == 0
crit = (r) => r["logs"] == 1
messageFn = (r) => "The task: ${r.taskID} - ${r.name} has a status of ${r.status}"
task_data
|> schema["fieldsAsCols"]()
|> monitor["check"](data: check, messageFn: messageFn, ok: ok, crit: crit)
```
{{% note %}}
Creating a custom check does not send a notification email.
For information on how to create notification emails, see
[Create notification endpoints](/influxdb/v2.2/monitor-alert/notification-endpoints/create),
[Create notification rules](/influxdb/v2.2/monitor-alert/notification-rules/create),
and [Send alert email](/influxdb/v2.2/monitor-alert/send-email/).
{{% /note %}}

---
title: Manage notification endpoints
list_title: Manage notification endpoints
description: >
Create, read, update, and delete endpoints in the InfluxDB UI.
influxdb/v2.2/tags: [monitor, endpoints, notifications, alert]
menu:
influxdb_2_2:
parent: Monitor & alert
weight: 102
related:
- /influxdb/v2.2/monitor-alert/checks/
- /influxdb/v2.2/monitor-alert/notification-rules/
---
Notification endpoints store information to connect to a third-party service.
Create a connection to an HTTP, Slack, or PagerDuty endpoint.
{{< children >}}

---
title: Create notification endpoints
description: >
Create notification endpoints to send alerts on your time series data.
menu:
influxdb_2_2:
name: Create endpoints
parent: Manage notification endpoints
weight: 201
related:
- /influxdb/v2.2/monitor-alert/checks/
- /influxdb/v2.2/monitor-alert/notification-rules/
---
To send notifications about changes in your data, start by creating a notification endpoint to a third-party service. After creating notification endpoints, [create notification rules](/influxdb/v2.2/monitor-alert/notification-rules/create) to send alerts to third-party services on [check statuses](/influxdb/v2.2/monitor-alert/checks/create).
{{% cloud-only %}}
#### Endpoints available in InfluxDB Cloud
The following endpoints are available for the InfluxDB Cloud Free Plan and Usage-based Plan:
| Endpoint | Free Plan | Usage-based Plan |
|:-------- |:-------------------: |:----------------------------:|
| **Slack** | **{{< icon "check" >}}** | **{{< icon "check" >}}** |
| **PagerDuty** | | **{{< icon "check" >}}** |
| **HTTP** | | **{{< icon "check" >}}** |
{{% /cloud-only %}}
## Create a notification endpoint
1. In the navigation menu on the left, select **Alerts > Alerts**.
{{< nav-icon "alerts" >}}
2. Select **{{< caps >}}Notification Endpoints{{< /caps >}}**.
3. Click **{{< caps >}}{{< icon "plus" >}} Create{{< /caps >}}**.
4. From the **Destination** drop-down list, select a destination endpoint to send notifications to.
{{% cloud-only %}}_See [available endpoints](#endpoints-available-in-influxdb-cloud)._{{% /cloud-only %}}
5. In the **Name** and **Description** fields, enter a name and description for the endpoint.
6. Enter information to connect to the endpoint:
- **For HTTP**, enter the **URL** to send the notification.
Select the **auth method** to use: **None** for no authentication.
To authenticate with a username and password, select **Basic** and then
enter credentials in the **Username** and **Password** fields.
To authenticate with an API token, select **Bearer**, and then enter the
API token in the **Token** field.
- **For Slack**, create an [Incoming WebHook](https://api.slack.com/incoming-webhooks#posting_with_webhooks)
in Slack, and then enter your WebHook URL in the **Slack Incoming WebHook URL** field.
- **For PagerDuty**:
- [Create a new service](https://support.pagerduty.com/docs/services-and-integrations#section-create-a-new-service),
[add an integration for your service](https://support.pagerduty.com/docs/services-and-integrations#section-add-integrations-to-an-existing-service),
and then enter the PagerDuty integration key for your new service in the **Routing Key** field.
- The **Client URL** provides a useful link in your PagerDuty notification.
Enter any URL that you'd like to use to investigate issues.
This URL is sent as the `client_url` property in the PagerDuty trigger event.
By default, the **Client URL** is set to your Monitoring & Alerting History
page, and the following is included in the PagerDuty trigger event:
```json
"client_url": "http://localhost:8086/orgs/<your-org-ID>/alert-history"
```
7. Click **{{< caps >}}Create Notification Endpoint{{< /caps >}}**.
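If you want to verify a Slack webhook URL before attaching it to an endpoint, you can post a test message from Flux with the `slack` package. A minimal sketch, using a placeholder webhook URL:

```js
import "slack"

// Placeholder webhook URL: replace with the Incoming WebHook URL
// created in Slack (see the Slack steps above).
slack.message(
    url: "https://hooks.slack.com/services/EXAMPLE/WEBHOOK/URL",
    channel: "",
    text: "Test message from InfluxDB.",
    color: "good",
)
```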

---
title: Delete notification endpoints
description: >
Delete a notification endpoint in the InfluxDB UI.
menu:
influxdb_2_2:
name: Delete endpoints
parent: Manage notification endpoints
weight: 204
related:
- /influxdb/v2.2/monitor-alert/checks/
- /influxdb/v2.2/monitor-alert/notification-rules/
---
If notifications are no longer sent to an endpoint, complete the steps below to
delete the endpoint, and then [update notification rules](/influxdb/v2.2/monitor-alert/notification-rules/update)
with a new notification endpoint as needed.
## Delete a notification endpoint
1. In the navigation menu on the left, select **Alerts > Alerts**.
{{< nav-icon "alerts" >}}
2. Select **{{< caps >}}Notification Endpoints{{< /caps >}}** and find the endpoint
you want to delete.
3. Click the **{{< icon "trash" >}}** icon on the endpoint you want to delete,
and then click **{{< caps >}}Confirm{{< /caps >}}**.

---
title: Update notification endpoints
description: >
Update notification endpoints in the InfluxDB UI.
menu:
influxdb_2_2:
name: Update endpoints
parent: Manage notification endpoints
weight: 203
related:
- /influxdb/v2.2/monitor-alert/checks/
- /influxdb/v2.2/monitor-alert/notification-rules/
---
Complete the following steps to update notification endpoint details.
To update the notification endpoint selected for a notification rule, see [update notification rules](/influxdb/v2.2/monitor-alert/notification-rules/update/).
**To update a notification endpoint**
1. In the navigation menu on the left, select **Alerts > Alerts**.
{{< nav-icon "alerts" >}}
2. Select **{{< caps >}}Notification Endpoints{{< /caps >}}** and then do the following as needed:
- [Update the name or description for notification endpoint](#update-the-name-or-description-for-notification-endpoint)
- [Change endpoint details](#change-endpoint-details)
- [Disable notification endpoint](#disable-notification-endpoint)
- [Add a label to notification endpoint](#add-a-label-to-notification-endpoint)
## Update the name or description for notification endpoint
1. Hover over the name or description of the endpoint and click the pencil icon
(**{{< icon "edit" >}}**) to edit the field.
2. Click outside of the field to save your changes.
## Change endpoint details
1. Click the name of the endpoint to update.
2. Update details as needed, and then click **Edit Notification Endpoint**.
For details about each field, see [Create notification endpoints](/influxdb/v2.2/monitor-alert/notification-endpoints/create/).
## Disable notification endpoint
Click the {{< icon "toggle" >}} toggle to disable the notification endpoint.
## Add a label to notification endpoint
1. Click **{{< icon "add-label" >}} Add a label** next to the endpoint you want to add a label to.
The **Add Labels** box opens.
2. To add an existing label, select the label from the list.
3. To create and add a new label:
- In the search field, enter the name of the new label. The **Create Label** box opens.
- In the **Description** field, enter an optional description for the label.
- Select a color for the label.
- Click **{{< caps >}}Create Label{{< /caps >}}**.
4. To remove a label, click **{{< icon "x" >}}** on the label.

---
title: View notification endpoint history
seotitle: View notification endpoint details and history
description: >
View notification endpoint details and history in the InfluxDB UI.
menu:
influxdb_2_2:
name: View endpoint history
parent: Manage notification endpoints
weight: 202
related:
- /influxdb/v2.2/monitor-alert/checks/
- /influxdb/v2.2/monitor-alert/notification-rules/
---
View notification endpoint details and history in the InfluxDB user interface (UI).
1. In the navigation menu on the left, select **Alerts**.
{{< nav-icon "alerts" >}}
2. Select **{{< caps >}}Notification Endpoints{{< /caps >}}**.
- [View notification endpoint details](#view-notification-endpoint-details)
- [View notification endpoint history](#view-notification-endpoint-history), including statuses and notifications sent to the endpoint
## View notification endpoint details
On the notification endpoints page:
1. Click the name of the notification endpoint you want to view.
2. View the notification endpoint destination, name, and information to connect to the endpoint.
## View notification endpoint history
On the notification endpoints page, click the **{{< icon "gear" >}}** icon,
and then click **View History**.
The Check Statuses History page displays:
- Statuses generated for the selected notification endpoint
- Notifications sent to the selected notification endpoint

---
title: Manage notification rules
description: >
Manage notification rules in InfluxDB.
weight: 103
influxdb/v2.2/tags: [monitor, notifications, alert]
menu:
influxdb_2_2:
parent: Monitor & alert
related:
- /influxdb/v2.2/monitor-alert/checks/
- /influxdb/v2.2/monitor-alert/notification-endpoints/
---
The following articles provide information on managing your notification rules:
{{< children >}}

---
title: Create notification rules
description: >
Create notification rules to send alerts on your time series data.
weight: 201
menu:
influxdb_2_2:
parent: Manage notification rules
related:
- /influxdb/v2.2/monitor-alert/checks/
- /influxdb/v2.2/monitor-alert/notification-endpoints/
---
Once you've set up checks and notification endpoints, create notification rules to alert you.
_For details, see [Manage checks](/influxdb/v2.2/monitor-alert/checks/) and
[Manage notification endpoints](/influxdb/v2.2/monitor-alert/notification-endpoints/)._
1. In the navigation menu on the left, select **Alerts > Alerts**.
{{< nav-icon "alerts" >}}
2. Select **{{< caps >}}Notification Rules{{< /caps >}}** near the top of the page.
- [Create a new notification rule](#create-a-new-notification-rule)
- [Clone an existing notification rule](#clone-an-existing-notification-rule)
## Create a new notification rule
1. On the notification rules page, click **{{< caps >}}{{< icon "plus" >}} Create{{< /caps >}}**.
2. Complete the **About** section:
1. In the **Name** field, enter a name for the notification rule.
2. In the **Schedule Every** field, enter how frequently the rule should run.
3. In the **Offset** field, enter an offset time. For example, if a task runs on the hour, a 10m offset delays the task to 10 minutes after the hour. Time ranges defined in the task are relative to the specified execution time.
3. In the **Conditions** section, build a condition using a combination of status and tag keys.
- Next to **When status is equal to**, select a status from the drop-down field.
- Next to **AND When**, enter one or more tag key-value pairs to filter by.
4. In the **Message** section, select an endpoint to notify.
5. Click **{{< caps >}}Create Notification Rule{{< /caps >}}**.
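The schedule and offset you set in the **About** section become task options on the rule's underlying task. A minimal sketch of what those options look like in Flux (the name and intervals below are placeholder values):

```js
// Placeholder values: a rule named "Crit alert rule" that runs every
// 10 minutes, delayed 2 minutes after each scheduled interval.
option task = {name: "Crit alert rule", every: 10m, offset: 2m}
```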
## Clone an existing notification rule
On the notification rules page, click the **{{< icon "gear" >}}** icon and select **Clone**.
The cloned rule appears.

---
title: Delete notification rules
description: >
If you no longer need to receive an alert, delete the associated notification rule.
weight: 204
menu:
influxdb_2_2:
parent: Manage notification rules
related:
- /influxdb/v2.2/monitor-alert/checks/
- /influxdb/v2.2/monitor-alert/notification-endpoints/
---
If you no longer need to receive an alert, delete the associated notification rule.
## Delete a notification rule
1. In the navigation menu on the left, select **Alerts > Alerts**.
{{< nav-icon "alerts" >}}
2. Select **{{< caps >}}Notification Rules{{< /caps >}}** near the top of the page.
3. Click the **{{< icon "trash" >}}** icon on the notification rule you want to delete.
4. Click **{{< caps >}}Confirm{{< /caps >}}**.

---
title: Update notification rules
description: >
Update notification rules to update the notification message or change the schedule or conditions.
weight: 203
menu:
influxdb_2_2:
parent: Manage notification rules
related:
- /influxdb/v2.2/monitor-alert/checks/
- /influxdb/v2.2/monitor-alert/notification-endpoints/
---
Update notification rules to update the notification message or change the schedule or conditions.
1. In the navigation menu on the left, select **Alerts > Alerts**.
{{< nav-icon "alerts" >}}
2. Select **{{< caps >}}Notification Rules{{< /caps >}}** near the top of the page.
- [Update the name or description for notification rules](#update-the-name-or-description-for-notification-rules)
- [Enable or disable notification rules](#enable-or-disable-notification-rules)
- [Add a label to notification rules](#add-a-label-to-notification-rules)
## Update the name or description for notification rules
On the Notification Rules page:
1. Hover over the name or description of a rule and click the pencil icon
(**{{< icon "edit" >}}**) to edit the field.
2. Click outside of the field to save your changes.
## Enable or disable notification rules
On the notification rules page, click the {{< icon "toggle" >}} toggle to
enable or disable the notification rule.
## Add a label to notification rules
On the notification rules page:
1. Click **{{< icon "add-label" >}} Add a label**
next to the rule you want to add a label to.
The **Add Labels** box opens.
2. To add an existing label, select the label from the list.
3. To create and add a new label:
- In the search field, enter the name of the new label. The **Create Label** box opens.
- In the **Description** field, enter an optional description for the label.
- Select a color for the label.
- Click **{{< caps >}}Create Label{{< /caps >}}**.
4. To remove a label, click **{{< icon "x" >}}** on the label.

---
title: View notification rules
description: >
View notification rule details and statuses and notifications generated by notification rules.
weight: 202
menu:
influxdb_2_2:
parent: Manage notification rules
related:
- /influxdb/v2.2/monitor-alert/checks/
- /influxdb/v2.2/monitor-alert/notification-endpoints/
---
View notification rule details, as well as statuses and notifications generated by notification rules, in the InfluxDB user interface (UI).
- [View a list of all notification rules](#view-a-list-of-all-notification-rules)
- [View notification rule details](#view-notification-rule-details)
- [View statuses generated by a notification rule](#view-statuses-generated-by-a-notification-rule)
- [View notifications triggered by a notification rule](#view-notifications-triggered-by-a-notification-rule)
**To view notification rules:**
1. In the navigation menu on the left, select **Alerts**.
{{< nav-icon "alerts" >}}
2. Select **{{< caps >}}Notification Rules{{< /caps >}}** near the top of the page.
## View a list of all notification rules
The **{{< caps >}}Notification Rules{{< /caps >}}** section of the Alerts landing page displays all existing notification rules.
## View notification rule details
Click the name of the notification rule you want to view.
The rule builder appears.
Here you can view the rule's conditions and message settings.
## View statuses generated by a notification rule
Click the **{{< icon "gear" >}}** icon on the notification rule, and then **View History**.
The Statuses History page displays statuses generated by the selected notification rule.
## View notifications triggered by a notification rule
1. Click the **{{< icon "gear" >}}** icon on the notification rule, and then **View History**.
2. In the top left corner, click **{{< caps >}}Notifications{{< /caps >}}**.
The Notifications History page displays notifications initiated by the selected notification rule.

---
title: Send alert email
description: >
Send an alert email.
menu:
influxdb_2_2:
parent: Monitor & alert
weight: 104
influxdb/v2.2/tags: [alert, email, notifications, check]
related:
- /influxdb/v2.2/monitor-alert/checks/
---
Send an alert email using a third-party service, such as [SendGrid](https://sendgrid.com/), [Amazon Simple Email Service (SES)](https://aws.amazon.com/ses/), [Mailjet](https://www.mailjet.com/), or [Mailgun](https://www.mailgun.com/). To send an alert email, complete the following steps:
1. [Create a check](/influxdb/v2.2/monitor-alert/checks/create/#create-a-check-in-the-influxdb-ui) to identify the data to monitor and the status to alert on.
2. Set up your preferred email service (sign up, retrieve API credentials, and send test email):
- **SendGrid**: See [Getting Started With the SendGrid API](https://sendgrid.com/docs/API_Reference/api_getting_started.html)
- **AWS Simple Email Service (SES)**: See [Using the Amazon SES API](https://docs.aws.amazon.com/ses/latest/DeveloperGuide/send-email.html). Your AWS SES request, including the `url` (endpoint), authentication, and the structure of the request may vary. For more information, see [Amazon SES API requests](https://docs.aws.amazon.com/ses/latest/DeveloperGuide/using-ses-api-requests.html) and [Authenticating requests to the Amazon SES API](https://docs.aws.amazon.com/ses/latest/DeveloperGuide/using-ses-api-authentication.html).
- **Mailjet**: See [Getting Started with Mailjet](https://dev.mailjet.com/email/guides/getting-started/)
- **Mailgun**: See [Mailgun Signup](https://signup.mailgun.com/new/signup)
3. [Create an alert email task](#create-an-alert-email-task) to call your email service and send an alert email.
{{% note %}}
In the procedure below, we use the **Task** page in the InfluxDB UI (user interface) to create a task. Explore other ways to [create a task](/influxdb/v2.2/process-data/manage-tasks/create-task/).
{{% /note %}}
### Create an alert email task
1. In the InfluxDB UI, select **Tasks** in the navigation menu on the left.
{{< nav-icon "tasks" >}}
2. Click **{{< caps >}}{{< icon "plus" >}} Create Task{{< /caps >}}**.
3. In the **Name** field, enter a descriptive name, for example, **Send alert email**,
and then enter how often to run the task in the **Every** field, for example, `10m`.
For more detail, such as using cron syntax or including an offset, see [Task configuration options](/influxdb/v2.2/process-data/task-options/).
4. In the right panel, enter the following detail in your **task script** (see [examples below](#examples)):
- Import the [Flux HTTP package](/{{< latest "flux" >}}/stdlib/http/).
- (Optional) Store your API key as a secret for reuse.
First, [add your API key as a secret](/influxdb/v2.2/security/secrets/manage-secrets/add/),
and then import the [Flux InfluxDB Secrets package](/{{< latest "flux" >}}/stdlib/influxdata/influxdb/secrets/).
- Query the `statuses` measurement in the `_monitoring` bucket to retrieve all statuses generated by your check.
- Set the time range to monitor; use the same interval that the task is scheduled to run. For example, `range (start: -task.every)`.
- Set the `_level` to alert on, for example, `crit`, `warn`, `info`, or `ok`.
- Use the `map()` function to evaluate the criteria to send an alert using `http.post()`.
- Specify your email service `url` (endpoint), include applicable request `headers`, and verify your request `data` format follows the format specified for your email service.
#### Examples
{{< tabs-wrapper >}}
{{% tabs %}}
[SendGrid](#)
[AWS SES](#)
[Mailjet](#)
[Mailgun](#)
{{% /tabs %}}
<!-------------------------------- BEGIN SendGrid -------------------------------->
{{% tab-content %}}
The example below uses the SendGrid API to send an alert email when more than 3 critical statuses occur since the previous task run.
```js
import "http"
import "json"
// Import the Secrets package if you store your API key as a secret.
// For detail on how to do this, see Step 4 above.
import "influxdata/influxdb/secrets"
// Retrieve the secret if applicable. Otherwise, skip this line
// and add the API key as the Bearer token in the Authorization header.
SENDGRID_APIKEY = secrets.get(key: "SENDGRID_APIKEY")
numberOfCrits = from(bucket: "_monitoring")
|> range(start: -task.every)
|> filter(fn: (r) => r._measurement == "statuses" and r._level == "crit")
|> count()
numberOfCrits
|> map(
fn: (r) => if r._value > 3 then
{r with _value: http.post(
url: "https://api.sendgrid.com/v3/mail/send",
headers: {"Content-Type": "application/json", "Authorization": "Bearer ${SENDGRID_APIKEY}"},
data: json.encode(
v: {
"personalizations": [
{
"to": [
{
"email": "jane.doe@example.com"
}
]
}
],
"from": {
"email": "john.doe@example.com"
},
"subject": "InfluxDB critical alert",
"content": [
{
"type": "text/plain",
"value": "There have been ${r._value} critical statuses."
}
]
}
)
)}
else
{r with _value: 0},
)
```
{{% /tab-content %}}
<!-------------------------------- BEGIN AWS SES -------------------------------->
{{% tab-content %}}
The example below uses the AWS SES API v2 to send an alert email when more than 3 critical statuses occur since the last task run.
{{% note %}}
Your AWS SES request, including the `url` (endpoint), authentication, and the structure of the request may vary. For more information, see [Amazon SES API requests](https://docs.aws.amazon.com/ses/latest/DeveloperGuide/using-ses-api-requests.html) and [Authenticating requests to the Amazon SES API](https://docs.aws.amazon.com/ses/latest/DeveloperGuide/using-ses-api-authentication.html). We recommend signing your AWS API requests using the [Signature Version 4 signing process](https://docs.aws.amazon.com/general/latest/gr/signing_aws_api_requests.html).
{{% /note %}}
```js
import "http"
import "json"
// Import the Secrets package if you store your API credentials as secrets.
// For detail on how to do this, see Step 4 above.
import "influxdata/influxdb/secrets"
// Retrieve the secrets if applicable. Otherwise, skip this line
// and add the API key as the Bearer token in the Authorization header.
AWS_AUTH_ALGORITHM = secrets.get(key: "AWS_AUTH_ALGORITHM")
AWS_CREDENTIAL = secrets.get(key: "AWS_CREDENTIAL")
AWS_SIGNED_HEADERS = secrets.get(key: "AWS_SIGNED_HEADERS")
AWS_CALCULATED_SIGNATURE = secrets.get(key: "AWS_CALCULATED_SIGNATURE")
numberOfCrits = from(bucket: "_monitoring")
|> range(start: -task.every)
|> filter(fn: (r) => r._measurement == "statuses" and r._level == "crit")
|> count()
numberOfCrits
|> map(
fn: (r) => if r._value > 3 then
{r with _value: http.post(
url: "https://email.your-aws-region.amazonaws.com/sendemail/v2/email/outbound-emails",
headers: {
"Content-Type": "application/json",
"Authorization": "Bearer ${AWS_AUTH_ALGORITHM}${AWS_CREDENTIAL}${AWS_SIGNED_HEADERS}${AWS_CALCULATED_SIGNATURE}"},
data: json.encode(v: {
"Content": {
"Simple": {
"Body": {
"Text": {
"Charset": "UTF-8",
"Data": "There have been ${r._value} critical statuses."
}
},
"Subject": {
"Charset": "UTF-8",
"Data": "InfluxDB critical alert"
}
}
},
"Destination": {
"ToAddresses": [
"john.doe@example.com"
]
}
}
)
)}
else
{r with _value: 0},
)
```
For details on the request syntax, see [SendEmail API v2 reference](https://docs.aws.amazon.com/ses/latest/APIReference-V2/API_SendEmail.html).
{{% /tab-content %}}
<!-------------------------------- BEGIN Mailjet ------------------------------->
{{% tab-content %}}
The example below uses the Mailjet Send API to send an alert email when more than 3 critical statuses occur since the last task run.
{{% note %}}
To view your Mailjet API credentials, sign in to Mailjet and open the [API Key Management page](https://app.mailjet.com/account/api_keys).
{{% /note %}}
```js
import "http"
import "json"
// Import the Secrets package if you store your API keys as secrets.
// For detail on how to do this, see Step 4 above.
import "influxdata/influxdb/secrets"
// Retrieve the secrets if applicable. Otherwise, skip this line
// and add the API keys as Basic credentials in the Authorization header.
MAILJET_APIKEY = secrets.get(key: "MAILJET_APIKEY")
MAILJET_SECRET_APIKEY = secrets.get(key: "MAILJET_SECRET_APIKEY")
numberOfCrits = from(bucket: "_monitoring")
|> range(start: -task.every)
|> filter(fn: (r) => r._measurement == "statuses" and r._level == "crit")
|> count()
numberOfCrits
|> map(
fn: (r) => if r._value > 3 then
{r with
_value: http.post(
url: "https://api.mailjet.com/v3.1/send",
headers: {
"Content-type": "application/json",
"Authorization": "Basic ${MAILJET_APIKEY}:${MAILJET_SECRET_APIKEY}"
},
data: json.encode(
v: {
"Messages": [
{
"From": {"Email": "jane.doe@example.com"},
"To": [{"Email": "john.doe@example.com"}],
"Subject": "InfluxDB critical alert",
"TextPart": "There have been ${r._value} critical statuses.",
"HTMLPart": "<h3>${r._value} critical statuses</h3><p>There have been ${r._value} critical statuses.",
},
],
},
),
),
}
else
{r with _value: 0},
)
```
{{% /tab-content %}}
<!-------------------------------- BEGIN Mailgun ---------------------------->
{{% tab-content %}}
The example below uses the Mailgun API to send an alert email when more than 3 critical statuses occur since the last task run.
{{% note %}}
To view your Mailgun API keys, sign in to Mailgun and open [Account Security - API security](https://app.mailgun.com/app/account/security/api_keys). Mailgun requires that you specify a domain. A domain is automatically created for you when you first set up your account, and you must include this domain in your `url` endpoint (for example, `https://api.mailgun.net/v3/YOUR_DOMAIN` or `https://api.eu.mailgun.net/v3/YOUR_DOMAIN`). If you're using a free version of Mailgun, you can set up a maximum of five authorized recipients (to receive email alerts) for your domain. To view your Mailgun domains, sign in to Mailgun and view the [Domains page](https://app.mailgun.com/app/sending/domains).
{{% /note %}}
```js
import "http"
import "json"
// Import the Secrets package if you store your API key as a secret.
// For detail on how to do this, see Step 4 above.
import "influxdata/influxdb/secrets"
// Retrieve the secret if applicable. Otherwise, skip this line
// and add the API key as the Bearer token in the Authorization header.
MAILGUN_APIKEY = secrets.get(key: "MAILGUN_APIKEY")
numberOfCrits = from(bucket: "_monitoring")
|> range(start: -task.every)
|> filter(fn: (r) => r["_measurement"] == "statuses")
|> filter(fn: (r) => r["_level"] == "crit")
|> count()
numberOfCrits
|> map(
fn: (r) => if r._value > 3 then
{r with _value: http.post(
url: "https://api.mailgun.net/v3/YOUR_DOMAIN/messages",
headers: {
"Content-type": "application/json",
"Authorization": "Basic api:${MAILGUN_APIKEY}"
},
data: json.encode(v: {
"from": "Username <mailgun@YOUR_DOMAIN_NAME>",
"to": "email@example.com",
"subject": "InfluxDB critical alert",
"text": "There have been ${r._value} critical statuses."
}
)
)}
else
{r with _value: 0},
)
```
{{% /tab-content %}}
{{< /tabs-wrapper >}}
