InfluxDB OSS 2.4 (#4361)

* created OSS 2.4

* port nav-item shortcode fixes to 2.4

* update set() example in influxdb faq

* port schema exploration updates to 2.4

* replace indexDB with InfluxDB

* OSS and Enterprise FAQ update (#4310)

* add info about relative time ranges and task retries, closes influxdata/EAR#3415

* add delete faqs, closes influxdata/EAR#3412

* restructure enterprise faq, add entropy faq, closes influxdata/EAR#3364

* add faq about total query time, closes influxdata/EAR#3509

* port MQTT changes into influxdb 2.4

* Influx CLI 2.4 (#4340)

* feat: added CLI scripts documentation (#4223)

* feat: added CLI scripts documentation

* chore: address feedback

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* chore: more feedback

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* moved scripts cli docs into 2.4

* Add users to an org as an owner and other fixes (#4258)

* document adding users to an org as an owner and other fixes, closes #4171

* update content around owners

* InfluxQL shell documentation (#4263)

* added influx v1 shell command docs, closes #4173

* add influxql shell process docs, closes #4172

* add links to influxql shell process docs

* Apply suggestions from code review

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

* feat: documented invokable scripts within tasks functionality in the CLI (#4295)

* feat: documented invokable scripts within tasks functionality in the CLI

* chore: added updated_in frontmatter

* add information about cli configs with username/password (#4328)

Co-authored-by: Andrew Depke <andrewdepke@gmail.com>
Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

* influx CLI 2.4 release notes (#4267)

* feat: added CLI scripts documentation (#4223)

* feat: added CLI scripts documentation

* chore: address feedback

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* chore: more feedback

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* moved scripts cli docs into 2.4

* influx CLI 2.4 release notes

* Update content/influxdb/v2.4/reference/release-notes/influx-cli.md

* update release notes with cli username/password support

Co-authored-by: Andrew Depke <andrewdepke@gmail.com>

* 2.4 Virtual DBRPs (#4342)

* WIP auto-add dbrp

* add dbrp management docs

* Update replication docs for InfluxDB 2.4 (#4343)

* update replication docs

* Apply suggestions from code review

Co-authored-by: Jason Stirnaman <jstirnaman@influxdata.com>

* Apply suggestions from code review

Co-authored-by: Jason Stirnaman <jstirnaman@influxdata.com>

Co-authored-by: Jason Stirnaman <jstirnaman@influxdata.com>

* port postman install update to 2.4

* port influx stacks init fix to all influxdb versions

* API docs for 2.4 (#4357)

* api docs for 2.4

* updated to latest swaggerV1Compat.yml

* update edge.js

* Oss 2.4 release notes (#4352)

* added initial notes

* added updates

* updated the release notes

* made a few updates, changed TBD

* Update content/influxdb/v2.4/reference/release-notes/influxdb.md

Co-authored-by: Sam Dillard <sam@influxdata.com>

* Update content/influxdb/v2.4/reference/release-notes/influxdb.md

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

* Update content/influxdb/v2.4/reference/release-notes/influxdb.md

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

* Update content/influxdb/v2.4/reference/release-notes/influxdb.md

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

* Update content/influxdb/v2.4/reference/release-notes/influxdb.md

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

* Update content/influxdb/v2.4/reference/release-notes/influxdb.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/v2.4/reference/release-notes/influxdb.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/v2.4/reference/release-notes/influxdb.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/v2.4/reference/release-notes/influxdb.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/v2.4/reference/release-notes/influxdb.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/v2.4/reference/release-notes/influxdb.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/v2.4/reference/release-notes/influxdb.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/v2.4/reference/release-notes/influxdb.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/v2.4/reference/release-notes/influxdb.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/v2.4/reference/release-notes/influxdb.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/v2.4/reference/release-notes/influxdb.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/v2.4/reference/release-notes/influxdb.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/v2.4/reference/release-notes/influxdb.md

Co-authored-by: Sam Dillard <sam@influxdata.com>
Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>
Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* added date to influx-cli 2.4.0 release

Co-authored-by: lwandzura <51929958+lwandzura@users.noreply.github.com>
Co-authored-by: Andrew Depke <andrewdepke@gmail.com>
Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>
Co-authored-by: Jason Stirnaman <jstirnaman@influxdata.com>
Co-authored-by: Sam Dillard <sam@influxdata.com>
pull/4364/head
Scott Anderson 2022-08-19 09:27:54 -06:00 committed by GitHub
parent 00d767a683
commit 20a409294f
485 changed files with 68485 additions and 190 deletions

api-docs/v2.4/ref.yml (new file, 18732 additions)

File diff suppressed because it is too large.


@ -0,0 +1,420 @@
# this is a manually maintained file for these old routes until oats#15 is resolved
openapi: "3.0.0"
info:
title: InfluxDB API Service (V1 compatible endpoints)
version: 0.1.0
description: |
The InfluxDB 1.x compatibility `/write` and `/query` endpoints work with
InfluxDB 1.x client libraries and third-party integrations like Grafana
and others.
If you want to use the latest InfluxDB `/api/v2` API instead,
see the [InfluxDB v2 API documentation](https://docs.influxdata.com/influxdb/cloud/api/).
servers:
- url: /
description: V1-compatible API endpoints.
paths:
/write:
post:
operationId: PostWriteV1
tags:
- Write
summary: Write time series data into InfluxDB in a V1-compatible format
requestBody:
description: Line protocol body
required: true
content:
text/plain:
schema:
type: string
parameters:
- $ref: "#/components/parameters/TraceSpan"
- $ref: "#/components/parameters/AuthUserV1"
- $ref: "#/components/parameters/AuthPassV1"
- in: query
name: db
schema:
type: string
required: true
description: Bucket to write to. If none exists, InfluxDB creates a bucket with a default 3-day retention policy.
- in: query
name: rp
schema:
type: string
description: Retention policy name.
- in: query
name: precision
schema:
type: string
description: Write precision.
- in: header
name: Content-Encoding
description: When present, its value indicates to the database that compression is applied to the line protocol body.
schema:
type: string
description: Specifies that the line protocol in the body is encoded with gzip or not encoded with identity.
default: identity
enum:
- gzip
- identity
responses:
"204":
description: Write data is correctly formatted and accepted for writing to the bucket.
"400":
description: Line protocol was poorly formed and no points were written. The response can be used to determine the first malformed line in the line protocol body. All data in the body was rejected and not written.
content:
application/json:
schema:
$ref: "#/components/schemas/LineProtocolError"
"401":
description: Token does not have sufficient permissions to write to this organization and bucket or the organization and bucket do not exist.
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
"403":
description: No token was sent, but one is required.
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
"413":
description: Write has been rejected because the payload is too large. Error message returns max size supported. All data in body was rejected and not written.
content:
application/json:
schema:
$ref: "#/components/schemas/LineProtocolLengthError"
"429":
description: Token is temporarily over quota. The Retry-After header describes when to try the write again.
headers:
Retry-After:
description: A non-negative decimal integer indicating the seconds to delay after the response is received.
schema:
type: integer
format: int32
"503":
description: Server is temporarily unavailable to accept writes. The Retry-After header describes when to try the write again.
headers:
Retry-After:
description: A non-negative decimal integer indicating the seconds to delay after the response is received.
schema:
type: integer
format: int32
default:
description: Internal server error
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
/query:
post: # technically this functions with other methods as well
operationId: PostQueryV1
tags:
- Query
summary: Query InfluxDB in a V1 compatible format
requestBody:
description: InfluxQL query to execute.
content:
text/plain: # although this should be `application/vnd.influxql`, oats breaks so we define the content-type header parameter
schema:
type: string
parameters:
- $ref: "#/components/parameters/TraceSpan"
- $ref: "#/components/parameters/AuthUserV1"
- $ref: "#/components/parameters/AuthPassV1"
- in: header
name: Accept
schema:
type: string
description: Specifies how query results should be encoded in the response. **Note:** With `application/csv`, query results include epoch timestamps instead of RFC3339 timestamps.
default: application/json
enum:
- application/json
- application/csv
- text/csv
- application/x-msgpack
- in: header
name: Accept-Encoding
description: The Accept-Encoding request HTTP header advertises which content encoding, usually a compression algorithm, the client is able to understand.
schema:
type: string
description: Specifies that the query response in the body should be encoded with gzip or not encoded with identity.
default: identity
enum:
- gzip
- identity
- in: header
name: Content-Type
schema:
type: string
enum:
- application/vnd.influxql
- in: query
name: db
schema:
type: string
required: true
description: Bucket to query.
- in: query
name: rp
schema:
type: string
description: Retention policy name.
- in: query
name: q
description: Defines the InfluxQL query to run.
schema:
type: string
responses:
"200":
description: Query results
headers:
Content-Encoding:
description: The Content-Encoding entity header is used to compress the media-type. When present, its value indicates which encodings were applied to the entity-body
schema:
type: string
description: Specifies that the response in the body is encoded with gzip or not encoded with identity.
default: identity
enum:
- gzip
- identity
Trace-Id:
description: The Trace-Id header reports the request's trace ID, if one was generated.
schema:
type: string
description: Specifies the request's trace ID.
content:
application/csv:
schema:
$ref: "#/components/schemas/InfluxQLCSVResponse"
text/csv:
schema:
$ref: "#/components/schemas/InfluxQLCSVResponse"
application/json:
schema:
$ref: "#/components/schemas/InfluxQLResponse"
application/x-msgpack:
schema:
type: string
format: binary
"429":
description: Token is temporarily over quota. The Retry-After header describes when to try the read again.
headers:
Retry-After:
description: A non-negative decimal integer indicating the seconds to delay after the response is received.
schema:
type: integer
format: int32
default:
description: Error processing query
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
components:
parameters:
TraceSpan:
in: header
name: Zap-Trace-Span
description: OpenTracing span context
example:
trace_id: "1"
span_id: "1"
baggage:
key: value
required: false
schema:
type: string
AuthUserV1:
in: query
name: u
required: false
schema:
type: string
description: Username.
AuthPassV1:
in: query
name: p
required: false
schema:
type: string
description: User token.
schemas:
InfluxQLResponse:
properties:
results:
type: array
oneOf:
- required: [statement_id, error]
- required: [statement_id, series]
items:
type: object
properties:
statement_id:
type: integer
error:
type: string
series:
type: array
items:
type: object
properties:
name:
type: string
tags:
type: object
additionalProperties:
type: string
partial:
type: boolean
columns:
type: array
items:
type: string
values:
type: array
items:
type: array
items: {}
InfluxQLCSVResponse:
type: string
example: >
name,tags,time,test_field,test_tag
test_measurement,,1603740794286107366,1,tag_value
test_measurement,,1603740870053205649,2,tag_value
test_measurement,,1603741221085428881,3,tag_value
Error:
properties:
code:
description: Code is the machine-readable error code.
readOnly: true
type: string
# This set of enumerations must remain in sync with the constants defined in errors.go
enum:
- internal error
- not found
- conflict
- invalid
- unprocessable entity
- empty value
- unavailable
- forbidden
- too many requests
- unauthorized
- method not allowed
message:
readOnly: true
description: Message is a human-readable message.
type: string
required: [code, message]
LineProtocolError:
properties:
code:
description: Code is the machine-readable error code.
readOnly: true
type: string
enum:
- internal error
- not found
- conflict
- invalid
- empty value
- unavailable
message:
readOnly: true
description: Message is a human-readable message.
type: string
op:
readOnly: true
description: Op describes the logical code operation during error. Useful for debugging.
type: string
err:
readOnly: true
description: Err is a stack of errors that occurred during processing of the request. Useful for debugging.
type: string
line:
readOnly: true
description: First line within sent body containing malformed data
type: integer
format: int32
required: [code, message, op, err]
LineProtocolLengthError:
properties:
code:
description: Code is the machine-readable error code.
readOnly: true
type: string
enum:
- invalid
message:
readOnly: true
description: Message is a human-readable message.
type: string
maxLength:
readOnly: true
description: Max length in bytes for a body of line-protocol.
type: integer
format: int32
required: [code, message, maxLength]
securitySchemes:
TokenAuthentication:
type: apiKey
name: Authorization
in: header
description: |
Use the [Token authentication](#section/Authentication/TokenAuthentication)
scheme to authenticate to the InfluxDB API.
In your API requests, send an `Authorization` header.
For the header value, provide the word `Token` followed by a space and an InfluxDB API token.
The word `Token` is case-sensitive.
### Syntax
`Authorization: Token YOUR_INFLUX_TOKEN`
For examples and more information, see the following:
- [`/authorizations`](#tag/Authorizations) endpoint.
- [Authorize API requests](https://docs.influxdata.com/influxdb/cloud/api-guide/api_intro/#authentication).
- [Manage API tokens](https://docs.influxdata.com/influxdb/cloud/security/tokens/).
BasicAuthentication:
type: http
scheme: basic
description: |
Use the HTTP [Basic authentication](#section/Authentication/BasicAuthentication)
scheme with clients that support the InfluxDB 1.x convention of username and password (that don't support the `Authorization: Token` scheme):
For examples and more information, see how to [authenticate with a username and password](https://docs.influxdata.com/influxdb/cloud/reference/api/influxdb-1x/).
QuerystringAuthentication:
type: apiKey
in: query
name: u=&p=
description: |
Use the [Querystring authentication](#section/Authentication/QuerystringAuthentication)
scheme with InfluxDB 1.x API parameters to provide credentials through the query string.
For examples and more information, see how to [authenticate with a username and password](https://docs.influxdata.com/influxdb/cloud/reference/api/influxdb-1x/).
security:
- TokenAuthentication: []
- BasicAuthentication: []
- QuerystringAuthentication: []
tags:
- name: Authentication
description: |
The InfluxDB 1.x API requires authentication for all requests.
InfluxDB Cloud uses InfluxDB API tokens to authenticate requests.
For more information, see the following:
- [Token authentication](#section/Authentication/TokenAuthentication)
- [Basic authentication](#section/Authentication/BasicAuthentication)
- [Querystring authentication](#section/Authentication/QuerystringAuthentication)
<!-- ReDoc-Inject: <security-definitions> -->
x-traitTag: true
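As a quick illustration of the two endpoints this file defines, here is a minimal sketch of writing and querying through the 1.x-compatible API with `curl`. The host, database name, measurement, and token are assumptions, not values from this commit.

```bash
# Write one point through the 1.x-compatible /write endpoint
# (hypothetical db name, measurement, and token)
curl -X POST "http://localhost:8086/write?db=mydb&precision=s" \
  --header "Authorization: Token MY_INFLUX_TOKEN" \
  --data-binary 'mymeas,mytag=a myfield=0.5 1661900000'

# Read it back through the 1.x-compatible /query endpoint
curl -G "http://localhost:8086/query?db=mydb" \
  --header "Authorization: Token MY_INFLUX_TOKEN" \
  --data-urlencode "q=SELECT * FROM mymeas"
```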


@ -79,6 +79,8 @@ function addPreserve() {
$('.keep-url').each(function () {
// For code blocks with no syntax highlighting
$(this).next('pre').addClass('preserve')
// For code blocks with no syntax highlighting inside of a link (API endpoint blocks)
$(this).next('a').find('pre').addClass('preserve')
// For code blocks with syntax highlighting
$(this).next('.highlight').find('pre').addClass('preserve')
// For code blocks inside .keep-url div


@ -28,7 +28,7 @@ Your queries should guide what data you store in [tags](/enterprise_influxdb/v1.
## Avoid too many series
IndexDB indexes the following data elements to speed up reads:
InfluxDB indexes the following data elements to speed up reads:
- [measurement](/enterprise_influxdb/v1.9/concepts/glossary/#measurement)
- [tags](/enterprise_influxdb/v1.9/concepts/glossary/#tag)


@ -15,94 +15,97 @@ This page addresses frequent sources of confusion and places where InfluxDB
behaves in an unexpected way relative to other database systems.
Where applicable, it links to outstanding issues on GitHub.
**Administration**
##### Administration {href="administration-1"}
* [How do I include a single quote in a password?](#how-do-i-include-a-single-quote-in-a-password)
* [How can I identify my version of InfluxDB?](#how-can-i-identify-my-version-of-influxdb)
* [Where can I find InfluxDB logs?](#where-can-i-find-influxdb-logs)
* [What is the relationship between shard group durations and retention policies?](#what-is-the-relationship-between-shard-group-durations-and-retention-policies)
* [Why aren't data dropped after I've altered a retention policy?](#why-arent-data-dropped-after-ive-altered-a-retention-policy)
* [Why does InfluxDB fail to parse microsecond units in the configuration file?](#why-does-influxdb-fail-to-parse-microsecond-units-in-the-configuration-file)
* [Does InfluxDB have a file system size limit?](#does-influxdb-have-a-file-system-size-limit)
- [How do I include a single quote in a password?](#how-do-i-include-a-single-quote-in-a-password)
- [How can I identify my version of InfluxDB?](#how-can-i-identify-my-version-of-influxdb)
- [Where can I find InfluxDB logs?](#where-can-i-find-influxdb-logs)
- [What is the relationship between shard group durations and retention policies?](#what-is-the-relationship-between-shard-group-durations-and-retention-policies)
- [Why aren't data dropped after I've altered a retention policy?](#why-arent-data-dropped-after-ive-altered-a-retention-policy)
- [Why does InfluxDB fail to parse microsecond units in the configuration file?](#why-does-influxdb-fail-to-parse-microsecond-units-in-the-configuration-file)
- [Does InfluxDB have a file system size limit?](#does-influxdb-have-a-file-system-size-limit)
**Command line interface (CLI)**
##### Command line interface (CLI) {href="command-line-interface-cli-1"}
* [How do I make InfluxDBs CLI return human readable timestamps?](#how-do-i-use-the-influxdb-cli-to-return-human-readable-timestamps)
* [How can a non-admin user `USE` a database in the InfluxDB CLI?](#how-can-a-non-admin-user-use-a-database-in-the-influxdb-cli)
* [How do I write to a non-`DEFAULT` retention policy with the InfluxDB CLI?](#how-do-i-write-to-a-non-default-retention-policy-with-the-influxdb-cli)
* [How do I cancel a long-running query?](#how-do-i-cancel-a-long-running-query)
- [How do I make InfluxDB's CLI return human-readable timestamps?](#how-do-i-use-the-influxdb-cli-to-return-human-readable-timestamps)
- [How can a non-admin user `USE` a database in the InfluxDB CLI?](#how-can-a-non-admin-user-use-a-database-in-the-influxdb-cli)
- [How do I write to a non-`DEFAULT` retention policy with the InfluxDB CLI?](#how-do-i-write-to-a-non-default-retention-policy-with-the-influxdb-cli)
- [How do I cancel a long-running query?](#how-do-i-cancel-a-long-running-query)
**Data types**
##### Data types {href="data-types-1"}
* [Why can't I query Boolean field values?](#why-cant-i-query-boolean-field-values)
* [How does InfluxDB handle field type discrepancies across shards?](#how-does-influxdb-handle-field-type-discrepancies-across-shards)
* [What are the minimum and maximum integers that InfluxDB can store?](#what-are-the-minimum-and-maximum-integers-that-influxdb-can-store)
* [What are the minimum and maximum timestamps that InfluxDB can store?](#what-are-the-minimum-and-maximum-timestamps-that-influxdb-can-store)
* [How can I tell what type of data is stored in a field?](#how-can-i-tell-what-type-of-data-is-stored-in-a-field)
* [Can I change a field's data type?](#can-i-change-a-fields-data-type)
- [Why can't I query Boolean field values?](#why-cant-i-query-boolean-field-values)
- [How does InfluxDB handle field type discrepancies across shards?](#how-does-influxdb-handle-field-type-discrepancies-across-shards)
- [What are the minimum and maximum integers that InfluxDB can store?](#what-are-the-minimum-and-maximum-integers-that-influxdb-can-store)
- [What are the minimum and maximum timestamps that InfluxDB can store?](#what-are-the-minimum-and-maximum-timestamps-that-influxdb-can-store)
- [How can I tell what type of data is stored in a field?](#how-can-i-tell-what-type-of-data-is-stored-in-a-field)
- [Can I change a field's data type?](#can-i-change-a-fields-data-type)
**InfluxQL functions**
##### InfluxQL functions {href="influxql-functions-1"}
* [How do I perform mathematical operations within a function?](#how-do-i-perform-mathematical-operations-within-a-function)
* [Why does my query return epoch 0 as the timestamp?](#why-does-my-query-return-epoch-0-as-the-timestamp)
* [Which InfluxQL functions support nesting?](#which-influxql-functions-support-nesting)
- [How do I perform mathematical operations within a function?](#how-do-i-perform-mathematical-operations-within-a-function)
- [Why does my query return epoch 0 as the timestamp?](#why-does-my-query-return-epoch-0-as-the-timestamp)
- [Which InfluxQL functions support nesting?](#which-influxql-functions-support-nesting)
**Querying data**
##### Querying data {href="querying-data-1"}
* [What determines the time intervals returned by `GROUP BY time()` queries?](#what-determines-the-time-intervals-returned-by-group-by-time-queries)
* [Why do my queries return no data or partial data?](#why-do-my-queries-return-no-data-or-partial-data)
* [Why don't my `GROUP BY time()` queries return timestamps that occur after `now()`?](#why-dont-my-group-by-time-queries-return-timestamps-that-occur-after-now)
* [Can I perform mathematical operations against timestamps?](#can-i-perform-mathematical-operations-against-timestamps)
* [Can I identify write precision from returned timestamps?](#can-i-identify-write-precision-from-returned-timestamps)
* [When should I single quote and when should I double quote in queries?](#when-should-i-single-quote-and-when-should-i-double-quote-in-queries)
* [Why am I missing data after creating a new `DEFAULT` retention policy?](#why-am-i-missing-data-after-creating-a-new-default-retention-policy)
* [Why is my query with a `WHERE OR` time clause returning empty results?](#why-is-my-query-with-a-where-or-time-clause-returning-empty-results)
* [Why does `fill(previous)` return empty results?](#why-does-fillprevious-return-empty-results)
* [Why are my `INTO` queries missing data?](#why-are-my-into-queries-missing-data)
* [How do I query data with an identical tag key and field key?](#how-do-i-query-data-with-an-identical-tag-key-and-field-key)
* [How do I query data across measurements?](#how-do-i-query-data-across-measurements)
* [Does the order of the timestamps matter?](#does-the-order-of-the-timestamps-matter)
* [How do I `SELECT` data with a tag that has no value?](#how-do-i-select-data-with-a-tag-that-has-no-value)
- [What determines the time intervals returned by `GROUP BY time()` queries?](#what-determines-the-time-intervals-returned-by-group-by-time-queries)
- [Why do my queries return no data or partial data?](#why-do-my-queries-return-no-data-or-partial-data)
- [Why don't my `GROUP BY time()` queries return timestamps that occur after `now()`?](#why-dont-my-group-by-time-queries-return-timestamps-that-occur-after-now)
- [Can I perform mathematical operations against timestamps?](#can-i-perform-mathematical-operations-against-timestamps)
- [Can I identify write precision from returned timestamps?](#can-i-identify-write-precision-from-returned-timestamps)
- [When should I single quote and when should I double quote in queries?](#when-should-i-single-quote-and-when-should-i-double-quote-in-queries)
- [Why am I missing data after creating a new `DEFAULT` retention policy?](#why-am-i-missing-data-after-creating-a-new-default-retention-policy)
- [Why is my query with a `WHERE OR` time clause returning empty results?](#why-is-my-query-with-a-where-or-time-clause-returning-empty-results)
- [Why does `fill(previous)` return empty results?](#why-does-fillprevious-return-empty-results)
- [Why are my `INTO` queries missing data?](#why-are-my-into-queries-missing-data)
- [How do I query data with an identical tag key and field key?](#how-do-i-query-data-with-an-identical-tag-key-and-field-key)
- [How do I query data across measurements?](#how-do-i-query-data-across-measurements)
- [Does the order of the timestamps matter?](#does-the-order-of-the-timestamps-matter)
- [How do I SELECT data with a tag that has no value?](#how-do-i-select-data-with-a-tag-that-has-no-value)
- [Why do I get different results for the same query?](#why-do-i-get-different-results-for-the-same-query)
**Series and series cardinality**
##### Series and series cardinality {href="series-and-series-cardinality-1"}
* [Why does series cardinality matter?](#why-does-series-cardinality-matter)
* [How can I remove series from the index?](#how-can-i-remove-series-from-the-index)
- [Why does series cardinality matter?](#why-does-series-cardinality-matter)
- [How can I remove series from the index?](#how-can-i-remove-series-from-the-index)
**Writing data**
##### Writing data {href="writing-data-1"}
* [How do I write integer field values?](#how-do-i-write-integer-field-values)
* [How does InfluxDB handle duplicate points?](#how-does-influxdb-handle-duplicate-points)
* [What newline character does the InfluxDB API require?](#what-newline-character-does-the-influxdb-api-require)
* [What words and characters should I avoid when writing data to InfluxDB?](#what-words-and-characters-should-i-avoid-when-writing-data-to-influxdb)
* [When should I single quote and when should I double quote when writing data?](#when-should-i-single-quote-and-when-should-i-double-quote-when-writing-data)
* [Does the precision of the timestamp matter?](#does-the-precision-of-the-timestamp-matter)
* [What are the configuration recommendations and schema guidelines for writing sparse, historical data?](#what-are-the-configuration-recommendations-and-schema-guidelines-for-writing-sparse-historical-data)
- [How do I write integer field values?](#how-do-i-write-integer-field-values)
- [How does InfluxDB handle duplicate points?](#how-does-influxdb-handle-duplicate-points)
- [What newline character does the InfluxDB API require?](#what-newline-character-does-the-influxdb-api-require)
- [What words and characters should I avoid when writing data to InfluxDB?](#what-words-and-characters-should-i-avoid-when-writing-data-to-influxdb)
- [When should I single quote and when should I double quote when writing data?](#when-should-i-single-quote-and-when-should-i-double-quote-when-writing-data)
- [Does the precision of the timestamp matter?](#does-the-precision-of-the-timestamp-matter)
- [What are the configuration recommendations and schema guidelines for writing sparse, historical data?](#what-are-the-configuration-recommendations-and-schema-guidelines-for-writing-sparse-historical-data)
**Log errors**
##### Log errors {href="log-errors-1"}
* [Where can I find InfluxDB Enterprise logs?](#where-can-i-find-influxdb-enterprise-logs)
* [Why am I seeing a `503 Service Unavailable` error in my meta node logs?](#why-am-i-seeing-a-503-service-unavailable-error-in-my-meta-node-logs)
* [Why am I seeing a `409` error in some of my data node logs?](#why-am-i-seeing-a-409-error-in-some-of-my-data-node-logs)
* [Why am I seeing `hinted handoff queue not empty` errors in my data node logs?](#why-am-i-seeing-hinted-handoff-queue-not-empty-errors-in-my-data-node-logs)
* [Why am I seeing `error writing count stats ...: partial write` errors in my data node logs?](#why-am-i-seeing-error-writing-count-stats--partial-write-errors-in-my-data-node-logs)
* [Why am I seeing `queue is full` errors in my data node logs?](#why-am-i-seeing-queue-is-full-errors-in-my-data-node-logs)
* [Why am I seeing `unable to determine if "hostname" is a meta node` when I try to add a meta node with `influxd-ctl join`?](#why-am-i-seeing-unable-to-determine-if-hostname-is-a-meta-node-when-i-try-to-add-a-meta-node-with-influxd-ctl-join)
* [Why is InfluxDB reporting an out of memory (OOM) exception when my system has free memory?](#why-is-influxdb-reporting-an-out-of-memory-oom-exception-when-my-system-has-free-memory)
- [Where can I find InfluxDB Enterprise logs?](#where-can-i-find-influxdb-enterprise-logs)
- [Why am I seeing a `503 Service Unavailable` error in my meta node logs?](#why-am-i-seeing-a-503-service-unavailable-error-in-my-meta-node-logs)
- [Why am I seeing a `409` error in some of my data node logs?](#why-am-i-seeing-a-409-error-in-some-of-my-data-node-logs)
- [Why am I seeing `hinted handoff queue not empty` errors in my data node logs?](#why-am-i-seeing-hinted-handoff-queue-not-empty-errors-in-my-data-node-logs)
- [Why am I seeing `error writing count stats ...: partial write` errors in my data node logs?](#why-am-i-seeing-error-writing-count-stats--partial-write-errors-in-my-data-node-logs)
- [Why am I seeing `queue is full` errors in my data node logs?](#why-am-i-seeing-queue-is-full-errors-in-my-data-node-logs)
- [Why am I seeing `unable to determine if "hostname" is a meta node` when I try to add a meta node with `influxd-ctl join`?](#why-am-i-seeing-unable-to-determine-if-hostname-is-a-meta-node-when-i-try-to-add-a-meta-node-with-influxd-ctl-join)
- [Why is InfluxDB reporting an out of memory (OOM) exception when my system has free memory?](#why-is-influxdb-reporting-an-out-of-memory-oom-exception-when-my-system-has-free-memory)
---
## How do I include a single quote in a password?
## Administration
#### How do I include a single quote in a password?
Escape the single quote with a backslash (`\`) both when creating the password
and when sending authentication requests.
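For example, a minimal sketch in the `influx` shell (the username and password here are hypothetical):

```bash
$ influx
> CREATE USER "todd" WITH PASSWORD 'pass\'word'
```

Use the same `\'` escape when you later authenticate with that password.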
## How can I identify my version of InfluxDB?
#### How can I identify my version of InfluxDB?
There are a number of ways to identify the version of InfluxDB that you're using:
#### Run `influxd version` in your terminal:
##### Run `influxd version` in your terminal:
```bash
$ influxd version
@ -110,7 +113,7 @@ $ influxd version
InfluxDB v{{< latest-patch >}} (git: master b7bb7e8359642b6e071735b50ae41f5eb343fd42)
```
#### `curl` the `/ping` endpoint:
##### `curl` the `/ping` endpoint:
```bash
$ curl -i 'http://localhost:8086/ping'
@ -122,7 +125,7 @@ X-Influxdb-Version: {{< latest-patch >}}
Date: Wed, 01 Mar 2017 20:46:17 GMT
```
#### Launch the InfluxDB command line interface:
##### Launch the InfluxDB command line interface:
```bash
$ influx
@ -131,7 +134,7 @@ Connected to http://localhost:8086 version {{< latest-patch >}}
InfluxDB shell version: {{< latest-patch >}}
```
#### Check the HTTP response in your logs:
##### Check the HTTP response in your logs:
```bash
$ journalctl -u influxdb.service
@ -139,14 +142,14 @@ $ journalctl -u influxdb.service
Mar 01 20:49:45 rk-api influxd[29560]: [httpd] 127.0.0.1 - - [01/Mar/2017:20:49:45 +0000] "POST /query?db=&epoch=ns&q=SHOW+DATABASES HTTP/1.1" 200 151 "-" "InfluxDBShell/{{< latest-patch >}}" 9a4371a1-fec0-11e6-84b6-000000000000 1709
```
## Where can I find InfluxDB logs?
#### Where can I find InfluxDB logs?
On System V operating systems logs are stored under `/var/log/influxdb/`.
On systemd operating systems you can access the logs using `journalctl`.
Use `journalctl -u influxdb` to view the logs in the journal or `journalctl -u influxdb > influxd.log` to print the logs to a text file. With systemd, log retention depends on your system's journald settings.
## What is the relationship between shard group durations and retention policies?
#### What is the relationship between shard group durations and retention policies?
InfluxDB stores data in shard groups.
A single shard group covers a specific time interval; InfluxDB determines that time interval by looking at the `DURATION` of the relevant retention policy (RP).
@ -168,7 +171,7 @@ Check your retention policy's shard group duration with the
[`SHOW RETENTION POLICIES`](/enterprise_influxdb/v1.9/query_language/explore-schema/#show-retention-policies)
statement.
## Why aren't data dropped after I've altered a retention policy?
#### Why aren't data dropped after I've altered a retention policy?
Several factors explain why data may not be immediately dropped after a
retention policy (RP) change.
@ -198,7 +201,7 @@ InfluxDB will drop that shard group once all of its data is outside the new
The system will then begin writing data to shard groups that have the new,
shorter `SHARD DURATION` preventing any further unexpected data retention.
## Why does InfluxDB fail to parse microsecond units in the configuration file?
#### Why does InfluxDB fail to parse microsecond units in the configuration file?
The syntax for specifying microsecond duration units differs for
[configuration](/enterprise_influxdb/v1.9/administration/configuration/)
@ -220,7 +223,7 @@ If a configuration option specifies the `u` or `µ` syntax, InfluxDB fails to st
run: parse config: time: unknown unit [µ|u] in duration [<integer>µ|<integer>u]
```
## Does InfluxDB have a file system size limit?
#### Does InfluxDB have a file system size limit?
InfluxDB works within file system size restrictions for Linux and Windows POSIX. Some storage providers and distributions have size restrictions; for example:
@ -230,7 +233,11 @@ InfluxDB works within file system size restrictions for Linux and Windows POSIX.
If you anticipate growing over 16TB per volume/file system, we recommend finding a provider and distribution that supports your storage requirements.
## How do I use the InfluxDB CLI to return human readable timestamps?
---
## Command line interface (CLI)
#### How do I use the InfluxDB CLI to return human readable timestamps?
When you first connect to the CLI, specify the [rfc3339](https://www.ietf.org/rfc/rfc3339.txt) precision:
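A minimal sketch of that invocation against a local instance:

```bash
$ influx -precision rfc3339
```

Inside an already-open shell session, the `precision rfc3339` command does the same thing.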
@ -250,7 +257,7 @@ InfluxDB shell 0.xx.x
Check out [CLI/Shell](/enterprise_influxdb/v1.9/tools/influx-cli/use-influx/) for more useful CLI options.
## How can a non-admin user `USE` a database in the InfluxDB CLI?
#### How can a non-admin user `USE` a database in the InfluxDB CLI?
In versions prior to v1.3, [non-admin users](/enterprise_influxdb/v1.9/administration/authentication_and_authorization/#user-types-and-privileges) could not execute a `USE <database_name>` query in the CLI even if they had `READ` and/or `WRITE` permissions on that database.
@ -263,7 +270,7 @@ ERR: Database <database_name> doesn't exist. Run SHOW DATABASES for a list of ex
> **Note** that the [`SHOW DATABASES` query](/enterprise_influxdb/v1.9/query_language/explore-schema/#show-databases) returns only those databases on which the non-admin user has `READ` and/or `WRITE` permissions.
## How do I write to a non-DEFAULT retention policy with the InfluxDB CLI?
#### How do I write to a non-DEFAULT retention policy with the InfluxDB CLI?
Use the syntax `INSERT INTO [<database>.]<retention_policy> <line_protocol>` to write data to a non-`DEFAULT` retention policy using the CLI.
(Specifying the database and retention policy this way is only allowed with the CLI.
@ -287,12 +294,16 @@ Note that you will need to fully qualify the measurement to query data in the no
"<database>"."<retention_policy>"."<measurement>"
```
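A minimal sketch of both steps in the `influx` shell, with hypothetical database, retention policy, and measurement names:

```bash
$ influx -database 'mydb'
> INSERT INTO one_day mymeas,mytag=a myfield=91
> SELECT * FROM "mydb"."one_day"."mymeas"
```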
## How do I cancel a long-running query?
#### How do I cancel a long-running query?
You can cancel a long-running interactive query from the CLI using `Ctrl+C`. To stop another long-running query that appears in the output of the [`SHOW QUERIES`](/enterprise_influxdb/v1.9/query_language/spec/#show-queries) command,
use the [`KILL QUERY`](/enterprise_influxdb/v1.9/troubleshooting/query_management/#stop-currently-running-queries-with-kill-query) command.
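A minimal sketch, where the query ID (`36`) is whatever ID `SHOW QUERIES` reports for the query you want to stop:

```bash
> SHOW QUERIES
> KILL QUERY 36
```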
## Why can't I query Boolean field values?
---
## Data types
#### Why can't I query Boolean field values?
Acceptable Boolean syntax differs for data writes and data queries.
@ -309,13 +320,13 @@ For example, `SELECT * FROM "hamlet" WHERE "bool"=True` returns all points with
<!-- TODO: closed issue. Edit docs if necessary. -->
<!-- {{% warn %}} [GitHub Issue #3939](https://github.com/influxdb/influxdb/issues/3939) {{% /warn %}} -->
## How does InfluxDB handle field type discrepancies across shards?
#### How does InfluxDB handle field type discrepancies across shards?
Field values can be floats, integers, strings, or Booleans.
Field value types cannot differ within a
[shard](/enterprise_influxdb/v1.9/concepts/glossary/#shard), but they can [differ](/enterprise_influxdb/v1.9/write_protocols/line_protocol_reference) across shards.
### The SELECT statement
##### The SELECT statement
The
[`SELECT` statement](/enterprise_influxdb/v1.9/query_language/explore-data/#the-basic-select-statement)
@ -328,7 +339,7 @@ following list: float, integer, string, Boolean.
If your data have field value type discrepancies, use the syntax
`<field_key>::<type>` to query the different data types.
#### Example
###### Example
The measurement `just_my_type` has a single field called `my_field`.
`my_field` has four field values across four different shards, and each value has
@ -364,12 +375,12 @@ time my_field my_field_1 my_field_2 my_field_3
2016-06-03T18:45:00Z true
```
### The SHOW FIELD KEYS query
##### The SHOW FIELD KEYS query
`SHOW FIELD KEYS` returns every data type, across every shard, associated with
the field key.
#### Example
###### Example
The measurement `just_my_type` has a single field called `my_field`.
`my_field` has four field values across four different shards, and each value has
@ -388,24 +399,24 @@ my_field integer
my_field boolean
```
## What are the minimum and maximum integers that InfluxDB can store?
#### What are the minimum and maximum integers that InfluxDB can store?
InfluxDB stores all integers as signed int64 data types.
The minimum and maximum valid values for int64 are `-9223372036854775808` and `9223372036854775807`.
See [Go builtins](http://golang.org/pkg/builtin/#int64) for more information.
Values close to but within those limits may lead to unexpected results; some functions and operators convert the int64 data type to float64 during calculation which can cause overflow issues.
## What are the minimum and maximum timestamps that InfluxDB can store?
#### What are the minimum and maximum timestamps that InfluxDB can store?
The minimum timestamp is `-9223372036854775806` or `1677-09-21T00:12:43.145224194Z`.
The maximum timestamp is `9223372036854775806` or `2262-04-11T23:47:16.854775806Z`.
Timestamps outside that range return a [parsing error](/enterprise_influxdb/v1.9/troubleshooting/errors/#unable-to-parse-time-outside-range).
## How can I tell what type of data is stored in a field?
#### How can I tell what type of data is stored in a field?
The [`SHOW FIELD KEYS`](/enterprise_influxdb/v1.9/query_language/explore-schema/#show-field-keys) query also returns the field's type.
#### Example
##### Example
```sql
> SHOW FIELD KEYS FROM all_the_types
@ -418,7 +429,7 @@ orange integer
yellow float
```
## Can I change a field's data type?
#### Can I change a field's data type?
Currently, InfluxDB offers very limited support for changing a field's data type.
@ -432,12 +443,12 @@ We list possible workarounds for changing a field's data type below.
Note that these workarounds will not update data that have already been
written to the database.
#### Write the data to a different field
##### Write the data to a different field
The simplest workaround is to begin writing the new data type to a different field in the same
[series](/enterprise_influxdb/v1.9/concepts/glossary/#series).
#### Work the shard system
##### Work the shard system
Field value types cannot differ within a
[shard](/enterprise_influxdb/v1.9/concepts/glossary/#shard) but they can differ across
@ -452,13 +463,17 @@ Note that this will not change the field's data type on prior shards.
For how this will affect your queries, please see
[How does InfluxDB handle field type discrepancies across shards](/enterprise_influxdb/v1.9/troubleshooting/frequently-asked-questions/#how-does-influxdb-handle-field-type-discrepancies-across-shards).
## How do I perform mathematical operations within a function?
---
## InfluxQL functions
#### How do I perform mathematical operations within a function?
Currently, InfluxDB does not support mathematical operations within functions.
We recommend using InfluxQL's [subqueries](/enterprise_influxdb/v1.9/query_language/explore-data/#subqueries)
as a workaround.
### Example
##### Example
InfluxQL does not support the following syntax:
@ -476,12 +491,12 @@ See the
[Data Exploration](/enterprise_influxdb/v1.9/query_language/explore-data/#subqueries)
page for more information.
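A hedged sketch of the pattern, with hypothetical measurement and field names: the first query is the unsupported form, the second moves the math into a subquery and applies the function to its result.

```bash
# Not supported: math inside the function call
> SELECT MEAN("dogs" - "cats") FROM "pet_daycare"

# Workaround: do the math in a subquery, then aggregate the result
> SELECT MEAN("difference") FROM (SELECT "dogs" - "cats" AS "difference" FROM "pet_daycare")
```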
## Why does my query return epoch 0 as the timestamp?
#### Why does my query return epoch 0 as the timestamp?
In InfluxDB, epoch 0 (`1970-01-01T00:00:00Z`) is often used as a null timestamp equivalent.
If you request a query that has no timestamp to return, such as an aggregation function with an unbounded time range, InfluxDB returns epoch 0 as the timestamp.
## Which InfluxQL functions support nesting?
#### Which InfluxQL functions support nesting?
The following InfluxQL functions support nesting:
@ -497,14 +512,18 @@ The following InfluxQL functions support nesting:
For information on how to use a subquery as a substitute for nested functions, see
[Data exploration](/enterprise_influxdb/v1.9/query_language/explore-data/#subqueries).
## What determines the time intervals returned by `GROUP BY time()` queries?
---
## Querying data
#### What determines the time intervals returned by `GROUP BY time()` queries?
The time intervals returned by `GROUP BY time()` queries conform to the InfluxDB database's preset time
buckets or to the user-specified [offset interval](/enterprise_influxdb/v1.9/query_language/explore-data/#advanced-group-by-time-syntax).
#### Example
##### Example
##### Preset time buckets
###### Preset time buckets
The following query calculates the average value of `sunflowers` between
6:15pm and 7:45pm and groups those averages into one hour intervals:
@ -558,7 +577,7 @@ time sunflowers time mean
```
##### Offset interval
###### Offset interval
The following query calculates the average value of `sunflowers` between
6:15pm and 7:45pm and groups those averages into one hour intervals.
@ -612,7 +631,7 @@ time sunflowers time mean
|--|
```
## Why do my queries return no data or partial data?
#### Why do my queries return no data or partial data?
The most common reasons why your query returns no data or partial data:
@ -621,25 +640,25 @@ The most common reasons why your query returns no data or partial data:
- [SELECT query includes `GROUP BY time()`](#select-query-includes-group-by-time) (partial data before `now()` returned)
- [Tag and field key with the same name](#tag-and-field-key-with-the-same-name)
### Querying the wrong retention policy
##### Querying the wrong retention policy
InfluxDB automatically queries data in a database's [`DEFAULT` retention policy](/enterprise_influxdb/v1.9/concepts/glossary/#retention-policy-rp) (RP). If your data is stored in another RP, you must specify the RP in your query to get results.
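A minimal sketch of an RP-qualified query (database, retention policy, and measurement names are hypothetical):

```bash
> SELECT * FROM "mydb"."one_week"."mymeas"
```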
### No field key in the SELECT clause
##### No field key in the SELECT clause
A query requires at least one [field key](/enterprise_influxdb/v1.9/concepts/glossary/#field-key) in the `SELECT` clause. If the `SELECT` clause includes only [tag keys](/enterprise_influxdb/v1.9/concepts/glossary/#tag-key), the query returns an empty response. For more information, see [Data exploration](/enterprise_influxdb/v1.9/query_language/explore-data/#common-issues-with-the-select-statement).
### SELECT query includes `GROUP BY time()`
##### SELECT query includes `GROUP BY time()`
If your `SELECT` query includes a [`GROUP BY time()` clause](/enterprise_influxdb/v1.9/query_language/explore-data/#group-by-time-intervals), only data points between `1677-09-21 00:12:43.145224194` and [`now()`](/enterprise_influxdb/v1.9/concepts/glossary/#now) are returned. Therefore, if any of your data points occur after `now()`, specify [an alternative upper bound](/enterprise_influxdb/v1.9/query_language/explore-data/#time-syntax) in your time interval.
(By default, most [`SELECT` queries](/enterprise_influxdb/v1.9/query_language/explore-data/#the-basic-select-statement) query data with timestamps between `1677-09-21 00:12:43.145224194` and `2262-04-11T23:47:16.854775806Z` UTC.)
### Tag and field key with the same name
##### Tag and field key with the same name
Avoid using the same name for a tag and field key. If you inadvertently add the same name for a tag and field key, and then query both keys together, the query results show the second key queried (tag or field) appended with `_1` (also visible as the column header in Chronograf). To query a tag or field key appended with `_1`, you **must drop** the appended `_1` **and include** the syntax `::tag` or `::field`.
#### Example
###### Example
1. [Launch `influx`](/enterprise_influxdb/v1.9/tools/influx-cli/use-influx/#launch-influx).
@ -704,7 +723,7 @@ Avoid using the same name for a tag and field key. If you inadvertently add the
{{% warn %}}**Warning:** If you inadvertently add a duplicate key name, follow the steps below to [remove a duplicate key](#remove-a-duplicate-key). Because of memory requirements, if you have large amounts of data, we recommend chunking your data (while selecting it) by a specified interval (for example, date range) to fit the allotted memory.
{{% /warn %}}
#### Remove a duplicate key
##### Remove a duplicate key
1. [Launch `influx`](/enterprise_influxdb/v1.9/tools/influx-cli/use-influx/#launch-influx).
@ -734,7 +753,7 @@ Avoid using the same name for a tag and field key. If you inadvertently add the
DROP MEASUREMENT "temporary_measurement"
```
## Why don't my GROUP BY time() queries return timestamps that occur after now()?
#### Why don't my GROUP BY time() queries return timestamps that occur after now()?
Most `SELECT` statements have a default time range between [`1677-09-21 00:12:43.145224194` and `2262-04-11T23:47:16.854775806Z` UTC](#what-are-the-minimum-and-maximum-timestamps-that-influxdb-can-store).
For `SELECT` statements with a [`GROUP BY time()` clause](/enterprise_influxdb/v1.9/query_language/explore-data/#group-by-time-intervals), the default time
@ -766,7 +785,7 @@ the lower bound to `now()` such that the query's time range is between
For for more on time syntax in queries, see [Data Exploration](/enterprise_influxdb/v1.9/query_language/explore-data/#time-syntax).
## Can I perform mathematical operations against timestamps?
#### Can I perform mathematical operations against timestamps?
Currently, it is not possible to execute mathematical operators against timestamp values in InfluxDB.
Most time calculations must be carried out by the client receiving the query results.
@ -775,7 +794,7 @@ There is limited support for using InfluxQL functions against timestamp values.
The function [ELAPSED()](/enterprise_influxdb/v1.9/query_language/functions/#elapsed)
returns the difference between subsequent timestamps in a single field.
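A minimal sketch of `ELAPSED()`, with hypothetical measurement and field names (the second argument sets the unit the differences are reported in):

```bash
> SELECT ELAPSED("water_level", 1m) FROM "h2o_feet"
```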
## Can I identify write precision from returned timestamps?
#### Can I identify write precision from returned timestamps?
InfluxDB stores all timestamps as nanosecond values, regardless of the write precision supplied.
It is important to note that when returning query results, the database silently drops trailing zeros from timestamps which obscures the initial write precision.
@ -796,7 +815,7 @@ time value precision_supplied timestamp_supplied
<!-- TODO: closed issue. Edit docs if necessary. -->
<!-- {{% warn %}} [GitHub Issue #2977](https://github.com/influxdb/influxdb/issues/2977) {{% /warn %}} -->
## When should I single quote and when should I double quote in queries?
#### When should I single quote and when should I double quote in queries?
Single quote string values (for example, tag values) but do not single quote identifiers (database names, retention policy names, user names, measurement names, tag keys, and field keys).
@ -830,7 +849,7 @@ No: `SELECT "water_level" FROM "h2o_feet" WHERE time > "2015-08-18T23:00:01.2320
See [Data Exploration](/enterprise_influxdb/v1.9/query_language/explore-data/#time-syntax) for more on time syntax in queries.
## Why am I missing data after creating a new DEFAULT retention policy?
#### Why am I missing data after creating a new DEFAULT retention policy?
When you create a new `DEFAULT` retention policy (RP) on a database, the data written to the old `DEFAULT` RP remain in the old RP.
Queries that do not specify an RP automatically query the new `DEFAULT` RP so the old data may appear to be missing.
@ -865,7 +884,7 @@ time count
1970-01-01T00:00:00Z 8
```
## Why is my query with a `WHERE OR` time clause returning empty results?
#### Why is my query with a `WHERE OR` time clause returning empty results?
Currently, InfluxDB does not support using `OR` in the `WHERE` clause to specify multiple time ranges.
InfluxDB returns an empty response if the query's `WHERE` clause uses `OR`
@ -881,7 +900,7 @@ Example:
{{% warn %}} [GitHub Issue #7530](https://github.com/influxdata/influxdb/issues/7530)
{{% /warn %}}
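A hedged illustration of the kind of query that triggers this, with hypothetical names and times:

```bash
# Returns an empty response: two time ranges joined with OR
> SELECT * FROM "mymeas" WHERE (time > '2022-01-01T00:00:00Z' AND time < '2022-01-02T00:00:00Z') OR (time > '2022-01-05T00:00:00Z' AND time < '2022-01-06T00:00:00Z')
```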
## Why does `fill(previous)` return empty results?
#### Why does `fill(previous)` return empty results?
`fill(previous)` doesn't fill the result for a time bucket if the previous value is outside the query's time range.
@ -914,7 +933,7 @@ time max
While this is the expected behavior of `fill(previous)`, an [open feature request](https://github.com/influxdata/influxdb/issues/6878) on GitHub proposes that `fill(previous)` should fill results even when previous values fall outside the query's time range.
## Why are my INTO queries missing data?
#### Why are my INTO queries missing data?
By default, `INTO` queries convert any tags in the initial data to fields in
the newly written data.
@ -924,9 +943,9 @@ Include `GROUP BY *` in all `INTO` queries to preserve tags in the newly written
Note that this behavior does not apply to queries that use the [`TOP()`](/enterprise_influxdb/v1.9/query_language/functions/#top) or [`BOTTOM()`](/enterprise_influxdb/v1.9/query_language/functions/#bottom) functions.
See the [`TOP()`](/enterprise_influxdb/v1.9/query_language/functions/#top-tags-and-the-into-clause) and [`BOTTOM()`](/enterprise_influxdb/v1.9/query_language/functions/#bottom-tags-and-the-into-clause) documentation for more information.
#### Example
##### Example
##### Initial data
###### Initial data
The `french_bulldogs` measurement includes the `color` tag and the `name` field.
@ -940,7 +959,7 @@ time color name
2016-05-25T00:10:00Z black prince
```
##### `INTO` query without `GROUP BY *`
###### `INTO` query without `GROUP BY *`
An `INTO` query without a `GROUP BY *` clause turns the `color` tag into
a field in the newly written data.
@ -964,7 +983,7 @@ time color name
2016-05-25T00:10:00Z black prince
```
##### `INTO` query with `GROUP BY *`
###### `INTO` query with `GROUP BY *`
An `INTO` query with a `GROUP BY *` clause preserves `color` as a tag in the newly written data.
In this case, the `nugget` point and the `rumple` point remain unique points and InfluxDB does not overwrite any data.
@ -985,13 +1004,13 @@ time color name
2016-05-25T00:10:00Z black prince
```
## How do I query data with an identical tag key and field key?
#### How do I query data with an identical tag key and field key?
Use the `::` syntax to specify if the key is a field key or tag key.
#### Examples
##### Examples
##### Sample data
###### Sample data
```sql
> INSERT candied,almonds=true almonds=50,half_almonds=51 1465317610000000000
@ -1005,7 +1024,7 @@ time almonds almonds_1 half_almonds
2016-06-07T16:40:20Z 55 true 56
```
##### Specify that the key is a field:
###### Specify that the key is a field:
```sql
> SELECT * FROM "candied" WHERE "almonds"::field > 51
@ -1015,7 +1034,7 @@ time almonds almonds_1 half_almonds
2016-06-07T16:40:20Z 55 true 56
```
##### Specify that the key is a tag:
###### Specify that the key is a tag:
```sql
> SELECT * FROM "candied" WHERE "almonds"::tag='true'
@ -1026,14 +1045,14 @@ time almonds almonds_1 half_almonds
2016-06-07T16:40:20Z 55 true 56
```
## How do I query data across measurements?
#### How do I query data across measurements?
Currently, there is no way to perform cross-measurement math or grouping.
All data must be under a single measurement to query it together.
InfluxDB is not a relational database and mapping data across measurements is not currently a recommended [schema](/enterprise_influxdb/v1.9/concepts/glossary/#schema).
See GitHub Issue [#3552](https://github.com/influxdata/influxdb/issues/3552) for a discussion of implementing JOIN in InfluxDB.
## Does the order of the timestamps matter?
#### Does the order of the timestamps matter?
No.
Our tests indicate that there is only a negligible difference between the times
@ -1044,7 +1063,7 @@ SELECT ... FROM ... WHERE time > 'timestamp1' AND time < 'timestamp2'
SELECT ... FROM ... WHERE time < 'timestamp2' AND time > 'timestamp1'
```
## How do I SELECT data with a tag that has no value?
#### How do I SELECT data with a tag that has no value?
Specify an empty tag value with `''`. For example:
@ -1056,11 +1075,35 @@ time origin priceless
2016-07-20T18:42:00Z 8
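The query producing output like the above would look something like this (the measurement name is hypothetical; the key point is comparing the tag to `''`):

```bash
> SELECT * FROM "vases" WHERE "origin" = ''
```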
```
## Why does series cardinality matter?
#### Why do I get different results for the same query?
This usually means your data nodes are out of sync (have entropy).
To check if entropy exists in your InfluxDB Enterprise cluster, run the following
command from one of your meta nodes:
```sh
influxd-ctl entropy show
```
If entropy does exist, run the following command from one of your meta nodes to
repair the entropy:
```sh
influxd-ctl repair
```
For more information about entropy and the InfluxDB Enterprise Anti-entropy (AE)
service, see [Anti-Entropy service in InfluxDB Enterprise](/enterprise_influxdb/v1.9/administration/configure/anti-entropy/).
---
## Series and series cardinality
#### Why does series cardinality matter?
InfluxDB maintains an in-memory index of every [series](/enterprise_influxdb/v1.9/concepts/glossary/#series) in the system. As the number of unique series grows, so does the RAM usage. High [series cardinality](/enterprise_influxdb/v1.9/concepts/glossary/#series-cardinality) can lead to the operating system killing the InfluxDB process with an out of memory (OOM) exception. See [SHOW CARDINALITY](/enterprise_influxdb/v1.9/query_language/spec/#show-cardinality) to learn about the InfluxQL commands for series cardinality.
## How can I remove series from the index?
#### How can I remove series from the index?
To reduce series cardinality, series must be dropped from the index.
[`DROP DATABASE`](/enterprise_influxdb/v1.9/query_language/manage-database/#delete-a-database-with-drop-database),
@ -1069,7 +1112,11 @@ To reduce series cardinality, series must be dropped from the index.
> **Note:** `DROP` commands are usually CPU-intensive, as they frequently trigger a TSM compaction. Issuing `DROP` queries at a high frequency may significantly impact write and other query throughput.
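As a hedged sketch (the measurement and tag value here are hypothetical), a single series can be dropped through the 1.x `influx` CLI:

```sh
# Drop one series from the index; h2o_feet and santa_monica are example names
influx -execute "DROP SERIES FROM \"h2o_feet\" WHERE \"location\" = 'santa_monica'"
```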
## How do I write integer field values?
---
## Writing data
#### How do I write integer field values?
Add a trailing `i` to the end of the field value when writing an integer.
If you do not provide the `i`, InfluxDB will treat the field value as a float.
@ -1077,7 +1124,7 @@ If you do not provide the `i`, InfluxDB will treat the field value as a float.
Writes an integer: `value=100i`
Writes a float: `value=100`
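For example, the following write (a sketch assuming a local instance and a database named `mydb`) stores one integer field and one float field:

```sh
# The trailing "i" marks example_int as an integer; example_float is stored as a float
curl -i -XPOST "http://localhost:8086/write?db=mydb" \
  --data-binary 'm1,host=server01 example_int=100i,example_float=100'
```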
## How does InfluxDB handle duplicate points?
#### How does InfluxDB handle duplicate points?
A point is uniquely identified by the measurement name, [tag set](/enterprise_influxdb/v1.9/concepts/glossary/#tag-set), and timestamp.
If you submit a new point with the same measurement, tag set, and timestamp as an existing point, the field set becomes the union of the old field set and the new field set, where any ties go to the new field set.
@ -1135,21 +1182,19 @@ time az hostname val_1 val_2
1970-01-15T06:56:07.890000001Z us_west server02 5.24
```
## What newline character does the InfluxDB API require?
#### What newline character does the InfluxDB API require?
The InfluxDB line protocol relies on line feed (`\n`, which is ASCII `0x0A`) to indicate the end of a line and the beginning of a new line. Files or data that use a newline character other than `\n` will result in the following errors: `bad timestamp`, `unable to parse`.
Note that Windows uses carriage return and line feed (`\r\n`) as the newline character.
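If your data was produced on Windows or otherwise contains `\r\n`, one hedged workaround is to generate the line feeds explicitly, for example with `printf` (assuming a database named `mydb`):

```sh
# printf emits \n line endings regardless of platform
printf 'm1 value=1\nm1 value=2\n' | \
  curl -XPOST "http://localhost:8086/write?db=mydb" --data-binary @-
```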
## What words and characters should I avoid when writing data to InfluxDB?
### InfluxQL keywords
#### What words and characters should I avoid when writing data to InfluxDB?
If you use an [InfluxQL keyword](https://github.com/influxdata/influxql/blob/master/README.md#keywords) as an identifier you will need to double quote that identifier in every query.
This can lead to [non-intuitive errors](/enterprise_influxdb/v1.9/troubleshooting/errors/#error-parsing-query-found-expected-identifier-at-line-char).
Identifiers are continuous query names, database names, field keys, measurement names, retention policy names, subscription names, tag keys, and user names.
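For example, `duration` is an InfluxQL keyword, so a measurement named `duration` (a hypothetical name) must be double quoted in every query:

```sh
# Without the double quotes, the query fails with a parsing error
influx -database "mydb" -execute 'SELECT * FROM "duration" LIMIT 5'
```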
### time
##### time
The keyword `time` is a special case.
`time` can be a
@ -1164,9 +1209,7 @@ In those cases, `time` does not require double quotes in queries.
[tag key](/enterprise_influxdb/v1.9/concepts/glossary/#tag-key);
InfluxDB rejects writes with `time` as a field key or tag key and returns an error.
#### Examples
##### Write `time` as a measurement and query it
###### Write `time` as a measurement and query it
```sql
> INSERT time value=1
@ -1181,7 +1224,7 @@ time value
`time` is a valid measurement name in InfluxDB.
##### Write `time` as a field key and attempt to query it
###### Write `time` as a field key and attempt to query it
```sql
> INSERT mymeas time=1
@ -1191,7 +1234,7 @@ ERR: {"error":"partial write: invalid field name: input field \"time\" on measur
`time` is not a valid field key in InfluxDB.
The system does not write the point and returns a `400`.
##### Write `time` as a tag key and attempt to query it
###### Write `time` as a tag key and attempt to query it
```sql
> INSERT mymeas,time=1 value=1
@ -1201,7 +1244,7 @@ ERR: {"error":"partial write: invalid tag key: input tag \"time\" on measurement
`time` is not a valid tag key in InfluxDB.
The system does not write the point and returns a `400`.
### Characters
##### Characters
To keep regular expressions and quoting simple, avoid using the following characters in identifiers:
@ -1213,7 +1256,7 @@ To keep regular expressions and quoting simple, avoid using the following charac
`=` equal sign
`,` comma
## When should I single quote and when should I double quote when writing data?
#### When should I single quote and when should I double quote when writing data?
* Avoid single quoting and double quoting identifiers when writing data via the line protocol; see the examples below for how writing identifiers with quotes can complicate queries.
Identifiers are database names, retention policy names, user names, measurement names, tag keys, and field keys.
@ -1239,7 +1282,7 @@ Identifiers are database names, retention policy names, user names, measurement
For more information, see [Line protocol](/enterprise_influxdb/v1.9/write_protocols/).
## Does the precision of the timestamp matter?
#### Does the precision of the timestamp matter?
Yes.
To maximize performance, use the coarsest possible timestamp precision when writing data to InfluxDB.
@ -1254,7 +1297,7 @@ curl -i -XPOST "http://localhost:8086/write?db=weather&precision=s" --data-binar
The tradeoff is that identical points with duplicate timestamps, more likely to occur as precision gets coarser, may overwrite other points.
## What are the configuration recommendations and schema guidelines for writing sparse, historical data?
#### What are the configuration recommendations and schema guidelines for writing sparse, historical data?
For users who want to write sparse, historical data to InfluxDB, InfluxData recommends:
@ -1266,7 +1309,12 @@ Increase the shard group duration for your data's retention policy with the [`
Second, temporarily lowering the [`cache-snapshot-write-cold-duration` configuration setting](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#cache-snapshot-write-cold-duration--10m).
If you're writing a lot of historical data, the default setting (`10m`) can cause the system to hold all of your data in cache for every shard.
Temporarily lowering the `cache-snapshot-write-cold-duration` setting to `10s` while you write the historical data makes the process more efficient.
## Where can I find InfluxDB Enterprise logs?
---
## Log errors
#### Where can I find InfluxDB Enterprise logs?
On systemd operating systems, service logs can be accessed using the `journalctl` command.
@ -1278,14 +1326,14 @@ Enterprise console: `journalctl -u influx-enterprise`
The `journalctl` output can be redirected to print the logs to a text file. With systemd, log retention depends on the system's journald settings.
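For example (the unit name depends on your installation; a data node unit named `influxdb` is assumed here):

```sh
# Save the service log to a text file for offline inspection
journalctl -u influxdb > influxdb.log
```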
## Why am I seeing a `503 Service Unavailable` error in my meta node logs?
#### Why am I seeing a `503 Service Unavailable` error in my meta node logs?
This is the expected behavior if you haven't joined the meta node to the
cluster.
The `503` errors should stop showing up in the logs once you
[join the meta node to the cluster](/enterprise_influxdb/v1.9/introduction/installation/installation/meta_node_installation/#step-3-join-the-meta-nodes-to-the-cluster).
## Why am I seeing a `409` error in some of my data node logs?
#### Why am I seeing a `409` error in some of my data node logs?
When you create a
[Continuous Query (CQ)](/enterprise_influxdb/v1.9/concepts/glossary/#continuous-query-cq)
@ -1304,7 +1352,7 @@ Log output for the data node that accepts the lease:
[meta-http] 2016/09/19 09:08:54 172.31.12.27 - - [19/Sep/2016:09:08:54 +0000] GET /lease?name=continuous_querier&node_id=0 HTTP/1.2 200 105 - InfluxDB Meta Client b05a3861-7e48-11e6-86a7-000000000000 8.87547ms
```
## Why am I seeing `hinted handoff queue not empty` errors in my data node logs?
#### Why am I seeing `hinted handoff queue not empty` errors in my data node logs?
```
[write] 2016/10/18 10:35:21 write failed for shard 2382 on node 4: hinted handoff queue not empty
@ -1314,20 +1362,21 @@ This error is informational only and does not necessarily indicate a problem in
Note that for some [write consistency](/enterprise_influxdb/v1.9/concepts/clustering/#write-consistency) settings, InfluxDB may return a write error (500) for the write attempt, even if the points are successfully queued in hinted handoff. Some write clients may attempt to resend those points, leading to duplicate points being added to the hinted handoff queue and lengthening the time it takes for the queue to drain. If the queues are not draining, consider temporarily downgrading the write consistency setting, or pause retries on the write clients until the hinted handoff queues fully drain.
## Why am I seeing `error writing count stats ...: partial write` errors in my data node logs?
#### Why am I seeing `error writing count stats ...: partial write` errors in my data node logs?
```
[stats] 2016/10/18 10:35:21 error writing count stats for FOO_grafana: partial write
```
The `_internal` database collects per-node and cluster-wide information about the InfluxDB Enterprise cluster. The cluster metrics are replicated to other nodes using `consistency=all`. For a [write consistency](/enterprise_influxdb/v1.9/concepts/clustering/#write-consistency) of `all`, InfluxDB returns a write error (500) for the write attempt even if the points are successfully queued in hinted handoff. Thus, if there are points still in hinted handoff, the `_internal` writes will fail the consistency check and log the error, even though the data is in the durable hinted handoff queue and should eventually persist.
## Why am I seeing `queue is full` errors in my data node logs?
#### Why am I seeing `queue is full` errors in my data node logs?
This error indicates that the coordinating node that received the write cannot add the incoming write to the hinted handoff queue for the destination node because it would exceed the maximum size of the queue. This error typically indicates a catastrophic condition for the cluster - one data node may have been offline or unable to accept writes for an extended duration.
The controlling configuration settings are in the `[hinted-handoff]` section of the data node configuration file. `max-size` is the total size in bytes per hinted handoff queue. When `max-size` is exceeded, all new writes for that node are rejected until the queue drops below `max-size`. `max-age` is the maximum length of time a point will persist in the queue. Once this limit has been reached, points expire from the queue. The age is calculated from the write time of the point, not the timestamp of the point.
## Why am I seeing `unable to determine if "hostname" is a meta node` when I try to add a meta node with `influxd-ctl join`?
#### Why am I seeing `unable to determine if "hostname" is a meta node` when I try to add a meta node with `influxd-ctl join`?
Meta nodes use the `/status` endpoint to determine the current state of another meta node. A healthy meta node that is ready to join the cluster will respond with a `200` HTTP response code and a JSON string with the following format (assuming the default ports):
@ -1335,7 +1384,7 @@ Meta nodes use the `/status` endpoint to determine the current state of another
If you are getting an error message while attempting to `influxd-ctl join` a new meta node, it means that the JSON string returned from the `/status` endpoint is incorrect. This generally indicates that the meta node configuration file is incomplete or incorrect. Inspect the HTTP response with `curl -v "http://<hostname>:8091/status"` and make sure that the `hostname`, the `bind-address`, and the `http-bind-address` are correctly populated. Also check the `license-key` or `license-path` in the configuration file of the meta nodes. Finally, make sure that you specify the `http-bind-address` port in the join command, e.g. `influxd-ctl join hostname:8091`.
## Why is InfluxDB reporting an out of memory (OOM) exception when my system has free memory?
#### Why is InfluxDB reporting an out of memory (OOM) exception when my system has free memory?
`mmap` is a Unix system call that maps files into memory.
As the number of shards in an InfluxDB Enterprise cluster increases, the number of memory maps increases.
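As a rough way to gauge this on a Linux data node (a sketch, assuming the process is named `influxd`), compare the process's current map count with the kernel limit:

```sh
# Number of memory maps currently held by the influxd process
wc -l /proc/$(pgrep -x influxd)/maps
# Kernel limit on memory maps per process
sysctl vm.max_map_count
```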

View File

@ -107,7 +107,7 @@ and then applies an [aggregate](/flux/v0.x/function-types/#aggregates).
{{% note %}}
Use the [InfluxDB Data Explorer](/influxdb/cloud/query-data/execute-queries/data-explorer/)
or the [Flux REPL](/{{< latest "influxdb" >}}/tools/repl/#build-the-repl)
or the [Flux REPL](/{{< latest "influxdb" >}}/tools/flux-repl/#build-the-repl)
to build and execute the following basic query.
{{% /note %}}

View File

@ -72,9 +72,9 @@ The structure of results returned by `csv.from()` depends on the
## Examples
_If just getting started, use the [Flux REPL](/influxdb/cloud/tools/repl/) or the
If just getting started, use the [Flux REPL](/influxdb/cloud/tools/flux-repl/) or the
[InfluxDB Data Explorer](/influxdb/cloud/query-data/execute-queries/data-explorer/)
to execute Flux queries._
to execute Flux queries.
- [Query an annotated CSV string](#query-an-annotated-csv-string)
- [Query a raw CSV string](#query-a-raw-csv-string)

View File

@ -78,7 +78,7 @@ azure auth=ENV
**{{< cloud-name "short" >}}** and **InfluxDB OSS** _**do not**_ have access to
the underlying file system and do not support reading credentials from a file.
To retrieve SQL Server credentials from a file, execute the query in the
[Flux REPL](/{{< latest "influxdb" >}}/tools/repl/) on your local machine.
[Flux REPL](/{{< latest "influxdb" >}}/tools/flux-repl/) on your local machine.
{{% /warn %}}
```powershell

View File

@ -80,7 +80,7 @@ azure auth=ENV
**{{< cloud-name "short" >}}** and **InfluxDB OSS** _**do not**_ have access to
the underlying file system and do not support reading credentials from a file.
To retrieve SQL Server credentials from a file, execute the query in the
[Flux REPL](/{{< latest "influxdb" >}}/tools/repl/) on your local machine.
[Flux REPL](/{{< latest "influxdb" >}}/tools/flux-repl/) on your local machine.
{{% /warn %}}
```powershell

View File

@ -9,10 +9,12 @@ menu:
influxdb_cloud:
name: Query with InfluxQL
parent: Query data
related:
- /influxdb/cloud/reference/api/influxdb-1x/
- /influxdb/cloud/reference/api/influxdb-1x/query
- /influxdb/cloud/reference/api/influxdb-1x/dbrp
cascade:
related:
- /influxdb/cloud/reference/api/influxdb-1x/
- /influxdb/cloud/reference/api/influxdb-1x/query
- /influxdb/cloud/reference/api/influxdb-1x/dbrp
- /influxdb/cloud/tools/influxql-shell/
---
{{< duplicate-oss >}}

View File

@ -0,0 +1,14 @@
---
title: Manage DBRP mappings
seotitle: Manage database and retention policy mappings
description: >
Create and manage database and retention policy (DBRP) mappings to use
InfluxQL to query InfluxDB buckets.
menu:
influxdb_cloud:
parent: Query with InfluxQL
weight: 202
influxdb/cloud/tags: [influxql, dbrp]
---
{{< duplicate-oss >}}

View File

@ -6,6 +6,8 @@ menu:
name: influx org members add
parent: influx org members
weight: 301
updated_in: CLI v2.4.0
metadata: [influx CLI 2.0.0+, InfluxDB OSS only]
---
{{% note %}}

View File

@ -6,6 +6,7 @@ menu:
name: influx org members remove
parent: influx org members
weight: 301
metadata: [influx CLI 2.0.0+, InfluxDB OSS only]
---
{{% note %}}

View File

@ -7,8 +7,10 @@ menu:
parent: influx
weight: 101
influxdb/cloud/tags: [write, replication]
related:
- /influxdb/cloud/reference/cli/influx/remote
cascade:
related:
- /influxdb/cloud/reference/cli/influx/remote
- /influxdb/cloud/write-data/replication/replicate-data/
---
{{< duplicate-oss >}}

View File

@ -7,8 +7,6 @@ menu:
parent: influx replication
weight: 101
influxdb/cloud/tags: [write]
related:
- /influxdb/cloud/reference/cli/influx/replication
---
{{< duplicate-oss >}}

View File

@ -7,8 +7,6 @@ menu:
parent: influx replication
weight: 102
influxdb/cloud/tags: [write, replication]
related:
- /influxdb/cloud/reference/cli/influx/replication
---
{{< duplicate-oss >}}

View File

@ -7,8 +7,6 @@ menu:
parent: influx replication
weight: 102
influxdb/cloud/tags: [write, replication]
related:
- /influxdb/cloud/reference/cli/influx/replication
---
{{< duplicate-oss >}}

View File

@ -7,8 +7,6 @@ menu:
parent: influx replication
weight: 102
influxdb/cloud/tags: [write, replication]
related:
- /influxdb/cloud/reference/cli/influx/replication
---
{{< duplicate-oss >}}

View File

@ -0,0 +1,18 @@
---
title: influx scripts
description: The `influx scripts` command and its subcommands manage invokable scripts in InfluxDB.
menu:
influxdb_cloud_ref:
name: influx scripts
parent: influx
weight: 101
influxdb/cloud/tags: [scripts]
cascade:
related:
- /influxdb/cloud/reference/cli/influx/#provide-required-authentication-credentials, influx CLI—Provide required authentication credentials
- /influxdb/cloud/reference/cli/influx/#flag-patterns-and-conventions, influx CLI—Flag patterns and conventions
- /influxdb/cloud/api-guide/api-invokable-scripts/
metadata: [influx CLI 2.4.0+, InfluxDB Cloud only]
---
{{< duplicate-oss >}}

View File

@ -0,0 +1,11 @@
---
title: influx scripts create
description: The `influx scripts create` command creates an invokable script in InfluxDB.
menu:
influxdb_cloud_ref:
name: influx scripts create
parent: influx scripts
weight: 201
---
{{< duplicate-oss >}}

View File

@ -0,0 +1,11 @@
---
title: influx scripts delete
description: The `influx scripts delete` command deletes an invokable script in InfluxDB.
menu:
influxdb_cloud_ref:
name: influx scripts delete
parent: influx scripts
weight: 201
---
{{< duplicate-oss >}}

View File

@ -0,0 +1,11 @@
---
title: influx scripts invoke
description: The `influx scripts invoke` command executes an invokable script in InfluxDB.
menu:
influxdb_cloud_ref:
name: influx scripts invoke
parent: influx scripts
weight: 201
---
{{< duplicate-oss >}}

View File

@ -0,0 +1,11 @@
---
title: influx scripts list
description: The `influx scripts list` command lists and searches for invokable scripts in InfluxDB.
menu:
influxdb_cloud_ref:
name: influx scripts list
parent: influx scripts
weight: 201
---
{{< duplicate-oss >}}

View File

@ -0,0 +1,11 @@
---
title: influx scripts retrieve
description: The `influx scripts retrieve` command retrieves invokable script information from InfluxDB.
menu:
influxdb_cloud_ref:
name: influx scripts retrieve
parent: influx scripts
weight: 201
---
{{< duplicate-oss >}}

View File

@ -0,0 +1,11 @@
---
title: influx scripts update
description: The `influx scripts update` command updates information related to an invokable script in InfluxDB.
menu:
influxdb_cloud_ref:
name: influx scripts update
parent: influx scripts
weight: 201
---
{{< duplicate-oss >}}

View File

@ -0,0 +1,19 @@
---
title: influx v1 shell
description: >
The `influx v1 shell` subcommand starts an InfluxQL shell (REPL).
menu:
influxdb_cloud_ref:
name: influx v1 shell
parent: influx v1
weight: 101
influxdb/cloud/tags: [InfluxQL]
related:
- /influxdb/cloud/reference/cli/influx/#provide-required-authentication-credentials, influx CLI—Provide required authentication credentials
- /influxdb/cloud/reference/cli/influx/#flag-patterns-and-conventions, influx CLI—Flag patterns and conventions
- /influxdb/cloud/query-data/influxql/
- /influxdb/v2.4/tools/influxql-shell/
metadata: [influx CLI 2.4.0+, InfluxDB Cloud]
---
{{< duplicate-oss >}}

View File

@ -0,0 +1,16 @@
---
title: Use the Interactive Flux REPL
description: >
Use the Flux REPL (ReadEvalPrint Loop) to execute Flux scripts and interact
with InfluxDB and other data sources.
influxdb/cloud/tags: [flux]
menu:
influxdb_cloud:
name: Use the Flux REPL
parent: Tools & integrations
weight: 103
aliases:
- /influxdb/cloud/tools/repl/
---
{{< duplicate-oss >}}

View File

@ -0,0 +1,15 @@
---
title: Use the InfluxQL shell
description: >
Use the InfluxQL interactive shell to execute InfluxQL queries and interact with InfluxDB.
menu:
influxdb_cloud:
name: Use the InfluxQL shell
parent: Tools & integrations
weight: 104
influxdb/cloud/tags: [InfluxQL]
related:
- /influxdb/cloud/reference/cli/influx/v1/shell/
---
{{< duplicate-oss >}}

View File

@ -28,7 +28,7 @@ Your queries should guide what data you store in [tags](/influxdb/v1.8/concepts/
## Avoid too many series
IndexDB indexes the following data elements to speed up reads:
InfluxDB indexes the following data elements to speed up reads:
- [measurement](/influxdb/v1.8/concepts/glossary/#measurement)
- [tags](/influxdb/v1.8/concepts/glossary/#tag)

View File

@ -43,11 +43,11 @@ influx stacks init [flags]
##### Initialize a stack with a name and description
```sh
influx stack init -n "Example Stack" -d "InfluxDB stack for monitoring some awesome stuff"
influx stacks init -n "Example Stack" -d "InfluxDB stack for monitoring some awesome stuff"
```
##### Initialize a stack with a name and URLs to associate with the stack
```sh
influx stack init -n "Example Stack" -u https://example.com/template-1.yml
influx stacks init -n "Example Stack" -u https://example.com/template-1.yml
```

View File

@ -20,14 +20,14 @@ Take steps to understand and resolve high series cardinality.
{{% oss-only %}}
IndexDB indexes the following data elements to speed up reads:
InfluxDB indexes the following data elements to speed up reads:
- [measurement](/influxdb/v2.0/reference/glossary/#measurement)
- [tags](/influxdb/v2.0/reference/glossary/#tag)
{{% /oss-only %}}
{{% cloud-only %}}
IndexDB indexes the following data elements to speed up reads:
InfluxDB indexes the following data elements to speed up reads:
- [measurement](/influxdb/v2.0/reference/glossary/#measurement)
- [tags](/influxdb/v2.0/reference/glossary/#tag)
- [field keys](/influxdb/cloud/reference/glossary/#field-key)

View File

@ -43,11 +43,11 @@ influx stacks init [flags]
##### Initialize a stack with a name and description
```sh
influx stack init -n "Example Stack" -d "InfluxDB stack for monitoring some awesome stuff"
influx stacks init -n "Example Stack" -d "InfluxDB stack for monitoring some awesome stuff"
```
##### Initialize a stack with a name and URLs to associate with the stack
```sh
influx stack init -n "Example Stack" -u https://example.com/template-1.yml
influx stacks init -n "Example Stack" -u https://example.com/template-1.yml
```

View File

@ -43,11 +43,11 @@ influx stacks init [flags]
##### Initialize a stack with a name and description
```sh
influx stack init -n "Example Stack" -d "InfluxDB stack for monitoring some awesome stuff"
influx stacks init -n "Example Stack" -d "InfluxDB stack for monitoring some awesome stuff"
```
##### Initialize a stack with a name and URLs to associate with the stack
```sh
influx stack init -n "Example Stack" -u https://example.com/template-1.yml
influx stacks init -n "Example Stack" -u https://example.com/template-1.yml
```

View File

@ -21,14 +21,14 @@ Take steps to understand and resolve high series cardinality.
{{% oss-only %}}
IndexDB indexes the following data elements to speed up reads:
InfluxDB indexes the following data elements to speed up reads:
- [measurement](/influxdb/v2.2/reference/glossary/#measurement)
- [tags](/influxdb/v2.2/reference/glossary/#tag)
{{% /oss-only %}}
{{% cloud-only %}}
IndexDB indexes the following data elements to speed up reads:
InfluxDB indexes the following data elements to speed up reads:
- [measurement](/influxdb/v2.2/reference/glossary/#measurement)
- [tags](/influxdb/v2.2/reference/glossary/#tag)
- [field keys](/influxdb/cloud/reference/glossary/#field-key)

View File

@ -28,7 +28,7 @@ Take steps to understand and resolve high series cardinality.
{{% /oss-only %}}
{{% cloud-only %}}
IndexDB indexes the following data elements to speed up reads:
InfluxDB indexes the following data elements to speed up reads:
- [measurement](/influxdb/v2.3/reference/glossary/#measurement)
- [tags](/influxdb/v2.3/reference/glossary/#tag)
- [field keys](/influxdb/cloud/reference/glossary/#field-key)

View File

@ -0,0 +1,19 @@
---
title: InfluxDB OSS 2.4 documentation
description: >
InfluxDB OSS is an open source time series database designed to handle high write and query loads.
Learn how to use and leverage InfluxDB in use cases such as monitoring metrics, IoT data, and events.
layout: landing-influxdb
menu:
influxdb_2_4:
name: InfluxDB OSS 2.4
weight: 1
---
#### Welcome
Welcome to the InfluxDB v2.4 documentation!
InfluxDB is an open source time series database designed to handle high write and query workloads.
This documentation is meant to help you learn how to use and leverage InfluxDB to meet your needs.
Common use cases include infrastructure monitoring, IoT data collection, events handling, and more.
If your use case involves time series data, InfluxDB is purpose-built to handle it.

View File

@ -0,0 +1,41 @@
---
title: Develop with the InfluxDB API
seotitle: Use the InfluxDB API
description: Interact with InfluxDB 2.4 using a rich API for writing and querying data and more.
weight: 4
menu:
influxdb_2_4:
name: Develop with the API
influxdb/v2.4/tags: [api]
---
The InfluxDB v2 API provides a programmatic interface for interactions with InfluxDB.
Access the InfluxDB API using the `/api/v2/` endpoint.
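To confirm the API is reachable before going further (a sketch assuming a local instance on the default port), query the `/health` endpoint:

```sh
curl -i http://localhost:8086/health
```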
## Developer guides
- [API starter guide](/influxdb/v2.4/api-guide/starter/)
## InfluxDB client libraries
InfluxDB client libraries are language-specific packages that integrate with the InfluxDB v2 API.
For tutorials and information about client libraries, see [InfluxDB client libraries](/{{< latest "influxdb" >}}/api-guide/client-libraries/).
## InfluxDB v2 API documentation
<a class="btn" href="/influxdb/v2.4/api/">InfluxDB OSS {{< current-version >}} API documentation</a>
### View InfluxDB API documentation locally
InfluxDB API documentation is built into the `influxd` service and represents
the API specific to the current version of InfluxDB.
To view the API documentation locally, [start InfluxDB](/influxdb/v2.4/get-started/#start-influxdb)
and visit the `/docs` endpoint in a browser ([localhost:8086/docs](http://localhost:8086/docs)).
## InfluxDB v1 compatibility API documentation
The InfluxDB v2 API includes [InfluxDB 1.x compatibility endpoints](/influxdb/v2.4/reference/api/influxdb-1x/)
that work with InfluxDB 1.x client libraries and third-party integrations like
[Grafana](https://grafana.com) and others.
<a class="btn" href="/influxdb/v2.4/api/v1-compatibility/">View full v1 compatibility API documentation</a>

View File

@ -0,0 +1,75 @@
---
title: API Quick Start
seotitle: Use the InfluxDB API
description: Interact with InfluxDB using a rich API for writing and querying data and more.
weight: 3
menu:
influxdb_2_4:
name: Quick start
parent: Develop with the API
aliases:
- /influxdb/v2.4/tools/api/
influxdb/cloud/tags: [api]
---
InfluxDB offers a rich API and [client libraries](/influxdb/v2.4/api-guide/client-libraries) ready to integrate with your application. Use popular tools like Curl and [Postman](/influxdb/v2.4/api-guide/postman) for rapidly testing API requests.
This section will guide you through the most commonly used API methods.
For detailed documentation on the entire API, see [InfluxDB v2 API Reference](/influxdb/v2.4/reference/api/#influxdb-v2-api-documentation).
{{% note %}}
If you need to use InfluxDB {{< current-version >}} with **InfluxDB 1.x** API clients and integrations, see the [1.x compatibility API](/influxdb/v2.4/reference/api/influxdb-1x/).
{{% /note %}}
## Bootstrap your application
With most API requests, you'll need to provide at least your InfluxDB URL, organization, and API token.
[Install InfluxDB OSS v2.x](/influxdb/v2.4/install/) or upgrade to
an [InfluxDB Cloud account](/influxdb/cloud/sign-up).
### Authentication
InfluxDB uses [API tokens](/influxdb/v2.4/security/tokens/) to authorize API requests.
1. Before exploring the API, use the InfluxDB UI to
[create an initial API token](/influxdb/v2.4/security/tokens/create-token/) for your application.
2. Include your API token in an `Authorization: Token YOUR_API_TOKEN` HTTP header with each request.
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[curl](#curl)
[Node.js](#nodejs)
{{% /code-tabs %}}
{{% code-tab-content %}}
```sh
{{% get-shared-text "api/v2.0/auth/oss/token-auth.sh" %}}
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```js
{{% get-shared-text "api/v2.0/auth/oss/token-auth.js" %}}
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
Postman is another popular tool for exploring APIs. See how to [send authenticated requests with Postman](/{{< latest "influxdb" >}}/api-guide/postman/#send-authenticated-api-requests-with-postman).
## Buckets API
Before writing data you'll need to create a Bucket in InfluxDB.
[Create a bucket](/influxdb/v2.4/organizations/buckets/create-bucket/#create-a-bucket-using-the-influxdb-api) using an HTTP request to the InfluxDB API `/buckets` endpoint.
```sh
{{% get-shared-text "api/v2.0/buckets/oss/create.sh" %}}
```
## Write API
[Write data to InfluxDB](/influxdb/v2.4/write-data/developer-tools/api/) using an HTTP request to the InfluxDB API `/write` endpoint.
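A minimal write request looks like the following (the organization, bucket, and token values are placeholders):

```sh
curl -i -XPOST "http://localhost:8086/api/v2/write?org=YOUR_ORG&bucket=YOUR_BUCKET&precision=s" \
  --header "Authorization: Token YOUR_API_TOKEN" \
  --data-raw "home,room=kitchen temp=21.5"
```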
## Query API
[Query from InfluxDB](/influxdb/v2.4/query-data/execute-queries/influx-api/) using an HTTP request to the `/query` endpoint.
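For example (organization and token are placeholders; the Flux script reads from a hypothetical `example-bucket`):

```sh
curl -XPOST "http://localhost:8086/api/v2/query?org=YOUR_ORG" \
  --header "Authorization: Token YOUR_API_TOKEN" \
  --header "Content-Type: application/vnd.flux" \
  --header "Accept: application/csv" \
  --data 'from(bucket: "example-bucket") |> range(start: -1h)'
```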

View File

@ -0,0 +1,26 @@
---
title: Use InfluxDB client libraries
description: >
InfluxDB client libraries are language-specific tools that integrate with the InfluxDB v2 API.
View the list of available client libraries.
weight: 101
aliases:
- /influxdb/v2.4/reference/client-libraries/
- /influxdb/v2.4/reference/api/client-libraries/
- /influxdb/v2.4/tools/client-libraries/
menu:
influxdb_2_4:
name: Client libraries
parent: Develop with the API
influxdb/v2.4/tags: [client libraries]
---
InfluxDB client libraries are language-specific packages that integrate with the InfluxDB v2 API.
The following **InfluxDB v2** client libraries are available:
{{% note %}}
These client libraries are in active development and may not be feature-complete.
This list will continue to grow as more client libraries are released.
{{% /note %}}
{{< children type="list" >}}

View File

@ -0,0 +1,21 @@
---
title: Arduino client library
seotitle: Use the InfluxDB Arduino client library
list_title: Arduino
description: Use the InfluxDB Arduino client library to interact with InfluxDB.
external_url: https://github.com/tobiasschuerg/InfluxDB-Client-for-Arduino
list_note: _ contributed by [tobiasschuerg](https://github.com/tobiasschuerg)_
menu:
influxdb_2_4:
name: Arduino
parent: Client libraries
params:
url: https://github.com/tobiasschuerg/InfluxDB-Client-for-Arduino
weight: 201
---
Arduino is an open-source hardware and software platform used for building electronics projects.
The documentation for this client library is available on GitHub.
<a href="https://github.com/tobiasschuerg/InfluxDB-Client-for-Arduino" target="_blank" class="btn github">Arduino InfluxDB client</a>

View File

@ -0,0 +1,117 @@
---
title: JavaScript client library for web browsers
seotitle: Use the InfluxDB JavaScript client library for web browsers
list_title: JavaScript for browsers
description: >
Use the InfluxDB JavaScript client library to interact with InfluxDB in web clients.
menu:
influxdb_2_4:
name: JavaScript for browsers
identifier: client_js_browsers
parent: Client libraries
influxdb/v2.4/tags: [client libraries, JavaScript]
weight: 201
aliases:
- /influxdb/v2.4/reference/api/client-libraries/browserjs/
- /influxdb/v2.4/api-guide/client-libraries/browserjs/write
- /influxdb/v2.4/api-guide/client-libraries/browserjs/query
related:
- /influxdb/v2.4/api-guide/client-libraries/nodejs/write/
- /influxdb/v2.4/api-guide/client-libraries/nodejs/query/
---
Use the [InfluxDB JavaScript client library](https://github.com/influxdata/influxdb-client-js) to interact with the InfluxDB API in browsers and front-end clients. This library supports both front-end and server-side environments and provides the following distributions:
* ECMAScript modules (ESM) and CommonJS modules (CJS)
* Bundled ESM
* Bundled UMD
This guide presumes some familiarity with JavaScript, browser environments, and InfluxDB.
If you're just getting started with InfluxDB, see [Get started with InfluxDB](/{{% latest "influxdb" %}}/get-started/).
{{% warn %}}
### Tokens in production applications
{{% api/browser-token-warning %}}
{{% /warn %}}
* [Before you begin](#before-you-begin)
* [Use with module bundlers](#use-with-module-bundlers)
* [Use bundled distributions with browsers and module loaders](#use-bundled-distributions-with-browsers-and-module-loaders)
* [Get started with the example app](#get-started-with-the-example-app)
## Before you begin
1. Install [Node.js](https://nodejs.org/en/download/package-manager/) to serve your front-end app.
2. Ensure that InfluxDB is running and you can connect to it.
For information about what URL to use to connect to InfluxDB OSS or InfluxDB Cloud, see [InfluxDB URLs](/{{% latest "influxdb" %}}/reference/urls/).
## Use with module bundlers
If you use a module bundler like Webpack or Parcel, install `@influxdata/influxdb-client-browser`.
For more information and examples, see [Node.js](/{{% latest "influxdb" %}}/api-guide/client-libraries/nodejs/).
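For example:

```sh
# Install the browser distribution as a project dependency
npm install --save @influxdata/influxdb-client-browser
```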
## Use bundled distributions with browsers and module loaders
1. Configure InfluxDB properties for your script.
```html
<script>
window.INFLUX_ENV = {
url: 'http://localhost:8086',
token: 'YOUR_AUTH_TOKEN'
}
</script>
```
2. Import modules from the latest client library browser distribution.
`@influxdata/influxdb-client-browser` exports bundled ESM and UMD syntaxes.
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[ESM](#import-esm)
[UMD](#import-umd)
{{% /code-tabs %}}
{{% code-tab-content %}}
```html
<script type="module">
import {InfluxDB, Point} from 'https://unpkg.com/@influxdata/influxdb-client-browser/dist/index.browser.mjs'
const influxDB = new InfluxDB({url: INFLUX_ENV.url, token: INFLUX_ENV.token})
</script>
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```html
<script src="https://unpkg.com/@influxdata/influxdb-client-browser"></script>
<script>
const Influx = window['@influxdata/influxdb-client']
const InfluxDB = Influx.InfluxDB
const influxDB = new InfluxDB({url: INFLUX_ENV.url, token: INFLUX_ENV.token})
</script>
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
After you've imported the client library, you're ready to [write data](/{{% latest "influxdb" %}}/api-guide/client-libraries/nodejs/write/?t=nodejs) to InfluxDB.
## Get started with the example app
This library includes an example browser app that queries from and writes to your InfluxDB instance.
1. Clone the [influxdb-client-js](https://github.com/influxdata/influxdb-client-js) repo.
2. Navigate to the `examples` directory:
```sh
cd examples
```
3. Update `./env_browser.js` with your InfluxDB [url](/{{% latest "influxdb" %}}/reference/urls/), [bucket](/{{% latest "influxdb" %}}/organizations/buckets/), [organization](/{{% latest "influxdb" %}}/organizations/), and [token](/{{% latest "influxdb" %}}/security/tokens/)
4. Run the following command to start the application at <http://localhost:3001/examples/index.html>:
```sh
npm run browser
```
`index.html` loads the `env_browser.js` configuration, the client library ESM modules, and the application in your browser.

View File

@ -0,0 +1,20 @@
---
title: C# client library
list_title: C#
seotitle: Use the InfluxDB C# client library
description: Use the InfluxDB C# client library to interact with InfluxDB.
external_url: https://github.com/influxdata/influxdb-client-csharp
menu:
influxdb_2_4:
name: C#
parent: Client libraries
params:
url: https://github.com/influxdata/influxdb-client-csharp
weight: 201
---
C# is a general-purpose object-oriented programming language.
The documentation for this client library is available on GitHub.
<a href="https://github.com/influxdata/influxdb-client-csharp" target="_blank" class="btn github">C# InfluxDB client</a>

View File

@ -0,0 +1,20 @@
---
title: Dart client library
list_title: Dart
seotitle: Use the InfluxDB Dart client library
description: Use the InfluxDB Dart client library to interact with InfluxDB.
external_url: https://github.com/influxdata/influxdb-client-dart
menu:
influxdb_2_4:
name: Dart
parent: Client libraries
params:
url: https://github.com/influxdata/influxdb-client-dart
weight: 201
---
Dart is a programming language created for quick application development for both web and mobile apps.
The documentation for this client library is available on GitHub.
<a href="https://github.com/influxdata/influxdb-client-dart" target="_blank" class="btn github">Dart InfluxDB client</a>

View File

@ -0,0 +1,206 @@
---
title: Go client library
seotitle: Use the InfluxDB Go client library
list_title: Go
description: >
Use the InfluxDB Go client library to interact with InfluxDB.
menu:
influxdb_2_4:
name: Go
parent: Client libraries
influxdb/v2.4/tags: [client libraries, Go]
weight: 201
aliases:
- /influxdb/v2.4/reference/api/client-libraries/go/
- /influxdb/v2.4/tools/client-libraries/go/
---
Use the [InfluxDB Go client library](https://github.com/influxdata/influxdb-client-go) to integrate InfluxDB into Go scripts and applications.
This guide presumes some familiarity with Go and InfluxDB.
If just getting started, see [Get started with InfluxDB](/influxdb/v2.4/get-started/).
## Before you begin
1. [Install Go 1.13 or later](https://golang.org/doc/install).
2. Add the client package to your project dependencies.
```sh
# Add InfluxDB Go client package to your project go.mod
go get github.com/influxdata/influxdb-client-go/v2
```
3. Ensure that InfluxDB is running and you can connect to it.
For information about what URL to use to connect to InfluxDB OSS or InfluxDB Cloud, see [InfluxDB URLs](/influxdb/v2.4/reference/urls/).
## Boilerplate for the InfluxDB Go Client Library
Use the Go library to write and query data from InfluxDB.
1. In your Go program, import the necessary packages and specify the entry point of your executable program.
```go
package main
import (
"context"
"fmt"
"time"
"github.com/influxdata/influxdb-client-go/v2"
)
```
2. Define variables for your InfluxDB [bucket](/influxdb/v2.4/organizations/buckets/), [organization](/influxdb/v2.4/organizations/), and [token](/influxdb/v2.4/security/tokens/).
```go
bucket := "example-bucket"
org := "example-org"
token := "example-token"
// Store the URL of your InfluxDB instance
url := "http://localhost:8086"
```
3. Create the InfluxDB Go client and pass in the `url` and `token` parameters.
```go
client := influxdb2.NewClient(url, token)
```
4. Create a **write client** with the `WriteAPIBlocking` method and pass in the `org` and `bucket` parameters.
```go
writeAPI := client.WriteAPIBlocking(org, bucket)
```
5. To query data, create an InfluxDB **query client** and pass in your InfluxDB `org`.
```go
queryAPI := client.QueryAPI(org)
```
## Write data to InfluxDB with Go
Use the Go library to write data to InfluxDB.
1. Create a [point](/influxdb/v2.4/reference/glossary/#point) and write it to InfluxDB using the `WritePoint` method of the API writer struct.
2. Close the client to flush all pending writes and finish.
```go
p := influxdb2.NewPoint("stat",
map[string]string{"unit": "temperature"},
map[string]interface{}{"avg": 24.5, "max": 45},
time.Now())
writeAPI.WritePoint(context.Background(), p)
client.Close()
```
### Complete example write script
```go
func main() {
bucket := "example-bucket"
org := "example-org"
token := "example-token"
// Store the URL of your InfluxDB instance
url := "http://localhost:8086"
// Create a new client using the server URL and an authentication token
client := influxdb2.NewClient(url, token)
// Use the blocking write client to write to the desired bucket
writeAPI := client.WriteAPIBlocking(org, bucket)
// Create point using full params constructor
p := influxdb2.NewPoint("stat",
map[string]string{"unit": "temperature"},
map[string]interface{}{"avg": 24.5, "max": 45},
time.Now())
// Write point immediately
writeAPI.WritePoint(context.Background(), p)
// Ensure background processes finish
client.Close()
}
```
## Query data from InfluxDB with Go
Use the Go library to query data from InfluxDB.
1. Create a Flux query and supply your `bucket` parameter.
```js
from(bucket:"<bucket>")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "stat")
```
The query client sends the Flux query to InfluxDB and returns the results as a FluxRecord object with a table structure.
**The query client includes the following methods:**
- `Query`: Sends the Flux query to InfluxDB.
- `Next`: Iterates over the query response.
- `TableChanged`: Identifies when the group key changes.
- `Record`: Returns the last parsed FluxRecord and gives access to value and row properties.
- `Value`: Returns the actual field value.
```go
result, err := queryAPI.Query(context.Background(), `from(bucket:"<bucket>")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "stat")`)
if err == nil {
for result.Next() {
if result.TableChanged() {
fmt.Printf("table: %s\n", result.TableMetadata().String())
}
fmt.Printf("value: %v\n", result.Record().Value())
}
if result.Err() != nil {
fmt.Printf("query parsing error: %s\n", result.Err().Error())
}
} else {
panic(err)
}
```
**The FluxRecord object includes the following methods for accessing your data:**
- `Table()`: Returns the index of the table the record belongs to.
- `Start()`: Returns the inclusive lower time bound of all records in the current table.
- `Stop()`: Returns the exclusive upper time bound of all records in the current table.
- `Time()`: Returns the time of the record.
- `Value()`: Returns the actual field value.
- `Field()`: Returns the field name.
- `Measurement()`: Returns the measurement name of the record.
- `Values()`: Returns a map of column values.
- `ValueByKey(<your_tags>)`: Returns a value from the record for given column key.
### Complete example query script
```go
func main() {
// Create client
client := influxdb2.NewClient(url, token)
// Get query client
queryAPI := client.QueryAPI(org)
// Get QueryTableResult
result, err := queryAPI.Query(context.Background(), `from(bucket:"my-bucket")|> range(start: -1h) |> filter(fn: (r) => r._measurement == "stat")`)
if err == nil {
// Iterate over query response
for result.Next() {
// Notice when group key has changed
if result.TableChanged() {
fmt.Printf("table: %s\n", result.TableMetadata().String())
}
// Access data
fmt.Printf("value: %v\n", result.Record().Value())
}
// Check for an error
if result.Err() != nil {
fmt.Printf("query parsing error: %s\n", result.Err().Error())
}
} else {
panic(err)
}
// Ensure background processes finish
client.Close()
}
```
For more information, see the [Go client README on GitHub](https://github.com/influxdata/influxdb-client-go).

View File

@ -0,0 +1,20 @@
---
title: Java client library
seotitle: Use the InfluxDB Java client library
list_title: Java
description: Use the Java client library to interact with InfluxDB.
external_url: https://github.com/influxdata/influxdb-client-java
menu:
influxdb_2_4:
name: Java
parent: Client libraries
params:
url: https://github.com/influxdata/influxdb-client-java
weight: 201
---
Java is one of the oldest and most popular class-based, object-oriented programming languages.
The documentation for this client library is available on GitHub.
<a href="https://github.com/influxdata/influxdb-client-java" target="_blank" class="btn github">Java InfluxDB client</a>

View File

@ -0,0 +1,20 @@
---
title: Kotlin client library
seotitle: Use the Kotlin client library
list_title: Kotlin
description: Use the InfluxDB Kotlin client library to interact with InfluxDB.
external_url: https://github.com/influxdata/influxdb-client-java/tree/master/client-kotlin
menu:
influxdb_2_4:
name: Kotlin
parent: Client libraries
params:
url: https://github.com/influxdata/influxdb-client-java/tree/master/client-kotlin
weight: 201
---
Kotlin is an open-source programming language that runs on the Java Virtual Machine (JVM).
The documentation for this client library is available on GitHub.
<a href="https://github.com/influxdata/influxdb-client-java/tree/master/client-kotlin" target="_blank" class="btn github">Kotlin InfluxDB client</a>

View File

@ -0,0 +1,23 @@
---
title: Node.js JavaScript client library
seotitle: Use the InfluxDB JavaScript client library
list_title: Node.js
description: >
Use the InfluxDB Node.js JavaScript client library to interact with InfluxDB.
menu:
influxdb_2_4:
name: Node.js
parent: Client libraries
influxdb/v2.4/tags: [client libraries, JavaScript]
weight: 201
aliases:
- /influxdb/v2.4/reference/api/client-libraries/nodejs/
- /influxdb/v2.4/reference/api/client-libraries/js/
---
Use the [InfluxDB JavaScript client library](https://github.com/influxdata/influxdb-client-js) to integrate InfluxDB into your Node.js application.
In this guide, you'll start a Node.js project from scratch and code some simple API operations.
{{< children >}}
{{% api/v2dot0/nodejs/learn-more %}}

View File

@ -0,0 +1,97 @@
---
title: Install the InfluxDB JavaScript client library
seotitle: Install the InfluxDB Node.js JavaScript client library
description: >
Install the JavaScript client library to interact with the InfluxDB API in Node.js.
menu:
influxdb_2_4:
name: Install
parent: Node.js
influxdb/v2.4/tags: [client libraries, JavaScript]
weight: 100
aliases:
- /influxdb/v2.4/reference/api/client-libraries/nodejs/install
---
## Install Node.js
1. Install [Node.js](https://nodejs.org/en/download/package-manager/).
2. Ensure that InfluxDB is running and you can connect to it.
For information about what URL to use to connect to InfluxDB OSS or InfluxDB Cloud, see [InfluxDB URLs](/influxdb/v2.4/reference/urls/).
3. Start a new Node.js project.
The `npm` package manager is included with Node.js.
```sh
mkdir influx-node-app && cd influx-node-app
npm init -y
```
## Install TypeScript
Many of the client library examples use [TypeScript](https://www.typescriptlang.org/). Follow these steps to initialize the TypeScript project.
1. Install TypeScript and type definitions for Node.js.
```sh
npm i -g typescript && npm i --save-dev @types/node
```
2. Create a TypeScript configuration with default values.
```sh
tsc --init
```
3. Run the TypeScript compiler. To recompile your code automatically as you make changes, pass the `watch` flag to the compiler.
```sh
tsc -w
```
## Install dependencies
The JavaScript client library contains two packages: `@influxdata/influxdb-client` and `@influxdata/influxdb-client-apis`.
Add both as dependencies of your project.
1. Open a new terminal window and install `@influxdata/influxdb-client` for querying and writing data:
```sh
npm install --save @influxdata/influxdb-client
```
2. Install `@influxdata/influxdb-client-apis` for access to the InfluxDB management APIs:
```sh
npm install --save @influxdata/influxdb-client-apis
```
## Next steps
Once you've installed the JavaScript client library, you're ready to [write data](/influxdb/v2.4/api-guide/client-libraries/nodejs/write/) to InfluxDB or [get started](#get-started-with-examples) with other examples from the client library.
## Get started with examples
{{% note %}}
The client examples include an [`env`](https://github.com/influxdata/influxdb-client-js/blob/master/examples/env.js) module for accessing your InfluxDB properties from environment variables or from `env.js`.
The examples use these properties to interact with the InfluxDB API.
{{% /note %}}
1. Set environment variables or update `env.js` with your InfluxDB [bucket](/influxdb/v2.4/organizations/buckets/), [organization](/influxdb/v2.4/organizations/), [token](/influxdb/v2.4/security/tokens/), and [url](/influxdb/v2.4/reference/urls/).
```sh
export INFLUX_URL=http://localhost:8086
export INFLUX_TOKEN=YOUR_API_TOKEN
export INFLUX_ORG=YOUR_ORG
export INFLUX_BUCKET=YOUR_BUCKET
```
Replace the following:
- *`YOUR_API_TOKEN`*: InfluxDB API token
- *`YOUR_ORG`*: InfluxDB organization ID
- *`YOUR_BUCKET`*: InfluxDB bucket name
2. Run an example script.
```sh
query.ts
```
{{% api/v2dot0/nodejs/learn-more %}}

View File

@ -0,0 +1,94 @@
---
title: Query data with the InfluxDB JavaScript client library
description: >
Use the JavaScript client library to query data with the InfluxDB API in Node.js.
menu:
influxdb_2_4:
name: Query
parent: Node.js
influxdb/v2.4/tags: [client libraries, JavaScript]
weight: 201
aliases:
- /influxdb/v2.4/reference/api/client-libraries/nodejs/query
---
Use the [InfluxDB JavaScript client library](https://github.com/influxdata/influxdb-client-js) in a Node.js environment to query InfluxDB.
The following example sends a Flux query to an InfluxDB bucket and outputs rows from an observable table.
## Before you begin
- [Install the client library and other dependencies](/influxdb/v2.4/api-guide/client-libraries/nodejs/install/).
## Query InfluxDB
1. Change to your new project directory and create a file for your query module.
```sh
cd influx-node-app && touch query.js
```
2. Instantiate an `InfluxDB` client. Provide your InfluxDB URL and API token.
Use the `getQueryApi()` method of the client.
Provide your InfluxDB organization ID to create a configured **query client**.
```js
import { InfluxDB, Point } from '@influxdata/influxdb-client'
const queryApi = new InfluxDB({url: YOUR_URL, token: YOUR_API_TOKEN}).getQueryApi(YOUR_ORG)
```
Replace the following:
- *`YOUR_URL`*: InfluxDB URL
- *`YOUR_API_TOKEN`*: InfluxDB API token
- *`YOUR_ORG`*: InfluxDB organization ID
3. Create a Flux query for your InfluxDB bucket. Store the query as a string variable.
{{% warn %}}
To prevent injection attacks, avoid concatenating unsafe user input with queries.
{{% /warn %}}
```js
const fluxQuery = `from(bucket: "YOUR_BUCKET")
|> range(start: 0)
|> filter(fn: (r) => r._measurement == "temperature")`
```
Replace *`YOUR_BUCKET`* with the name of your InfluxDB bucket.
4. Use the `queryRows()` method of the query client to query InfluxDB.
`queryRows()` takes a Flux query and an [RxJS **Observer**](http://reactivex.io/rxjs/manual/overview.html#observer) object.
The client returns [table](/{{% latest "influxdb" %}}/reference/syntax/annotated-csv/#tables) metadata and rows as an [RxJS **Observable**](http://reactivex.io/rxjs/manual/overview.html#observable).
`queryRows()` subscribes your observer to the observable.
Finally, the observer logs the rows from the response to the terminal.
```js
const observer = {
next(row, tableMeta) {
const o = tableMeta.toObject(row)
console.log(
`${o._time} ${o._measurement} in '${o.location}' (${o.sensor_id}): ${o._field}=${o._value}`
)
}
}
queryApi.queryRows(fluxQuery, observer)
```
### Complete example
```js
{{% get-shared-text "api/v2.0/query/query.mjs" %}}
```
To run the example from a file, set your InfluxDB environment variables and use `node` to execute the JavaScript file.
```sh
export INFLUX_URL=http://localhost:8086 && \
export INFLUX_TOKEN=YOUR_API_TOKEN && \
export INFLUX_ORG=YOUR_ORG && \
node query.js
```
{{% api/v2dot0/nodejs/learn-more %}}

View File

@ -0,0 +1,117 @@
---
title: Write data with the InfluxDB JavaScript client library
description: >
Use the JavaScript client library to write data with the InfluxDB API in Node.js.
menu:
influxdb_2_4:
name: Write
parent: Node.js
influxdb/v2.4/tags: [client libraries, JavaScript]
weight: 101
aliases:
- /influxdb/v2.4/reference/api/client-libraries/nodejs/write
related:
- /influxdb/v2.4/write-data/troubleshoot/
---
Use the [InfluxDB JavaScript client library](https://github.com/influxdata/influxdb-client-js) to write data from a Node.js environment to InfluxDB.
The JavaScript client library includes the following convenient features for writing data to InfluxDB:
- Apply default tags to data points.
- Buffer points into batches to optimize data transfer.
- Automatically retry requests on failure.
- Set an optional HTTP proxy address for your network.
### Before you begin
- [Install the client library and other dependencies](/influxdb/v2.4/api-guide/client-libraries/nodejs/install/).
### Write data with the client library
1. Instantiate an `InfluxDB` client. Provide your InfluxDB URL and API token.
```js
import {InfluxDB, Point} from '@influxdata/influxdb-client'
const influxDB = new InfluxDB({url: YOUR_URL, token: YOUR_API_TOKEN})
```
Replace the following:
- *`YOUR_URL`*: InfluxDB URL
- *`YOUR_API_TOKEN`*: InfluxDB API token
2. Use the `getWriteApi()` method of the client to create a **write client**.
Provide your InfluxDB organization ID and bucket name.
```js
const writeApi = influxDB.getWriteApi(YOUR_ORG, YOUR_BUCKET)
```
Replace the following:
- *`YOUR_ORG`*: InfluxDB organization ID
- *`YOUR_BUCKET`*: InfluxDB bucket name
3. To apply one or more [tags](/influxdb/v2.4/reference/glossary/#tag) to all points, use the `useDefaultTags()` method.
Provide tags as an object of key/value pairs.
```js
writeApi.useDefaultTags({region: 'west'})
```
4. Use the `Point()` constructor to create a [point](/influxdb/v2.4/reference/glossary/#point).
1. Call the constructor and provide a [measurement](/influxdb/v2.4/reference/glossary/#measurement).
2. To add one or more tags, chain the `tag()` method to the constructor.
Provide a `name` and `value`.
3. To add a field of type `float`, chain the `floatField()` method to the constructor.
Provide a `name` and `value`.
```js
const point1 = new Point('temperature')
.tag('sensor_id', 'TLM010')
.floatField('value', 24)
```
5. Use the `writePoint()` method to write the point to your InfluxDB bucket.
Finally, use the `close()` method to flush all pending writes.
The example logs the new data point followed by "WRITE FINISHED" to stdout.
```js
writeApi.writePoint(point1)
writeApi.close().then(() => {
console.log('WRITE FINISHED')
})
```
### Complete example
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[Curl](#curl)
[Node.js](#nodejs)
{{% /code-tabs %}}
{{% code-tab-content %}}
```sh
{{< get-shared-text "api/v2.0/write/write.sh" >}}
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```js
{{< get-shared-text "api/v2.0/write/write.mjs" >}}
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
To run the example from a file, set your InfluxDB environment variables and use `node` to execute the JavaScript file.
```sh
export INFLUX_URL=http://localhost:8086 && \
export INFLUX_TOKEN=YOUR_API_TOKEN && \
export INFLUX_ORG=YOUR_ORG && \
export INFLUX_BUCKET=YOUR_BUCKET && \
node write.js
```
### Response codes
_For information about **InfluxDB API response codes**, see
[InfluxDB API Write documentation](/influxdb/cloud/api/#operation/PostWrite)._

View File

@ -0,0 +1,20 @@
---
title: PHP client library
seotitle: Use the InfluxDB PHP client library
list_title: PHP
description: Use the InfluxDB PHP client library to interact with InfluxDB.
external_url: https://github.com/influxdata/influxdb-client-php
menu:
influxdb_2_4:
name: PHP
parent: Client libraries
params:
url: https://github.com/influxdata/influxdb-client-php
weight: 201
---
PHP is a popular general-purpose scripting language primarily used for web development.
The documentation for this client library is available on GitHub.
<a href="https://github.com/influxdata/influxdb-client-php" target="_blank" class="btn github">PHP InfluxDB client</a>

View File

@ -0,0 +1,176 @@
---
title: Python client library
seotitle: Use the InfluxDB Python client library
list_title: Python
description: >
Use the InfluxDB Python client library to interact with InfluxDB.
menu:
influxdb_2_4:
name: Python
parent: Client libraries
influxdb/v2.4/tags: [client libraries, python]
aliases:
- /influxdb/v2.4/reference/api/client-libraries/python/
- /influxdb/v2.4/reference/api/client-libraries/python-cl-guide/
- /influxdb/v2.4/tools/client-libraries/python/
weight: 201
---
Use the [InfluxDB Python client library](https://github.com/influxdata/influxdb-client-python) to integrate InfluxDB into Python scripts and applications.
This guide presumes some familiarity with Python and InfluxDB.
If just getting started, see [Get started with InfluxDB](/influxdb/v2.4/get-started/).
## Before you begin
1. Install the InfluxDB Python library:
```sh
pip install influxdb-client
```
2. Ensure that InfluxDB is running.
If running InfluxDB locally, visit http://localhost:8086.
(If using InfluxDB Cloud, visit the URL of your InfluxDB Cloud UI.
For example: https://us-west-2-1.aws.cloud2.influxdata.com.)
## Write data to InfluxDB with Python
We are going to write some data in [line protocol](/influxdb/v2.4/reference/syntax/line-protocol/) using the Python library.
1. In your Python program, import the InfluxDB client library and use it to write data to InfluxDB.
```python
import influxdb_client
from influxdb_client.client.write_api import SYNCHRONOUS
```
2. Define a few variables with the name of your [bucket](/influxdb/v2.4/organizations/buckets/), [organization](/influxdb/v2.4/organizations/), and [token](/influxdb/v2.4/security/tokens/).
```python
bucket = "<my-bucket>"
org = "<my-org>"
token = "<my-token>"
# Store the URL of your InfluxDB instance
url="http://localhost:8086"
```
3. Instantiate the client. The `InfluxDBClient` object takes three named parameters: `url`, `org`, and `token`. Pass in the named parameters.
```python
client = influxdb_client.InfluxDBClient(
url=url,
token=token,
org=org
)
```
The `InfluxDBClient` object has a `write_api` method used for configuration.
4. Instantiate a **write client** using the `client` object and the `write_api` method. Use the `write_api` method to configure the writer object.
```python
write_api = client.write_api(write_options=SYNCHRONOUS)
```
5. Create a [point](/influxdb/v2.4/reference/glossary/#point) object and write it to InfluxDB using the `write` method of the API writer object. The write method requires three parameters: `bucket`, `org`, and `record`.
```python
p = influxdb_client.Point("my_measurement").tag("location", "Prague").field("temperature", 25.3)
write_api.write(bucket=bucket, org=org, record=p)
```
### Complete example write script
```python
import influxdb_client
from influxdb_client.client.write_api import SYNCHRONOUS
bucket = "<my-bucket>"
org = "<my-org>"
token = "<my-token>"
# Store the URL of your InfluxDB instance
url="http://localhost:8086"
client = influxdb_client.InfluxDBClient(
url=url,
token=token,
org=org
)
write_api = client.write_api(write_options=SYNCHRONOUS)
p = influxdb_client.Point("my_measurement").tag("location", "Prague").field("temperature", 25.3)
write_api.write(bucket=bucket, org=org, record=p)
```
## Query data from InfluxDB with Python
1. Instantiate the **query client**.
```python
query_api = client.query_api()
```
2. Create a Flux query, and then format it as a Python string.
```python
query = ' from(bucket:"my-bucket")\
|> range(start: -10m)\
|> filter(fn:(r) => r._measurement == "my_measurement")\
|> filter(fn: (r) => r.location == "Prague")\
|> filter(fn:(r) => r._field == "temperature" ) '
```
The query client sends the Flux query to InfluxDB and returns a Flux object with a table structure.
3. Pass the `query()` method two named parameters: `org` and `query`.
```python
result = query_api.query(org=org, query=query)
```
4. Iterate through the tables and records in the Flux object.
- Use the `get_value()` method to return values.
- Use the `get_field()` method to return fields.
```python
results = []
for table in result:
    for record in table.records:
        results.append((record.get_field(), record.get_value()))
print(results)
# Output: [('temperature', 25.3)]
```
**The Flux object provides the following methods for accessing your data:**
- `get_measurement()`: Returns the measurement name of the record.
- `get_field()`: Returns the field name.
- `get_value()`: Returns the actual field value.
- `values`: Returns a map of column values.
- `values.get("<your tag>")`: Returns a value from the record for given column.
- `get_time()`: Returns the time of the record.
- `get_start()`: Returns the inclusive lower time bound of all records in the current table.
- `get_stop()`: Returns the exclusive upper time bound of all records in the current table.
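For example, the following short sketch (assuming the `result` object returned by `query_api.query()` in the previous steps) combines several of these methods:
```python
for table in result:
    for record in table.records:
        # Accessor methods for common columns
        print(record.get_measurement(), record.get_field(), record.get_value(), record.get_time())
        # The values map exposes any column, including tags such as "location"
        print(record.values.get("location"))
```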
### Complete example query script
```python
query_api = client.query_api()
query = ' from(bucket:"my-bucket")\
|> range(start: -10m)\
|> filter(fn:(r) => r._measurement == "my_measurement")\
|> filter(fn: (r) => r.location == "Prague")\
|> filter(fn:(r) => r._field == "temperature" ) '
result = query_api.query(org=org, query=query)
results = []
for table in result:
    for record in table.records:
        results.append((record.get_field(), record.get_value()))
print(results)
# Output: [('temperature', 25.3)]
```
For more information, see the [Python client README on GitHub](https://github.com/influxdata/influxdb-client-python).


@ -0,0 +1,20 @@
---
title: R package client library
list_title: R
seotitle: Use the InfluxDB client R package
description: Use the InfluxDB client R package to interact with InfluxDB.
external_url: https://github.com/influxdata/influxdb-client-r
menu:
influxdb_2_4:
name: R
parent: Client libraries
params:
url: https://github.com/influxdata/influxdb-client-r
weight: 201
---
R is a programming language and software environment for statistical analysis, reporting, and graphical representation primarily used in data science.
The documentation for this client library is available on GitHub.
<a href="https://github.com/influxdata/influxdb-client-r" target="_blank" class="btn github">R InfluxDB client</a>


@ -0,0 +1,20 @@
---
title: Ruby client library
seotitle: Use the InfluxDB Ruby client library
list_title: Ruby
description: Use the InfluxDB Ruby client library to interact with InfluxDB.
external_url: https://github.com/influxdata/influxdb-client-ruby
menu:
influxdb_2_4:
name: Ruby
parent: Client libraries
params:
url: https://github.com/influxdata/influxdb-client-ruby
weight: 201
---
Ruby is a highly flexible, open-source, object-oriented programming language.
The documentation for this client library is available on GitHub.
<a href="https://github.com/influxdata/influxdb-client-ruby" target="_blank" class="btn github">Ruby InfluxDB client</a>


@ -0,0 +1,20 @@
---
title: Scala client library
seotitle: Use the InfluxDB Scala client library
list_title: Scala
description: Use the InfluxDB Scala client library to interact with InfluxDB.
external_url: https://github.com/influxdata/influxdb-client-java/tree/master/client-scala
menu:
influxdb_2_4:
name: Scala
parent: Client libraries
params:
url: https://github.com/influxdata/influxdb-client-java/tree/master/client-scala
weight: 201
---
Scala is a general-purpose programming language that supports both object-oriented and functional programming.
The documentation for this client library is available on GitHub.
<a href="https://github.com/influxdata/influxdb-client-java/tree/master/client-scala" target="_blank" class="btn github">Scala InfluxDB client</a>


@ -0,0 +1,20 @@
---
title: Swift client library
seotitle: Use the InfluxDB Swift client library
list_title: Swift
description: Use the InfluxDB Swift client library to interact with InfluxDB.
external_url: https://github.com/influxdata/influxdb-client-swift
menu:
influxdb_2_4:
name: Swift
parent: Client libraries
params:
url: https://github.com/influxdata/influxdb-client-swift
weight: 201
---
Swift is a programming language created by Apple for building applications across multiple Apple platforms.
The documentation for this client library is available on GitHub.
<a href="https://github.com/influxdata/influxdb-client-swift" target="_blank" class="btn github">Swift InfluxDB client</a>


@ -0,0 +1,57 @@
---
title: Use Postman with the InfluxDB API
description: >
Use [Postman](https://www.postman.com/), a popular tool for exploring APIs,
to interact with the [InfluxDB API](/influxdb/v2.4/api-guide/).
menu:
influxdb_2_4:
parent: Tools & integrations
name: Use Postman
weight: 105
influxdb/v2.4/tags: [api, authentication]
aliases:
- /influxdb/v2.4/reference/api/postman/
---
Use [Postman](https://www.postman.com/), a popular tool for exploring APIs,
to interact with the [InfluxDB API](/influxdb/v2.4/api-guide/).
## Install Postman
Download Postman from the [official downloads page](https://www.postman.com/downloads/).
Or to install with Homebrew on macOS, run the following command:
```sh
brew install --cask postman
```
## Send authenticated API requests with Postman
All requests to the [InfluxDB v2 API](/influxdb/v2.4/api-guide/) must include an [InfluxDB API token](/influxdb/v2.4/security/tokens/).
{{% note %}}
#### Authenticate with a username and password
If you need to send a username and password (`Authorization: Basic`) to the [InfluxDB 1.x compatibility API](/influxdb/v2.4/reference/api/influxdb-1x/), see how to [authenticate with a username and password scheme](/influxdb/v2.4/reference/api/influxdb-1x/#authenticate-with-a-username-and-password-scheme).
{{% /note %}}
To configure Postman to send an [InfluxDB API token](/influxdb/v2.4/security/tokens/) with the `Authorization: Token` HTTP header, do the following:
1. If you have not already, [create a token](/influxdb/v2.4/security/tokens/create-token/).
2. In the Postman **Authorization** tab, select **API Key** in the **Type** dropdown.
3. For **Key**, enter `Authorization`.
4. For **Value**, enter `Token INFLUX_API_TOKEN`, replacing *`INFLUX_API_TOKEN`* with the token generated in step 1.
5. Ensure that the **Add to** option is set to **Header**.
#### Test authentication credentials
To test the authentication, in Postman, enter your InfluxDB API `/api/v2/` root endpoint URL and click **Send**.
###### InfluxDB v2 API root endpoint
```sh
http://localhost:8086/api/v2
```


@ -0,0 +1,13 @@
---
title: InfluxDB API client library tutorials
seotitle: Get started with InfluxDB API client libraries
description: Follow step-by-step tutorials for InfluxDB API client libraries in your favorite framework or language.
weight: 4
menu:
influxdb_2_4:
name: Client library tutorials
parent: Develop with the API
influxdb/v2.4/tags: [api]
---
{{< children >}}


@ -0,0 +1,30 @@
---
title: InfluxDB API client library starter
seotitle: Starter tutorial for InfluxDB API client libraries
description: Follow step-by-step tutorials to build an IoT dashboard with API client libraries in your favorite framework or language.
weight: 4
menu:
influxdb_2_4:
name: Client library starter
parent: Client library tutorials
influxdb/v2.4/tags: [api]
---
Follow step-by-step tutorials to build an Internet-of-Things (IoT) application with InfluxData client libraries and your favorite framework or language.
InfluxData and the user community maintain client libraries for developers who want to take advantage of:
- Idioms for InfluxDB requests, responses, and errors.
- Common patterns in a familiar programming language.
- Faster development and less boilerplate code.
These tutorials walk through using the InfluxDB API and
client libraries to build a modern application as you learn the following:
- InfluxDB core concepts.
- How the application interacts with devices and InfluxDB.
- How to authenticate apps and devices to the API.
- How to install a client library.
- How to write and query data in InfluxDB.
- How to use the InfluxData UI libraries to format data and create visualizations.
{{< children >}}


@ -0,0 +1,521 @@
---
title: JavaScript client library starter
seotitle: Use JavaScript client library to build a sample application
list_title: JavaScript
description: >
Build a JavaScript application that writes, queries, and manages devices with the
InfluxDB client library.
menu:
influxdb_2_4:
identifier: client-library-starter-js
name: JavaScript
parent: Client library starter
influxdb/v2.4/tags: [api, javascript, nodejs]
---
{{% api/iot-starter-intro %}}
## Contents
- [Contents](#contents)
- [Set up InfluxDB](#set-up-influxdb)
- [Authenticate with an InfluxDB API token](#authenticate-with-an-influxdb-api-token)
- [Introducing IoT Starter](#introducing-iot-starter)
- [Create the application](#create-the-application)
- [Install InfluxDB client library](#install-influxdb-client-library)
- [Configure the client library](#configure-the-client-library)
- [Build the API](#build-the-api)
- [Create the API to list devices](#create-the-api-to-list-devices)
- [Handle requests for device information](#handle-requests-for-device-information)
- [Retrieve and list devices](#retrieve-and-list-devices)
- [Create the API to register devices](#create-the-api-to-register-devices)
- [Create an authorization for the device](#create-an-authorization-for-the-device)
- [Write the device authorization to a bucket](#write-the-device-authorization-to-a-bucket)
- [Install and run the UI](#install-and-run-the-ui)
## Set up InfluxDB
If you haven't already, [create an InfluxDB Cloud account](https://www.influxdata.com/products/influxdb-cloud/) or [install InfluxDB OSS](https://www.influxdata.com/products/influxdb/).
### Authenticate with an InfluxDB API token
For convenience in development,
[create an _All-Access_ token](/influxdb/v2.4/security/tokens/create-token/)
for your application. This grants your application full read and write
permissions on all resources within your InfluxDB organization.
{{% note %}}
For a production application, create and use a
{{% cloud-only %}}custom{{% /cloud-only %}}{{% oss-only %}}read-write{{% /oss-only %}}
token with minimal permissions and only use it with your application.
{{% /note %}}
## Introducing IoT Starter
The application architecture has four layers:
- **InfluxDB API**: InfluxDB v2 API.
- **IoT device**: Virtual or physical devices write IoT data to the InfluxDB API.
- **UI**: Sends requests to the server and renders views in the browser.
- **API**: Receives requests from the UI, sends requests to InfluxDB, and processes responses from InfluxDB.
{{% note %}}
For the complete code referenced in this tutorial, see the [influxdata/iot-api-js repository](https://github.com/influxdata/iot-api-js).
{{% /note %}}
## Install Yarn
If you haven't already installed `yarn`, follow the [Yarn package manager installation instructions](https://yarnpkg.com/getting-started/install#nodejs-1610-1) for your version of Node.js.
- To check the installed `yarn` version, enter the following code into your terminal:
```bash
yarn --version
```
## Create the application
Create a directory that will contain your `iot-api` projects.
The following example code creates an `iot-api-apps` directory in your home directory
and changes to the new directory:
```bash
mkdir ~/iot-api-apps
cd ~/iot-api-apps
```
Follow these steps to create a JavaScript application with [Next.js](https://nextjs.org/):
1. In your `~/iot-api-apps` directory, open a terminal and enter the following commands to create the `iot-api-js` app from the Next.js [learn-starter template](https://github.com/vercel/next-learn/tree/master/basics/learn-starter):
```bash
yarn create next-app iot-api-js --example "https://github.com/vercel/next-learn/tree/master/basics/learn-starter"
```
2. After the installation completes, enter the following commands in your terminal to go into your `./iot-api-js` directory and start the development server:
```bash
cd iot-api-js
yarn dev -p 3001
```
To view the application, visit <http://localhost:3001> in your browser.
## Install InfluxDB client library
The InfluxDB client library provides the following InfluxDB API interactions:
- Query data with the Flux language.
- Write data to InfluxDB.
- Batch data in the background.
- Retry requests automatically on failure.
1. Enter the following command into your terminal to install the client library:
```bash
yarn add @influxdata/influxdb-client
```
2. Enter the following command into your terminal to install `@influxdata/influxdb-client-apis`, the _management APIs_ that create, modify, and delete authorizations, buckets, tasks, and other InfluxDB resources:
```bash
yarn add @influxdata/influxdb-client-apis
```
For more information about the client library, see the [influxdata/influxdb-client-js repo](https://github.com/influxdata/influxdb-client-js).
## Configure the client library
InfluxDB client libraries require configuration properties from your InfluxDB environment.
Typically, you'll provide the following properties as environment variables for your application:
- `INFLUX_URL`
- `INFLUX_TOKEN`
- `INFLUX_ORG`
- `INFLUX_BUCKET`
- `INFLUX_BUCKET_AUTH`
Next.js uses the `env` module to provide environment variables to your application.
The `./.env.development` file is versioned and contains non-secret default settings for your _development_ environment.
```bash
# .env.development
INFLUX_URL=http://localhost:8086
INFLUX_BUCKET=iot_center
INFLUX_BUCKET_AUTH=iot_center_devices
```
To configure secrets and settings that aren't added to version control,
create a `./.env.local` file and set the variables--for example, set your InfluxDB token and organization:
```sh
# .env.local
# INFLUX_TOKEN
# InfluxDB API token used by the application server to send requests to InfluxDB.
# For convenience in development, use an **All-Access** token.
INFLUX_TOKEN=29Xx1KH9VkASPR2DSfRfFd82OwGD...
# INFLUX_ORG
# InfluxDB organization ID you want to use in development.
INFLUX_ORG=48c88459ee424a04
```
Enter the following commands into your terminal to restart and load the `.env` files:
1. `CONTROL+C` to stop the application.
2. `yarn dev` to start the application.
Next.js sets variables that you can access in the `process.env` object--for example:
```ts
console.log(process.env.INFLUX_ORG)
```
## Build the API
Your application API provides server-side HTTP endpoints that process requests from the UI.
Each API endpoint is responsible for the following:
1. Listen for HTTP requests (from the UI).
2. Translate requests into InfluxDB API requests.
3. Process InfluxDB API responses and handle errors.
4. Respond with status and data (for the UI).
## Create the API to list devices
Add the `/api/devices` API endpoint that retrieves, processes, and lists devices.
`/api/devices` uses the `/api/v2/query` InfluxDB API endpoint to query `INFLUX_BUCKET_AUTH` for a registered device.
### Handle requests for device information
1. Create a `./pages/api/devices/[[...deviceParams]].js` file to handle requests for `/api/devices` and `/api/devices/<deviceId>/measurements/`.
2. In the file, export a Next.js request `handler` function.
[See the example](https://github.com/influxdata/iot-api-js/blob/18d34bcd59b93ad545c5cd9311164c77f6d1995a/pages/api/devices/%5B%5B...deviceParams%5D%5D.js).
{{% note %}}
In Next.js, the filename pattern `[[...param]].js` creates a _catch-all_ API route.
To learn more, see [Next.js dynamic API routes](https://nextjs.org/docs/api-routes/dynamic-api-routes).
{{% /note %}}
### Retrieve and list devices
Retrieve registered devices in `INFLUX_BUCKET_AUTH` and process the query results.
1. Create a Flux query that gets the last row of each [series](/influxdb/v2.4/reference/glossary#series) that contains a `deviceauth` measurement.
The example query below returns rows that contain the `key` field (authorization ID) and excludes rows that contain a `token` field (to avoid exposing tokens to the UI).
```js
// Flux query finds devices
from(bucket:`${INFLUX_BUCKET_AUTH}`)
|> range(start: 0)
|> filter(fn: (r) => r._measurement == "deviceauth" and r._field != "token")
|> last()
```
2. Use the `QueryApi` client to send the Flux query to the `POST /api/v2/query` InfluxDB API endpoint.
Create a `./pages/api/devices/_devices.js` file that contains the following:
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[Node.js](#nodejs)
{{% /code-tabs %}}
{{% code-tab-content %}}
{{% truncate %}}
```ts
import { InfluxDB } from '@influxdata/influxdb-client'
import { flux } from '@influxdata/influxdb-client'
const INFLUX_ORG = process.env.INFLUX_ORG
const INFLUX_BUCKET_AUTH = process.env.INFLUX_BUCKET_AUTH
const influxdb = new InfluxDB({url: process.env.INFLUX_URL, token: process.env.INFLUX_TOKEN})
/**
* Gets devices or a particular device when deviceId is specified. Tokens
* are not returned unless deviceId is specified. It can also return devices
* with an empty/unknown key; such devices can be ignored (no InfluxDB authorization is associated).
* @param deviceId optional deviceId
* @returns promise with an Record<deviceId, {deviceId, createdAt, updatedAt, key, token}>.
*/
export async function getDevices(deviceId) {
const queryApi = influxdb.getQueryApi(INFLUX_ORG)
const deviceFilter =
deviceId !== undefined
? flux` and r.deviceId == "${deviceId}"`
: flux` and r._field != "token"`
const fluxQuery = flux`from(bucket:${INFLUX_BUCKET_AUTH})
|> range(start: 0)
|> filter(fn: (r) => r._measurement == "deviceauth"${deviceFilter})
|> last()`
const devices = {}
return await new Promise((resolve, reject) => {
queryApi.queryRows(fluxQuery, {
next(row, tableMeta) {
const o = tableMeta.toObject(row)
const deviceId = o.deviceId
if (!deviceId) {
return
}
const device = devices[deviceId] || (devices[deviceId] = {deviceId})
device[o._field] = o._value
if (!device.updatedAt || device.updatedAt < o._time) {
device.updatedAt = o._time
}
},
error: reject,
complete() {
resolve(devices)
},
})
})
}
```
{{% /truncate %}}
{{% caption %}}[iot-api-js/pages/api/devices/_devices.js getDevices(deviceId)](https://github.com/influxdata/iot-api-js/blob/18d34bcd59b93ad545c5cd9311164c77f6d1995a/pages/api/devices/_devices.js){{% /caption %}}
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
The `_devices` module exports a `getDevices(deviceId)` function that queries
for registered devices, processes the data, and returns a Promise with the result.
If you invoke the function as `getDevices()` (without a _`deviceId`_),
it retrieves all `deviceauth` points and returns a Promise with `{ DEVICE_ID: ROW_DATA }`.
To send the query and process results, the `getDevices(deviceId)` function uses the `QueryAPI queryRows(query, consumer)` method.
`queryRows` executes the `query` and provides the Annotated CSV result as an Observable to the `consumer`.
`queryRows` has the following TypeScript signature:
```ts
queryRows(
query: string | ParameterizedQuery,
consumer: FluxResultObserver<string[]>
): void
```
{{% caption %}}[@influxdata/influxdb-client-js QueryAPI](https://github.com/influxdata/influxdb-client-js/blob/3db2942432b993048d152e0d0e8ec8499eedfa60/packages/core/src/QueryApi.ts){{% /caption %}}
The `consumer` that you provide must implement the [`FluxResultObserver` interface](https://github.com/influxdata/influxdb-client-js/blob/3db2942432b993048d152e0d0e8ec8499eedfa60/packages/core/src/results/FluxResultObserver.ts) and provide the following callback functions:
- `next(row, tableMeta)`: processes the next row and table metadata--for example, to prepare the response.
- `error(error)`: receives and handles errors--for example, by rejecting the Promise.
- `complete()`: signals when all rows have been consumed--for example, by resolving the Promise.
To learn more about Observers, see the [RxJS Guide](https://rxjs.dev/guide/observer).
## Create the API to register devices
In this application, a _registered device_ is a point that contains your device ID, authorization ID, and API token.
The API token and authorization permissions allow the device to query and write to `INFLUX_BUCKET`.
In this section, you add the API endpoint that handles requests from the UI, creates an authorization in InfluxDB,
and writes the registered device to the `INFLUX_BUCKET_AUTH` bucket.
To learn more about API tokens and authorizations, see [Manage API tokens](/influxdb/v2.4/security/tokens/)
The application API uses the following `/api/v2` InfluxDB API endpoints:
- `POST /api/v2/query`: to query `INFLUX_BUCKET_AUTH` for a registered device.
- `GET /api/v2/buckets`: to get the bucket ID for `INFLUX_BUCKET`.
- `POST /api/v2/authorizations`: to create an authorization for the device.
- `POST /api/v2/write`: to write the device authorization to `INFLUX_BUCKET_AUTH`.
1. Add a `./pages/api/devices/create.js` file to handle requests for `/api/devices/create`.
2. In the file, export a Next.js request `handler` function that does the following:
1. Accept a device ID in the request body.
2. Query `INFLUX_BUCKET_AUTH` and respond with an error if an authorization exists for the device.
3. [Create an authorization for the device](#create-an-authorization-for-the-device).
4. [Write the device ID and authorization to `INFLUX_BUCKET_AUTH`](#write-the-device-authorization-to-a-bucket).
5. Respond with `HTTP 200` when the write request completes.
[See the example](https://github.com/influxdata/iot-api-js/blob/25b38c94a1f04ea71f2ef4b9fcba5350d691cb9d/pages/api/devices/create.js).
### Create an authorization for the device
In this section, you create an authorization with _read_-_write_ permission to `INFLUX_BUCKET` and receive an API token for the device.
The example below uses the following steps to create the authorization:
1. Instantiate the `AuthorizationsAPI` client and `BucketsAPI` client with the configuration.
2. Retrieve the bucket ID.
3. Use the client library to send a `POST` request to the `/api/v2/authorizations` InfluxDB API endpoint.
In `./pages/api/devices/create.js`, add the following `createAuthorization(deviceId)` function:
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[Node.js](#nodejs)
{{% /code-tabs %}}
{{% code-tab-content %}}
{{% truncate %}}
```js
import { InfluxDB } from '@influxdata/influxdb-client'
import { getDevices } from './_devices'
import { AuthorizationsAPI, BucketsAPI } from '@influxdata/influxdb-client-apis'
import { Point } from '@influxdata/influxdb-client'
const INFLUX_ORG = process.env.INFLUX_ORG
const INFLUX_BUCKET_AUTH = process.env.INFLUX_BUCKET_AUTH
const INFLUX_BUCKET = process.env.INFLUX_BUCKET
const influxdb = new InfluxDB({url: process.env.INFLUX_URL, token: process.env.INFLUX_TOKEN})
/**
* Creates an authorization for a supplied deviceId
* @param {string} deviceId client identifier
* @returns {import('@influxdata/influxdb-client-apis').Authorization} promise with authorization or an error
*/
async function createAuthorization(deviceId) {
const authorizationsAPI = new AuthorizationsAPI(influxdb)
const bucketsAPI = new BucketsAPI(influxdb)
const DESC_PREFIX = 'IoTCenterDevice: '
const buckets = await bucketsAPI.getBuckets({name: INFLUX_BUCKET, orgID: INFLUX_ORG})
const bucketId = buckets.buckets[0]?.id
return await authorizationsAPI.postAuthorizations(
{
body: {
orgID: INFLUX_ORG,
description: DESC_PREFIX + deviceId,
permissions: [
{
action: 'read',
resource: {type: 'buckets', id: bucketId, orgID: INFLUX_ORG},
},
{
action: 'write',
resource: {type: 'buckets', id: bucketId, orgID: INFLUX_ORG},
},
],
},
}
)
}
```
{{% /truncate %}}
{{% caption %}}[iot-api-js/pages/api/devices/create.js](https://github.com/influxdata/iot-api-js/blob/42a37d683b5e4df601422f85d2c22f5e9d592e68/pages/api/devices/create.js){{% /caption %}}
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
To create an authorization that has _read_-_write_ permission to `INFLUX_BUCKET`, you need the bucket ID.
To retrieve the bucket ID,
`createAuthorization(deviceId)` calls the `BucketsAPI getBuckets` function that sends a `GET` request to
the `/api/v2/buckets` InfluxDB API endpoint.
`createAuthorization(deviceId)` then passes a new authorization in the request body with the following:
- Bucket ID.
- Organization ID.
- Description: `IoTCenterDevice: DEVICE_ID`.
- List of permissions to the bucket.
To learn more about API tokens and authorizations, see [Manage API tokens](/influxdb/v2.4/security/tokens/).
Next, [write the device authorization to a bucket](#write-the-device-authorization-to-a-bucket).
### Write the device authorization to a bucket
With a device authorization in InfluxDB, write a point for the device and authorization details to `INFLUX_BUCKET_AUTH`.
Storing the device authorization in a bucket allows you to do the following:
- Report device authorization history.
- Manage devices with and without tokens.
- Assign the same token to multiple devices.
- Refresh tokens.
To write a point to InfluxDB, use the InfluxDB client library to send a `POST` request to the `/api/v2/write` InfluxDB API endpoint.
In `./pages/api/devices/create.js`, add the following `createDevice(deviceId)` function:
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[Node.js](#nodejs)
{{% /code-tabs %}}
{{% code-tab-content %}}
```ts
/** Creates an authorization for a deviceId and writes it to a bucket */
async function createDevice(deviceId) {
let device = (await getDevices(deviceId)) || {}
let authorizationValid = !!Object.values(device)[0]?.key
if(authorizationValid) {
console.log(JSON.stringify(device))
return Promise.reject('This device ID is already registered and has an authorization.')
} else {
console.log(`createDeviceAuthorization: deviceId=${deviceId}`)
const authorization = await createAuthorization(deviceId)
const writeApi = influxdb.getWriteApi(INFLUX_ORG, INFLUX_BUCKET_AUTH, 'ms', {
batchSize: 2,
})
const point = new Point('deviceauth')
.tag('deviceId', deviceId)
.stringField('key', authorization.id)
.stringField('token', authorization.token)
writeApi.writePoint(point)
await writeApi.close()
return
}
}
```
{{% caption %}}[iot-api-js/pages/api/devices/create.js](https://github.com/influxdata/iot-api-js/blob/25b38c94a1f04ea71f2ef4b9fcba5350d691cb9d/pages/api/devices/create.js){{% /caption %}}
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
`createDevice(deviceId)` takes a _`deviceId`_ and writes data to `INFLUX_BUCKET_AUTH` in the following steps:
1. Initialize the `InfluxDB` client with `url` and `token` values from the configuration.
2. Initialize a `WriteAPI` client for writing data to an InfluxDB bucket.
3. Create a `Point`.
4. Use `writeApi.writePoint(point)` to write the `Point` to the bucket.
The function writes a point with the following elements:
| Element | Name | Value |
|:------------|:-----------|:--------------------------|
| measurement | | `deviceauth` |
| tag | `deviceId` | device ID |
| field | `key` | authorization ID |
| field | `token` | authorization (API) token |
## Install and run the UI
`influxdata/iot-api-ui` is a standalone [Next.js React](https://nextjs.org/docs/basic-features/pages) UI that uses your application API to write and query data in InfluxDB.
`iot-api-ui` uses Next.js _[rewrites](https://nextjs.org/docs/api-reference/next.config.js/rewrites)_ to route all requests in the `/api/` path to your API.
To install and run the UI, do the following:
1. In your `~/iot-api-apps` directory, clone the [`influxdata/iot-api-ui` repo](https://github.com/influxdata/iot-api-ui) and go into the `iot-api-ui` directory--for example:
```bash
cd ~/iot-api-apps
git clone git@github.com:influxdata/iot-api-ui.git
cd ./iot-api-ui
```
2. The `./.env.development` file contains default configuration settings that you can
edit or override (with a `./.env.local` file).
3. To start the UI, enter the following command into your terminal:
```bash
yarn dev
```
To view the list and register devices, visit <http://localhost:3000/devices> in your browser.
To learn more about the UI components, see [`influxdata/iot-api-ui`](https://github.com/influxdata/iot-api-ui).


@ -0,0 +1,583 @@
---
title: Python client library starter
seotitle: Use Python client library to build a sample application
list_title: Python
description: >
Build an application that writes, queries, and manages devices with the InfluxDB
client library for Python.
weight: 3
menu:
influxdb_2_4:
identifier: client-library-starter-py
name: Python
parent: Client library starter
influxdb/v2.4/tags: [api, python]
---
{{% api/iot-starter-intro %}}
## Contents
- [Contents](#contents)
- [Set up InfluxDB](#set-up-influxdb)
- [Authenticate with an InfluxDB API token](#authenticate-with-an-influxdb-api-token)
- [Introducing IoT Starter](#introducing-iot-starter)
- [Create the application](#create-the-application)
- [Install InfluxDB client library](#install-influxdb-client-library)
- [Configure the client library](#configure-the-client-library)
- [Build the API](#build-the-api)
- [Create the API to register devices](#create-the-api-to-register-devices)
- [Create an authorization for the device](#create-an-authorization-for-the-device)
- [Write the device authorization to a bucket](#write-the-device-authorization-to-a-bucket)
- [Create the API to list devices](#create-the-api-to-list-devices)
- [Create IoT virtual device](#create-iot-virtual-device)
- [Write telemetry data](#write-telemetry-data)
- [Query telemetry data](#query-telemetry-data)
- [Define API responses](#define-api-responses)
- [Install and run the UI](#install-and-run-the-ui)
## Set up InfluxDB
If you haven't already, [create an InfluxDB Cloud account](https://www.influxdata.com/products/influxdb-cloud/) or [install InfluxDB OSS](https://www.influxdata.com/products/influxdb/).
### Authenticate with an InfluxDB API token
For convenience in development,
[create an _All-Access_ token](/influxdb/v2.4/security/tokens/create-token/)
for your application. This grants your application full read and write
permissions on all resources within your InfluxDB organization.
{{% note %}}
For a production application, create and use a
{{% cloud-only %}}custom{{% /cloud-only %}}{{% oss-only %}}read-write{{% /oss-only %}}
token with minimal permissions and only use it with your application.
{{% /note %}}
## Introducing IoT Starter
The application architecture has four layers:
- **InfluxDB API**: InfluxDB v2 API.
- **IoT device**: Virtual or physical devices write IoT data to the InfluxDB API.
- **UI**: Sends requests to the server and renders views in the browser.
- **API**: Receives requests from the UI, sends requests to InfluxDB,
and processes responses from InfluxDB.
{{% note %}}
For the complete code referenced in this tutorial, see the [influxdata/iot-api-python repository](https://github.com/influxdata/iot-api-python).
{{% /note %}}
## Create the application
Create a directory that will contain your `iot-api` projects.
The following example code creates an `iot-api-apps` directory in your home directory
and changes to the new directory:
```bash
mkdir ~/iot-api-apps
cd ~/iot-api-apps
```
Use [Flask](https://flask.palletsprojects.com/), a lightweight Python web
framework,
to create your application.
1. In your `~/iot-api-apps` directory, open a terminal and enter the following commands to create and navigate into a new project directory:
```bash
mkdir iot-api-python && cd $_
```
2. Enter the following commands in your terminal to create and activate a Python virtual environment for the project:
```bash
# Create a new virtual environment named "virtualenv"
# Python 3.8+
python -m venv virtualenv
# Activate the virtualenv (OS X & Linux)
source virtualenv/bin/activate
```
3. After activation completes, enter the following commands in your terminal to install Flask with the `pip` package installer (included with Python):
```bash
pip install Flask
```
4. In your project, create an `app.py` file that:
1. Imports the Flask package.
2. Instantiates a Flask application.
3. Provides a route to execute the application.
```python
from flask import Flask
app = Flask(__name__)
@app.route("/")
def hello():
return "Hello World!"
```
{{% caption %}}[influxdata/iot-api-python app.py](https://github.com/influxdata/iot-api-python/blob/main/app.py){{% /caption %}}
Start your application.
The following example code starts the application
on `http://localhost:3001` with debugging and hot-reloading enabled:
```bash
export FLASK_ENV=development
flask run -h localhost -p 3001
```
In your browser, visit <http://localhost:3001> to view the “Hello World!” response.
## Install InfluxDB client library
The InfluxDB client library provides the following InfluxDB API interactions:
- Query data with the Flux language.
- Write data to InfluxDB.
- Batch data in the background.
- Retry requests automatically on failure.
Enter the following command into your terminal to install the client library:
```bash
pip install influxdb-client
```
For more information about the client library, see the [influxdata/influxdb-client-python repo](https://github.com/influxdata/influxdb-client-python).
## Configure the client library
InfluxDB client libraries require configuration properties from your InfluxDB environment.
Typically, you'll provide the following properties as environment variables for your application:
- `INFLUX_URL`
- `INFLUX_TOKEN`
- `INFLUX_ORG`
- `INFLUX_BUCKET`
- `INFLUX_BUCKET_AUTH`
To set up the client configuration, create a `config.ini` file in your project's top-level directory and paste the following to provide the necessary InfluxDB credentials:
```ini
[APP]
INFLUX_URL = <INFLUX_URL>
INFLUX_TOKEN = <INFLUX_TOKEN>
INFLUX_ORG = <INFLUX_ORG_ID>
INFLUX_BUCKET = iot_center
INFLUX_BUCKET_AUTH = iot_center_devices
```
{{% caption %}}[/iot-api-python/config.ini](https://github.com/influxdata/iot-api-python/blob/main/config.ini){{% /caption %}}
Replace the following:
- **`<INFLUX_URL>`**: your InfluxDB instance URL.
- **`<INFLUX_TOKEN>`**: your InfluxDB [API token](#authenticate-with-an-influxdb-api-token) with permission to query (_read_) buckets
and create (_write_) authorizations for devices.
- **`<INFLUX_ORG_ID>`**: your InfluxDB organization ID.
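The modules shown later in this tutorial read these properties with Python's standard `configparser`--a minimal sketch of that pattern:
```python
import configparser

# Load the key-value pairs defined in config.ini (shown above).
config = configparser.ConfigParser()
config.read('config.ini')

influx_url = config.get('APP', 'INFLUX_URL')
influx_bucket = config.get('APP', 'INFLUX_BUCKET')
```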
## Build the API
Your application API provides server-side HTTP endpoints that process requests from the UI.
Each API endpoint is responsible for the following:
1. Listen for HTTP requests (from the UI).
2. Translate requests into InfluxDB API requests.
3. Process InfluxDB API responses and handle errors.
4. Respond with status and data (for the UI).
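The handlers you add in the following sections implement this flow. As a rough, hypothetical sketch (it assumes the `get_device()` helper defined later in `./api/devices.py`), each Flask endpoint looks something like this:
```python
from flask import Flask, jsonify

from api import devices  # module created later in this tutorial

app = Flask(__name__)

# 1. Listen for HTTP requests from the UI.
@app.route('/api/devices/<string:device_id>', methods=['GET'])
def list_device(device_id):
    try:
        # 2. Translate the UI request into an InfluxDB API request.
        result = devices.get_device(device_id)
    except Exception as error:
        # 3. Process InfluxDB API errors.
        return jsonify(error=str(error)), 500
    # 4. Respond with status and data for the UI.
    return jsonify(result), 200
```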
## Create the API to register devices
In this application, a _registered device_ is a point that contains your device ID, authorization ID, and API token.
The API token and authorization permissions allow the device to query and write to `INFLUX_BUCKET`.
In this section, you add the API endpoint that handles requests from the UI, creates an authorization in InfluxDB,
and writes the registered device to the `INFLUX_BUCKET_AUTH` bucket.
To learn more about API tokens and authorizations, see [Manage API tokens](/influxdb/v2.4/security/tokens/)
The application API uses the following `/api/v2` InfluxDB API endpoints:
- `POST /api/v2/query`: to query `INFLUX_BUCKET_AUTH` for a registered device.
- `GET /api/v2/buckets`: to get the bucket ID for `INFLUX_BUCKET`.
- `POST /api/v2/authorizations`: to create an authorization for the device.
- `POST /api/v2/write`: to write the device authorization to `INFLUX_BUCKET_AUTH`.
### Create an authorization for the device
In this section, you create an authorization with _read_-_write_ permission to `INFLUX_BUCKET` and receive an API token for the device.
The example below uses the following steps to create the authorization:
1. Instantiate the `AuthorizationsAPI` client and `BucketsAPI` client with the configuration.
2. Retrieve the bucket ID.
3. Use the client library to send a `POST` request to the `/api/v2/authorizations` InfluxDB API endpoint.
Create a `./api/devices.py` file that contains the following:
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[Python](#python)
{{% /code-tabs %}}
{{% code-tab-content %}}
{{% truncate %}}
```python
# Import the dependencies.
import configparser
import os
from datetime import datetime
from uuid import uuid4
# Import client library classes.
from influxdb_client import Authorization, Dialect, InfluxDBClient, Permission, PermissionResource, Point, WriteOptions
from influxdb_client.client.authorizations_api import AuthorizationsApi
from influxdb_client.client.bucket_api import BucketsApi
from influxdb_client.client.query_api import QueryApi
from influxdb_client.client.write_api import SYNCHRONOUS
from api.sensor import Sensor
# Get the configuration key-value pairs.
config = configparser.ConfigParser()
config.read('config.ini')
def create_authorization(device_id) -> Authorization:
influxdb_client = InfluxDBClient(url=config.get('APP', 'INFLUX_URL'),
token=os.environ.get('INFLUX_TOKEN'),
org=os.environ.get('INFLUX_ORG'))
authorization_api = AuthorizationsApi(influxdb_client)
# get bucket_id from bucket
buckets_api = BucketsApi(influxdb_client)
buckets = buckets_api.find_bucket_by_name(config.get('APP', 'INFLUX_BUCKET')) # function returns only 1 bucket
bucket_id = buckets.id
org_id = buckets.org_id
desc_prefix = f'IoTCenterDevice: {device_id}'
org_resource = PermissionResource(org_id=org_id, id=bucket_id, type="buckets")
read = Permission(action="read", resource=org_resource)
write = Permission(action="write", resource=org_resource)
permissions = [read, write]
authorization = Authorization(org_id=org_id, permissions=permissions, description=desc_prefix)
request = authorization_api.create_authorization(authorization)
return request
```
{{% /truncate %}}
{{% caption %}}[iot-api-python/api/devices.py](https://github.com/influxdata/iot-api-python/blob/d389a0e072c7a03dfea99e5663bdc32be94966bb/api/devices.py#L145){{% /caption %}}
To create an authorization that has _read_-_write_ permission to `INFLUX_BUCKET`, you need the bucket ID.
To retrieve the bucket ID, `create_authorization(deviceId)` calls the
`BucketsAPI find_bucket_by_name` function that sends a `GET` request to
the `/api/v2/buckets` InfluxDB API endpoint.
`create_authorization(deviceId)` then passes a new authorization in the request body with the following:
- Bucket ID.
- Organization ID.
- Description: `IoTCenterDevice: DEVICE_ID`.
- List of permissions to the bucket.
To learn more about API tokens and authorizations, see [Manage API tokens](/influxdb/v2.4/security/tokens/).
Next, [write the device authorization to a bucket](#write-the-device-authorization-to-a-bucket).
### Write the device authorization to a bucket
With a device authorization in InfluxDB, write a point for the device and authorization details to `INFLUX_BUCKET_AUTH`.
Storing the device authorization in a bucket allows you to do the following:
- Report device authorization history.
- Manage devices with and without tokens.
- Assign the same token to multiple devices.
- Refresh tokens.
To write a point to InfluxDB, use the InfluxDB client library to send a `POST` request to the `/api/v2/write` InfluxDB API endpoint.
In `./api/devices.py`, add the following `create_device(device_id)` function:
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[Python](#python)
{{% /code-tabs %}}
{{% code-tab-content %}}
```python
def create_device(device_id=None):
influxdb_client = InfluxDBClient(url=config.get('APP', 'INFLUX_URL'),
token=config.get('APP', 'INFLUX_TOKEN'),
org=config.get('APP', 'INFLUX_ORG'))
if device_id is None:
device_id = str(uuid4())
write_api = influxdb_client.write_api(write_options=SYNCHRONOUS)
point = Point('deviceauth') \
.tag("deviceId", device_id) \
.field('key', f'fake_auth_id_{device_id}') \
.field('token', f'fake_auth_token_{device_id}')
client_response = write_api.write(bucket=config.get('APP', 'INFLUX_BUCKET_AUTH'), record=point)
# write() returns None on success
if client_response is None:
return device_id
# Return None on failure
return None
```
{{% caption %}}[iot-api-python/api/devices.py](https://github.com/influxdata/iot-api-python/blob/f354941c80b6bac643ca29efe408fde1deebdc96/api/devices.py#L47){{% /caption %}}
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
`create_device(device_id)` takes a _`device_id`_ and writes data to `INFLUX_BUCKET_AUTH` in the following steps:
1. Initialize `InfluxDBClient()` with `url`, `token`, and `org` values from the configuration.
2. Initialize a `WriteAPI` client for writing data to an InfluxDB bucket.
3. Create a `Point`.
4. Use `write_api.write()` to write the `Point` to the bucket.
5. Check for failures--if the write was successful, `write_api` returns `None`.
6. Return _`device_id`_ if successful; `None` otherwise.
The function writes a point with the following elements:
| Element | Name | Value |
|:------------|:-----------|:--------------------------|
| measurement | | `deviceauth` |
| tag | `deviceId` | device ID |
| field | `key` | authorization ID |
| field | `token` | authorization (API) token |
Next, [create the API to list devices](#create-the-api-to-list-devices).
## Create the API to list devices
Add the `/api/devices` API endpoint that retrieves, processes, and lists registered devices.
1. Create a Flux query that gets the last row of each [series](/influxdb/v2.4/reference/glossary#series) that contains a `deviceauth` measurement.
The example query below returns rows that contain the `key` field (authorization ID) and excludes rows that contain a `token` field (to avoid exposing tokens to the UI).
```js
// Flux query finds devices
from(bucket:`${INFLUX_BUCKET_AUTH}`)
|> range(start: 0)
|> filter(fn: (r) => r._measurement == "deviceauth" and r._field != "token")
|> last()
```
2. Use the `QueryApi` client to send the Flux query to the `POST /api/v2/query` InfluxDB API endpoint.
In `./api/devices.py`, add the following:
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[Python](#python)
{{% /code-tabs %}}
{{% code-tab-content %}}
{{% truncate %}}
```python
def get_device(device_id=None) -> {}:
influxdb_client = InfluxDBClient(url=config.get('APP', 'INFLUX_URL'),
token=os.environ.get('INFLUX_TOKEN'),
org=os.environ.get('INFLUX_ORG'))
# Queries must be formatted with single and double quotes correctly
query_api = QueryApi(influxdb_client)
device_filter = ''
if device_id:
device_id = str(device_id)
device_filter = f'r.deviceId == "{device_id}" and r._field != "token"'
else:
device_filter = f'r._field != "token"'
flux_query = f'from(bucket: "{config.get("APP", "INFLUX_BUCKET_AUTH")}") ' \
f'|> range(start: 0) ' \
f'|> filter(fn: (r) => r._measurement == "deviceauth" and {device_filter}) ' \
f'|> last()'
response = query_api.query(flux_query)
result = []
for table in response:
for record in table.records:
try:
'updatedAt' in record
except KeyError:
record['updatedAt'] = record.get_time()
record[record.get_field()] = record.get_value()
result.append(record.values)
return result
```
{{% /truncate %}}
{{% caption %}}[iot-api-python/api/devices.py get_device()](https://github.com/influxdata/iot-api-python/blob/9bf44a659424a27eb937d545dc0455754354aef5/api/devices.py#L30){{% /caption %}}
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
The `get_device(device_id)` function does the following:
1. Instantiates a `QueryApi` client and sends the Flux query to InfluxDB.
2. Iterates over the `FluxTable` in the response and returns a list of record values.
## Create IoT virtual device
Create a `./api/sensor.py` file that generates simulated weather telemetry data.
Follow the [example code](https://github.com/influxdata/iot-api-python/blob/f354941c80b6bac643ca29efe408fde1deebdc96/api/sensor.py) to create the IoT virtual device.
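If you want a placeholder before copying the full example, the following minimal sketch (an assumption for illustration, not the repository code) implements just the interface that `write_measurements()` uses later--a `geo()` method that returns coordinates and a `generate_measurement()` method that returns a numeric reading:
```python
# api/sensor.py -- minimal stand-in for the example Sensor module
import random


class Sensor:
    def geo(self):
        # Return a coordinate pair; the example code generates more realistic values.
        return {'latitude': 50.126144, 'longitude': 14.504621}

    def generate_measurement(self):
        # Return a random reading to simulate a virtual BME280 sensor.
        return round(random.uniform(0, 100), 2)
```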
Next, generate data for virtual devices and [write the data to InfluxDB](#write-telemetry-data).
## Write telemetry data
In this section, you write telemetry data to an InfluxDB bucket.
To write data, use the InfluxDB client library to send a `POST` request to the `/api/v2/write` InfluxDB API endpoint.
The example below uses the following steps to generate data and then write it to InfluxDB:
1. Initialize a `WriteAPI` instance.
2. Create a `Point` with the `environment` measurement and data fields for temperature, humidity, pressure, latitude, and longitude.
3. Use the `WriteAPI write` method to send the point to InfluxDB.
In `./api/devices.py`, add the following `write_measurements(device_id)` function:
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[Python](#python)
{{% /code-tabs %}}
{{% code-tab-content %}}
```python
def write_measurements(device_id):
influxdb_client = InfluxDBClient(url=config.get('APP', 'INFLUX_URL'),
token=config.get('APP', 'INFLUX_TOKEN'),
org=config.get('APP', 'INFLUX_ORG'))
write_api = influxdb_client.write_api(write_options=SYNCHRONOUS)
virtual_device = Sensor()
coord = virtual_device.geo()
point = Point("environment") \
.tag("device", device_id) \
.tag("TemperatureSensor", "virtual_bme280") \
.tag("HumiditySensor", "virtual_bme280") \
.tag("PressureSensor", "virtual_bme280") \
.field("Temperature", virtual_device.generate_measurement()) \
.field("Humidity", virtual_device.generate_measurement()) \
.field("Pressure", virtual_device.generate_measurement()) \
.field("Lat", coord['latitude']) \
.field("Lon", coord['latitude']) \
.time(datetime.utcnow())
print(f"Writing: {point.to_line_protocol()}")
client_response = write_api.write(bucket=config.get('APP', 'INFLUX_BUCKET'), record=point)
# write() returns None on success
if client_response is None:
# TODO Maybe also return the data that was written
return device_id
# Return None on failure
return None
```
{{% caption %}}[iot-api-python/api/devices.py write_measurement()](https://github.com/influxdata/iot-api-python/blob/f354941c80b6bac643ca29efe408fde1deebdc96/api/devices.py){{% /caption %}}
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
## Query telemetry data
In this section, you retrieve telemetry data from an InfluxDB bucket.
To retrieve data, use the InfluxDB client library to send a `POST` request to the `/api/v2/query` InfluxDB API endpoint.
The example below uses the following steps to retrieve and process telemetry data:
1. Query `environment` measurements in `INFLUX_BUCKET`.
2. Filter results by `device_id`.
3. Return CSV data that the [`influxdata/giraffe` UI library](https://github.com/influxdata/giraffe) can process.
In `./api/devices.py`, add the following `get_measurements(device_id)` function:
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[Python](#python)
{{% /code-tabs %}}
{{% code-tab-content %}}
```python
def get_measurements(query):
influxdb_client = InfluxDBClient(url=config.get('APP', 'INFLUX_URL'),
token=os.environ.get('INFLUX_TOKEN'), org=os.environ.get('INFLUX_ORG'))
query_api = QueryApi(influxdb_client)
result = query_api.query_csv(query,
dialect=Dialect(
header=True,
delimiter=",",
comment_prefix="#",
annotations=['group', 'datatype', 'default'],
date_time_format="RFC3339"))
response = ''
for row in result:
response += (',').join(row) + ('\n')
return response
```
{{% caption %}}[iot-api-python/api/devices.py get_measurements()](https://github.com/influxdata/iot-api-python/blob/9bf44a659424a27eb937d545dc0455754354aef5/api/devices.py#L122){{% /caption %}}
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
## Define API responses
In `app.py`, add API endpoints that match incoming requests and respond with the results of your modules.
In the following `/api/devices/<device_id>` route example, `app.py` retrieves _`device_id`_ from `GET` and `POST` requests, passes it to the `get_device(device_id)` method and returns the result as JSON data with CORS `allow-` headers.
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[Python](#python)
{{% /code-tabs %}}
{{% code-tab-content %}}
```python
@app.route('/api/devices/<string:device_id>', methods=['GET', 'POST'])
def api_get_device(device_id):
if request.method == "OPTIONS": # CORS preflight
return _build_cors_preflight_response()
return _corsify_actual_response(jsonify(devices.get_device(device_id)))
```
{{% caption %}}[iot-api-python/app.py](https://github.com/influxdata/iot-api-python/blob/9bf44a659424a27eb937d545dc0455754354aef5/app.py){{% /caption %}}
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
Enter the following commands into your terminal to restart the application:
1. `CONTROL+C` to stop the application.
2. `flask run -h localhost -p 3001` to start the application.
To retrieve devices data from your API, visit <http://localhost:3001/api/devices> in your browser.
## Install and run the UI
`influxdata/iot-api-ui` is a standalone [Next.js React](https://nextjs.org/docs/basic-features/pages) UI that uses your application API to write and query data in InfluxDB.
`iot-api-ui` uses Next.js _[rewrites](https://nextjs.org/docs/api-reference/next.config.js/rewrites)_ to route all requests in the `/api/` path to your API.
To install and run the UI, do the following:
1. In your `~/iot-api-apps` directory, clone the [`influxdata/iot-api-ui` repo](https://github.com/influxdata/iot-api-ui) and go into the `iot-api-ui` directory--for example:
```bash
cd ~/iot-api-apps
git clone git@github.com:influxdata/iot-api-ui.git
cd ./iot-api-ui
```
2. The `./.env.development` file contains default configuration settings that you can
edit or override (with a `./.env.local` file).
3. To start the UI, enter the following command into your terminal:
```bash
yarn dev
```
To view the list and register devices, visit <http://localhost:3000/devices> in your browser.
To learn more about the UI components, see [`influxdata/iot-api-ui`](https://github.com/influxdata/iot-api-ui).


@ -0,0 +1,17 @@
---
title: Back up and restore data
seotitle: Backup and restore data with InfluxDB
description: >
InfluxDB provides tools that let you back up and restore data and metadata stored
in InfluxDB.
influxdb/v2.4/tags: [backup, restore]
menu:
influxdb_2_4:
name: Back up & restore data
weight: 9
products: [oss]
---
InfluxDB provides tools to back up and restore data and metadata stored in InfluxDB.
{{< children >}}


@ -0,0 +1,49 @@
---
title: Back up data
seotitle: Back up data in InfluxDB
description: >
Use the `influx backup` command to back up data and metadata stored in InfluxDB.
menu:
influxdb_2_4:
parent: Back up & restore data
weight: 101
related:
- /influxdb/v2.4/backup-restore/restore/
- /influxdb/v2.4/reference/cli/influx/backup/
products: [oss]
---
Use the [`influx backup` command](/influxdb/v2.4/reference/cli/influx/backup/) to back up
data and metadata stored in InfluxDB.
InfluxDB copies all data and metadata to a set of files stored in a specified directory
on your local filesystem.
{{% note %}}
#### InfluxDB 1.x/2.x compatibility
The InfluxDB {{< current-version >}} `influx backup` command is not compatible with versions of InfluxDB prior to 2.0.0.
**For information about migrating data between InfluxDB 1.x and {{< current-version >}}, see:**
- [Automatically upgrade from InfluxDB 1.x to {{< current-version >}}](/influxdb/v2.4/upgrade/v1-to-v2/automatic-upgrade/)
- [Manually upgrade from InfluxDB 1.x to {{< current-version >}}](/influxdb/v2.4/upgrade/v1-to-v2/manual-upgrade/)
{{% /note %}}
{{% cloud %}}
The `influx backup` command **cannot** back up data stored in **{{< cloud-name "short" >}}**.
{{% /cloud %}}
The `influx backup` command requires:
- The directory path for where to store the backup file set
- The **root authorization token** (the token created for the first user in the
[InfluxDB setup process](/influxdb/v2.4/get-started/)).
##### Back up data with the influx CLI
```sh
# Syntax
influx backup <backup-path> -t <root-token>
# Example
influx backup \
path/to/backup_$(date '+%Y-%m-%d_%H-%M') \
-t xXXXX0xXX0xxX0xx_x0XxXxXXXxxXX0XXX0XXxXxX0XxxxXX0Xx0xx==
```


@ -0,0 +1,141 @@
---
title: Restore data
seotitle: Restore data in InfluxDB
description: >
Use the `influx restore` command to restore backup data and metadata from InfluxDB.
menu:
influxdb_2_4:
parent: Back up & restore data
weight: 101
influxdb/v2.4/tags: [restore]
related:
- /influxdb/v2.4/backup-restore/backup/
- /influxdb/v2.4/reference/cli/influxd/restore/
products: [oss]
---
{{% cloud %}}
Restores are **not supported in {{< cloud-name "short" >}}**.
{{% /cloud %}}
Use the `influx restore` command to restore backup data and metadata from InfluxDB OSS.
- [Restore data with the influx CLI](#restore-data-with-the-influx-cli)
- [Recover from a failed restore](#recover-from-a-failed-restore)
During the restore process, InfluxDB moves existing data and metadata to a temporary location.
If the restore fails, InfluxDB preserves the temporary data for recovery; otherwise, this data is deleted.
_See [Recover from a failed restore](#recover-from-a-failed-restore)._
{{% note %}}
#### Cannot restore to existing buckets
The `influx restore` command cannot restore data to existing buckets.
Use the `--new-bucket` flag to create a new bucket to restore data to.
To restore data and retain bucket names, [delete existing buckets](/influxdb/v2.4/organizations/buckets/delete-bucket/)
and then begin the restore process.
{{% /note %}}
## Restore data with the influx CLI
Use the `influx restore` command and specify the path to the backup directory.
_For more information about restore options and flags, see the
[`influx restore` documentation](/influxdb/v2.4/reference/cli/influx/restore/)._
- [Restore all time series data](#restore-all-time-series-data)
- [Restore data from a specific bucket](#restore-data-from-a-specific-bucket)
- [Restore and replace all InfluxDB data](#restore-and-replace-all-influxdb-data)
### Restore all time series data
To restore all time series data from a backup directory, provide the following:
- backup directory path
```sh
influx restore /backups/2020-01-20_12-00/
```
### Restore data from a specific bucket
To restore data from a specific backup bucket, provide the following:
- backup directory path
- bucket name or ID
```sh
influx restore \
/backups/2020-01-20_12-00/ \
--bucket example-bucket
# OR
influx restore \
/backups/2020-01-20_12-00/ \
--bucket-id 000000000000
```
If a bucket with the same name as the backed up bucket already exists in InfluxDB,
use the `--new-bucket` flag to create a new bucket with a different name and
restore data into it.
```sh
influx restore \
/backups/2020-01-20_12-00/ \
--bucket example-bucket \
--new-bucket new-example-bucket
```
### Restore and replace all InfluxDB data
To restore and replace all time series data _and_ InfluxDB key-value data such as
tokens, users, dashboards, etc., include the following:
- `--full` flag
- backup directory path
```sh
influx restore \
/backups/2020-01-20_12-00/ \
--full
```
{{% note %}}
#### Restore to a new InfluxDB server
If using a backup to populate a new InfluxDB server:
1. Retrieve the [admin token](/influxdb/v2.4/security/tokens/#admin-token) from your source InfluxDB instance.
2. Set up your new InfluxDB instance, but use the `-t`, `--token` flag to use the
**admin token** from your source instance as the admin token on your new instance.
```sh
influx setup --token My5uP3rSecR37t0keN
```
3. Restore the backup to the new server.
```sh
influx restore \
/backups/2020-01-20_12-00/ \
--full
```
If you do not provide the admin token from your source InfluxDB instance as the
admin token in your new instance, the restore process and all subsequent attempts
to authenticate with the new server will fail.
1. The first restore API call uses the auto-generated token to authenticate with
the new server and overwrites the entire key-value store in the new server, including
the auto-generated token.
2. The second restore API call attempts to upload time series data, but uses the
auto-generated token to authenticate with the new server.
Because that token was overwritten by the first restore API call, the process fails to authenticate.
{{% /note %}}
## Recover from a failed restore
If the restoration process fails, InfluxDB preserves existing data in a `tmp`
directory in the [target engine path](/influxdb/v2.4/reference/cli/influx/restore/#flags)
(default is `~/.influxdbv2/engine`).
To recover from a failed restore (see the example after these steps):
1. Copy the temporary files back into the `engine` directory.
2. Remove the `.tmp` extensions from each of the copied files.
3. Restart the `influxd` server.
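The following is a sketch only, assuming the preserved files still carry a `.tmp` extension under the default engine path; adjust the paths to match your setup:
```sh
# Default engine path; change this if you restored to a custom engine path
ENGINE_PATH=~/.influxdbv2/engine
# Copy each preserved file back into place without its .tmp extension
find "$ENGINE_PATH" -name '*.tmp' | while read -r f; do
  cp "$f" "${f%.tmp}"
done
# Restart the influxd server
influxd
```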


@ -0,0 +1,71 @@
---
title: Get started with InfluxDB
description: >
  Start collecting, processing, and visualizing data in InfluxDB OSS.
menu:
  influxdb_2_4:
    name: Get started
weight: 3
influxdb/v2.4/tags: [get-started]
aliases:
  - /influxdb/v2.4/introduction/get-started/
---
After you've [installed InfluxDB OSS](/influxdb/v2.4/install/), you're ready to get started. Explore the following ways to work with your data:
- [Collect and write data](#collect-and-write-data)
- [Query data](#query-data)
- [Process data](#process-data)
- [Visualize data](#visualize-data)
- [Monitor and alert](#monitor-and-alert)
**Note:** To run InfluxDB, start the `influxd` daemon ([InfluxDB service](/influxdb/v2.4/reference/cli/influxd/)). To interact with your instance from the command line, use the [`influx` CLI](/influxdb/v2.4/reference/cli/influx/). Once you've started the `influxd` daemon, use `localhost:8086` to log in to your InfluxDB instance.
To start InfluxDB, do the following:
1. Open a terminal.
2. Type `influxd` in the command line.
```sh
influxd
```
### Collect and write data
Collect and write data to InfluxDB using the Telegraf plugins, the InfluxDB v2 API, the `influx` command line interface (CLI), the InfluxDB UI (the user interface for InfluxDB 2.4), or the InfluxDB v2 API client libraries.
#### Use Telegraf
Use Telegraf to quickly write data to {{< cloud-name >}}.
Create new Telegraf configurations automatically in the InfluxDB UI, or manually update an existing Telegraf configuration to send data to your {{< cloud-name "short" >}} instance.
For details, see [Automatically configure Telegraf](/influxdb/v2.4/write-data/no-code/use-telegraf/auto-config/)
and [Manually update Telegraf configurations](/influxdb/v2.4/write-data/no-code/use-telegraf/manual-config/).
#### Scrape data
**InfluxDB OSS** lets you scrape Prometheus-formatted metrics from HTTP endpoints. For details, see [Scrape data](/influxdb/v2.4/write-data/no-code/scrape-data/).
#### API, CLI, and client libraries
For information about using the InfluxDB v2 API, `influx` CLI, and client libraries to write data, see [Write data to InfluxDB](/influxdb/v2.4/write-data/).
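For example, a minimal write with the `influx` CLI might look like the following (the bucket, organization, and data are placeholders):
```sh
# Write one point of line protocol with second-precision timestamps
influx write \
  --bucket example-bucket \
  --org example-org \
  --precision s \
  "home,room=kitchen temp=22.5 1663286400"
```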
### Query data
Query data using Flux, the UI, and the `influx` command line interface.
See [Query data](/influxdb/v2.4/query-data/).
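For example, a minimal Flux query with the `influx` CLI might look like the following (the bucket name is a placeholder):
```sh
# Return data written to example-bucket in the last hour
influx query 'from(bucket: "example-bucket") |> range(start: -1h)'
```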
### Process data
Use InfluxDB tasks to process and downsample data. See [Process data](/influxdb/v2.4/process-data/).
### Visualize data
Build custom dashboards to visualize your data.
See [Visualize data](/influxdb/v2.4/visualize-data/).
### Monitor and alert
Monitor your data and send alerts based on specified logic.
See [Monitor and alert](/influxdb/v2.4/monitor-alert/).
{{< influxdbu "influxdb-101" >}}


@ -0,0 +1,98 @@
---
title: InfluxDB templates
description: >
  InfluxDB templates are prepackaged InfluxDB configurations that contain everything
  from dashboards and Telegraf configurations to notifications and alerts.
menu: influxdb_2_4
weight: 10
influxdb/v2.4/tags: [templates]
---
InfluxDB templates are prepackaged InfluxDB configurations that contain everything
from dashboards and Telegraf configurations to notifications and alerts.
Use templates to monitor your technology stack,
set up a fresh instance of InfluxDB, back up your dashboard configuration, or
[share your configuration](https://github.com/influxdata/community-templates/) with the InfluxData community.
**InfluxDB templates do the following:**
- Reduce setup time by giving you resources that are already configured for your use-case.
- Facilitate secure, portable, and source-controlled InfluxDB resource states.
- Simplify sharing and using pre-built InfluxDB solutions.
{{< youtube 2JjW4Rym9XE >}}
<a class="btn github" href="https://github.com/influxdata/community-templates/" target="_blank">View InfluxDB community templates</a>
## Template manifests
A template **manifest** is a file that defines
InfluxDB [resources](#template-resources).
Template manifests support the following formats:
- [YAML](https://yaml.org/)
- [JSON](https://www.json.org/)
- [Jsonnet](https://jsonnet.org/)
{{% note %}}
Template manifests are compatible with
[Kubernetes Custom Resource Definitions (CRD)](https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/).
{{% /note %}}
The `metadata.name` field in manifests uniquely identifies each resource in the template.
`metadata.name` values must be [DNS-1123](https://tools.ietf.org/html/rfc1123) compliant.
The `spec` object contains the resource configuration.
#### Example
```yaml
# bucket-template.yml
# Template manifest that defines two buckets.
apiVersion: influxdata.com/v2alpha1
kind: Bucket
metadata:
  name: thirsty-shaw-91b005
spec:
  description: My IoT Center Bucket
  name: iot-center
  retentionRules:
    - everySeconds: 86400
      type: expire
---
apiVersion: influxdata.com/v2alpha1
kind: Bucket
metadata:
  name: upbeat-fermat-91b001
spec:
  name: air_sensor
---
```
_See [Create an InfluxDB template](/influxdb/v2.4/influxdb-templates/create/) for information about
generating template manifests._
### Template resources
Templates may contain the following InfluxDB resources:
- [buckets](/influxdb/v2.4/organizations/buckets/create-bucket/)
- [checks](/influxdb/v2.4/monitor-alert/checks/create/)
- [dashboards](/influxdb/v2.4/visualize-data/dashboards/create-dashboard/)
- [dashboard variables](/influxdb/v2.4/visualize-data/variables/create-variable/)
- [labels](/influxdb/v2.4/visualize-data/labels/)
- [notification endpoints](/influxdb/v2.4/monitor-alert/notification-endpoints/create/)
- [notification rules](/influxdb/v2.4/monitor-alert/notification-rules/create/)
- [tasks](/influxdb/v2.4/process-data/manage-tasks/create-task/)
- [Telegraf configurations](/influxdb/v2.4/write-data/no-code/use-telegraf/)
## Stacks
Use **InfluxDB stacks** to manage InfluxDB templates.
When you apply a template, InfluxDB associates resources in the template with a stack.
Use stacks to add, update, or remove InfluxDB templates over time.
For more information, see [InfluxDB stacks](#influxdb-stacks) below.
---
{{< children >}}


@ -0,0 +1,291 @@
---
title: Create an InfluxDB template
description: >
  Use the InfluxDB UI and the `influx export` command to create InfluxDB templates.
menu:
  influxdb_2_4:
    parent: InfluxDB templates
    name: Create a template
    identifier: Create an InfluxDB template
weight: 103
influxdb/v2.4/tags: [templates]
related:
  - /influxdb/v2.4/reference/cli/influx/export/
  - /influxdb/v2.4/reference/cli/influx/export/all/
---
Use the InfluxDB user interface (UI) and the [`influx export` command](/influxdb/v2.4/reference/cli/influx/export/) to
create InfluxDB templates from [resources](/influxdb/v2.4/influxdb-templates/#template-resources) in an organization.
Add buckets, Telegraf configurations, tasks, and more in the InfluxDB
UI and then export those resources as a template.
{{< youtube 714uHkxKM6U >}}
- [Create a template](#create-a-template)
- [Export resources to a template](#export-resources-to-a-template)
- [Include user-definable resource names](#include-user-definable-resource-names)
- [Troubleshoot template results and permissions](#troubleshoot-template-results-and-permissions)
- [Share your InfluxDB templates](#share-your-influxdb-templates)
## Create a template
Creating a new organization to contain only your template resources is an easy way
to ensure you export the resources you want.
Follow these steps to create a template from a new organization.
1. [Start InfluxDB](/influxdb/v2.4/get-started/).
2. [Create a new organization](/influxdb/v2.4/organizations/create-org/).
3. In the InfluxDB UI, add one or more [resources](/influxdb/v2.4/influxdb-templates/#template-resources).
4. [Create an **All-Access** API token](/influxdb/v2.4/security/tokens/create-token/) (or a token that has **read** access to the organization).
5. Use the API token from **Step 4** with the [`influx export all` subcommand](/influxdb/v2.4/reference/cli/influx/export/all/) to export all resources in the organization to a template file.
```sh
influx export all \
-o YOUR_INFLUX_ORG \
-t YOUR_ALL_ACCESS_TOKEN \
-f ~/templates/template.yml
```
## Export resources to a template
The [`influx export` command](/influxdb/v2.4/reference/cli/influx/export/) and subcommands let you
export [resources](#template-resources) from an organization to a template manifest.
Your [API token](/influxdb/v2.4/security/tokens/) must have **read** access to resources that you want to export.
If you want to export resources that depend on other resources, be sure to export the dependencies.
{{< cli/influx-creds-note >}}
To create a template that **adds, modifies, and deletes resources** when applied to an organization, use [InfluxDB stacks](/influxdb/v2.4/influxdb-templates/stacks/).
First, [initialize the stack](/influxdb/v2.4/influxdb-templates/stacks/init/)
and then [export the stack](#export-a-stack).
To create a template that only **adds resources** when applied to an organization (and doesn't modify existing resources there), choose one of the following:
- [Export all resources](#export-all-resources) to export all resources or a filtered
subset of resources to a template.
- [Export specific resources](#export-specific-resources) by name or ID to a template.
### Export all resources
To export all [resources](/influxdb/v2.4/influxdb-templates/#template-resources)
within an organization to a template manifest file, use the
[`influx export all` subcommand](/influxdb/v2.4/reference/cli/influx/export/all/)
with the `--file` (`-f`) option.
Provide the following:
- **Destination path and filename** for the template manifest.
The filename extension determines the output format:
- `your-template.yml`: [YAML](https://yaml.org/) format
- `your-template.json`: [JSON](https://json.org/) format
```sh
# Syntax
influx export all -f <FILE_PATH>
```
#### Export resources filtered by labelName or resourceKind
The [`influx export all` subcommand](/influxdb/v2.4/reference/cli/influx/export/all/)
accepts a `--filter` option that exports
only resources that match specified label names or resource kinds.
To filter on label name *and* resource kind, provide a `--filter` for each.
#### Export only dashboards and buckets with specific labels
The following example exports resources that match this predicate logic:
```js
(resourceKind == "Bucket" or resourceKind == "Dashboard")
and
(labelName == "Example1" or labelName == "Example2")
```
```sh
influx export all \
-f ~/templates/template.yml \
--filter=resourceKind=Bucket \
--filter=resourceKind=Dashboard \
--filter=labelName=Example1 \
--filter=labelName=Example2
```
For more options and examples, see the
[`influx export all` subcommand](/influxdb/v2.4/reference/cli/influx/export/all/).
### Export specific resources
To export specific [resources](/influxdb/v2.4/influxdb-templates/#template-resources) by name or ID, use the **[`influx export` command](/influxdb/v2.4/reference/cli/influx/export/)** with one or more lists of resources to include.
Provide the following:
- **Destination path and filename** for the template manifest.
The filename extension determines the output format:
- `your-template.yml`: [YAML](https://yaml.org/) format
- `your-template.json`: [JSON](https://json.org/) format
- **Resource options** with corresponding lists of resource IDs or resource names to include in the template.
For information about what resource options are available, see the
[`influx export` command](/influxdb/v2.4/reference/cli/influx/export/).
```sh
# Syntax
influx export -f <file-path> [resource-flags]
```
#### Export specific resources by ID
```sh
influx export \
--org-id ed32b47572a0137b \
-f ~/templates/template.yml \
-t $INFLUX_TOKEN \
--buckets=00x000ooo0xx0xx,o0xx0xx00x000oo \
--dashboards=00000xX0x0X00x000 \
--telegraf-configs=00000x0x000X0x0X0
```
#### Export specific resources by name
```sh
influx export \
--org-id ed32b47572a0137b \
-f ~/templates/template.yml \
--bucket-names=bucket1,bucket2 \
--dashboard-names=dashboard1,dashboard2 \
--telegraf-config-names=telegrafconfig1,telegrafconfig2
```
### Export a stack
To export an InfluxDB [stack](/influxdb/v2.4/influxdb-templates/stacks/) and all its associated resources as a template, use the
`influx export stack` command.
Provide the following:
- **Organization name** or **ID**
- **API token** with read access to the organization
- **Destination path and filename** for the template manifest.
The filename extension determines the output format:
- `your-template.yml`: [YAML](https://yaml.org/) format
- `your-template.json`: [JSON](https://json.org/) format
- **Stack ID**
#### Export a stack as a template
```sh
# Syntax
influx export stack \
-o <INFLUX_ORG> \
-t <INFLUX_TOKEN> \
-f <FILE_PATH> \
<STACK_ID>
# Example
influx export stack \
-o my-org \
-t mYSuP3RS3CreTt0K3n \
-f ~/templates/awesome-template.yml \
05dbb791a4324000
```
## Include user-definable resource names
After exporting a template manifest, replace resource names with **environment references**
to let users customize resource names when installing your template.
1. [Export resources to a template](#export-resources-to-a-template).
2. Select any of the following resource fields to update:
- `metadata.name`
- `associations[].name`
- `endpointName` _(unique to `NotificationRule` resources)_
3. Replace the resource field value with an `envRef` object with a `key` property
that references the key of a key-value pair the user provides when installing the template.
During installation, the `envRef` object is replaced by the value of the
referenced key-value pair.
If the user does not provide the environment reference key-value pair, InfluxDB
uses the `key` string as the default value.
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[YAML](#)
[JSON](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```yml
apiVersion: influxdata.com/v2alpha1
kind: Bucket
metadata:
  name:
    envRef:
      key: bucket-name-1
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```json
{
  "apiVersion": "influxdata.com/v2alpha1",
  "kind": "Bucket",
  "metadata": {
    "name": {
      "envRef": {
        "key": "bucket-name-1"
      }
    }
  }
}
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
Using the example above, users are prompted to provide a value for `bucket-name-1`
when [applying the template](/influxdb/v2.4/influxdb-templates/use/#apply-templates).
Users can also include the `--env-ref` flag with the appropriate key-value pair
when installing the template.
```sh
# Set bucket-name-1 to "myBucket"
influx apply \
-f /path/to/template.yml \
--env-ref=bucket-name-1=myBucket
```
_If sharing your template, we recommend documenting what environment references
exist in the template and what keys to use to replace them._
{{% note %}}
#### Resource fields that support environment references
Only the following fields support environment references:
- `metadata.name`
- `spec.endpointName`
- `spec.associations.name`
{{% /note %}}
## Troubleshoot template results and permissions
If you get unexpected results, missing resources, or errors when exporting
templates, check the following:
- [Ensure `read` access](#ensure-read-access)
- [Use Organization ID](#use-organization-id)
- [Check for resource dependencies](#check-for-resource-dependencies)
### Ensure read access
The [API token](/influxdb/v2.4/security/tokens/) must have **read** access to resources that you want to export. The `influx export all` command only exports resources that the API token can read. For example, to export all resources in an organization that has ID `abc123`, the API token must have the `read:/orgs/abc123` permission.
To learn more about permissions, see [how to view authorizations](/influxdb/v2.4/security/tokens/view-tokens/) and [how to create a token](/influxdb/v2.4/security/tokens/create-token/) with specific permissions.
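For example, to quickly check which permissions an existing token has, you can list your authorizations with the `influx auth list` command:
```sh
# List tokens and their permissions to confirm the token can read
# the organization and resources you want to export
influx auth list --json
```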
### Use Organization ID
If your token doesn't have **read** access to the organization and you want to [export specific resources](#export-specific-resources), use the `--org-id <org-id>` flag (instead of `-o <org-name>` or `--org <org-name>`) to provide the organization.
### Check for resource dependencies
If you want to export resources that depend on other resources, be sure to export the dependencies as well. Otherwise, the resources may not be usable.
## Share your InfluxDB templates
Share your InfluxDB templates with the entire InfluxData community.
Contribute your template to the [InfluxDB Community Templates](https://github.com/influxdata/community-templates/) repository on GitHub.
<a class="btn" href="https://github.com/influxdata/community-templates/" target="\_blank">View InfluxDB Community Templates</a>


@ -0,0 +1,26 @@
---
title: InfluxDB stacks
description: >
  Use an InfluxDB stack to manage your InfluxDB templates—add, update, or remove templates over time.
menu:
  influxdb_2_4:
    parent: InfluxDB templates
weight: 105
related:
  - /influxdb/v2.4/reference/cli/influx/pkg/stack/
---
Use InfluxDB stacks to manage [InfluxDB templates](/influxdb/v2.4/influxdb-templates).
When you apply a template, InfluxDB associates resources in the template with a stack. Use the stack to add, update, or remove InfluxDB templates over time.
{{< children type="anchored-list" >}}
{{< children readmore=true >}}
{{% note %}}
**Key differences between stacks and templates**:
- A template defines a set of resources in a text file outside of InfluxDB. When you apply a template, a stack is automatically created to manage the applied template.
- Stacks add, modify or delete resources in an instance.
- Templates do not recognize resources in an instance. All resources in the template are added, creating duplicate resources if a resource already exists.
{{% /note %}}


@ -0,0 +1,73 @@
---
title: Initialize an InfluxDB stack
list_title: Initialize a stack
description: >
  InfluxDB automatically creates a new stack each time you [apply an InfluxDB template](/influxdb/v2.4/influxdb-templates/use/)
  **without providing a stack ID**.
  To manually create or initialize a new stack, use the [`influx stacks init` command](/influxdb/v2.4/reference/cli/influx/stacks/init/).
menu:
  influxdb_2_4:
    parent: InfluxDB stacks
    name: Initialize a stack
weight: 202
related:
  - /influxdb/v2.4/reference/cli/influx/stacks/init/
list_code_example: |
  ```sh
  influx apply \
    -o example-org \
    -f path/to/template.yml
  ```

  ```sh
  influx stacks init \
    -o example-org \
    -n "Example Stack" \
    -d "InfluxDB stack for monitoring some awesome stuff" \
    -u https://example.com/template-1.yml \
    -u https://example.com/template-2.yml
  ```
---
InfluxDB automatically creates a new stack each time you [apply an InfluxDB template](/influxdb/v2.4/influxdb-templates/use/)
**without providing a stack ID**.
To manually create or initialize a new stack, use the [`influx stacks init` command](/influxdb/v2.4/reference/cli/influx/stacks/init/).
## Initialize a stack when applying a template
To automatically create a new stack when [applying an InfluxDB template](/influxdb/v2.4/influxdb-templates/use/),
**don't provide a stack ID**.
InfluxDB applies the resources in the template to a new stack and provides the **stack ID** in the output.
```sh
influx apply \
-o example-org \
-f path/to/template.yml
```
## Manually initialize a new stack
Use the [`influx stacks init` command](/influxdb/v2.4/reference/cli/influx/stacks/init/)
to create or initialize a new InfluxDB stack.
**Provide the following:**
- Organization name or ID
- Stack name
- Stack description
- InfluxDB template URLs
<!-- -->
```sh
# Syntax
influx stacks init \
-o <org-name> \
-n <stack-name> \
-d <stack-description> \
-u <package-url>
# Example
influx stacks init \
-o example-org \
-n "Example Stack" \
-d "InfluxDB stack for monitoring some awesome stuff" \
-u https://example.com/template-1.yml \
-u https://example.com/template-2.yml
```


@ -0,0 +1,39 @@
---
title: Remove an InfluxDB stack
list_title: Remove a stack
description: >
  Use the [`influx stacks remove` command](/influxdb/v2.4/reference/cli/influx/stacks/remove/)
  to remove an InfluxDB stack and all its associated resources.
menu:
  influxdb_2_4:
    parent: InfluxDB stacks
    name: Remove a stack
weight: 205
related:
  - /influxdb/v2.4/reference/cli/influx/stacks/remove/
list_code_example: |
  ```sh
  influx stacks remove \
    -o example-org \
    --stack-id=12ab34cd56ef
  ```
---
Use the [`influx stacks remove` command](/influxdb/v2.4/reference/cli/influx/stacks/remove/)
to remove an InfluxDB stack and all its associated resources.
**Provide the following:**
- Organization name or ID
- Stack ID
<!-- -->
```sh
# Syntax
influx stacks remove -o <org-name> --stack-id=<stack-id>
# Example
influx stacks remove \
-o example-org \
--stack-id=12ab34cd56ef
```


@ -0,0 +1,165 @@
---
title: Save time with InfluxDB stacks
list_title: Save time with stacks
description: >
  Discover how to use InfluxDB stacks to save time.
menu:
  influxdb_2_4:
    parent: InfluxDB stacks
    name: Save time with stacks
weight: 201
related:
  - /influxdb/v2.4/reference/cli/influx/stacks/
---
Save time and money using InfluxDB stacks. Here are a few ideal use cases:
- [Automate deployments with GitOps and stacks](#automate-deployments-with-gitops-and-stacks)
- [Apply updates from source-controlled templates](#apply-updates-from-source-controlled-templates)
- [Apply template updates across multiple InfluxDB instances](#apply-template-updates-across-multiple-influxdb-instances)
- [Develop templates](#develop-templates)
### Automate deployments with GitOps and stacks
GitOps is a popular way to configure and automate deployments. Use InfluxDB stacks in a GitOps workflow
to automatically update distributed instances of InfluxDB OSS or InfluxDB Cloud.
To automate an InfluxDB deployment with GitOps and stacks, complete the following steps:
1. [Set up a GitHub repository](#set-up-a-github-repository)
2. [Add existing resources to the GitHub repository](#add-existing-resources-to-the-github-repository)
3. [Automate the creation of a stack for each folder](#automate-the-creation-of-a-stack-for-each-folder)
4. [Set up Github Actions or CircleCI](#set-up-github-actions-or-circleci)
#### Set up a GitHub repository
Set up a GitHub repository to back your InfluxDB instance. Determine how you want to organize the resources in your stacks within your Github repository. For example, organize resources under folders for specific teams or functions.
We recommend storing all resources for one stack in the same folder. For example, if you monitor Redis, create a `redis` stack and put your Redis monitoring resources (a Telegraf configuration, four dashboards, a label, and two alert checks) into one Redis folder, each resource in a separate file. Then, when you need to update a Redis resource, it's easy to find and make changes in one location.
{{% note %}}
Typically, we **do not recommend** using the same resource in multiple stacks. If your organization uses the same resource in multiple stacks, before you delete a stack, verify the stack does not include resources that another stack depends on. Stacks with buckets often contain data used by many different templates. Because of this, we recommend keeping buckets separate from the other stacks.
{{% /note %}}
#### Add existing resources to the GitHub repository
Skip this section if you are starting from scratch or don't have existing resources you want to add to your stack.
Use the `influx export` command to quickly export resources. Keep all your resources in a single file or have files for each one. You can always split or combine them later.
For example, if you export resources for three stacks: `buckets`, `redis`, and `mysql`, your folder structure might look something like this when you are done:
```sh
influxdb-assets/
├── buckets/
│ ├── telegraf_bucket.yml
├── redis/
│ ├── redis_overview_dashboard.yml
│ ├── redis_label.yml
│ ├── redis_cpu_check.yml
│ └── redis_mem_check.yml
├── mysql/
│ ├── mysql_assets.yml
└── README.md
```
{{% note %}}
When you export a resource, InfluxDB creates a `metadata.name` for that resource. These resource names should be unique inside your InfluxDB instance. Use a good naming convention to prevent duplicate `metadata.name` values. Changing the `metadata.name` of an InfluxDB resource causes the stack to orphan the resource with the previous name and create a new resource with the updated name.
{{% /note %}}
Add the exported resources to your new GitHub repository.
#### Automate the creation of a stack for each folder
To automatically create a stack from each folder in your GitHub repository, create a shell script to check for an existing stack and if the stack isn't found, use the `influx stacks init` command to create a new stack. The following sample script creates a `redis` stack and automatically applies those changes to your instance:
```sh
echo "Checking for existing redis stack..."
REDIS_STACK_ID=$(influx stacks --stack-name redis --json | jq -r '.[0].ID')
if [ "$REDIS_STACK_ID" == "null" ]; then
echo "No stack found. Initializing our stack..."
REDIS_STACK_ID=$(influx stacks init -n redis --json | jq -r '.ID')
fi
# Setting the base path
BASE_PATH="$(pwd)"
echo "Applying our redis stack..."
cat $BASE_PATH/redis/*.yml | \
influx apply --force true --stack-id $REDIS_STACK_ID -q
```
{{% note %}}
The `--json` flag in the InfluxDB CLI is very useful when scripting against the CLI. This flag lets you grab important information easily using [`jq`](https://stedolan.github.io/jq/manual/v1.6/).
{{% /note %}}
Repeat this step for each of the stacks in your repository. When a resource in your stack changes, re-run this script to apply updated resources to your InfluxDB instance. Re-applying a stack with an updated resource won't add, delete, or duplicate resources.
#### Set up Github Actions or CircleCI
Once you have a script that applies changes to your local instance, automate the deployment to other environments as needed. Use the InfluxDB CLI to maintain multiple configuration profiles so you can easily switch profiles and issue commands against other InfluxDB instances. To apply the same script to a different InfluxDB instance, change your active configuration profile using the `influx config set` command, or set the desired profile dynamically using the `-c`, `--active-config` flag.
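For example, a sketch of switching between configuration profiles might look like the following (profile names, URLs, and tokens are placeholders):
```sh
# Create a profile for a second InfluxDB instance
influx config create \
  --config-name prod \
  --host-url https://prod.example.com:8086 \
  --org example-org \
  --token $PROD_INFLUX_TOKEN
# Switch the active profile before re-running your stack script
influx config set --config-name prod --active
# Or target a profile for a single command with the global flag
influx stacks --active-config prod -o example-org
```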
{{% note %}}
Before you run automation scripts against shared environments, we recommend manually running the steps in your script.
{{% /note %}}
Verify your deployment automation software lets you run a custom script, and then set up the custom script you've built locally in another environment. For example, here's a custom GitHub Action that automates deployment:
```yml
name: deploy-influxdb-resources
on:
  push:
    branches: [ master ]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          ref: ${{ github.ref }}
      - name: Deploys repo to cloud
        env:
          # These secrets can be configured in the Github repo to connect to
          # your InfluxDB instance.
          INFLUX_TOKEN: ${{ secrets.INFLUX_TOKEN }}
          INFLUX_ORG: ${{ secrets.INFLUX_ORG }}
          INFLUX_URL: ${{ secrets.INFLUX_URL }}
          GITHUB_REPO: ${{ github.repository }}
          GITHUB_BRANCH: ${{ github.ref }}
        run: |
          cd /tmp
          wget https://dl.influxdata.com/platform/nightlies/influx_nightly_linux_amd64.tar.gz
          tar xvfz influx_nightly_linux_amd64.tar.gz
          sudo cp influx_nightly_linux_amd64/influx /usr/local/bin/
          cd $GITHUB_WORKSPACE
          # This runs the script to set up your stacks
          chmod +x ./setup.sh
          ./setup.sh prod
```
For more information about using GitHub Actions in your project, check out the complete [GitHub Actions documentation](https://github.com/features/actions).
### Apply updates from source-controlled templates
You can use a variety of InfluxDB templates from many different sources including
[Community Templates](https://github.com/influxdata/community-templates/) or
self-built custom templates.
As templates are updated over time, stacks let you gracefully
apply updates without creating duplicate resources.
### Apply template updates across multiple InfluxDB instances
In many cases, you may have more than one instance of InfluxDB running and want to apply
the same template to each separate instance.
Using stacks, you can make changes to a stack on one instance,
[export the stack as a template](/influxdb/v2.4/influxdb-templates/create/#export-a-stack)
and then apply the changes to your other InfluxDB instances.
### Develop templates
InfluxDB stacks aid in developing and maintaining InfluxDB templates.
Stacks let you modify and update template manifests and apply those changes in
any stack that uses the template.


@ -0,0 +1,56 @@
---
title: Update an InfluxDB stack
list_title: Update a stack
description: >
  Use the [`influx apply` command](/influxdb/v2.4/reference/cli/influx/apply/)
  to update a stack with a modified template.
  When applying a template to an existing stack, InfluxDB checks to see if the
  resources in the template match existing resources.
  InfluxDB updates, adds, and removes resources to resolve differences between
  the current state of the stack and the newly applied template.
menu:
  influxdb_2_4:
    parent: InfluxDB stacks
    name: Update a stack
weight: 203
related:
  - /influxdb/v2.4/reference/cli/influx/apply
  - /influxdb/v2.4/reference/cli/influx/stacks/update/
list_code_example: |
  ```sh
  influx apply \
    -o example-org \
    -u http://example.com/template-1.yml \
    -u http://example.com/template-2.yml \
    --stack-id=12ab34cd56ef
  ```
---
Use the [`influx apply` command](/influxdb/v2.4/reference/cli/influx/apply/)
to update a stack with a modified template.
When applying a template to an existing stack, InfluxDB checks to see if the
resources in the template match existing resources.
InfluxDB updates, adds, and removes resources to resolve differences between
the current state of the stack and the newly applied template.
Each stack is uniquely identified by a **stack ID**.
For information about retrieving your stack ID, see [View stacks](/influxdb/v2.4/influxdb-templates/stacks/view/).
**Provide the following:**
- Organization name or ID
- Stack ID
- InfluxDB template URLs to apply
<!-- -->
```sh
influx apply \
-o example-org \
-u http://example.com/template-1.yml \
-u http://example.com/template-2.yml \
--stack-id=12ab34cd56ef
```
Template resources are uniquely identified by their `metadata.name` field.
If errors occur when applying changes to a stack, all applied changes are
reversed and the stack is returned to its previous state.


@ -0,0 +1,69 @@
---
title: View InfluxDB stacks
list_title: View stacks
description: >
  Use the [`influx stacks` command](/influxdb/v2.4/reference/cli/influx/stacks/)
  to view installed InfluxDB stacks and their associated resources.
menu:
  influxdb_2_4:
    parent: InfluxDB stacks
    name: View stacks
weight: 204
related:
  - /influxdb/v2.4/reference/cli/influx/stacks/
list_code_example: |
  ```sh
  influx stacks -o example-org
  ```
---
Use the [`influx stacks` command](/influxdb/v2.4/reference/cli/influx/stacks/)
to view installed InfluxDB stacks and their associated resources.
**Provide the following:**
- Organization name or ID
<!-- -->
```sh
# Syntax
influx stacks -o <org-name>
# Example
influx stacks -o example-org
```
### Filter stacks
To output information about specific stacks, use the `--stack-name` or `--stack-id`
flags to filter output by stack names or stack IDs.
##### Filter by stack name
```sh
# Syntax
influx stacks \
-o <org-name> \
--stack-name=<stack-name>
# Example
influx stacks \
-o example-org \
--stack-name=stack1 \
--stack-name=stack2
```
### Filter by stack ID
```sh
# Syntax
influx stacks \
-o <org-name> \
--stack-id=<stack-id>
# Example
influx stacks \
-o example-org \
--stack-id=12ab34cd56ef \
--stack-id=78gh910i11jk
```


@ -0,0 +1,241 @@
---
title: Use InfluxDB templates
description: >
  Use the `influx` command line interface (CLI) to summarize, validate, and apply
  templates from your local filesystem and from URLs.
menu:
  influxdb_2_4:
    parent: InfluxDB templates
    name: Use templates
weight: 102
influxdb/v2.4/tags: [templates]
related:
  - /influxdb/v2.4/reference/cli/influx/apply/
  - /influxdb/v2.4/reference/cli/influx/template/
  - /influxdb/v2.4/reference/cli/influx/template/validate/
---
Use the `influx` command line interface (CLI) to summarize, validate, and apply
templates from your local filesystem and from URLs.
- [Use InfluxDB community templates](#use-influxdb-community-templates)
- [View a template summary](#view-a-template-summary)
- [Validate a template](#validate-a-template)
- [Apply templates](#apply-templates)
## Use InfluxDB community templates
The [InfluxDB community templates repository](https://github.com/influxdata/community-templates/)
is home to a growing number of InfluxDB templates developed and maintained by
others in the InfluxData community.
Apply community templates directly from GitHub using a template's download URL
or download the template.
{{< youtube 2JjW4Rym9XE >}}
{{% note %}}
When accessing community templates directly via URL, use the following
as the root of the URL:
```sh
https://raw.githubusercontent.com/influxdata/community-templates/master/
```
For example, the Docker community template can be accessed via:
```sh
https://raw.githubusercontent.com/influxdata/community-templates/master/docker/docker.yml
```
{{% /note %}}
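For example, you can apply the Docker community template directly from the URL above (the organization name is a placeholder):
```sh
influx apply -o example-org \
  -u https://raw.githubusercontent.com/influxdata/community-templates/master/docker/docker.yml
```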
<a class="btn" href="https://github.com/influxdata/community-templates/" target="\_blank">View InfluxDB Community Templates</a>
## View a template summary
To view a summary of what's included in a template before applying the template,
use the [`influx template` command](/influxdb/v2.4/reference/cli/influx/template/).
View a summary of a template stored in your local filesystem or from a URL.
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[From a file](#)
[From a URL](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```sh
# Syntax
influx template -f <FILE_PATH>
# Example
influx template -f /path/to/template.yml
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```sh
# Syntax
influx template -u <FILE_URL>
# Example
influx template -u https://raw.githubusercontent.com/influxdata/community-templates/master/linux_system/linux_system.yml
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
## Validate a template
To validate a template before you install it or troubleshoot a template, use
the [`influx template validate` command](/influxdb/v2.4/reference/cli/influx/template/validate/).
Validate a template stored in your local filesystem or from a URL.
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[From a file](#)
[From a URL](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```sh
# Syntax
influx template validate -f <FILE_PATH>
# Example
influx template validate -f /path/to/template.yml
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```sh
# Syntax
influx template validate -u <FILE_URL>
# Example
influx template validate -u https://raw.githubusercontent.com/influxdata/community-templates/master/linux_system/linux_system.yml
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
## Apply templates
Use the [`influx apply` command](/influxdb/v2.4/reference/cli/influx/apply/) to install templates
from your local filesystem or from URLs.
- [Apply a template from a file](#apply-a-template-from-a-file)
- [Apply all templates in a directory](#apply-all-templates-in-a-directory)
- [Apply a template from a URL](#apply-a-template-from-a-url)
- [Apply templates from both files and URLs](#apply-templates-from-both-files-and-urls)
- [Define environment references](#define-environment-references)
- [Include a secret when installing a template](#include-a-secret-when-installing-a-template)
{{% note %}}
#### Apply templates to an existing stack
To apply a template to an existing stack, include the stack ID when applying the template.
Any time you apply a template without a stack ID, InfluxDB initializes a new stack
with all new resources.
For more information, see [InfluxDB stacks](/influxdb/v2.4/influxdb-templates/stacks/).
{{% /note %}}
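For example, a minimal command that applies a template to an existing stack might look like the following (the stack ID is a placeholder):
```sh
influx apply -o example-org \
  -f /path/to/template.yml \
  --stack-id 12ab34cd56ef
```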
### Apply a template from a file
To install templates stored on your local machine, use the `-f` or `--file` flag
to provide the **file path** of the template manifest.
```sh
# Syntax
influx apply -o <INFLUX_ORG> -f <FILE_PATH>
# Examples
# Apply a single template
influx apply -o example-org -f /path/to/template.yml
# Apply multiple templates
influx apply -o example-org \
-f /path/to/this/template.yml \
-f /path/to/that/template.yml
```
### Apply all templates in a directory
To apply all templates in a directory, use the `-f` or `--file` flag to provide
the **directory path** of the directory where template manifests are stored.
By default, this only applies templates stored in the specified directory.
To apply all templates stored in the specified directory and its subdirectories,
include the `-R`, `--recurse` flag.
```sh
# Syntax
influx apply -o <INFLUX_ORG> -f <DIRECTORY_PATH>
# Examples
# Apply all templates in a directory
influx apply -o example-org -f /path/to/template/dir/
# Apply all templates in a directory and its subdirectories
influx apply -o example-org -f /path/to/template/dir/ -R
```
### Apply a template from a URL
To apply templates from a URL, use the `-u` or `--template-url` flag to provide the URL
of the template manifest.
```sh
# Syntax
influx apply -o <INFLUX_ORG> -u <FILE_URL>
# Examples
# Apply a single template from a URL
influx apply -o example-org -u https://example.com/templates/template.yml
# Apply multiple templates from URLs
influx apply -o example-org \
-u https://example.com/templates/template1.yml \
-u https://example.com/templates/template2.yml
```
### Apply templates from both files and URLs
To apply templates from both files and URLs in a single command, include multiple
file or directory paths and URLs, each with the appropriate `-f` or `-u` flag.
```sh
# Syntax
influx apply -o <INFLUX_ORG> -u <FILE_URL> -f <FILE_PATH>
# Example
influx apply -o example-org \
-u https://example.com/templates/template1.yml \
-u https://example.com/templates/template2.yml \
-f ~/templates/custom-template.yml \
-f ~/templates/iot/home/ \
--recurse
```
### Define environment references
Some templates include [environment references](/influxdb/v2.4/influxdb-templates/create/#include-user-definable-resource-names) that let you provide custom resource names.
The `influx apply` command prompts you to provide a value for each environment
reference in the template.
You can also provide values for environment references by including an `--env-ref`
flag with a key-value pair comprised of the environment reference key and the
value to replace it.
```sh
influx apply -o example-org -f /path/to/template.yml \
--env-ref=bucket-name-1=myBucket \
--env-ref=label-name-1=Label1 \
--env-ref=label-name-2=Label2
```
### Include a secret when installing a template
Some templates use [secrets](/influxdb/v2.4/security/secrets/) in queries.
Secret values are not included in templates.
To define secret values when installing a template, include the `--secret` flag
with the secret key-value pair.
```sh
# Syntax
influx apply -o <INFLUX_ORG> -f <FILE_PATH> \
--secret=<secret-key>=<secret-value>
# Examples
# Define a single secret when applying a template
influx apply -o example-org -f /path/to/template.yml \
--secret=FOO=BAR
# Define multiple secrets when applying a template
influx apply -o example-org -f /path/to/template.yml \
--secret=FOO=bar \
--secret=BAZ=quz
```
_To add a secret after applying a template, see [Add secrets](/influxdb/v2.4/security/secrets/manage-secrets/add/)._


@ -0,0 +1,757 @@
---
title: Install InfluxDB
description: Download, install, and set up InfluxDB OSS.
menu: influxdb_2_4
weight: 2
influxdb/v2.4/tags: [install]
---
The InfluxDB {{< current-version >}} time series platform is purpose-built to collect, store,
process and visualize metrics and events.
Download, install, and set up InfluxDB OSS.
{{< tabs-wrapper >}}
{{% tabs %}}
[macOS](#)
[Linux](#)
[Windows](#)
[Docker](#)
[Kubernetes](#)
[Raspberry Pi](#)
{{% /tabs %}}
<!-------------------------------- BEGIN macOS -------------------------------->
{{% tab-content %}}
## Install InfluxDB v{{< current-version >}}
Do one of the following:
- [Use Homebrew](#use-homebrew)
- [Manually download and install](#manually-download-and-install)
{{% note %}}
#### InfluxDB and the influx CLI are separate packages
The InfluxDB server ([`influxd`](/influxdb/v2.4/reference/cli/influxd/)) and the
[`influx` CLI](/influxdb/v2.4/reference/cli/influx/) are packaged and
versioned separately.
For information about installing the `influx` CLI, see
[Install and use the influx CLI](/influxdb/v2.4/tools/influx-cli/).
{{% /note %}}
### Use Homebrew
We recommend using [Homebrew](https://brew.sh/) to install InfluxDB v{{< current-version >}} on macOS:
```sh
brew update
brew install influxdb
```
{{% note %}}
Homebrew also installs `influxdb-cli` as a dependency.
For information about using the `influx` CLI, see the
[`influx` CLI reference documentation](/influxdb/v2.4/reference/cli/influx/).
{{% /note %}}
### Manually download and install
To download the InfluxDB v{{< current-version >}} binaries for macOS directly,
do the following:
1. **Download the InfluxDB package.**
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb2-{{< latest-patch >}}-darwin-amd64.tar.gz" download>InfluxDB v{{< current-version >}} (macOS)</a>
2. **Unpackage the InfluxDB binary.**
Do one of the following:
- Double-click the downloaded package file in **Finder**.
- Run the following command in a macOS command prompt application such as
**Terminal** or **[iTerm2](https://www.iterm2.com/)**:
```sh
# Unpackage contents to the current working directory
tar zxvf ~/Downloads/influxdb2-{{< latest-patch >}}-darwin-amd64.tar.gz
```
3. **(Optional) Place the binary in your `$PATH`**
```sh
# (Optional) Copy the influxd binary to your $PATH
sudo cp influxdb2-{{< latest-patch >}}-darwin-amd64/influxd /usr/local/bin/
```
If you do not move the `influxd` binary into your `$PATH`, prefix the executable
`./` to run it in place.
{{< expand-wrapper >}}
{{% expand "<span class='req'>Recommended</span> Verify the authenticity of downloaded binary" %}}
For added security, use `gpg` to verify the signature of your download.
(Most operating systems include the `gpg` command by default.
If `gpg` is not available, see the [GnuPG homepage](https://gnupg.org/download/) for installation instructions.)
1. Download and import InfluxData's public key:
```
curl -s https://repos.influxdata.com/influxdb2.key | gpg --import -
```
2. Download the signature file for the release by adding `.asc` to the download URL.
For example:
```
wget https://dl.influxdata.com/influxdb/releases/influxdb2-{{< latest-patch >}}-darwin-amd64.tar.gz.asc
```
3. Verify the signature with `gpg --verify`:
```
gpg --verify influxdb2-{{< latest-patch >}}-darwin-amd64.tar.gz.asc influxdb2-{{< latest-patch >}}-darwin-amd64.tar.gz
```
The output from this command should include the following:
```
gpg: Good signature from "InfluxData <support@influxdata.com>" [unknown]
```
{{% /expand %}}
{{< /expand-wrapper >}}
{{% note %}}
Both InfluxDB 1.x and 2.x have associated `influxd` and `influx` binaries.
If InfluxDB 1.x binaries are already in your `$PATH`, run the {{< current-version >}} binaries in place
or rename them before putting them in your `$PATH`.
If you rename the binaries, all references to `influxd` and `influx` in this documentation refer to your renamed binaries.
{{% /note %}}
#### Networking ports
By default, InfluxDB uses TCP port `8086` for client-server communication over
the [InfluxDB HTTP API](/influxdb/v2.4/reference/api/).
### Start and configure InfluxDB
To start InfluxDB, run the `influxd` daemon:
```bash
influxd
```
{{% note %}}
#### Run InfluxDB on macOS Catalina
macOS Catalina requires downloaded binaries to be signed by registered Apple developers.
Currently, when you first attempt to run `influxd`, macOS will prevent it from running.
To manually authorize the `influxd` binary:
1. Attempt to run `influxd`.
2. Open **System Preferences** and click **Security & Privacy**.
3. Under the **General** tab, there is a message about `influxd` being blocked.
Click **Open Anyway**.
We are in the process of updating our build process to ensure released binaries are signed by InfluxData.
{{% /note %}}
{{% warn %}}
#### "too many open files" errors
After running `influxd`, you might see an error in the log output like the
following:
```sh
too many open files
```
To resolve this error, follow the
[recommended steps](https://unix.stackexchange.com/a/221988/471569) to increase
file and process limits for your operating system version then restart `influxd`.
{{% /warn %}}
To configure InfluxDB, see [InfluxDB configuration options](/influxdb/v2.4/reference/config-options/).
_See the [`influxd` documentation](/influxdb/v2.4/reference/cli/influxd) for information about
available flags and options._
{{% note %}}
#### InfluxDB "phone home"
By default, InfluxDB sends telemetry data back to InfluxData.
The [InfluxData telemetry](https://www.influxdata.com/telemetry) page provides
information about what data is collected and how it is used.
To opt-out of sending telemetry data back to InfluxData, include the
`--reporting-disabled` flag when starting `influxd`.
```bash
influxd --reporting-disabled
```
{{% /note %}}
{{% /tab-content %}}
<!--------------------------------- END macOS --------------------------------->
<!-------------------------------- BEGIN Linux -------------------------------->
{{% tab-content %}}
## Download and install InfluxDB v{{< current-version >}}
Do one of the following:
- [Install InfluxDB as a service with systemd](#install-influxdb-as-a-service-with-systemd)
- [Manually download and install the influxd binary](#manually-download-and-install-the-influxd-binary)
{{% note %}}
#### InfluxDB and the influx CLI are separate packages
The InfluxDB server ([`influxd`](/influxdb/v2.4/reference/cli/influxd/)) and the
[`influx` CLI](/influxdb/v2.4/reference/cli/influx/) are packaged and
versioned separately.
For information about installing the `influx` CLI, see
[Install and use the influx CLI](/influxdb/v2.4/tools/influx-cli/).
{{% /note %}}
### Install InfluxDB as a service with systemd
1. Download and install the appropriate `.deb` or `.rpm` file using a URL from the
[InfluxData downloads page](https://portal.influxdata.com/downloads/)
with the following commands:
```sh
# Ubuntu/Debian
wget https://dl.influxdata.com/influxdb/releases/influxdb2-{{< latest-patch >}}-xxx.deb
sudo dpkg -i influxdb2-{{< latest-patch >}}-xxx.deb
# Red Hat/CentOS/Fedora
wget https://dl.influxdata.com/influxdb/releases/influxdb2-{{< latest-patch >}}-xxx.rpm
sudo yum localinstall influxdb2-{{< latest-patch >}}-xxx.rpm
```
_Use the exact filename of the downloaded `.deb` or `.rpm` package (for example, `influxdb2-{{< latest-patch >}}-amd64.rpm`)._
2. Start the InfluxDB service:
```sh
sudo service influxdb start
```
Installing the InfluxDB package creates a service file at `/lib/systemd/system/influxdb.service`
to start InfluxDB as a background service on startup.
3. Restart your system and verify that the service is running correctly:
```
$ sudo service influxdb status
● influxdb.service - InfluxDB is an open-source, distributed, time series database
Loaded: loaded (/lib/systemd/system/influxdb.service; enabled; vendor preset: enable>
Active: active (running)
```
For information about where InfluxDB stores data on disk when running as a service,
see [File system layout](/influxdb/v2.4/reference/internals/file-system-layout/?t=Linux#installed-as-a-package).
To customize your InfluxDB configuration, use either
[command line flags (arguments)](#pass-arguments-to-systemd), environment variables, or an InfluxDB configuration file.
See InfluxDB [configuration options](/influxdb/v2.4/reference/config-options/) for more information.
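For example, environment variables use the `INFLUXD_` prefix followed by the uppercased, underscore-delimited option name. A minimal sketch, run directly rather than through systemd:
```sh
# Equivalent to the --http-bind-address flag or the http-bind-address config setting
export INFLUXD_HTTP_BIND_ADDRESS=:8087
influxd
```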
#### Pass arguments to systemd
1. Add one or more lines like the following containing arguments for `influxd` to `/etc/default/influxdb2`:
```sh
ARG1="--http-bind-address :8087"
ARG2="<another argument here>"
```
2. Edit the `/lib/systemd/system/influxdb.service` file as follows:
```sh
ExecStart=/usr/bin/influxd $ARG1 $ARG2
```
### Manually download and install the influxd binary
1. **Download the InfluxDB binary.**
Download the InfluxDB binary [from your browser](#download-from-your-browser)
or [from the command line](#download-from-the-command-line).
#### Download from your browser
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb2-{{< latest-patch >}}-linux-amd64.tar.gz" download >InfluxDB v{{< current-version >}} (amd64)</a>
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb2-{{< latest-patch >}}-linux-arm64.tar.gz" download >InfluxDB v{{< current-version >}} (arm)</a>
#### Download from the command line
```sh
# amd64
wget https://dl.influxdata.com/influxdb/releases/influxdb2-{{< latest-patch >}}-linux-amd64.tar.gz
# arm
wget https://dl.influxdata.com/influxdb/releases/influxdb2-{{< latest-patch >}}-linux-arm64.tar.gz
```
2. **Extract the downloaded binary.**
_**Note:** The following commands are examples. Adjust the filenames, paths, and utilities if necessary._
```sh
# amd64
tar xvzf path/to/influxdb2-{{< latest-patch >}}-linux-amd64.tar.gz
# arm
tar xvzf path/to/influxdb2-{{< latest-patch >}}-linux-arm64.tar.gz
```
3. **(Optional) Place the extracted `influxd` executable binary in your system `$PATH`.**
```sh
# amd64
sudo cp influxdb2-{{< latest-patch >}}-linux-amd64/influxd /usr/local/bin/
# arm
sudo cp influxdb2-{{< latest-patch >}}-linux-arm64/influxd /usr/local/bin/
```
If you do not move the `influxd` binary into your `$PATH`, prefix the executable
`./` to run it in place.
{{< expand-wrapper >}}
{{% expand "<span class='req'>Recommended</span> Verify the authenticity of downloaded binary" %}}
For added security, use `gpg` to verify the signature of your download.
(Most operating systems include the `gpg` command by default.
If `gpg` is not available, see the [GnuPG homepage](https://gnupg.org/download/) for installation instructions.)
1. Download and import InfluxData's public key:
```
curl -s https://repos.influxdata.com/influxdb2.key | gpg --import -
```
2. Download the signature file for the release by adding `.asc` to the download URL.
For example:
```
wget https://dl.influxdata.com/influxdb/releases/influxdb2-{{< latest-patch >}}-linux-amd64.tar.gz.asc
```
3. Verify the signature with `gpg --verify`:
```
gpg --verify influxdb2-{{< latest-patch >}}-linux-amd64.tar.gz.asc influxdb2-{{< latest-patch >}}-linux-amd64.tar.gz
```
The output from this command should include the following:
```
gpg: Good signature from "InfluxData <support@influxdata.com>" [unknown]
```
{{% /expand %}}
{{< /expand-wrapper >}}
## Start InfluxDB
If InfluxDB was installed as a systemd service, systemd manages the `influxd` daemon and no further action is required.
If the binary was manually downloaded and added to the system `$PATH`, start the `influxd` daemon with the following command:
```bash
influxd
```
_See the [`influxd` documentation](/influxdb/v2.4/reference/cli/influxd) for information about
available flags and options._
### Networking ports
By default, InfluxDB uses TCP port `8086` for client-server communication over
the [InfluxDB HTTP API](/influxdb/v2.4/reference/api/).
{{% note %}}
#### InfluxDB "phone home"
By default, InfluxDB sends telemetry data back to InfluxData.
The [InfluxData telemetry](https://www.influxdata.com/telemetry) page provides
information about what data is collected and how it is used.
To opt-out of sending telemetry data back to InfluxData, include the
`--reporting-disabled` flag when starting `influxd`.
```bash
influxd --reporting-disabled
```
{{% /note %}}
{{% /tab-content %}}
<!--------------------------------- END Linux --------------------------------->
<!------------------------------- BEGIN Windows ------------------------------->
{{% tab-content %}}
{{% note %}}
#### System requirements
- Windows 10
- 64-bit AMD architecture
- [Powershell](https://docs.microsoft.com/powershell/) or
[Windows Subsystem for Linux (WSL)](https://docs.microsoft.com/en-us/windows/wsl/)
#### Command line examples
Use **Powershell** or **WSL** to execute `influx` and `influxd` commands.
The command line examples in this documentation use `influx` and `influxd` as if
installed on the system `PATH`.
If these binaries are not installed on your `PATH`, replace `influx` and `influxd`
in the provided examples with `./influx` and `./influxd` respectively.
{{% /note %}}
## Download and install InfluxDB v{{< current-version >}}
{{% note %}}
#### InfluxDB and the influx CLI are separate packages
The InfluxDB server ([`influxd`](/influxdb/v2.4/reference/cli/influxd/)) and the
[`influx` CLI](/influxdb/v2.4/reference/cli/influx/) are packaged and
versioned separately.
For information about installing the `influx` CLI, see
[Install and use the influx CLI](/influxdb/v2.4/tools/influx-cli/).
{{% /note %}}
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb2-{{< latest-patch >}}-windows-amd64.zip" download >InfluxDB v{{< current-version >}} (Windows)</a>
Expand the downloaded archive into `C:\Program Files\InfluxData\` and rename the files if desired.
```powershell
> Expand-Archive .\influxdb2-{{< latest-patch >}}-windows-amd64.zip -DestinationPath 'C:\Program Files\InfluxData\'
> mv 'C:\Program Files\InfluxData\influxdb2-{{< latest-patch >}}-windows-amd64' 'C:\Program Files\InfluxData\influxdb'
```
## Networking ports
By default, InfluxDB uses TCP port `8086` for client-server communication over
the [InfluxDB HTTP API](/influxdb/v2.4/reference/api/).
## Start InfluxDB
In **Powershell**, navigate into `C:\Program Files\InfluxData\influxdb` and start
InfluxDB by running the `influxd` daemon:
```powershell
> cd -Path 'C:\Program Files\InfluxData\influxdb'
> ./influxd
```
_See the [`influxd` documentation](/influxdb/v2.4/reference/cli/influxd) for information about
available flags and options._
{{% note %}}
#### Grant network access
When starting InfluxDB for the first time, **Windows Defender** will appear with
the following message:
> Windows Defender Firewall has blocked some features of this app.
1. Select **Private networks, such as my home or work network**.
2. Click **Allow access**.
{{% /note %}}
{{% note %}}
#### InfluxDB "phone home"
By default, InfluxDB sends telemetry data back to InfluxData.
The [InfluxData telemetry](https://www.influxdata.com/telemetry) page provides
information about what data is collected and how it is used.
To opt-out of sending telemetry data back to InfluxData, include the
`--reporting-disabled` flag when starting `influxd`.
```bash
./influxd --reporting-disabled
```
{{% /note %}}
{{% /tab-content %}}
<!-------------------------------- END Windows -------------------------------->
<!-------------------------------- BEGIN Docker ------------------------------->
{{% tab-content %}}
## Download and run InfluxDB v{{< current-version >}}
Use `docker run` to download and run the InfluxDB v{{< current-version >}} Docker image.
Expose port `8086`, which InfluxDB uses for client-server communication over
the [InfluxDB HTTP API](/influxdb/v2.4/reference/api/).
```sh
docker run --name influxdb -p 8086:8086 influxdb:{{< latest-patch >}}
```
_To run InfluxDB in [detached mode](https://docs.docker.com/engine/reference/run/#detached-vs-foreground), include the `-d` flag in the `docker run` command._
## Persist data outside the InfluxDB container
1. Create a new directory to store your data in and navigate into the directory.
```sh
mkdir path/to/influxdb-docker-data-volume && cd $_
```
2. From within your new directory, run the InfluxDB Docker container with the `--volume` flag to
persist data from `/var/lib/influxdb2` _inside_ the container to the current working directory in
the host file system.
```sh
docker run \
--name influxdb \
-p 8086:8086 \
--volume $PWD:/var/lib/influxdb2 \
influxdb:{{< latest-patch >}}
```
## Configure InfluxDB with Docker
To mount an InfluxDB configuration file and use it from within Docker:
1. [Persist data outside the InfluxDB container](#persist-data-outside-the-influxdb-container).
2. Use the command below to generate the default configuration file on the host file system:
```sh
docker run \
--rm influxdb:{{< latest-patch >}} \
influxd print-config > config.yml
```
3. Modify the default configuration, which will now be available under `$PWD`.
4. Start the InfluxDB container:
```sh
docker run -p 8086:8086 \
-v $PWD/config.yml:/etc/influxdb2/config.yml \
influxdb:{{< latest-patch >}}
```
For more information about configuring InfluxDB, see [InfluxDB configuration options](/influxdb/v2.4/reference/config-options/).
## Open a shell in the InfluxDB container
To use the `influx` command line interface, open a shell in the `influxdb` Docker container:
```sh
docker exec -it influxdb /bin/bash
```
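You can also run individual `influx` commands without opening an interactive shell. For example, the following sketch checks that the instance inside the container is healthy:

```sh
docker exec influxdb influx ping
```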
{{% note %}}
#### InfluxDB "phone home"
By default, InfluxDB sends telemetry data back to InfluxData.
The [InfluxData telemetry](https://www.influxdata.com/telemetry) page provides
information about what data is collected and how it is used.
To opt-out of sending telemetry data back to InfluxData, include the
`--reporting-disabled` flag when starting the InfluxDB container.
```sh
docker run -p 8086:8086 influxdb:{{< latest-patch >}} --reporting-disabled
```
{{% /note %}}
{{% /tab-content %}}
<!--------------------------------- END Docker -------------------------------->
<!-------------------------------- BEGIN kubernetes---------------------------->
{{% tab-content %}}
## Install InfluxDB in a Kubernetes cluster
The instructions below use **minikube** or **kind**, but the steps should be similar in any Kubernetes cluster.
InfluxData also makes [Helm charts](https://github.com/influxdata/helm-charts) available.
1. Install [minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/) or
[kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installation).
2. Start a local cluster:
```sh
# with minikube
minikube start
# with kind
kind create cluster
```
3. Apply the [sample InfluxDB configuration](https://github.com/influxdata/docs-v2/blob/master/static/downloads/influxdb-k8-minikube.yaml) by running:
```sh
kubectl apply -f https://raw.githubusercontent.com/influxdata/docs-v2/master/static/downloads/influxdb-k8-minikube.yaml
```
This creates an `influxdb` Namespace, Service, and StatefulSet.
A PersistentVolumeClaim is also created to store data written to InfluxDB.
**Important**: Always inspect YAML manifests before running `kubectl apply -f <url>`!
4. Ensure the Pod is running:
```sh
kubectl get pods -n influxdb
```
5. Ensure the Service is available:
```sh
kubectl describe service -n influxdb influxdb
```
You should see an IP address after `Endpoints` in the command's output.
6. Forward port 8086 from inside the cluster to localhost:
```sh
kubectl port-forward -n influxdb service/influxdb 8086:8086
```
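With the port-forward running, you can optionally verify the connection from your local machine. For example, the `/ping` endpoint returns a `204 No Content` status when InfluxDB is reachable (assumes the default port shown above):

```sh
curl -i http://localhost:8086/ping
```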
{{% /tab-content %}}
<!--------------------------------- END kubernetes ---------------------------->
<!--------------------------------- BEGIN Raspberry Pi ------------------------->
{{% tab-content %}}
## Install InfluxDB v{{< current-version >}} on Raspberry Pi
{{% note %}}
#### Requirements
To run InfluxDB on Raspberry Pi, you need:
- a Raspberry Pi 4+ or 400
- a 64-bit operating system.
We recommend installing a [64-bit version of Ubuntu](https://ubuntu.com/download/raspberry-pi)
Desktop or Ubuntu Server that is compatible with 64-bit Raspberry Pi.
{{% /note %}}
### Install Linux binaries
Follow the [Linux installation instructions](/influxdb/v2.4/install/?t=Linux)
to install InfluxDB on a Raspberry Pi.
### Monitor your Raspberry Pi
Use the [InfluxDB Raspberry Pi template](/influxdb/cloud/monitor-alert/templates/infrastructure/raspberry-pi/)
to easily configure collecting and visualizing system metrics for the Raspberry Pi.
#### Monitor 32-bit Raspberry Pi systems
If you have a 32-bit Raspberry Pi, [use Telegraf](/{{< latest "telegraf" >}}/)
to collect and send data to:
- [InfluxDB OSS](/influxdb/v2.4/), running on a 64-bit system
- InfluxDB Cloud with a [**Free Tier**](/influxdb/cloud/account-management/pricing-plans/#free-plan) account
- InfluxDB Cloud with a paid [**Usage-Based**](/influxdb/cloud/account-management/pricing-plans/#usage-based-plan) account with relaxed resource restrictions.
{{% /tab-content %}}
<!--------------------------------- END Raspberry Pi --------------------------->
{{< /tabs-wrapper >}}
## Download and install the influx CLI
The [`influx` CLI](/influxdb/v2.4/reference/cli/influx/) lets you manage InfluxDB
from your command line.
<a class="btn" href="/influxdb/v2.4/tools/influx-cli/" target="_blank">Download and install the influx CLI</a>
## Set up InfluxDB
The initial setup process for InfluxDB walks through creating a default organization,
user, bucket, and Operator API token.
The setup process is available in both the InfluxDB user interface (UI) and in
the `influx` command line interface (CLI).
{{% note %}}
#### Operator token permissions
The **Operator token** created in the InfluxDB setup process has
**full read and write access to all organizations** in the database.
To prevent accidental interactions across organizations, we recommend
[creating an All-Access token](/influxdb/v2.4/security/tokens/create-token/)
for each organization and using those to manage InfluxDB.
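For example, after completing setup you might create an All-Access token for an organization with the `influx` CLI (a sketch that assumes influx CLI 2.1 or later and a placeholder organization name):

```sh
# Create an All-Access API token scoped to a single organization
influx auth create \
  --org example-org \
  --all-access \
  --description "All-access token for example-org"
```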
{{% /note %}}
{{< tabs-wrapper >}}
{{% tabs %}}
[UI Setup](#)
[CLI Setup](#)
{{% /tabs %}}
<!------------------------------- BEGIN UI Setup ------------------------------>
{{% tab-content %}}
### Set up InfluxDB through the UI
1. With InfluxDB running, visit [localhost:8086](http://localhost:8086).
2. Click **Get Started**
#### Set up your initial user
1. Enter a **Username** for your initial user.
2. Enter a **Password** and **Confirm Password** for your user.
3. Enter your initial **Organization Name**.
4. Enter your initial **Bucket Name**.
5. Click **Continue**.
InfluxDB is now initialized with a primary user, organization, and bucket.
You are ready to [write or collect data](/influxdb/v2.4/write-data).
### (Optional) Set up and use the influx CLI
To avoid having to pass your InfluxDB
[API token](/influxdb/v2.4/security/tokens/) with each `influx` command, set up a configuration profile to store your credentials. To do this, complete the following steps:
1. In a terminal, run the following command:
```sh
# Set up a configuration profile
influx config create -n default \
-u http://localhost:8086 \
-o example-org \
-t mySuP3rS3cr3tT0keN \
-a
```
This configures a new profile named `default` and makes the profile active
so your `influx` CLI commands run against the specified InfluxDB instance.
For more detail, see [`influx config`](/influxdb/v2.4/reference/cli/influx/config/).
2. Learn `influx` CLI commands. To see all available `influx` commands, type
`influx -h` or check out [influx - InfluxDB command line interface](/influxdb/v2.4/reference/cli/influx/).
{{% /tab-content %}}
<!-------------------------------- END UI Setup ------------------------------->
<!------------------------------ BEGIN CLI Setup ------------------------------>
{{% tab-content %}}
### Set up InfluxDB through the influx CLI
Begin the InfluxDB setup process via the [`influx` CLI](/influxdb/v2.4/reference/cli/influx/) by running:
```bash
influx setup
```
1. Enter a **primary username**.
2. Enter a **password** for your user.
3. **Confirm your password** by entering it again.
4. Enter a name for your **primary organization**.
5. Enter a name for your **primary bucket**.
6. Enter a **retention period** for your primary bucket—valid units are
nanoseconds (`ns`), microseconds (`us` or `µs`), milliseconds (`ms`),
seconds (`s`), minutes (`m`), hours (`h`), days (`d`), and weeks (`w`).
Enter nothing for an infinite retention period.
7. Confirm the details for your primary user, organization, and bucket.
InfluxDB is now initialized with a primary user, organization, bucket, and API token.
InfluxDB also creates a configuration profile for you so that you don't have to
add your InfluxDB host, organization, and token to every command.
To view that config profile, use the [`influx config list`](/influxdb/v2.4/reference/cli/influx/config) command.
To continue to use InfluxDB via the CLI, you need the API token created during setup.
To view the token, log into the UI with the credentials created above.
(For instructions, see [View tokens in the InfluxDB UI](/influxdb/v2.4/security/tokens/view-tokens/#view-tokens-in-the-influxdb-ui).)
You are ready to [write or collect data](/influxdb/v2.4/write-data).
{{% note %}}
To automate the setup process, use [flags](/influxdb/v2.4/reference/cli/influx/setup/#flags)
to provide the required information.
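For example, a fully non-interactive setup might look like the following sketch (replace the example username, password, organization, and bucket with your own values):

```sh
influx setup \
  --username exampleuser \
  --password ExAmPl3PA55W0rD \
  --org example-org \
  --bucket example-bucket \
  --retention 0 \
  --force
```

`--retention 0` sets an infinite retention period and `--force` skips the confirmation prompt.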
{{% /note %}}
{{% /tab-content %}}
<!------------------------------- END CLI Setup ------------------------------->
{{< /tabs-wrapper >}}
After you've installed InfluxDB, you're ready to [get started working with your data in InfluxDB](/influxdb/v2.4/get-started/).

---
title: Migrate data to InfluxDB
description: >
  Migrate data to InfluxDB from other InfluxDB instances, including InfluxDB OSS
  and InfluxDB Cloud.
menu:
influxdb_2_4:
name: Migrate data
weight: 9
---
Migrate data to InfluxDB from other InfluxDB instances, including InfluxDB OSS
and InfluxDB Cloud.
{{< children >}}

---
title: Migrate data from InfluxDB Cloud to InfluxDB OSS
description: >
To migrate data from InfluxDB Cloud to InfluxDB OSS, query the data from
InfluxDB Cloud in time-based batches and write the data to InfluxDB OSS.
menu:
influxdb_2_4:
name: Migrate from Cloud to OSS
parent: Migrate data
weight: 102
---
To migrate data from InfluxDB Cloud to InfluxDB OSS, query the data
from InfluxDB Cloud and write the data to InfluxDB OSS.
Because full data migrations will likely exceed your organization's limits and
adjustable quotas, migrate your data in batches.
The following guide provides instructions for setting up an InfluxDB OSS task
that queries data from an InfluxDB Cloud bucket in time-based batches and writes
each batch to an InfluxDB OSS bucket.
{{% cloud %}}
All queries against data in InfluxDB Cloud are subject to your organization's
[rate limits and adjustable quotas](/influxdb/cloud/account-management/limits/).
{{% /cloud %}}
- [Set up the migration](#set-up-the-migration)
- [Migration task](#migration-task)
- [Configure the migration](#configure-the-migration)
- [Migration Flux script](#migration-flux-script)
- [Configuration help](#configuration-help)
- [Monitor the migration progress](#monitor-the-migration-progress)
- [Troubleshoot migration task failures](#troubleshoot-migration-task-failures)
## Set up the migration
1. [Install and set up InfluxDB OSS](/influxdb/{{< current-version-link >}}/install/).
2. **In InfluxDB Cloud**, [create an API token](/influxdb/cloud/security/tokens/create-token/)
with **read access** to the bucket you want to migrate.
3. **In InfluxDB OSS**:
1. Add your **InfluxDB Cloud API token** as a secret using the key,
`INFLUXDB_CLOUD_TOKEN`.
_See [Add secrets](/influxdb/{{< current-version-link >}}/security/secrets/add/) for more information._
2. [Create a bucket](/influxdb/{{< current-version-link >}}/organizations/buckets/create-bucket/)
**to migrate data to**.
3. [Create a bucket](/influxdb/{{< current-version-link >}}/organizations/buckets/create-bucket/)
**to store temporary migration metadata**.
4. [Create a new task](/influxdb/{{< current-version-link >}}/process-data/manage-tasks/create-task/)
using the provided [migration task](#migration-task).
Update the necessary [migration configuration options](#configure-the-migration).
5. _(Optional)_ Set up [migration monitoring](#monitor-the-migration-progress).
6. Save the task.
{{% note %}}
Newly-created tasks are enabled by default, so the data migration begins when you save the task.
{{% /note %}}
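As an alternative to adding the secret through the UI in step 3 above, you can store your InfluxDB Cloud API token as an InfluxDB OSS secret with the `influx` CLI (a sketch; the token value below is a placeholder):

```sh
influx secret update \
  --key INFLUXDB_CLOUD_TOKEN \
  --value YourInfluxDBCloudAPIToken
```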
**After the migration is complete**, each subsequent migration task execution
will fail with the following error:
```
error exhausting result iterator: error calling function "die" @41:9-41:86:
Batch range is beyond the migration range. Migration is complete.
```
## Migration task
### Configure the migration
1. Specify how often you want the task to run using the `task.every` option.
_See [Determine your task interval](#determine-your-task-interval)._
2. Define the following properties in the `migration`
[record](/{{< latest "flux" >}}/data-types/composite/record/):
##### migration
- **start**: Earliest time to include in the migration.
_See [Determine your migration start time](#determine-your-migration-start-time)._
- **stop**: Latest time to include in the migration.
- **batchInterval**: Duration of each time-based batch.
_See [Determine your batch interval](#determine-your-batch-interval)._
- **batchBucket**: InfluxDB OSS bucket to store migration batch metadata in.
- **sourceHost**: [InfluxDB Cloud region URL](/influxdb/cloud/reference/regions)
to migrate data from.
- **sourceOrg**: InfluxDB Cloud organization to migrate data from.
- **sourceToken**: InfluxDB Cloud API token. To keep the API token secure, store
it as a secret in InfluxDB OSS.
- **sourceBucket**: InfluxDB Cloud bucket to migrate data from.
- **destinationBucket**: InfluxDB OSS bucket to migrate data to.
### Migration Flux script
```js
import "array"
import "experimental"
import "influxdata/influxdb/secrets"
// Configure the task
option task = {every: 5m, name: "Migrate data from InfluxDB Cloud"}
// Configure the migration
migration = {
start: 2022-01-01T00:00:00Z,
stop: 2022-02-01T00:00:00Z,
batchInterval: 1h,
batchBucket: "migration",
sourceHost: "https://cloud2.influxdata.com",
sourceOrg: "example-cloud-org",
sourceToken: secrets.get(key: "INFLUXDB_CLOUD_TOKEN"),
sourceBucket: "example-cloud-bucket",
destinationBucket: "example-oss-bucket",
}
// batchRange dynamically returns a record with start and stop properties for
// the current batch. It queries migration metadata stored in the
// `migration.batchBucket` to determine the stop time of the previous batch.
// It uses the previous stop time as the new start time for the current batch
// and adds the `migration.batchInterval` to determine the current batch stop time.
batchRange = () => {
_lastBatchStop =
(from(bucket: migration.batchBucket)
|> range(start: migration.start)
|> filter(fn: (r) => r._field == "batch_stop")
|> filter(fn: (r) => r.srcOrg == migration.sourceOrg)
|> filter(fn: (r) => r.srcBucket == migration.sourceBucket)
|> last()
|> findRecord(fn: (key) => true, idx: 0))._value
_batchStart =
if exists _lastBatchStop then
time(v: _lastBatchStop)
else
migration.start
return {start: _batchStart, stop: experimental.addDuration(d: migration.batchInterval, to: _batchStart)}
}
// Define a static record with batch start and stop time properties
batch = {start: batchRange().start, stop: batchRange().stop}
// Check to see if the current batch start time is beyond the migration.stop
// time and exit with an error if it is.
finished =
if batch.start >= migration.stop then
die(msg: "Batch range is beyond the migration range. Migration is complete.")
else
"Migration in progress"
// Query all data from the specified source bucket within the batch-defined time
// range. To limit migrated data by measurement, tag, or field, add a `filter()`
// function after `range()` with the appropriate predicate fn.
data = () =>
from(host: migration.sourceHost, org: migration.sourceOrg, token: migration.sourceToken, bucket: migration.sourceBucket)
|> range(start: batch.start, stop: batch.stop)
// rowCount is a stream of tables that contains the number of rows returned in
// the batch and is used to generate batch metadata.
rowCount =
data()
|> group(columns: ["_start", "_stop"])
|> count()
// emptyRange is a stream of tables that acts as filler data if the batch is
// empty. This is used to generate batch metadata for empty batches and is
// necessary to correctly increment the time range for the next batch.
emptyRange = array.from(rows: [{_start: batch.start, _stop: batch.stop, _value: 0}])
// metadata returns a stream of tables representing batch metadata.
metadata = () => {
_input =
if exists (rowCount |> findRecord(fn: (key) => true, idx: 0))._value then
rowCount
else
emptyRange
return
_input
|> map(
fn: (r) =>
({
_time: now(),
_measurement: "batches",
srcOrg: migration.sourceOrg,
srcBucket: migration.sourceBucket,
dstBucket: migration.destinationBucket,
batch_start: string(v: batch.start),
batch_stop: string(v: batch.stop),
rows: r._value,
percent_complete:
float(v: int(v: r._stop) - int(v: migration.start)) / float(
v: int(v: migration.stop) - int(v: migration.start),
) * 100.0,
}),
)
|> group(columns: ["_measurement", "srcOrg", "srcBucket", "dstBucket"])
}
// Write the queried data to the specified InfluxDB OSS bucket.
data()
|> to(bucket: migration.destinationBucket)
// Generate and store batch metadata in the migration.batchBucket.
metadata()
|> experimental.to(bucket: migration.batchBucket)
```
### Configuration help
{{< expand-wrapper >}}
<!----------------------- BEGIN Determine task interval ----------------------->
{{% expand "Determine your task interval" %}}
The task interval determines how often the migration task runs and is defined by
the [`task.every` option](/influxdb/v2.4/process-data/task-options/#every).
InfluxDB Cloud rate limits and quotas reset every five minutes, so
**we recommend a `5m` task interval**.
You can use a shorter task interval and run the migration task more often,
but you need to balance the task interval with your [batch interval](#determine-your-batch-interval)
and the amount of data returned in each batch.
If the total amount of data queried in each five-minute interval exceeds your
InfluxDB Cloud organization's [rate limits and quotas](/influxdb/cloud/account-management/limits/),
the batch will fail until rate limits and quotas reset.
{{% /expand %}}
<!------------------------ END Determine task interval ------------------------>
<!---------------------- BEGIN Determine migration start ---------------------->
{{% expand "Determine your migration start time" %}}
The `migration.start` time should be at or near the same time as the earliest
data point you want to migrate.
All migration batches are determined using the `migration.start` time and
`migration.batchInterval` settings.
To find the time of the earliest point in your bucket, run the following query:
```js
from(bucket: "example-cloud-bucket")
|> range(start: 0)
|> group()
|> first()
|> keep(columns: ["_time"])
```
{{% /expand %}}
<!----------------------- END Determine migration start ----------------------->
<!----------------------- BEGIN Determine batch interval ---------------------->
{{% expand "Determine your batch interval" %}}
The `migration.batchInterval` setting controls the time range queried by each batch.
The "density" of the data in your InfluxDB Cloud bucket and your InfluxDB Cloud
organization's [rate limits and quotas](/influxdb/cloud/account-management/limits/)
determine what your batch interval should be.
For example, if you're migrating data collected from hundreds of sensors with
points recorded every second, your batch interval will need to be shorter.
If you're migrating data collected from five sensors with points recorded every
minute, your batch interval can be longer.
It all depends on how much data gets returned in a single batch.
If points occur at regular intervals, you can get a fairly accurate estimate of
how much data will be returned in a given time range by using the `/api/v2/query`
endpoint to execute a query for the time range duration and then measuring the
size of the response body.
The following `curl` command queries an InfluxDB Cloud bucket for the last day
and returns the size of the response body in bytes.
You can customize the range duration to match your specific use case and
data density.
```sh
INFLUXDB_CLOUD_ORG=<your_influxdb_cloud_org>
INFLUXDB_CLOUD_TOKEN=<your_influxdb_cloud_token>
INFLUXDB_CLOUD_BUCKET=<your_influxdb_cloud_bucket>
curl -so /dev/null --request POST \
https://cloud2.influxdata.com/api/v2/query?org=$INFLUXDB_CLOUD_ORG \
--header "Authorization: Token $INFLUXDB_CLOUD_TOKEN" \
--header "Accept: application/csv" \
--header "Content-type: application/vnd.flux" \
--data "from(bucket:\"$INFLUXDB_CLOUD_BUCKET\") |> range(start: -1d, stop: now())" \
--write-out '%{size_download}'
```
{{% note %}}
You can also use other HTTP API tools like [Postman](https://www.postman.com/)
that provide the size of the response body.
{{% /note %}}
Divide the output of this command by 1000000 to convert it to megabytes (MB).
```
batchInterval = (read-rate-limit-mb / response-body-size-mb) * range-duration
```
For example, if the response body of your query that returns data from one day
is 8 MB and you're using the InfluxDB Cloud Free Plan with a read limit of
300 MB per five minutes:
```js
batchInterval = (300 / 8) * 1d
// batchInterval = 37d
```
You could query 37 days of data before hitting your read limit, but this is just an estimate.
We recommend setting the `batchInterval` slightly lower than the calculated interval
to allow for variation between batches.
So in this example, **it would be best to set your `batchInterval` to `35d`**.
##### Important things to note
- This assumes no other queries are running in your InfluxDB Cloud organization.
- You should also consider your network speeds and whether a batch can be fully
downloaded within the [task interval](#determine-your-task-interval).
{{% /expand %}}
<!------------------------ END Determine batch interval ----------------------->
{{< /expand-wrapper >}}
## Monitor the migration progress
The [InfluxDB Cloud Migration Community template](https://github.com/influxdata/community-templates/tree/master/influxdb-cloud-oss-migration/)
installs the migration task outlined in this guide as well as a dashboard
for monitoring running data migrations.
{{< img-hd src="/img/influxdb/2-1-migration-dashboard.png" alt="InfluxDB Cloud migration dashboard" />}}
<a class="btn" href="https://github.com/influxdata/community-templates/tree/master/influxdb-cloud-oss-migration/#quick-install">Install the InfluxDB Cloud Migration template</a>
## Troubleshoot migration task failures
If the migration task fails, [view your task logs](/influxdb/v2.4/process-data/manage-tasks/task-run-history/)
to identify the specific error. Below are common causes of migration task failures.
- [Exceeded rate limits](#exceeded-rate-limits)
- [Invalid API token](#invalid-api-token)
- [Query timeout](#query-timeout)
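If you prefer the command line, you can also inspect the migration task's recent runs and logs with the `influx` CLI (a sketch; the task ID below is a placeholder):

```sh
# Find the ID of the migration task
influx task list

# List recent runs and logs for the task
influx task run list --task-id 0000000000000000
influx task log list --task-id 0000000000000000
```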
### Exceeded rate limits
If your data migration causes you to exceed your InfluxDB Cloud organization's
limits and quotas, the task will return an error similar to:
```
too many requests
```
**Possible solutions**:
- Update the `migration.batchInterval` setting in your migration task to use
a smaller interval. Each batch will then query less data.
### Invalid API token
If the API token stored in the `INFLUXDB_CLOUD_TOKEN` secret doesn't have read access to
your InfluxDB Cloud bucket, the task will return an error similar to:
```
unauthorized access
```
**Possible solutions**:
- Ensure the API token has read access to your InfluxDB Cloud bucket.
- Generate a new InfluxDB Cloud API token with read access to the bucket you
want to migrate. Then, update the `INFLUXDB_CLOUD_TOKEN` secret in your
InfluxDB OSS instance with the new token.
### Query timeout
The InfluxDB Cloud query timeout is 90 seconds. If it takes longer than this to
return the data from the batch interval, the query will time out and the
task will fail.
**Possible solutions**:
- Update the `migration.batchInterval` setting in your migration task to use
a smaller interval. Each batch will then query less data and take less time
to return results.

---
title: Migrate data from InfluxDB OSS to other InfluxDB instances
description: >
To migrate data from an InfluxDB OSS bucket to another InfluxDB OSS or InfluxDB
Cloud bucket, export your data as line protocol and write it to your other
InfluxDB bucket.
menu:
influxdb_2_4:
name: Migrate data from OSS
parent: Migrate data
weight: 101
---
To migrate data from an InfluxDB OSS bucket to another InfluxDB OSS or InfluxDB
Cloud bucket, export your data as line protocol and write it to your other
InfluxDB bucket.
{{% cloud %}}
#### InfluxDB Cloud write limits
If migrating data from InfluxDB OSS to InfluxDB Cloud, you are subject to your
[InfluxDB Cloud organization's rate limits and adjustable quotas](/influxdb/cloud/account-management/limits/).
Consider exporting your data in time-based batches to limit the file size
of exported line protocol to match your InfluxDB Cloud organization's limits.
{{% /cloud %}}
1. [Find the InfluxDB OSS bucket ID](/influxdb/{{< current-version-link >}}/organizations/buckets/view-buckets/)
that contains data you want to migrate.
2. Use the `influxd inspect export-lp` command to export data in your bucket as
[line protocol](/influxdb/v2.4/reference/syntax/line-protocol/).
Provide the following:
- **bucket ID**: ({{< req >}}) ID of the bucket to migrate.
- **engine path**: ({{< req >}}) Path to the TSM storage files on disk.
The default engine path [depends on your operating system](/influxdb/{{< current-version-link >}}/reference/internals/file-system-layout/#file-system-layout).
If you use a [custom engine-path](/influxdb/{{< current-version-link >}}/reference/config-options/#engine-path),
provide your custom path.
- **output path**: ({{< req >}}) File path to output line protocol to.
- **start time**: Earliest time to export.
- **end time**: Latest time to export.
- **measurement**: Export a specific measurement. By default, the command
exports all measurements.
- **compression**: ({{< req text="Recommended" color="magenta" >}})
Use Gzip compression to compress the output line protocol file.
```sh
influxd inspect export-lp \
--bucket-id 12ab34cd56ef \
--engine-path ~/.influxdbv2/engine \
  --output-path path/to/export.lp \
--start 2022-01-01T00:00:00Z \
--end 2022-01-31T23:59:59Z \
--compress
```
3. Write the exported line protocol to your InfluxDB OSS or InfluxDB Cloud instance.
Do any of the following:
- Write line protocol in the **InfluxDB UI**:
- [InfluxDB Cloud UI](/influxdb/cloud/write-data/no-code/load-data/#load-csv-or-line-protocol-in-ui)
- [InfluxDB OSS {{< current-version >}} UI](/influxdb/{{< current-version-link >}}/write-data/no-code/load-data/#load-csv-or-line-protocol-in-ui)
- [Write line protocol using the `influx write` command](/influxdb/{{< current-version-link >}}/reference/cli/influx/write/)
- [Write line protocol using the InfluxDB API](/influxdb/{{< current-version-link >}}/write-data/developer-tools/api/)
- [Bulk ingest data (InfluxDB Cloud)](/influxdb/cloud/write-data/bulk-ingest-cloud/)
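For example, a minimal `influx write` command for a gzip-compressed line protocol export might look like the following sketch (the bucket name and file path are placeholders):

```sh
influx write \
  --bucket example-bucket \
  --compression gzip \
  --file path/to/export.lp
```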

---
title: Monitor data and send alerts
seotitle: Monitor data and send alerts
description: >
Monitor your time series data and send alerts by creating checks, notification
rules, and notification endpoints. Or use community templates to monitor supported environments.
menu:
influxdb_2_4:
name: Monitor & alert
weight: 7
influxdb/v2.4/tags: [monitor, alert, checks, notification, endpoints]
---
Monitor your time series data and send alerts by creating checks, notification
rules, and notification endpoints. Or use [community templates to monitor](/influxdb/v2.4/monitor-alert/templates/) supported environments.
## Overview
1. A [check](/influxdb/v2.4/reference/glossary/#check) in InfluxDB queries data and assigns a status with a `_level` based on specific conditions.
2. InfluxDB stores the output of a check in the `statuses` measurement in the `_monitoring` system bucket.
3. [Notification rules](/influxdb/v2.4/reference/glossary/#notification-rule) check data in the `statuses`
measurement and, based on conditions set in the notification rule, send a message
to a [notification endpoint](/influxdb/v2.4/reference/glossary/#notification-endpoint).
4. InfluxDB stores notifications in the `notifications` measurement in the `_monitoring` system bucket.
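For example, you can inspect recent statuses that checks have written to the `_monitoring` bucket (step 2 above) with the `influx` CLI (a sketch that assumes an active CLI configuration):

```sh
influx query 'from(bucket: "_monitoring")
    |> range(start: -1h)
    |> filter(fn: (r) => r._measurement == "statuses")'
```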
## Create an alert
To get started, do the following:
1. [Create checks](/influxdb/v2.4/monitor-alert/checks/create/) to monitor data and assign a status.
2. [Add notification endpoints](/influxdb/v2.4/monitor-alert/notification-endpoints/create/)
to send notifications to third parties.
3. [Create notification rules](/influxdb/v2.4/monitor-alert/notification-rules/create) to check
statuses and send notifications to your notifications endpoints.
## Manage your monitoring and alerting pipeline
{{< children >}}

---
title: Manage checks
seotitle: Manage monitoring checks in InfluxDB
description: >
Checks in InfluxDB query data and apply a status or level to each data point based on specified conditions.
menu:
influxdb_2_4:
parent: Monitor & alert
weight: 101
influxdb/v2.4/tags: [monitor, checks, notifications, alert]
related:
- /influxdb/v2.4/monitor-alert/notification-rules/
- /influxdb/v2.4/monitor-alert/notification-endpoints/
---
Checks in InfluxDB query data and apply a status or level to each data point based on specified conditions.
Learn how to create and manage checks:
{{< children >}}

---
title: Create checks
seotitle: Create monitoring checks in InfluxDB
description: >
Create a check in the InfluxDB UI.
menu:
influxdb_2_4:
parent: Manage checks
weight: 201
related:
- /influxdb/v2.4/monitor-alert/notification-rules/
- /influxdb/v2.4/monitor-alert/notification-endpoints/
---
Create a check in the InfluxDB user interface (UI).
Checks query data and apply a status to each point based on specified conditions.
## Parts of a check
A check consists of two parts: a query and a check configuration.
#### Check query
- Specifies the dataset to monitor.
- May include tags to narrow results.
#### Check configuration
- Defines check properties, including the check interval and status message.
- Evaluates specified conditions and applies a status (if applicable) to each data point:
- `crit`
- `warn`
- `info`
- `ok`
- Stores status in the `_level` column.
## Check types
There are two types of checks:
- [threshold](#threshold-check)
- [deadman](#deadman-check)
#### Threshold check
A threshold check assigns a status based on a value being above, below,
inside, or outside of defined thresholds.
#### Deadman check
A deadman check assigns a status to data when a series or group doesn't report
in a specified amount of time.
## Create a check
1. In the navigation menu on the left, select **Alerts > Alerts**.
{{< nav-icon "alerts" >}}
2. Click **{{< caps >}}{{< icon "plus" >}} Create{{< /caps >}}** and select the [type of check](#check-types) to create.
3. Click **Name this check** in the top left corner and enter a unique name for the check, and then do the following:
   - [Configure the check query](#configure-the-check-query)
   - [Configure the check](#configure-the-check)
#### Configure the check query
1. Select the **bucket**, **measurement**, **field** and **tag sets** to query.
2. If creating a threshold check, select an **aggregate function**.
Aggregate functions aggregate data between the specified check intervals and
return a single value for the check to process.
In the **Aggregate functions** column, select an interval from the interval drop-down list
(for example, "Every 5 minutes") and an aggregate function from the list of functions.
3. Click **{{< caps >}}Submit{{< /caps >}}** to run the query and preview the results.
To see the raw query results, click the **View Raw Data {{< icon "toggle" >}}** toggle.
#### Configure the check
1. Click **{{< caps >}}2. Configure Check{{< /caps >}}** near the top of the window.
2. In the **{{< caps >}}Properties{{< /caps >}}** column, configure the following:
##### Schedule Every
Select the interval to run the check (for example, "Every 5 minutes").
This interval matches the aggregate function interval for the check query.
_Changing the interval here will update the aggregate function interval._
##### Offset
Delay the execution of a task to account for any late data.
Offset queries do not change the queried time range.
{{% note %}}Your offset must be shorter than your [check interval](#schedule-every).
{{% /note %}}
##### Tags
Add custom tags to the query output.
Each custom tag appends a new column to each row in the query output.
The column label is the tag key and the column value is the tag value.
Use custom tags to associate additional metadata with the check.
Common metadata tags across different checks lets you easily group and organize checks.
You can also use custom tags in [notification rules](/influxdb/v2.4/monitor-alert/notification-rules/create/).
3. In the **{{< caps >}}Status Message Template{{< /caps >}}** column, enter
the status message template for the check.
Use [Flux string interpolation](/{{< latest "flux" >}}/data-types/basic/string/#interpolate-strings)
to populate the message with data from the query.
Check data is represented as a record, `r`.
Access specific column values using dot notation: `r.columnName`.
Use data from the following columns:
- columns included in the query output
- [custom tags](#tags) added to the query output
- `_check_id`
- `_check_name`
- `_level`
- `_source_measurement`
- `_type`
###### Example status message template
```
From ${r._check_name}:
${r._field} is ${r._level}.
Its value is ${string(v: r.field_name)}.
```
When a check generates a status, it stores the message in the `_message` column.
4. Define check conditions that assign statuses to points.
Condition options depend on your check type.
##### Configure a threshold check
1. In the **{{< caps >}}Thresholds{{< /caps >}}** column, click the status name (CRIT, WARN, INFO, or OK)
to define conditions for that specific status.
2. From the **When value** drop-down list, select a threshold: is above, is below,
is inside of, is outside of.
3. Enter a value or values for the threshold.
You can also use the threshold sliders in the data visualization to define threshold values.
##### Configure a deadman check
1. In the **{{< caps >}}Deadman{{< /caps >}}** column, enter a duration for the deadman check in the **for** field.
For example, `90s`, `5m`, `2h30m`, etc.
2. Use the **set status to** drop-down list to select a status to set on a dead series.
3. In the **And stop checking after** field, enter the time to stop monitoring the series.
For example, `30m`, `2h`, `3h15m`, etc.
5. Click the green **{{< icon "check" >}}** in the top right corner to save the check.
## Clone a check
Create a new check by cloning an existing check.
1. Go to **Alerts > Alerts** in the navigation on the left.
{{< nav-icon "alerts" >}}
2. Click the **{{< icon "gear" >}}** icon next to the check you want to clone
and then click **Clone**.

---
title: Delete checks
seotitle: Delete monitoring checks in InfluxDB
description: >
Delete checks in the InfluxDB UI.
menu:
influxdb_2_4:
parent: Manage checks
weight: 204
related:
- /influxdb/v2.4/monitor-alert/notification-rules/
- /influxdb/v2.4/monitor-alert/notification-endpoints/
---
If you no longer need a check, use the InfluxDB user interface (UI) to delete it.
{{% warn %}}
Deleting a check cannot be undone.
{{% /warn %}}
1. In the navigation menu on the left, select **Alerts > Alerts**.
{{< nav-icon "alerts" >}}
2. Click the **{{< icon "delete" >}}** icon, and then click **{{< caps >}}Confirm{{< /caps >}}**.
After a check is deleted, all statuses generated by the check remain in the `_monitoring`
bucket until the retention period for the bucket expires.
{{% note %}}
You can also [disable a check](/influxdb/v2.4/monitor-alert/checks/update/#enable-or-disable-a-check)
without having to delete it.
{{% /note %}}

---
title: Update checks
seotitle: Update monitoring checks in InfluxDB
description: >
Update, rename, enable or disable checks in the InfluxDB UI.
menu:
influxdb_2_4:
parent: Manage checks
weight: 203
related:
- /influxdb/v2.4/monitor-alert/notification-rules/
- /influxdb/v2.4/monitor-alert/notification-endpoints/
---
Update checks in the InfluxDB user interface (UI).
Common updates include:
- [Update check queries and logic](#update-check-queries-and-logic)
- [Enable or disable a check](#enable-or-disable-a-check)
- [Rename a check](#rename-a-check)
- [Add or update a check description](#add-or-update-a-check-description)
- [Add a label to a check](#add-a-label-to-a-check)
To update checks, select **Alerts > Alerts** in the navigation menu on the left.
{{< nav-icon "alerts" >}}
## Update check queries and logic
1. Click the name of the check you want to update. The check builder appears.
2. To edit the check query, click **{{< caps >}}1. Define Query{{< /caps >}}** at the top of the check builder window.
3. To edit the check logic, click **{{< caps >}}2. Configure Check{{< /caps >}}** at the top of the check builder window.
_For details about using the check builder, see [Create checks](/influxdb/v2.4/monitor-alert/checks/create/)._
## Enable or disable a check
Click the {{< icon "toggle" >}} toggle next to a check to enable or disable it.
## Rename a check
1. Hover over the name of the check you want to update.
2. Click the **{{< icon "edit" >}}** icon that appears next to the check name.
3. Enter a new name and click out of the name field or press enter to save.
_You can also rename a check in the [check builder](#update-check-queries-and-logic)._
## Add or update a check description
1. Hover over the check description you want to update.
2. Click the **{{< icon "edit" >}}** icon that appears next to the description.
3. Enter a new description and click out of the description field or press enter to save.
## Add a label to a check
1. Click **{{< icon "add-label" >}} Add a label** next to the check you want to add a label to.
The **Add Labels** box appears.
2. To add an existing label, select the label from the list.
3. To create and add a new label:
- In the search field, enter the name of the new label. The **Create Label** box opens.
- In the **Description** field, enter an optional description for the label.
- Select a color for the label.
- Click **{{< caps >}}Create Label{{< /caps >}}**.
4. To remove a label, click **{{< icon "x" >}}** on the label.

---
title: View checks
seotitle: View monitoring checks in InfluxDB
description: >
View check details and statuses and notifications generated by checks in the InfluxDB UI.
menu:
influxdb_2_4:
parent: Manage checks
weight: 202
related:
- /influxdb/v2.4/monitor-alert/notification-rules/
- /influxdb/v2.4/monitor-alert/notification-endpoints/
---
View check details and statuses and notifications generated by checks in the InfluxDB user interface (UI).
- [View a list of all checks](#view-a-list-of-all-checks)
- [View check details](#view-check-details)
- [View statuses generated by a check](#view-statuses-generated-by-a-check)
- [View notifications triggered by a check](#view-notifications-triggered-by-a-check)
To view checks, click **Alerts > Alerts** in the navigation menu on the left.
{{< nav-icon "alerts" >}}
## View a list of all checks
The **{{< caps >}}Checks{{< /caps >}}** section of the Alerts landing page displays all existing checks.
## View check details
Click the name of the check you want to view.
The check builder appears.
Here you can view the check query and logic.
## View statuses generated by a check
1. Click the **{{< icon "view" >}}** icon on the check.
2. Click **View History**.
The Statuses History page displays statuses generated by the selected check.

---
title: Create custom checks
seotitle: Custom checks
description: >
Create custom checks with a Flux task.
menu:
influxdb_2_4:
parent: Monitor & alert
weight: 201
influxdb/v2.4/tags: [alerts, checks, tasks, Flux]
---
In the UI, you can create two kinds of [checks](/influxdb/v2.4/reference/glossary/#check):
[`threshold`](/influxdb/v2.4/monitor-alert/checks/create/#threshold-check) and
[`deadman`](/influxdb/v2.4/monitor-alert/checks/create/#deadman-check).
Using a Flux task, you can create a custom check that provides a couple of advantages:
- Customize and transform the data you would like to use for the check.
- Set up custom criteria for your alert (other than `threshold` and `deadman`).
## Create a task
1. In the InfluxDB UI, select **Tasks** in the navigation menu on the left.
{{< nav-icon "tasks" >}}
2. Click **{{< caps >}}{{< icon "plus" >}} Create Task{{< /caps >}}**.
3. In the **Name** field, enter a descriptive name,
and then enter how often to run the task in the **Every** field (for example, `10m`).
For more detail, such as using cron syntax or including an offset, see [Task configuration options](/influxdb/v2.4/process-data/task-options/).
4. Enter the Flux script for your custom check, including the [`monitor.check`](/{{< latest "flux" >}}/stdlib/influxdata/influxdb/monitor/check/) function.
{{% note %}}
Use the [`/api/v2/checks/{checkID}/query` API endpoint](/influxdb/v2.4/api/#operation/GetChecksIDQuery)
to see the Flux code for a check built in the UI.
This can be useful for constructing custom checks.
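For example, the following request is a sketch of retrieving the Flux for a check (the check ID and API token are placeholders):

```sh
curl --request GET "http://localhost:8086/api/v2/checks/000000000000000a/query" \
  --header "Authorization: Token YOUR_API_TOKEN"
```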
{{% /note %}}
### Example: Monitor failed tasks
The script below is fairly complex, and can be used as a framework for similar tasks.
It does the following:
- Import the necessary `influxdata/influxdb/monitor` package, and other packages for data processing.
- Query the `_tasks` bucket to retrieve all statuses generated by your check.
- Set the `_level` to alert on, for example, `crit`, `warn`, `info`, or `ok`.
- Create a `check` object that specifies an ID, name, and type for the check.
- Define the `ok` and `crit` statuses.
- Execute the `monitor` function on the `check` using the `task_data`.
#### Example alert task script
```js
import "strings"
import "regexp"
import "influxdata/influxdb/monitor"
import "influxdata/influxdb/schema"
option task = {name: "Failed Tasks Check", every: 1h, offset: 4m}
task_data = from(bucket: "_tasks")
|> range(start: -task.every)
|> filter(fn: (r) => r["_measurement"] == "runs")
|> filter(fn: (r) => r["_field"] == "logs")
|> map(fn: (r) => ({r with name: strings.split(v: regexp.findString(r: /option task = \{([^\}]+)/, v: r._value), t: "\\\\\\\"")[1]}))
|> drop(columns: ["_value", "_start", "_stop"])
|> group(columns: ["name", "taskID", "status", "_measurement"])
|> map(fn: (r) => ({r with _value: if r.status == "failed" then 1 else 0}))
|> last()
check = {
// 16 characters, alphanumeric
_check_id: "0000000000000001",
// Name string
_check_name: "Failed Tasks Check",
// Check type (threshold, deadman, or custom)
_type: "custom",
tags: {},
}
ok = (r) => r["logs"] == 0
crit = (r) => r["logs"] == 1
messageFn = (r) => "The task: ${r.taskID} - ${r.name} has a status of ${r.status}"
task_data
|> schema["fieldsAsCols"]()
|> monitor["check"](data: check, messageFn: messageFn, ok: ok, crit: crit)
```
{{% note %}}
Creating a custom check does not send a notification email.
For information on how to create notification emails, see
[Create notification endpoints](/influxdb/v2.4/monitor-alert/notification-endpoints/create),
[Create notification rules](/influxdb/v2.4/monitor-alert/notification-rules/create),
and [Send alert email](/influxdb/v2.4/monitor-alert/send-email/).
{{% /note %}}

---
title: Manage notification endpoints
list_title: Manage notification endpoints
description: >
Create, read, update, and delete endpoints in the InfluxDB UI.
influxdb/v2.4/tags: [monitor, endpoints, notifications, alert]
menu:
influxdb_2_4:
parent: Monitor & alert
weight: 102
related:
- /influxdb/v2.4/monitor-alert/checks/
- /influxdb/v2.4/monitor-alert/notification-rules/
---
Notification endpoints store information to connect to a third-party service.
Create a connection to an HTTP, Slack, or PagerDuty endpoint.
{{< children >}}

---
title: Create notification endpoints
description: >
Create notification endpoints to send alerts on your time series data.
menu:
influxdb_2_4:
name: Create endpoints
parent: Manage notification endpoints
weight: 201
related:
- /influxdb/v2.4/monitor-alert/checks/
- /influxdb/v2.4/monitor-alert/notification-rules/
---
To send notifications about changes in your data, start by creating a notification endpoint to a third-party service. After creating notification endpoints, [create notification rules](/influxdb/v2.4/monitor-alert/notification-rules/create) to send alerts to third-party services on [check statuses](/influxdb/v2.4/monitor-alert/checks/create).
{{% cloud-only %}}
#### Endpoints available in InfluxDB Cloud
The following endpoints are available for the InfluxDB Cloud Free Plan and Usage-based Plan:
| Endpoint | Free Plan | Usage-based Plan |
|:-------- |:-------------------: |:----------------------------:|
| **Slack** | **{{< icon "check" >}}** | **{{< icon "check" >}}** |
| **PagerDuty** | | **{{< icon "check" >}}** |
| **HTTP** | | **{{< icon "check" >}}** |
{{% /cloud-only %}}
## Create a notification endpoint
1. In the navigation menu on the left, select **Alerts > Alerts**.
{{< nav-icon "alerts" >}}
2. Select **{{< caps >}}Notification Endpoints{{< /caps >}}**.
3. Click **{{< caps >}}{{< icon "plus" >}} Create{{< /caps >}}**.
4. From the **Destination** drop-down list, select a destination endpoint to send notifications to.
{{% cloud-only %}}_See [available endpoints](#endpoints-available-in-influxdb-cloud)._{{% /cloud-only %}}
5. In the **Name** and **Description** fields, enter a name and description for the endpoint.
6. Enter information to connect to the endpoint:
- **For HTTP**, enter the **URL** to send the notification.
Select the **auth method** to use: **None** for no authentication.
To authenticate with a username and password, select **Basic** and then
enter credentials in the **Username** and **Password** fields.
To authenticate with an API token, select **Bearer**, and then enter the
API token in the **Token** field.
- **For Slack**, create an [Incoming WebHook](https://api.slack.com/incoming-webhooks#posting_with_webhooks)
in Slack, and then enter your webhook URL in the **Slack Incoming WebHook URL** field.
- **For PagerDuty**:
- [Create a new service](https://support.pagerduty.com/docs/services-and-integrations#section-create-a-new-service),
[add an integration for your service](https://support.pagerduty.com/docs/services-and-integrations#section-add-integrations-to-an-existing-service),
and then enter the PagerDuty integration key for your new service in the **Routing Key** field.
- The **Client URL** provides a useful link in your PagerDuty notification.
Enter any URL that you'd like to use to investigate issues.
This URL is sent as the `client_url` property in the PagerDuty trigger event.
By default, the **Client URL** is set to your Monitoring & Alerting History
page, and the following is included in the PagerDuty trigger event:
```json
"client_url": "http://localhost:8086/orgs/<your-org-ID>/alert-history"
```
7. Click **{{< caps >}}Create Notification Endpoint{{< /caps >}}**.

---
title: Delete notification endpoints
description: >
Delete a notification endpoint in the InfluxDB UI.
menu:
influxdb_2_4:
name: Delete endpoints
parent: Manage notification endpoints
weight: 204
related:
- /influxdb/v2.4/monitor-alert/checks/
- /influxdb/v2.4/monitor-alert/notification-rules/
---
If notifications are no longer sent to an endpoint, complete the steps below to
delete the endpoint, and then [update notification rules](/influxdb/v2.4/monitor-alert/notification-rules/update)
with a new notification endpoint as needed.
## Delete a notification endpoint
1. In the navigation menu on the left, select **Alerts > Alerts**.
{{< nav-icon "alerts" >}}
2. Select **{{< caps >}}Notification Endpoints{{< /caps >}}** and find the endpoint
you want to delete.
3. Click the **{{< icon "trash" >}}** icon on the notification endpoint you want to delete
and then click **{{< caps >}}Confirm{{< /caps >}}**.

---
title: Update notification endpoints
description: >
Update notification endpoints in the InfluxDB UI.
menu:
influxdb_2_4:
name: Update endpoints
parent: Manage notification endpoints
weight: 203
related:
- /influxdb/v2.4/monitor-alert/checks/
- /influxdb/v2.4/monitor-alert/notification-rules/
---
Complete the following steps to update notification endpoint details.
To update the notification endpoint selected for a notification rule, see [update notification rules](/influxdb/v2.4/monitor-alert/notification-rules/update/).
**To update a notification endpoint**
1. In the navigation menu on the left, select **Alerts > Alerts**.
{{< nav-icon "alerts" >}}
2. Select **{{< caps >}}Notification Endpoints{{< /caps >}}** and then do the following as needed:
- [Update the name or description for notification endpoint](#update-the-name-or-description-for-notification-endpoint)
- [Change endpoint details](#change-endpoint-details)
- [Disable notification endpoint](#disable-notification-endpoint)
- [Add a label to notification endpoint](#add-a-label-to-notification-endpoint)
## Update the name or description for notification endpoint
1. Hover over the name or description of the endpoint and click the pencil icon
(**{{< icon "edit" >}}**) to edit the field.
2. Click outside of the field to save your changes.
## Change endpoint details
1. Click the name of the endpoint to update.
2. Update details as needed, and then click **Edit Notification Endpoint**.
For details about each field, see [Create notification endpoints](/influxdb/v2.4/monitor-alert/notification-endpoints/create/).
## Disable notification endpoint
Click the {{< icon "toggle" >}} toggle to disable the notification endpoint.
## Add a label to notification endpoint
1. Click **{{< icon "add-label" >}} Add a label** next to the endpoint you want to add a label to.
The **Add Labels** box opens.
2. To add an existing label, select the label from the list.
3. To create and add a new label:
- In the search field, enter the name of the new label. The **Create Label** box opens.
- In the **Description** field, enter an optional description for the label.
- Select a color for the label.
- Click **{{< caps >}}Create Label{{< /caps >}}**.
4. To remove a label, click **{{< icon "x" >}}** on the label.

---
title: View notification endpoint history
seotitle: View notification endpoint details and history
description: >
View notification endpoint details and history in the InfluxDB UI.
menu:
influxdb_2_4:
name: View endpoint history
parent: Manage notification endpoints
weight: 202
related:
- /influxdb/v2.4/monitor-alert/checks/
- /influxdb/v2.4/monitor-alert/notification-rules/
---
View notification endpoint details and history in the InfluxDB user interface (UI).
1. In the navigation menu on the left, select **Alerts**.
{{< nav-icon "alerts" >}}
2. Select **{{< caps >}}Notification Endpoints{{< /caps >}}**.
- [View notification endpoint details](#view-notification-endpoint-details)
- [View notification endpoint history](#view-notification-endpoint-history), including statuses and notifications sent to the endpoint
## View notification endpoint details
On the notification endpoints page:
1. Click the name of the notification endpoint you want to view.
2. View the notification endpoint destination, name, and information to connect to the endpoint.
## View notification endpoint history
On the notification endpoints page, click the **{{< icon "gear" >}}** icon,
and then click **View History**.
The Check Statuses History page displays:
- Statuses generated for the selected notification endpoint
- Notifications sent to the selected notification endpoint

---
title: Manage notification rules
description: >
Manage notification rules in InfluxDB.
weight: 103
influxdb/v2.4/tags: [monitor, notifications, alert]
menu:
influxdb_2_4:
parent: Monitor & alert
related:
- /influxdb/v2.4/monitor-alert/checks/
- /influxdb/v2.4/monitor-alert/notification-endpoints/
---
The following articles provide information on managing your notification rules:
{{< children >}}

---
title: Create notification rules
description: >
Create notification rules to send alerts on your time series data.
weight: 201
menu:
influxdb_2_4:
parent: Manage notification rules
related:
- /influxdb/v2.4/monitor-alert/checks/
- /influxdb/v2.4/monitor-alert/notification-endpoints/
---
Once you've set up checks and notification endpoints, create notification rules to alert you.
_For details, see [Manage checks](/influxdb/v2.4/monitor-alert/checks/) and
[Manage notification endpoints](/influxdb/v2.4/monitor-alert/notification-endpoints/)._
1. In the navigation menu on the left, select **Alerts > Alerts**.
{{< nav-icon "alerts" >}}
2. Select **{{< caps >}}Notification Rules{{< /caps >}}** near the top of the page.
- [Create a new notification rule in the UI](#create-a-new-notification-rule-in-the-ui)
- [Clone an existing notification rule in the UI](#clone-an-existing-notification-rule-in-the-ui)
## Create a new notification rule
1. On the notification rules page, click **{{< caps >}}{{< icon "plus" >}} Create{{< /caps >}}**.
2. Complete the **About** section:
1. In the **Name** field, enter a name for the notification rule.
2. In the **Schedule Every** field, enter how frequently the rule should run.
3. In the **Offset** field, enter an offset time. For example, if a task runs on the hour, a 10m offset delays the task to 10 minutes after the hour. Time ranges defined in the task are relative to the specified execution time.
3. In the **Conditions** section, build a condition using a combination of status and tag keys.
- Next to **When status is equal to**, select a status from the drop-down field.
- Next to **AND When**, enter one or more tag key-value pairs to filter by.
4. In the **Message** section, select an endpoint to notify.
5. Click **{{< caps >}}Create Notification Rule{{< /caps >}}**.
## Clone an existing notification rule
On the notification rules page, click the **{{< icon "gear" >}}** icon and select **Clone**.
The cloned rule appears.

---
title: Delete notification rules
description: >
If you no longer need to receive an alert, delete the associated notification rule.
weight: 204
menu:
influxdb_2_4:
parent: Manage notification rules
related:
- /influxdb/v2.4/monitor-alert/checks/
- /influxdb/v2.4/monitor-alert/notification-endpoints/
---
If you no longer need to receive an alert, delete the associated notification rule.
## Delete a notification rule
1. In the navigation menu on the left, select **Alerts > Alerts**.
{{< nav-icon "alerts" >}}
2. Select **{{< caps >}}Notification Rules{{< /caps >}}** near the top of the page.
3. Click the **{{< icon "trash" >}}** icon on the notification rule you want to delete.
4. Click **{{< caps >}}Confirm{{< /caps >}}**.

---
title: Update notification rules
description: >
Update notification rules to update the notification message or change the schedule or conditions.
weight: 203
menu:
influxdb_2_4:
parent: Manage notification rules
related:
- /influxdb/v2.4/monitor-alert/checks/
- /influxdb/v2.4/monitor-alert/notification-endpoints/
---
Update notification rules to update the notification message or change the schedule or conditions.
1. In the navigation menu on the left, select **Alerts > Alerts**.
{{< nav-icon "alerts" >}}
2. Select **{{< caps >}}Notification Rules{{< /caps >}}** near the top of the page.
- [Update the name or description for notification rules](#update-the-name-or-description-for-notification-rules)
- [Enable or disable notification rules](#enable-or-disable-notification-rules)
- [Add a label to notification rules](#add-a-label-to-notification-rules)
## Update the name or description for notification rules
On the Notification Rules page:
1. Hover over the name or description of a rule and click the pencil icon
(**{{< icon "edit" >}}**) to edit the field.
2. Click outside of the field to save your changes.
## Enable or disable notification rules
On the notification rules page, click the {{< icon "toggle" >}} toggle to
enable or disable the notification rule.
## Add a label to notification rules
On the notification rules page:
1. Click **{{< icon "add-label" >}} Add a label**
next to the rule you want to add a label to.
The **Add Labels** box opens.
2. To add an existing label, select the label from the list.
3. To create and add a new label:
- In the search field, enter the name of the new label. The **Create Label** box opens.
- In the **Description** field, enter an optional description for the label.
- Select a color for the label.
- Click **{{< caps >}}Create Label{{< /caps >}}**.
4. To remove a label, click **{{< icon "x" >}}** on the label.
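Labels can also be attached through the API. A minimal sketch, assuming the label already exists; `<rule-id>` and `<label-id>` are placeholders:
```sh
# Minimal sketch: add an existing label to a notification rule through the API.
# <rule-id> and <label-id> are placeholders.
curl --request POST "http://localhost:8086/api/v2/notificationRules/<rule-id>/labels" \
  --header "Authorization: Token $INFLUX_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{"labelID": "<label-id>"}'
```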

---
title: View notification rules
description: >
View notification rule details, as well as statuses and notifications generated by notification rules.
weight: 202
menu:
influxdb_2_4:
parent: Manage notification rules
related:
- /influxdb/v2.4/monitor-alert/checks/
- /influxdb/v2.4/monitor-alert/notification-endpoints/
---
View notification rule details, as well as statuses and notifications generated by notification rules, in the InfluxDB user interface (UI).
- [View a list of all notification rules](#view-a-list-of-all-notification-rules)
- [View notification rule details](#view-notification-rule-details)
- [View statuses generated by a notification rule](#view-statuses-generated-by-a-notification-rule)
- [View notifications triggered by a notification rule](#view-notifications-triggered-by-a-notification-rule)
**To view notification rules:**
1. In the navigation menu on the left, select **Alerts**.
{{< nav-icon "alerts" >}}
2. Select **{{< caps >}}Notification Rules{{< /caps >}}** near the top of the page.
## View a list of all notification rules
The **{{< caps >}}Notification Rules{{< /caps >}}** section of the Alerts landing page displays all existing notification rules.
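To retrieve the same list programmatically, you can query the API. A minimal sketch, assuming a local InfluxDB instance; `<org-id>` is a placeholder for your organization ID:
```sh
# Minimal sketch: list notification rules for an organization through the API.
# <org-id> is a placeholder for your organization ID.
curl --request GET "http://localhost:8086/api/v2/notificationRules?orgID=<org-id>" \
  --header "Authorization: Token $INFLUX_TOKEN"
```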
## View notification rule details
Click the name of the notification rule you want to view.
The rule editor appears.
Here you can view the rule's conditions and message.
## View statuses generated by a notification rule
Click the **{{< icon "gear" >}}** icon on the notification rule, and then **View History**.
The Statuses History page displays statuses associated with the selected notification rule.
## View notifications triggered by a notification rule
1. Click the **{{< icon "gear" >}}** icon on the notification rule, and then **View History**.
2. In the top left corner, click **{{< caps >}}Notifications{{< /caps >}}**.
The Notifications History page displays notifications initiated by the selected notification rule.

---
title: Send alert email
description: >
Send an alert email.
menu:
influxdb_2_4:
parent: Monitor & alert
weight: 104
influxdb/v2.4/tags: [alert, email, notifications, check]
related:
- /influxdb/v2.4/monitor-alert/checks/
---
Send an alert email using a third-party service, such as [SendGrid](https://sendgrid.com/), [Amazon Simple Email Service (SES)](https://aws.amazon.com/ses/), [Mailjet](https://www.mailjet.com/), or [Mailgun](https://www.mailgun.com/). To send an alert email, complete the following steps:
1. [Create a check](/influxdb/v2.4/monitor-alert/checks/create/#create-a-check-in-the-influxdb-ui) to identify the data to monitor and the status to alert on.
2. Set up your preferred email service (sign up, retrieve API credentials, and send test email):
- **SendGrid**: See [Getting Started With the SendGrid API](https://sendgrid.com/docs/API_Reference/api_getting_started.html)
- **AWS Simple Email Service (SES)**: See [Using the Amazon SES API](https://docs.aws.amazon.com/ses/latest/DeveloperGuide/send-email.html). Your AWS SES request, including the `url` (endpoint), authentication, and the structure of the request may vary. For more information, see [Amazon SES API requests](https://docs.aws.amazon.com/ses/latest/DeveloperGuide/using-ses-api-requests.html) and [Authenticating requests to the Amazon SES API](https://docs.aws.amazon.com/ses/latest/DeveloperGuide/using-ses-api-authentication.html).
- **Mailjet**: See [Getting Started with Mailjet](https://dev.mailjet.com/email/guides/getting-started/)
- **Mailgun**: See [Mailgun Signup](https://signup.mailgun.com/new/signup)
3. [Create an alert email task](#create-an-alert-email-task) to call your email service and send an alert email.
{{% note %}}
In the procedure below, we use the **Tasks** page in the InfluxDB UI (user interface) to create a task. Explore other ways to [create a task](/influxdb/v2.4/process-data/manage-tasks/create-task/).
{{% /note %}}
### Create an alert email task
1. In the InfluxDB UI, select **Tasks** in the navigation menu on the left.
{{< nav-icon "tasks" >}}
2. Click **{{< caps >}}{{< icon "plus" >}} Create Task{{< /caps >}}**.
3. In the **Name** field, enter a descriptive name, for example, **Send alert email**,
and then enter how often to run the task in the **Every** field, for example, `10m`.
For more detail, such as using cron syntax or including an offset, see [Task configuration options](/influxdb/v2.4/process-data/task-options/).
4. In the right panel, enter the following detail in your **task script** (see [examples below](#examples)):
- Import the [Flux HTTP package](/{{< latest "flux" >}}/stdlib/http/).
- (Optional) Store your API key as a secret for reuse.
First, [add your API key as a secret](/influxdb/v2.4/security/secrets/manage-secrets/add/),
and then import the [Flux InfluxDB Secrets package](/{{< latest "flux" >}}/stdlib/influxdata/influxdb/secrets/).
- Query the `statuses` measurement in the `_monitoring` bucket to retrieve all statuses generated by your check.
- Set the time range to monitor; use the same interval at which the task runs, for example, `range(start: -task.every)`.
- Set the `_level` to alert on, for example, `crit`, `warn`, `info`, or `ok`.
- Use the `map()` function to evaluate the criteria to send an alert using `http.post()`.
- Specify your email service `url` (endpoint), include applicable request `headers`, and verify your request `data` format follows the format specified for your email service.
#### Examples
{{< tabs-wrapper >}}
{{% tabs %}}
[SendGrid](#)
[AWS SES](#)
[Mailjet](#)
[Mailgun](#)
{{% /tabs %}}
<!-------------------------------- BEGIN SendGrid -------------------------------->
{{% tab-content %}}
The example below uses the SendGrid API to send an alert email when more than 3 critical statuses occur since the previous task run.
```js
import "http"
import "json"
// Import the Secrets package if you store your API key as a secret.
// For detail on how to do this, see Step 4 above.
import "influxdata/influxdb/secrets"
// Retrieve the secret if applicable. Otherwise, skip this line
// and add the API key as the Bearer token in the Authorization header.
SENDGRID_APIKEY = secrets.get(key: "SENDGRID_APIKEY")
numberOfCrits = from(bucket: "_monitoring")
|> range(start: -task.every)
|> filter(fn: (r) => r._measurement == "statuses" and r._level == "crit")
|> count()
numberOfCrits
|> map(
fn: (r) => if r._value > 3 then
{r with _value: http.post(
url: "https://api.sendgrid.com/v3/mail/send",
headers: {"Content-Type": "application/json", "Authorization": "Bearer ${SENDGRID_APIKEY}"},
data: json.encode(
v: {
"personalizations": [
{
"to": [
{
"email": "jane.doe@example.com"
}
]
}
],
"from": {
"email": "john.doe@example.com"
},
"subject": "InfluxDB critical alert",
"content": [
{
"type": "text/plain",
"value": "There have been ${r._value} critical statuses."
}
]
}
)
)}
else
{r with _value: 0},
)
```
{{% /tab-content %}}
<!-------------------------------- BEGIN AWS SES -------------------------------->
{{% tab-content %}}
The example below uses the AWS SES API v2 to send an alert email when more than 3 critical statuses occur since the last task run.
{{% note %}}
Your AWS SES request, including the `url` (endpoint), authentication, and the structure of the request may vary. For more information, see [Amazon SES API requests](https://docs.aws.amazon.com/ses/latest/DeveloperGuide/using-ses-api-requests.html) and [Authenticating requests to the Amazon SES API](https://docs.aws.amazon.com/ses/latest/DeveloperGuide/using-ses-api-authentication.html). We recommend signing your AWS API requests using the [Signature Version 4 signing process](https://docs.aws.amazon.com/general/latest/gr/signing_aws_api_requests.html).
{{% /note %}}
```js
import "http"
import "json"
// Import the Secrets package if you store your API credentials as secrets.
// For detail on how to do this, see Step 4 above.
import "influxdata/influxdb/secrets"
// Retrieve the secrets if applicable. Otherwise, skip this line
// and add the API key as the Bearer token in the Authorization header.
AWS_AUTH_ALGORITHM = secrets.get(key: "AWS_AUTH_ALGORITHM")
AWS_CREDENTIAL = secrets.get(key: "AWS_CREDENTIAL")
AWS_SIGNED_HEADERS = secrets.get(key: "AWS_SIGNED_HEADERS")
AWS_CALCULATED_SIGNATURE = secrets.get(key: "AWS_CALCULATED_SIGNATURE")
numberOfCrits = from(bucket: "_monitoring")
|> range(start: -task.every)
|> filter(fn: (r) => r._measurement == "statuses" and r._level == "crit")
|> count()
numberOfCrits
|> map(
fn: (r) => if r._value > 3 then
{r with _value: http.post(
url: "https://email.your-aws-region.amazonaws.com/sendemail/v2/email/outbound-emails",
headers: {
"Content-Type": "application/json",
"Authorization": "Bearer ${AWS_AUTH_ALGORITHM}${AWS_CREDENTIAL}${AWS_SIGNED_HEADERS}${AWS_CALCULATED_SIGNATURE}"},
data: json.encode(v: {
"Content": {
"Simple": {
"Body": {
"Text": {
"Charset": "UTF-8",
"Data": "There have been ${r._value} critical statuses."
}
},
"Subject": {
"Charset": "UTF-8",
"Data": "InfluxDB critical alert"
}
}
},
"Destination": {
"ToAddresses": [
"john.doe@example.com"
]
}
}
)
)}
else
{r with _value: 0},
)
```
For details on the request syntax, see [SendEmail API v2 reference](https://docs.aws.amazon.com/ses/latest/APIReference-V2/API_SendEmail.html).
{{% /tab-content %}}
<!-------------------------------- BEGIN Mailjet ------------------------------->
{{% tab-content %}}
The example below uses the Mailjet Send API to send an alert email when more than 3 critical statuses occur since the last task run.
{{% note %}}
To view your Mailjet API credentials, sign in to Mailjet and open the [API Key Management page](https://app.mailjet.com/account/api_keys).
{{% /note %}}
```js
import "http"
import "json"
// Import the Secrets package if you store your API keys as secrets.
// For detail on how to do this, see Step 4 above.
import "influxdata/influxdb/secrets"
// Retrieve the secrets if applicable. Otherwise, skip this line
// and add the API keys as Basic credentials in the Authorization header.
MAILJET_APIKEY = secrets.get(key: "MAILJET_APIKEY")
MAILJET_SECRET_APIKEY = secrets.get(key: "MAILJET_SECRET_APIKEY")
numberOfCrits = from(bucket: "_monitoring")
|> range(start: -task.every)
|> filter(fn: (r) => r._measurement == "statuses" and r._level == "crit")
|> count()
numberOfCrits
|> map(
fn: (r) => if r._value > 3 then
{r with
_value: http.post(
url: "https://api.mailjet.com/v3.1/send",
headers: {
"Content-type": "application/json",
"Authorization": "Basic ${MAILJET_APIKEY}:${MAILJET_SECRET_APIKEY}"
},
data: json.encode(
v: {
"Messages": [
{
"From": {"Email": "jane.doe@example.com"},
"To": [{"Email": "john.doe@example.com"}],
"Subject": "InfluxDB critical alert",
"TextPart": "There have been ${r._value} critical statuses.",
"HTMLPart": "<h3>${r._value} critical statuses</h3><p>There have been ${r._value} critical statuses.",
},
],
},
),
),
}
else
{r with _value: 0},
)
```
{{% /tab-content %}}
<!-------------------------------- BEGIN Mailgun ---------------------------->
{{% tab-content %}}
The example below uses the Mailgun API to send an alert email when more than 3 critical statuses occur since the last task run.
{{% note %}}
To view your Mailgun API keys, sign in to Mailgun and open [Account Security - API security](https://app.mailgun.com/app/account/security/api_keys). Mailgun requires that a domain be specified. A domain is automatically created for you when you first set up your account. You must include this domain in your `url` endpoint (for example, `https://api.mailgun.net/v3/YOUR_DOMAIN` or `https://api.eu.mailgun.net/v3/YOUR_DOMAIN`). If you're using a free version of Mailgun, you can set up a maximum of five authorized recipients (to receive email alerts) for your domain. To view your Mailgun domains, sign in to Mailgun and view the [Domains page](https://app.mailgun.com/app/sending/domains).
{{% /note %}}
```js
import "http"
import "json"
// Import the Secrets package if you store your API key as a secret.
// For detail on how to do this, see Step 4 above.
import "influxdata/influxdb/secrets"
// Retrieve the secret if applicable. Otherwise, skip this line
// and add the API key as the Bearer token in the Authorization header.
MAILGUN_APIKEY = secrets.get(key: "MAILGUN_APIKEY")
numberOfCrits = from(bucket: "_monitoring")
|> range(start: -task.every)
|> filter(fn: (r) => r["_measurement"] == "statuses")
|> filter(fn: (r) => r["_level"] == "crit")
|> count()
numberOfCrits
|> map(
fn: (r) => if r._value > 3 then
{r with _value: http.post(
url: "https://api.mailgun.net/v3/YOUR_DOMAIN/messages",
headers: {
"Content-type": "application/json",
"Authorization": "Basic api:${MAILGUN_APIKEY}"
},
data: json.encode(v: {
"from": "Username <mailgun@YOUR_DOMAIN_NAME>",
"to": "email@example.com",
"subject": "InfluxDB critical alert",
"text": "There have been ${r._value} critical statuses."
}
)
)}
else
{r with _value: 0},
)
```
{{% /tab-content %}}
{{< /tabs-wrapper >}}

---
title: Monitor with templates
description: >
Use community templates to monitor data in many supported environments. Monitor infrastructure, networking, IoT, software, security, TICK stack, and more.
menu:
influxdb_2_4:
parent: Monitor & alert
weight: 104
influxdb/v2.4/tags: [monitor, templates]
---
Use one of our community templates to quickly set up InfluxDB (with a bucket and dashboard) to collect, analyze, and monitor data in supported environments.
{{< children >}}

---
title: Monitor infrastructure
description: >
Use one of our community templates to quickly set up InfluxDB (with a bucket and dashboard) to collect, analyze, and monitor your infrastructure.
menu:
influxdb_2_4:
parent: Monitor with templates
weight: 104
influxdb/v2.4/tags: [monitor, templates, infrastructure]
---
Use one of our community templates to quickly set up InfluxDB (with a bucket and dashboard) to collect, analyze, and monitor your infrastructure.
{{< children >}}

---
title: Monitor Amazon Web Services (AWS)
description: >
Use the AWS CloudWatch Monitoring template to monitor data from Amazon Web Services (AWS), Amazon Elastic Compute Cloud (EC2), and Amazon Elastic Load Balancing (ELB) with the AWS CloudWatch Service.
menu:
influxdb_2_4:
parent: Monitor infrastructure
name: AWS CloudWatch
weight: 201
---
Use the [AWS CloudWatch Monitoring template](https://github.com/influxdata/community-templates/tree/master/aws_cloudwatch) to monitor data from [Amazon Web Services (AWS)](https://aws.amazon.com/), [Amazon Elastic Compute Cloud (EC2)](https://aws.amazon.com/ec2/), and [Amazon Elastic Load Balancing (ELB)](https://aws.amazon.com/elasticloadbalancing/) with the [AWS CloudWatch Service](https://aws.amazon.com/cloudwatch/).
The AWS CloudWatch Monitoring template includes the following:
- two [dashboards](/influxdb/v2.4/reference/glossary/#dashboard):
- **AWS CloudWatch NLB (Network Load Balancers) Monitoring**: Displays data from the `cloudwatch_aws_network_elb` measurement
- **AWS CloudWatch Instance Monitoring**: Displays data from the `cloudwatch_aws_ec2` measurement
- two [buckets](/influxdb/v2.4/reference/glossary/#bucket): `kubernetes` and `cloudwatch`
- two labels: `inputs.cloudwatch`, `AWS`
- one variable: `v.bucket`
- one [Telegraf configuration](/influxdb/v2.4/telegraf-configs/): [AWS CloudWatch input plugin](/{{< latest "telegraf" >}}/plugins//#cloudwatch)
## Apply the template
1. Use the [`influx` CLI](/influxdb/v2.4/reference/cli/influx/) to run the following command:
```sh
influx apply -f https://raw.githubusercontent.com/influxdata/community-templates/master/aws_cloudwatch/aws_cloudwatch.yml
```
For more information, see [influx apply](/influxdb/v2.4/reference/cli/influx/apply/).
2. [Install Telegraf](/{{< latest "telegraf" >}}/introduction/installation/) on a server with network access to both the CloudWatch API and [InfluxDB v2 API](/influxdb/v2.4/reference/api/).
3. In your Telegraf configuration file (`telegraf.conf`), find the following example `influxdb_v2` output plugins, and then **replace** the `urls` to specify the servers to monitor:
```toml
## k8s
[[outputs.influxdb_v2]]
urls = ["http://influxdb.monitoring:8086"]
organization = "InfluxData"
bucket = "kubernetes"
token = "secret-token"
## cloudv2 sample
[[outputs.influxdb_v2]]
urls = ["$INFLUX_HOST"]
token = "$INFLUX_TOKEN"
organization = "$INFLUX_ORG"
bucket = "cloudwatch"
```
4. [Start Telegraf](/influxdb/v2.4/write-data/no-code/use-telegraf/auto-config/#start-telegraf) (see the example after these steps).
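The `cloudv2 sample` output above reads its connection details from environment variables. A minimal sketch of exporting those values and starting Telegraf, assuming a local configuration file; the host, organization, and token values are placeholders:
```sh
# Placeholders: replace with your InfluxDB URL, organization, and API token.
export INFLUX_HOST="http://localhost:8086"
export INFLUX_ORG="example-org"
export INFLUX_TOKEN="my-secret-token"

# Start Telegraf with the local configuration file.
telegraf --config /etc/telegraf/telegraf.conf
```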
## View the incoming data
1. In the InfluxDB user interface (UI), select **Dashboards** in the left navigation.
{{< nav-icon "dashboards" >}}
2. Open your AWS dashboards, and then set the `v.bucket` variable to specify the
bucket to query data from (`kubernetes` or `cloudwatch`).

---
title: Monitor Docker
description: >
Use the [Docker Monitoring template](https://github.com/influxdata/community-templates/tree/master/docker) to monitor your Docker containers.
menu:
influxdb_2_4:
parent: Monitor infrastructure
name: Docker
weight: 202
---
Use the [Docker Monitoring template](https://github.com/influxdata/community-templates/tree/master/docker) to monitor your Docker containers. First, [apply the template](#apply-the-template), and then [view incoming data](#view-incoming-data).
This template uses the [Docker input plugin](/{{< latest "telegraf" >}}/plugins//#docker) to collect Docker metrics, store them in InfluxDB, and display them in a dashboard.
The Docker Monitoring template includes the following:
- one [dashboard](/influxdb/v2.4/reference/glossary/#dashboard): **Docker**
- one [bucket](/influxdb/v2.4/reference/glossary/#bucket): `docker, 7d retention`
- labels: Docker input plugin labels
- one [Telegraf configuration](/influxdb/v2.4/telegraf-configs/): Docker input plugin
- one variable: `bucket`
- four [checks](/influxdb/v2.4/reference/glossary/#check): `Container cpu`, `mem`, `disk`, `non-zero exit`
- one [notification endpoint](/influxdb/v2.4/reference/glossary/#notification-endpoint): `Http Post`
- one [notification rule](/influxdb/v2.4/reference/glossary/#notification-rule): `Crit Alert`
For more information about how checks, notification endpoints, and notification rules work together, see [monitor data and send alerts](/influxdb/v2.4/monitor-alert/).
## Apply the template
1. Use the [`influx` CLI](/influxdb/v2.4/reference/cli/influx/) to run the following command:
```sh
influx apply -f https://raw.githubusercontent.com/influxdata/community-templates/master/docker/docker.yml
```
For more information, see [influx apply](/influxdb/v2.4/reference/cli/influx/apply/).
{{% note %}}
Ensure your `influx` CLI is configured with your account credentials and that configuration is active. For more information, see [influx config](/influxdb/v2.4/reference/cli/influx/config/).
{{% /note %}}
2. [Install Telegraf](/{{< latest "telegraf" >}}/introduction/installation/) on a server with network access to both the Docker containers and [InfluxDB v2 API](/influxdb/v2.4/reference/api/).
3. In your [Telegraf configuration file (`telegraf.conf`)](/influxdb/v2.4/telegraf-configs/), do the following:
- Depending on how you run Docker, you may need to customize the [Docker input plugin](/{{< latest "telegraf" >}}/plugins//#docker) configuration. For example, you may need to specify the `endpoint` value.
- Set the following environment variables (see the example after these steps):
- `INFLUX_TOKEN`: Token must have permissions to read Telegraf configurations and write data to the `telegraf` bucket. See how to [view tokens](/influxdb/v2.4/security/tokens/view-tokens/).
- `INFLUX_ORG`: Name of your organization. See how to [view your organization](/influxdb/v2.4/organizations/view-orgs/).
- `INFLUX_HOST`: Your InfluxDB host URL, for example, localhost, a remote instance, or InfluxDB Cloud.
4. [Start Telegraf](/influxdb/v2.4/write-data/no-code/use-telegraf/auto-config/#start-telegraf).
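As a rough sketch of steps 3 and 4, assuming a local InfluxDB instance and a Telegraf configuration stored in InfluxDB (the token, organization, host, and configuration ID shown are placeholders):
```sh
# Placeholders: replace the token, organization, host, and Telegraf configuration ID.
export INFLUX_TOKEN="my-secret-token"
export INFLUX_ORG="example-org"
export INFLUX_HOST="http://localhost:8086"

# Start Telegraf using the configuration stored in InfluxDB.
telegraf --config "$INFLUX_HOST/api/v2/telegrafs/0xoX00oOx0xoX00o"
```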
## View incoming data
1. In the InfluxDB user interface (UI), select **Dashboards** in the left navigation.
{{< nav-icon "dashboards" >}}
2. Open the **Docker** dashboard to start monitoring.
