Version restructure (#5133)

* mass changes for version restructure

* fixed latest-patch and flux version generator

* updated hugo configs

* fixed flux frontmatter injector

* fixed flux frontmatter injector

* WIP api generator updates for version restructure (#5128)

* fixed telegraf plugin list

* removed latest shortcode

* fixed current-version

* fixed product dropdown crosslinking

* fixed alt links

* WIP fixing links

* fixed broken links

* updated api doc generation

* fixed additional resources

* added version redirects to edge.js

* fixed search placeholder

* fixed paged titles
pull/5134/head
Scott Anderson 2023-09-12 23:33:31 -06:00 committed by GitHub
parent 3b0a469906
commit 35ad46c4c2
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
8426 changed files with 264341 additions and 1208477 deletions


@@ -0,0 +1,11 @@
title: InfluxDB Clustered API Service
description: |
The InfluxDB HTTP API provides a programmatic interface for all interactions with InfluxDB.
Access the InfluxDB API using the `/api/v2/` endpoint.
This documentation is generated from the
[InfluxDB OpenAPI specification](https://raw.githubusercontent.com/influxdata/openapi/master/contracts/ref/cloud.yml).
version: Cloud 2.x
license:
name: MIT
url: 'https://opensource.org/licenses/MIT'


@@ -0,0 +1,8 @@
- url: https://{baseurl}
description: InfluxDB Clustered API URL
variables:
baseurl:
enum:
- 'cluster-host.com'
default: 'cluster-host.com'
description: InfluxDB Clustered URL
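The server entry above uses OpenAPI server-URL templating: the `{baseurl}` variable is substituted into `https://{baseurl}`, with `cluster-host.com` as the spec's placeholder default. A minimal sketch of that substitution (the host is the spec's placeholder, not a real cluster):

```shell
# Resolve the OpenAPI server URL template from the spec above.
# 'cluster-host.com' is the placeholder default enum value, not a real host.
template='https://{baseurl}'
baseurl='cluster-host.com'

# Substitute the {baseurl} variable into the server URL template.
server_url="${template/\{baseurl\}/$baseurl}"
echo "$server_url"
echo "${server_url}/api/v2/"   # base path for API requests
```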


@@ -0,0 +1,13 @@
- name: Using the InfluxDB HTTP API
tags:
- Quick start
- Authentication
- Headers
- Pagination
- Response codes
- System information endpoints
- name: All endpoints
tags:
- Ping
- Query
- Write

api-docs/clustered/ref.yml (new file, 2060 lines)

File diff suppressed because it is too large


@@ -88,15 +88,34 @@ for version in $versions
do
# Trim the trailing slash off the directory name
version="${version%/}"
menu="influxdb_$(echo $version | sed 's/\./_/g;s/v//g;')_ref"
# Define the menu key
if [[ $version == "cloud-serverless" ]] || [[ $version == "cloud-dedicated" ]] || [[ $version == "clustered" ]]; then
menu="influxdb_$(echo $version | sed 's/\./_/g;s/-/_/g;')"
else
menu="influxdb_$(echo $version | sed 's/\./_/g;')_ref"
fi
# Define the title text based on the version
if [[ $version == "cloud" ]]; then
titleVersion="Cloud"
elif [[ $version == "cloud-iox" ]]; then
titleVersion="Cloud (IOx)"
elif [[ $version == "cloud-serverless" ]]; then
titleVersion="Cloud Serverless"
elif [[ $version == "cloud-dedicated" ]]; then
titleVersion="Cloud Dedicated"
elif [[ $version == "clustered" ]]; then
titleVersion="Clustered"
else
titleVersion="$version"
fi
# Define frontmatter version
if [[ $version == "cloud-serverless" ]] || [[ $version == "cloud-dedicated" ]] || [[ $version == "clustered" ]]; then
frontmatterVersion="v3"
else
frontmatterVersion="v2"
fi
# Generate the frontmatter
v2frontmatter="---
title: InfluxDB $titleVersion API documentation
@@ -113,14 +132,22 @@ weight: 102
v1compatfrontmatter="---
title: InfluxDB $titleVersion v1 compatibility API documentation
description: >
The InfluxDB v1 compatibility API provides a programmatic interface for interactions with InfluxDB $titleVersion using InfluxDB v1.x compatibility endpoints.
The InfluxDB v1 compatibility API provides a programmatic interface for interactions with InfluxDB $titleVersion using InfluxDB v1 compatibility endpoints.
layout: api
menu:
$menu:
parent: 1.x compatibility
parent: v1 compatibility
name: View v1 compatibility API docs
weight: 304
---
"
v3frontmatter="---
title: InfluxDB $titleVersion API documentation
description: >
The InfluxDB API provides a programmatic interface for interactions with InfluxDB $titleVersion.
layout: api
weight: 102
---
"
# If the v2 spec file differs from master, regenerate the HTML.
@@ -139,7 +166,11 @@ weight: 304
generateHtml $filePath $outFilename $titleVersion $titleSubmodule
# Create temp file with frontmatter and Redoc html
echo "$v2frontmatter" >> $version$outFilename.tmp
if [[ $frontmatterVersion == "v3" ]]; then
echo "$v3frontmatter" >> $version$outFilename.tmp
else
echo "$v2frontmatter" >> $version$outFilename.tmp
fi
buildHugoTemplate $version v2 $outFilename
fi
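The mapping logic in the script hunks above (menu key, title text, frontmatter version) can be exercised standalone. A sketch that reproduces the menu-key and frontmatter-version rules for a few sample version directory names; `map_version` is a hypothetical helper for illustration, not part of the actual generator:

```shell
#!/usr/bin/env bash
# Standalone sketch of the version-mapping rules from the generator script.
map_version() {
  local version="${1%/}"   # trim the trailing slash, as the script does
  local menu frontmatterVersion
  if [[ $version == "cloud-serverless" || $version == "cloud-dedicated" || $version == "clustered" ]]; then
    # v3 products: dots and hyphens become underscores, no _ref suffix
    menu="influxdb_$(echo $version | sed 's/\./_/g;s/-/_/g;')"
    frontmatterVersion="v3"
  else
    # v2 products: dots become underscores, _ref suffix appended
    menu="influxdb_$(echo $version | sed 's/\./_/g;')_ref"
    frontmatterVersion="v2"
  fi
  echo "$version -> menu=$menu, frontmatter=$frontmatterVersion"
}

map_version "clustered/"       # clustered -> menu=influxdb_clustered, frontmatter=v3
map_version "cloud-dedicated"  # cloud-dedicated -> menu=influxdb_cloud_dedicated, frontmatter=v3
map_version "v2.7"             # v2.7 -> menu=influxdb_v2_7_ref, frontmatter=v2
```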


@@ -1,7 +0,0 @@
title: InfluxDB OSS API Service
version: 2.0.0
description: |
The InfluxDB v2 API provides a programmatic interface for all interactions with InfluxDB. Access the InfluxDB API using the `/api/v2/` endpoint.
license:
name: MIT
url: 'https://opensource.org/licenses/MIT'

File diff suppressed because it is too large


@@ -1,502 +0,0 @@
openapi: 3.0.0
info:
title: Influx API Service (V1 compatible endpoints)
version: 0.1.0
description: |
The InfluxDB 1.x compatibility /write and /query endpoints work with
InfluxDB 1.x client libraries and third-party integrations like Grafana
and others.
If you want to use the latest InfluxDB `/api/v2` API instead,
see the [InfluxDB v2 API documentation](/influxdb/v2.0/api).
servers:
- url: /
paths:
/write:
post:
operationId: PostWriteV1
tags:
- Write
summary: Write time series data into InfluxDB in a V1 compatible format.
requestBody:
description: Line protocol body
required: true
content:
text/plain:
schema:
type: string
parameters:
- $ref: '#/components/parameters/TraceSpan'
- $ref: '#/components/parameters/AuthUserV1'
- $ref: '#/components/parameters/AuthPassV1'
- in: query
name: db
schema:
type: string
required: true
description: >-
The bucket to write to. If none exists, InfluxDB creates a bucket
with a default 3-day retention policy.
- in: query
name: rp
schema:
type: string
description: The retention policy name.
- in: query
name: precision
schema:
type: string
description: Write precision.
- in: header
name: Content-Encoding
description: >-
When present, its value indicates to the database that compression
is applied to the line protocol body.
schema:
type: string
description: >-
Specifies that the line protocol in the body is encoded with gzip
or not encoded with identity.
default: identity
enum:
- gzip
- identity
responses:
'204':
description: >-
Write data is correctly formatted and accepted for writing to the
bucket.
'400':
description: >-
The line protocol was poorly formed and no points were written.
The response can be used to determine the first malformed line in
the body. All data in the body was rejected and not written.
content:
application/json:
schema:
$ref: '#/components/schemas/LineProtocolError'
'401':
description: >-
Token does not have sufficient permissions to write to this
organization and bucket or the organization and bucket do not exist.
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
'403':
description: No token was sent, but one is required.
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
'413':
description: >-
Write has been rejected because the payload is too large. Error
message returns max size supported. All data in body was rejected
and not written.
content:
application/json:
schema:
$ref: '#/components/schemas/LineProtocolLengthError'
'429':
description: >-
Token is temporarily over quota. The Retry-After header describes
when to try the write again.
headers:
Retry-After:
description: >-
A non-negative decimal integer indicating the seconds to delay
after the response is received.
schema:
type: integer
format: int32
'503':
description: >-
Server is temporarily unavailable to accept writes. The Retry-After
header describes when to try the write again.
headers:
Retry-After:
description: >-
A non-negative decimal integer indicating the seconds to delay
after the response is received.
schema:
type: integer
format: int32
default:
description: Internal server error
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
/query:
post:
operationId: PostQueryV1
tags:
- Query
summary: Query InfluxDB in a V1 compatible format
requestBody:
description: InfluxQL query to execute.
content:
text/plain:
schema:
type: string
parameters:
- $ref: '#/components/parameters/TraceSpan'
- $ref: '#/components/parameters/AuthUserV1'
- $ref: '#/components/parameters/AuthPassV1'
- in: header
name: Accept
schema:
type: string
description: >-
Specifies how query results should be encoded in the response.
**Note:** With `application/csv`, query results include epoch
timestamps instead of RFC3339 timestamps.
default: application/json
enum:
- application/json
- application/csv
- text/csv
- application/x-msgpack
- in: header
name: Accept-Encoding
description: >-
The Accept-Encoding request HTTP header advertises which content
encoding, usually a compression algorithm, the client is able to
understand.
schema:
type: string
description: >-
Specifies that the query response in the body should be encoded
with gzip or not encoded with identity.
default: identity
enum:
- gzip
- identity
- in: header
name: Content-Type
schema:
type: string
enum:
- application/vnd.influxql
- in: query
name: db
schema:
type: string
required: true
description: The bucket to query.
- in: query
name: rp
schema:
type: string
description: The retention policy name.
- in: query
name: q
description: Defines the InfluxQL query to run.
schema:
type: string
responses:
'200':
description: Query results
headers:
Content-Encoding:
description: >-
The Content-Encoding entity header indicates which encodings
were applied to the entity-body.
schema:
type: string
description: >-
Specifies that the response in the body is encoded with gzip
or not encoded with identity.
default: identity
enum:
- gzip
- identity
Trace-Id:
description: >-
The Trace-Id header reports the request's trace ID, if one was
generated.
schema:
type: string
description: Specifies the request's trace ID.
content:
application/csv:
schema:
$ref: '#/components/schemas/InfluxQLCSVResponse'
text/csv:
schema:
$ref: '#/components/schemas/InfluxQLCSVResponse'
application/json:
schema:
$ref: '#/components/schemas/InfluxQLResponse'
application/x-msgpack:
schema:
type: string
format: binary
'429':
description: >-
Token is temporarily over quota. The Retry-After header describes
when to try the read again.
headers:
Retry-After:
description: >-
A non-negative decimal integer indicating the seconds to delay
after the response is received.
schema:
type: integer
format: int32
default:
description: Error processing query
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
components:
parameters:
TraceSpan:
in: header
name: Zap-Trace-Span
description: OpenTracing span context
example:
trace_id: '1'
span_id: '1'
baggage:
key: value
required: false
schema:
type: string
AuthUserV1:
in: query
name: u
required: false
schema:
type: string
description: Username.
AuthPassV1:
in: query
name: p
required: false
schema:
type: string
description: User token.
schemas:
InfluxQLResponse:
properties:
results:
type: array
items:
type: object
properties:
statement_id:
type: integer
series:
type: array
items:
type: object
properties:
name:
type: string
columns:
type: array
items:
type: integer
values:
type: array
items:
type: array
items: {}
InfluxQLCSVResponse:
type: string
example: >
name,tags,time,test_field,test_tag
test_measurement,,1603740794286107366,1,tag_value
test_measurement,,1603740870053205649,2,tag_value
test_measurement,,1603741221085428881,3,tag_value
Error:
properties:
code:
description: Code is the machine-readable error code.
readOnly: true
type: string
enum:
- internal error
- not found
- conflict
- invalid
- unprocessable entity
- empty value
- unavailable
- forbidden
- too many requests
- unauthorized
- method not allowed
message:
readOnly: true
description: Message is a human-readable message.
type: string
required:
- code
- message
LineProtocolError:
properties:
code:
description: Code is the machine-readable error code.
readOnly: true
type: string
enum:
- internal error
- not found
- conflict
- invalid
- empty value
- unavailable
message:
readOnly: true
description: Message is a human-readable message.
type: string
op:
readOnly: true
description: >-
Op describes the logical code operation during error. Useful for
debugging.
type: string
err:
readOnly: true
description: >-
Err is a stack of errors that occurred during processing of the
request. Useful for debugging.
type: string
line:
readOnly: true
description: First line within sent body containing malformed data
type: integer
format: int32
required:
- code
- message
- op
- err
LineProtocolLengthError:
properties:
code:
description: Code is the machine-readable error code.
readOnly: true
type: string
enum:
- invalid
message:
readOnly: true
description: Message is a human-readable message.
type: string
maxLength:
readOnly: true
description: Max length in bytes for a body of line-protocol.
type: integer
format: int32
required:
- code
- message
- maxLength
securitySchemes:
TokenAuthentication:
type: apiKey
name: Authorization
in: header
description: >
Use the [Token
authentication](#section/Authentication/TokenAuthentication)
scheme to authenticate to the InfluxDB API.
In your API requests, send an `Authorization` header.
For the header value, provide the word `Token` followed by a space and
an InfluxDB API token.
The word `Token` is case-sensitive.
### Syntax
`Authorization: Token YOUR_INFLUX_TOKEN`
For examples and more information, see the following:
- [`/authorizations`](#tag/Authorizations) endpoint.
- [Authorize API requests](/influxdb/v2.1/api-guide/api_intro/#authentication).
- [Manage API tokens](/influxdb/v2.1/security/tokens).
BasicAuthentication:
type: http
scheme: basic
description: >
Use the HTTP [Basic
authentication](#section/Authentication/BasicAuthentication)
scheme with clients that support the InfluxDB 1.x convention of username
and password (that don't support the `Authorization: Token` scheme).
For examples and more information, see how to [authenticate with a
username and password](/influxdb/v2.1/reference/api/influxdb-1x/).
QuerystringAuthentication:
type: apiKey
in: query
name: u=&p=
description: >
Use the [Querystring
authentication](#section/Authentication/QuerystringAuthentication)
scheme with InfluxDB 1.x API parameters to provide credentials through
the query string.
For examples and more information, see how to [authenticate with a
username and password](/influxdb/v2.1/reference/api/influxdb-1x/).
security:
- TokenAuthentication: []
- BasicAuthentication: []
- QuerystringAuthentication: []
tags:
- name: Authentication
description: >
The InfluxDB 1.x API requires authentication for all requests.
InfluxDB Cloud uses InfluxDB API tokens to authenticate requests.
For more information, see the following:
- [Token authentication](#section/Authentication/TokenAuthentication)
- [Basic authentication](#section/Authentication/BasicAuthentication)
- [Querystring
authentication](#section/Authentication/QuerystringAuthentication)
<!-- ReDoc-Inject: <security-definitions> -->
x-traitTag: true
- Query
- Write
x-tagGroups:
- name: Overview
tags:
- Authentication
- name: Data I/O endpoints
tags:
- Write
- Query
- name: All endpoints
tags:
- Query
- Write
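The security schemes above describe two ways to authenticate against the v1-compatibility endpoints: a `Token` `Authorization` header, or `u=`/`p=` query-string parameters for 1.x clients. A minimal sketch of both; the host, database, and token values are placeholders for illustration:

```shell
# Placeholder credentials and host for illustration only.
INFLUX_TOKEN="example-token"
HOST="https://localhost:8086"

# Token authentication: the word "Token" is case-sensitive.
auth_header="Authorization: Token ${INFLUX_TOKEN}"
echo "$auth_header"

# A query request using the header (commented out; needs a live server):
# curl -G "${HOST}/query" -H "$auth_header" \
#   --data-urlencode "db=mydb" --data-urlencode "q=SHOW DATABASES"

# Querystring authentication: credentials in the u= and p= parameters,
# for 1.x clients that can't set an Authorization header:
# curl -G "${HOST}/query?u=myuser&p=${INFLUX_TOKEN}" \
#   --data-urlencode "db=mydb" --data-urlencode "q=SHOW DATABASES"
```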


@@ -1,7 +0,0 @@
title: InfluxDB OSS API Service
version: 2.0.0
description: |
The InfluxDB v2 API provides a programmatic interface for all interactions with InfluxDB. Access the InfluxDB API using the `/api/v2/` endpoint.
license:
name: MIT
url: 'https://opensource.org/licenses/MIT'


@@ -1,4 +0,0 @@
title: InfluxDB OSS v1 compatibility API documentation
license:
name: MIT
url: 'https://opensource.org/licenses/MIT'


@@ -1,9 +0,0 @@
- name: Overview
tags:
- Authentication
- name: Data I/O endpoints
tags:
- Write
- Query
- name: All endpoints
tags: []

File diff suppressed because it is too large


@@ -1,502 +0,0 @@
openapi: 3.0.0
info:
title: InfluxDB OSS v1 compatibility API documentation
version: 0.1.0
description: |
The InfluxDB 1.x compatibility `/write` and `/query` endpoints work with
InfluxDB 1.x client libraries and third-party integrations like Grafana
and others.
If you want to use the latest InfluxDB `/api/v2` API instead,
see the [InfluxDB v2 API documentation](/influxdb/v2.1/api/).
servers:
- url: /
paths:
/write:
post:
operationId: PostWriteV1
tags:
- Write
summary: Write time series data into InfluxDB in a V1-compatible format
requestBody:
description: Line protocol body
required: true
content:
text/plain:
schema:
type: string
parameters:
- $ref: '#/components/parameters/TraceSpan'
- $ref: '#/components/parameters/AuthUserV1'
- $ref: '#/components/parameters/AuthPassV1'
- in: query
name: db
schema:
type: string
required: true
description: >-
Bucket to write to. If none exists, InfluxDB creates a bucket
with a default 3-day retention policy.
- in: query
name: rp
schema:
type: string
description: Retention policy name.
- in: query
name: precision
schema:
type: string
description: Write precision.
- in: header
name: Content-Encoding
description: >-
When present, its value indicates to the database that compression
is applied to the line protocol body.
schema:
type: string
description: >-
Specifies that the line protocol in the body is encoded with gzip
or not encoded with identity.
default: identity
enum:
- gzip
- identity
responses:
'204':
description: >-
Write data is correctly formatted and accepted for writing to the
bucket.
'400':
description: >-
The line protocol was poorly formed and no points were written.
The response can be used to determine the first malformed line in
the body. All data in the body was rejected and not written.
content:
application/json:
schema:
$ref: '#/components/schemas/LineProtocolError'
'401':
description: >-
Token does not have sufficient permissions to write to this
organization and bucket or the organization and bucket do not exist.
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
'403':
description: No token was sent, but one is required.
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
'413':
description: >-
Write has been rejected because the payload is too large. Error
message returns max size supported. All data in body was rejected
and not written.
content:
application/json:
schema:
$ref: '#/components/schemas/LineProtocolLengthError'
'429':
description: >-
Token is temporarily over quota. The Retry-After header describes
when to try the write again.
headers:
Retry-After:
description: >-
A non-negative decimal integer indicating the seconds to delay
after the response is received.
schema:
type: integer
format: int32
'503':
description: >-
Server is temporarily unavailable to accept writes. The Retry-After
header describes when to try the write again.
headers:
Retry-After:
description: >-
A non-negative decimal integer indicating the seconds to delay
after the response is received.
schema:
type: integer
format: int32
default:
description: Internal server error
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
/query:
post:
operationId: PostQueryV1
tags:
- Query
summary: Query InfluxDB in a V1 compatible format
requestBody:
description: InfluxQL query to execute.
content:
text/plain:
schema:
type: string
parameters:
- $ref: '#/components/parameters/TraceSpan'
- $ref: '#/components/parameters/AuthUserV1'
- $ref: '#/components/parameters/AuthPassV1'
- in: header
name: Accept
schema:
type: string
description: >-
Specifies how query results should be encoded in the response.
**Note:** With `application/csv`, query results include epoch
timestamps instead of RFC3339 timestamps.
default: application/json
enum:
- application/json
- application/csv
- text/csv
- application/x-msgpack
- in: header
name: Accept-Encoding
description: >-
The Accept-Encoding request HTTP header advertises which content
encoding, usually a compression algorithm, the client is able to
understand.
schema:
type: string
description: >-
Specifies that the query response in the body should be encoded
with gzip or not encoded with identity.
default: identity
enum:
- gzip
- identity
- in: header
name: Content-Type
schema:
type: string
enum:
- application/vnd.influxql
- in: query
name: db
schema:
type: string
required: true
description: Bucket to query.
- in: query
name: rp
schema:
type: string
description: Retention policy name.
- in: query
name: q
description: Defines the InfluxQL query to run.
schema:
type: string
responses:
'200':
description: Query results
headers:
Content-Encoding:
description: >-
The Content-Encoding entity header indicates which encodings
were applied to the entity-body.
schema:
type: string
description: >-
Specifies that the response in the body is encoded with gzip
or not encoded with identity.
default: identity
enum:
- gzip
- identity
Trace-Id:
description: >-
The Trace-Id header reports the request's trace ID, if one was
generated.
schema:
type: string
description: Specifies the request's trace ID.
content:
application/csv:
schema:
$ref: '#/components/schemas/InfluxQLCSVResponse'
text/csv:
schema:
$ref: '#/components/schemas/InfluxQLCSVResponse'
application/json:
schema:
$ref: '#/components/schemas/InfluxQLResponse'
application/x-msgpack:
schema:
type: string
format: binary
'429':
description: >-
Token is temporarily over quota. The Retry-After header describes
when to try the read again.
headers:
Retry-After:
description: >-
A non-negative decimal integer indicating the seconds to delay
after the response is received.
schema:
type: integer
format: int32
default:
description: Error processing query
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
components:
parameters:
TraceSpan:
in: header
name: Zap-Trace-Span
description: OpenTracing span context
example:
trace_id: '1'
span_id: '1'
baggage:
key: value
required: false
schema:
type: string
AuthUserV1:
in: query
name: u
required: false
schema:
type: string
description: Username.
AuthPassV1:
in: query
name: p
required: false
schema:
type: string
description: User token.
schemas:
InfluxQLResponse:
properties:
results:
type: array
items:
type: object
properties:
statement_id:
type: integer
series:
type: array
items:
type: object
properties:
name:
type: string
columns:
type: array
items:
type: integer
values:
type: array
items:
type: array
items: {}
InfluxQLCSVResponse:
type: string
example: >
name,tags,time,test_field,test_tag
test_measurement,,1603740794286107366,1,tag_value
test_measurement,,1603740870053205649,2,tag_value
test_measurement,,1603741221085428881,3,tag_value
Error:
properties:
code:
description: Code is the machine-readable error code.
readOnly: true
type: string
enum:
- internal error
- not found
- conflict
- invalid
- unprocessable entity
- empty value
- unavailable
- forbidden
- too many requests
- unauthorized
- method not allowed
message:
readOnly: true
description: Message is a human-readable message.
type: string
required:
- code
- message
LineProtocolError:
properties:
code:
description: Code is the machine-readable error code.
readOnly: true
type: string
enum:
- internal error
- not found
- conflict
- invalid
- empty value
- unavailable
message:
readOnly: true
description: Message is a human-readable message.
type: string
op:
readOnly: true
description: >-
Op describes the logical code operation during error. Useful for
debugging.
type: string
err:
readOnly: true
description: >-
Err is a stack of errors that occurred during processing of the
request. Useful for debugging.
type: string
line:
readOnly: true
description: First line within sent body containing malformed data
type: integer
format: int32
required:
- code
- message
- op
- err
LineProtocolLengthError:
properties:
code:
description: Code is the machine-readable error code.
readOnly: true
type: string
enum:
- invalid
message:
readOnly: true
description: Message is a human-readable message.
type: string
maxLength:
readOnly: true
description: Max length in bytes for a body of line-protocol.
type: integer
format: int32
required:
- code
- message
- maxLength
securitySchemes:
TokenAuthentication:
type: apiKey
name: Authorization
in: header
description: >
Use the [Token
authentication](#section/Authentication/TokenAuthentication)
scheme to authenticate to the InfluxDB API.
In your API requests, send an `Authorization` header.
For the header value, provide the word `Token` followed by a space and
an InfluxDB API token.
The word `Token` is case-sensitive.
### Syntax
`Authorization: Token YOUR_INFLUX_TOKEN`
For examples and more information, see the following:
- [`/authorizations`](#tag/Authorizations) endpoint.
- [Authorize API requests](/influxdb/v2.1/api-guide/api_intro/#authentication).
- [Manage API tokens](/influxdb/v2.1/security/tokens/).
BasicAuthentication:
type: http
scheme: basic
description: >
Use the HTTP [Basic
authentication](#section/Authentication/BasicAuthentication)
scheme with clients that support the InfluxDB 1.x convention of username
and password (that don't support the `Authorization: Token` scheme).
For examples and more information, see how to [authenticate with a
username and password](/influxdb/v2.1/reference/api/influxdb-1x/).
QuerystringAuthentication:
type: apiKey
in: query
name: u=&p=
description: >
Use the [Querystring
authentication](#section/Authentication/QuerystringAuthentication)
scheme with InfluxDB 1.x API parameters to provide credentials through
the query string.
For examples and more information, see how to [authenticate with a
username and password](/influxdb/v2.1/reference/api/influxdb-1x/).
security:
- TokenAuthentication: []
- BasicAuthentication: []
- QuerystringAuthentication: []
tags:
- name: Authentication
description: >
The InfluxDB 1.x API requires authentication for all requests.
InfluxDB Cloud uses InfluxDB API tokens to authenticate requests.
For more information, see the following:
- [Token authentication](#section/Authentication/TokenAuthentication)
- [Basic authentication](#section/Authentication/BasicAuthentication)
- [Querystring
authentication](#section/Authentication/QuerystringAuthentication)
<!-- ReDoc-Inject: <security-definitions> -->
x-traitTag: true
- Query
- Write
x-tagGroups:
- name: Overview
tags:
- Authentication
- name: Data I/O endpoints
tags:
- Write
- Query
- name: All endpoints
tags:
- Query
- Write
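The 429 and 503 responses documented above tell clients to delay retries by the number of seconds in the `Retry-After` header. A sketch of a client-side retry loop honoring that contract; `fake_write` is a stand-in that simulates a server returning 429 with `Retry-After: 1` once and then 204, not a real HTTP client:

```shell
#!/usr/bin/env bash
# Sketch: retrying writes on 429/503, honoring Retry-After.
attempt=0
fake_write() {
  # Stand-in for a real write request: fails once with 429, then succeeds.
  attempt=$((attempt + 1))
  if [ "$attempt" -lt 2 ]; then
    status=429; retry_after=1
  else
    status=204; retry_after=0
  fi
}

fake_write
while [ "$status" -eq 429 ] || [ "$status" -eq 503 ]; do
  # Honor Retry-After: wait the number of seconds the server indicated.
  sleep "$retry_after"
  fake_write
done
echo "final status: $status after $attempt attempt(s)"
```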


@@ -1,10 +0,0 @@
title: InfluxDB OSS API Service
version: 2.2.0
description: |
The InfluxDB v2 API provides a programmatic interface for all interactions with InfluxDB. Access the InfluxDB API using the `/api/v2/` endpoint.
This documentation is generated from the
[InfluxDB OpenAPI specification](https://raw.githubusercontent.com/influxdata/openapi/docs-release/influxdb-oss/contracts/ref/oss.yml).
license:
name: MIT
url: 'https://opensource.org/licenses/MIT'


@@ -1,12 +0,0 @@
title: InfluxDB OSS v1 compatibility API documentation
version: 2.2.0 v1 compatibility
description: |
The InfluxDB 1.x compatibility /write and /query endpoints work with InfluxDB 1.x client libraries and third-party integrations like Grafana and others.
If you want to use the latest InfluxDB /api/v2 API instead, see the [InfluxDB v2 API documentation](/influxdb/latest/api/).
This documentation is generated from the
[InfluxDB OpenAPI specification](https://raw.githubusercontent.com/influxdata/openapi/docs-release/influxdb-oss/contracts/swaggerV1Compat.yml).
license:
name: MIT
url: 'https://opensource.org/licenses/MIT'


@@ -1,9 +0,0 @@
- name: Overview
tags:
- Authentication
- name: Data I/O endpoints
tags:
- Write
- Query
- name: All endpoints
tags: []

File diff suppressed because it is too large


@@ -1,508 +0,0 @@
openapi: 3.0.0
info:
title: InfluxDB OSS v1 compatibility API documentation
version: 2.2.0 v1 compatibility
description: >
The InfluxDB 1.x compatibility /write and /query endpoints work with
InfluxDB 1.x client libraries and third-party integrations like Grafana and
others.
If you want to use the latest InfluxDB /api/v2 API instead, see the
[InfluxDB v2 API documentation](/influxdb/v2.2/api/).
This documentation is generated from the
[InfluxDB OpenAPI
specification](https://raw.githubusercontent.com/influxdata/openapi/docs-release/influxdb-oss/contracts/swaggerV1Compat.yml).
servers:
- url: /
paths:
/write:
post:
operationId: PostWriteV1
tags:
- Write
summary: Write time series data into InfluxDB in a V1-compatible format
requestBody:
description: Line protocol body
required: true
content:
text/plain:
schema:
type: string
parameters:
- $ref: '#/components/parameters/TraceSpan'
- $ref: '#/components/parameters/AuthUserV1'
- $ref: '#/components/parameters/AuthPassV1'
- in: query
name: db
schema:
type: string
required: true
description: >-
Bucket to write to. If none exists, InfluxDB creates a bucket with a
default 3-day retention policy.
- in: query
name: rp
schema:
type: string
description: Retention policy name.
- in: query
name: precision
schema:
type: string
description: Write precision.
- in: header
name: Content-Encoding
description: >-
When present, its value indicates to the database that compression
is applied to the line protocol body.
schema:
type: string
description: >-
Specifies that the line protocol in the body is encoded with gzip
or not encoded with identity.
default: identity
enum:
- gzip
- identity
responses:
'204':
description: >-
Write data is correctly formatted and accepted for writing to the
bucket.
'400':
description: >-
The line protocol was poorly formed and no points were written.
The response can be used to determine the first malformed line in
the body. All data in the body was rejected and not written.
content:
application/json:
schema:
$ref: '#/components/schemas/LineProtocolError'
'401':
description: >-
Token does not have sufficient permissions to write to this
organization and bucket or the organization and bucket do not exist.
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
'403':
description: No token was sent, but one is required.
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
'413':
description: >-
Write has been rejected because the payload is too large. Error
message returns max size supported. All data in body was rejected
and not written.
content:
application/json:
schema:
$ref: '#/components/schemas/LineProtocolLengthError'
'429':
description: >-
Token is temporarily over quota. The Retry-After header describes
when to try the write again.
headers:
Retry-After:
description: >-
A non-negative decimal integer indicating the seconds to delay
after the response is received.
schema:
type: integer
format: int32
'503':
description: >-
Server is temporarily unavailable to accept writes. The Retry-After
header describes when to try the write again.
headers:
Retry-After:
description: >-
A non-negative decimal integer indicating the seconds to delay
after the response is received.
schema:
type: integer
format: int32
default:
description: Internal server error
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
/query:
post:
operationId: PostQueryV1
tags:
- Query
summary: Query InfluxDB in a V1 compatible format
requestBody:
description: InfluxQL query to execute.
content:
text/plain:
schema:
type: string
parameters:
- $ref: '#/components/parameters/TraceSpan'
- $ref: '#/components/parameters/AuthUserV1'
- $ref: '#/components/parameters/AuthPassV1'
- in: header
name: Accept
schema:
type: string
description: >-
Specifies how query results should be encoded in the response.
**Note:** With `application/csv`, query results include epoch
timestamps instead of RFC3339 timestamps.
default: application/json
enum:
- application/json
- application/csv
- text/csv
- application/x-msgpack
- in: header
name: Accept-Encoding
description: >-
The Accept-Encoding request HTTP header advertises which content
encoding, usually a compression algorithm, the client is able to
understand.
schema:
type: string
description: >-
Specifies that the query response in the body should be encoded
with gzip or not encoded with identity.
default: identity
enum:
- gzip
- identity
- in: header
name: Content-Type
schema:
type: string
enum:
- application/vnd.influxql
- in: query
name: db
schema:
type: string
required: true
description: Bucket to query.
- in: query
name: rp
schema:
type: string
description: Retention policy name.
- in: query
name: q
description: Defines the InfluxQL query to run.
schema:
type: string
responses:
'200':
description: Query results
headers:
Content-Encoding:
description: >-
The Content-Encoding entity header is used to compress the
media-type. When present, its value indicates which encodings
were applied to the entity-body.
schema:
type: string
description: >-
Specifies that the response in the body is encoded with gzip
or not encoded with identity.
default: identity
enum:
- gzip
- identity
Trace-Id:
description: >-
The Trace-Id header reports the request's trace ID, if one was
generated.
schema:
type: string
description: Specifies the request's trace ID.
content:
application/csv:
schema:
$ref: '#/components/schemas/InfluxQLCSVResponse'
text/csv:
schema:
$ref: '#/components/schemas/InfluxQLCSVResponse'
application/json:
schema:
$ref: '#/components/schemas/InfluxQLResponse'
application/x-msgpack:
schema:
type: string
format: binary
'429':
description: >-
Token is temporarily over quota. The Retry-After header describes
when to try the read again.
headers:
Retry-After:
description: >-
A non-negative decimal integer indicating the seconds to delay
after the response is received.
schema:
type: integer
format: int32
default:
description: Error processing query
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
components:
parameters:
TraceSpan:
in: header
name: Zap-Trace-Span
description: OpenTracing span context
example:
trace_id: '1'
span_id: '1'
baggage:
key: value
required: false
schema:
type: string
AuthUserV1:
in: query
name: u
required: false
schema:
type: string
description: Username.
AuthPassV1:
in: query
name: p
required: false
schema:
type: string
description: User token.
schemas:
InfluxQLResponse:
properties:
results:
type: array
items:
type: object
properties:
statement_id:
type: integer
series:
type: array
items:
type: object
properties:
name:
type: string
columns:
type: array
items:
type: string
values:
type: array
items:
type: array
items: {}
InfluxQLCSVResponse:
type: string
example: >
name,tags,time,test_field,test_tag
test_measurement,,1603740794286107366,1,tag_value
test_measurement,,1603740870053205649,2,tag_value
test_measurement,,1603741221085428881,3,tag_value
Error:
properties:
code:
description: Code is the machine-readable error code.
readOnly: true
type: string
enum:
- internal error
- not found
- conflict
- invalid
- unprocessable entity
- empty value
- unavailable
- forbidden
- too many requests
- unauthorized
- method not allowed
message:
readOnly: true
description: Message is a human-readable message.
type: string
required:
- code
- message
LineProtocolError:
properties:
code:
description: Code is the machine-readable error code.
readOnly: true
type: string
enum:
- internal error
- not found
- conflict
- invalid
- empty value
- unavailable
message:
readOnly: true
description: Message is a human-readable message.
type: string
op:
readOnly: true
description: >-
Op describes the logical code operation during error. Useful for
debugging.
type: string
err:
readOnly: true
description: >-
Err is a stack of errors that occurred during processing of the
request. Useful for debugging.
type: string
line:
readOnly: true
description: First line within sent body containing malformed data
type: integer
format: int32
required:
- code
- message
- op
- err
LineProtocolLengthError:
properties:
code:
description: Code is the machine-readable error code.
readOnly: true
type: string
enum:
- invalid
message:
readOnly: true
description: Message is a human-readable message.
type: string
maxLength:
readOnly: true
description: Max length in bytes for a body of line-protocol.
type: integer
format: int32
required:
- code
- message
- maxLength
securitySchemes:
TokenAuthentication:
type: apiKey
name: Authorization
in: header
description: >
Use the [Token
authentication](#section/Authentication/TokenAuthentication)
scheme to authenticate to the InfluxDB API.
In your API requests, send an `Authorization` header.
For the header value, provide the word `Token` followed by a space and
an InfluxDB API token.
The word `Token` is case-sensitive.
### Syntax
`Authorization: Token YOUR_INFLUX_TOKEN`
For examples and more information, see the following:
- [`/authorizations`](#tag/Authorizations) endpoint.
- [Authorize API requests](/influxdb/cloud/api-guide/api_intro/#authentication).
- [Manage API tokens](/influxdb/cloud/security/tokens/).
BasicAuthentication:
type: http
scheme: basic
description: >
Use the HTTP [Basic
authentication](#section/Authentication/BasicAuthentication)
scheme with clients that support the InfluxDB 1.x convention of username
and password (that don't support the `Authorization: Token` scheme).
For examples and more information, see how to [authenticate with a
username and password](/influxdb/cloud/reference/api/influxdb-1x/).
QuerystringAuthentication:
type: apiKey
in: query
name: u=&p=
description: >
Use the [Querystring
authentication](#section/Authentication/QuerystringAuthentication)
scheme with InfluxDB 1.x API parameters to provide credentials through
the query string.
For examples and more information, see how to [authenticate with a
username and password](/influxdb/cloud/reference/api/influxdb-1x/).
security:
- TokenAuthentication: []
- BasicAuthentication: []
- QuerystringAuthentication: []
tags:
- name: Authentication
description: >
The InfluxDB 1.x API requires authentication for all requests.
InfluxDB Cloud uses InfluxDB API tokens to authenticate requests.
For more information, see the following:
- [Token authentication](#section/Authentication/TokenAuthentication)
- [Basic authentication](#section/Authentication/BasicAuthentication)
- [Querystring
authentication](#section/Authentication/QuerystringAuthentication)
<!-- ReDoc-Inject: <security-definitions> -->
x-traitTag: true
- name: Query
- name: Write
x-tagGroups:
- name: Overview
tags:
- Authentication
- name: Data I/O endpoints
tags:
- Write
- Query
- name: All endpoints
tags:
- Query
- Write
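The v1-compatible `/write` endpoint defined above takes its bucket, retention policy, precision, and credentials as query parameters (`db`, `rp`, `precision`, `u`, `p`), with only `db` required. A minimal sketch of assembling that query string; the host and credential values are placeholders, not values from the spec:

```python
from urllib.parse import urlencode

def build_write_url(host, db, rp=None, precision=None, u=None, p=None):
    """Return a /write URL carrying the v1-compatibility query parameters."""
    params = {"db": db}  # `db` is the only required query parameter
    if rp is not None:
        params["rp"] = rp  # retention policy name
    if precision is not None:
        params["precision"] = precision  # write precision
    if u is not None:
        params["u"] = u  # v1 username (AuthUserV1)
    if p is not None:
        params["p"] = p  # v1 token/password (AuthPassV1)
    return f"{host}/write?{urlencode(params)}"

url = build_write_url("https://cluster-host.com", "mydb",
                      rp="autogen", precision="ns")
# → https://cluster-host.com/write?db=mydb&rp=autogen&precision=ns
```

The line protocol payload itself goes in the request body as `text/plain`, optionally gzip-compressed with a matching `Content-Encoding: gzip` header.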

@@ -1,10 +0,0 @@
title: InfluxDB OSS API Service
version: 2.3.0
description: |
The InfluxDB v2 API provides a programmatic interface for all interactions with InfluxDB. Access the InfluxDB API using the `/api/v2/` endpoint.
This documentation is generated from the
[InfluxDB OpenAPI specification](https://github.com/influxdata/openapi/blob/influxdb-oss-v2.3.0/contracts/ref/oss.yml).
license:
name: MIT
url: 'https://opensource.org/licenses/MIT'

@@ -1,12 +0,0 @@
title: InfluxDB OSS v1 compatibility API documentation
version: 2.3.0 v1 compatibility
description: |
The InfluxDB 1.x compatibility /write and /query endpoints work with InfluxDB 1.x client libraries and third-party integrations like Grafana and others.
If you want to use the latest InfluxDB /api/v2 API instead, see the [InfluxDB v2 API documentation](/influxdb/latest/api/).
This documentation is generated from the
[InfluxDB OpenAPI specification](https://github.com/influxdata/openapi/blob/influxdb-oss-v2.3.0/contracts/swaggerV1Compat.yml).
license:
name: MIT
url: 'https://opensource.org/licenses/MIT'

File diff suppressed because it is too large.
@@ -1,512 +0,0 @@
openapi: 3.0.0
info:
title: InfluxDB OSS v1 compatibility API documentation
version: 2.3.0 v1 compatibility
description: >
The InfluxDB 1.x compatibility /write and /query endpoints work with
InfluxDB 1.x client libraries and third-party integrations like Grafana and
others.
If you want to use the latest InfluxDB /api/v2 API instead, see the
[InfluxDB v2 API documentation](/influxdb/latest/api/).
This documentation is generated from the
[InfluxDB OpenAPI
specification](https://github.com/influxdata/openapi/blob/influxdb-oss-v2.3.0/contracts/swaggerV1Compat.yml).
servers:
- url: /
paths:
/write:
post:
operationId: PostWriteV1
tags:
- Write
summary: Write time series data into InfluxDB in a V1-compatible format
requestBody:
description: Line protocol body
required: true
content:
text/plain:
schema:
type: string
parameters:
- $ref: '#/components/parameters/TraceSpan'
- $ref: '#/components/parameters/AuthUserV1'
- $ref: '#/components/parameters/AuthPassV1'
- in: query
name: db
schema:
type: string
required: true
description: >-
Bucket to write to. If none exists, InfluxDB creates a bucket with a
default 3-day retention policy.
- in: query
name: rp
schema:
type: string
description: Retention policy name.
- in: query
name: precision
schema:
type: string
description: Write precision.
- in: header
name: Content-Encoding
description: >-
When present, its value indicates to the database that compression
is applied to the line protocol body.
schema:
type: string
description: >-
Specifies that the line protocol in the body is encoded with gzip
or not encoded with identity.
default: identity
enum:
- gzip
- identity
responses:
'204':
description: >-
Write data is correctly formatted and accepted for writing to the
bucket.
'400':
description: >-
Line protocol poorly formed and no points were written. Response
can be used to determine the first malformed line in the body
line-protocol. All data in body was rejected and not written.
content:
application/json:
schema:
$ref: '#/components/schemas/LineProtocolError'
'401':
description: >-
Token does not have sufficient permissions to write to this
organization and bucket or the organization and bucket do not exist.
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
'403':
description: No token was sent, but one is required.
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
'413':
description: >-
Write has been rejected because the payload is too large. Error
message returns max size supported. All data in body was rejected
and not written.
content:
application/json:
schema:
$ref: '#/components/schemas/LineProtocolLengthError'
'429':
description: >-
Token is temporarily over quota. The Retry-After header describes
when to try the write again.
headers:
Retry-After:
description: >-
A non-negative decimal integer indicating the seconds to delay
after the response is received.
schema:
type: integer
format: int32
'503':
description: >-
Server is temporarily unavailable to accept writes. The Retry-After
header describes when to try the write again.
headers:
Retry-After:
description: >-
A non-negative decimal integer indicating the seconds to delay
after the response is received.
schema:
type: integer
format: int32
default:
description: Internal server error
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
/query:
post:
operationId: PostQueryV1
tags:
- Query
summary: Query InfluxDB in a V1-compatible format
requestBody:
description: InfluxQL query to execute.
content:
text/plain:
schema:
type: string
parameters:
- $ref: '#/components/parameters/TraceSpan'
- $ref: '#/components/parameters/AuthUserV1'
- $ref: '#/components/parameters/AuthPassV1'
- in: header
name: Accept
schema:
type: string
description: >-
Specifies how query results should be encoded in the response.
**Note:** With `application/csv`, query results include epoch
timestamps instead of RFC3339 timestamps.
default: application/json
enum:
- application/json
- application/csv
- text/csv
- application/x-msgpack
- in: header
name: Accept-Encoding
description: >-
The Accept-Encoding request HTTP header advertises which content
encoding, usually a compression algorithm, the client is able to
understand.
schema:
type: string
description: >-
Specifies that the query response in the body should be encoded
with gzip or not encoded with identity.
default: identity
enum:
- gzip
- identity
- in: header
name: Content-Type
schema:
type: string
enum:
- application/vnd.influxql
- in: query
name: db
schema:
type: string
required: true
description: Bucket to query.
- in: query
name: rp
schema:
type: string
description: Retention policy name.
- in: query
name: q
description: Defines the InfluxQL query to run.
schema:
type: string
responses:
'200':
description: Query results
headers:
Content-Encoding:
description: >-
The Content-Encoding entity header is used to compress the
media-type. When present, its value indicates which encodings
were applied to the entity-body.
schema:
type: string
description: >-
Specifies that the response in the body is encoded with gzip
or not encoded with identity.
default: identity
enum:
- gzip
- identity
Trace-Id:
description: >-
The Trace-Id header reports the request's trace ID, if one was
generated.
schema:
type: string
description: Specifies the request's trace ID.
content:
application/csv:
schema:
$ref: '#/components/schemas/InfluxQLCSVResponse'
text/csv:
schema:
$ref: '#/components/schemas/InfluxQLCSVResponse'
application/json:
schema:
$ref: '#/components/schemas/InfluxQLResponse'
application/x-msgpack:
schema:
type: string
format: binary
'429':
description: >-
Token is temporarily over quota. The Retry-After header describes
when to try the read again.
headers:
Retry-After:
description: >-
A non-negative decimal integer indicating the seconds to delay
after the response is received.
schema:
type: integer
format: int32
default:
description: Error processing query
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
components:
parameters:
TraceSpan:
in: header
name: Zap-Trace-Span
description: OpenTracing span context
example:
trace_id: '1'
span_id: '1'
baggage:
key: value
required: false
schema:
type: string
AuthUserV1:
in: query
name: u
required: false
schema:
type: string
description: Username.
AuthPassV1:
in: query
name: p
required: false
schema:
type: string
description: User token.
schemas:
InfluxQLResponse:
properties:
results:
type: array
oneOf:
- required:
- statement_id
- error
- required:
- statement_id
- series
items:
type: object
properties:
statement_id:
type: integer
error:
type: string
series:
type: array
items:
type: object
properties:
name:
type: string
tags:
type: object
additionalProperties:
type: string
partial:
type: boolean
columns:
type: array
items:
type: string
values:
type: array
items:
type: array
items: {}
InfluxQLCSVResponse:
type: string
example: >
name,tags,time,test_field,test_tag
test_measurement,,1603740794286107366,1,tag_value
test_measurement,,1603740870053205649,2,tag_value
test_measurement,,1603741221085428881,3,tag_value
Error:
properties:
code:
description: Code is the machine-readable error code.
readOnly: true
type: string
enum:
- internal error
- not found
- conflict
- invalid
- unprocessable entity
- empty value
- unavailable
- forbidden
- too many requests
- unauthorized
- method not allowed
message:
readOnly: true
description: Message is a human-readable message.
type: string
required:
- code
- message
LineProtocolError:
properties:
code:
description: Code is the machine-readable error code.
readOnly: true
type: string
enum:
- internal error
- not found
- conflict
- invalid
- empty value
- unavailable
message:
readOnly: true
description: Message is a human-readable message.
type: string
op:
readOnly: true
description: >-
Op describes the logical code operation during error. Useful for
debugging.
type: string
err:
readOnly: true
description: >-
Err is a stack of errors that occurred during processing of the
request. Useful for debugging.
type: string
line:
readOnly: true
description: First line within sent body containing malformed data
type: integer
format: int32
required:
- code
- message
- op
- err
LineProtocolLengthError:
properties:
code:
description: Code is the machine-readable error code.
readOnly: true
type: string
enum:
- invalid
message:
readOnly: true
description: Message is a human-readable message.
type: string
maxLength:
readOnly: true
description: Max length in bytes for a body of line-protocol.
type: integer
format: int32
required:
- code
- message
- maxLength
securitySchemes:
TokenAuthentication:
type: apiKey
name: Authorization
in: header
description: >
Use the [Token
authentication](#section/Authentication/TokenAuthentication)
scheme to authenticate to the InfluxDB API.
In your API requests, send an `Authorization` header.
For the header value, provide the word `Token` followed by a space and
an InfluxDB API token.
The word `Token` is case-sensitive.
### Syntax
`Authorization: Token YOUR_INFLUX_TOKEN`
For examples and more information, see the following:
- [`/authorizations`](#tag/Authorizations) endpoint.
- [Authorize API requests](/influxdb/cloud/api-guide/api_intro/#authentication).
- [Manage API tokens](/influxdb/cloud/security/tokens/).
BasicAuthentication:
type: http
scheme: basic
description: >
Use the HTTP [Basic
authentication](#section/Authentication/BasicAuthentication)
scheme with clients that support the InfluxDB 1.x convention of username
and password (that don't support the `Authorization: Token` scheme).
For examples and more information, see how to [authenticate with a
username and password](/influxdb/cloud/reference/api/influxdb-1x/).
QuerystringAuthentication:
type: apiKey
in: query
name: u=&p=
description: >
Use the [Querystring
authentication](#section/Authentication/QuerystringAuthentication)
scheme with InfluxDB 1.x API parameters to provide credentials through
the query string.
For examples and more information, see how to [authenticate with a
username and password](/influxdb/cloud/reference/api/influxdb-1x/).
security:
- TokenAuthentication: []
- BasicAuthentication: []
- QuerystringAuthentication: []
tags:
- name: Authentication
description: >
The InfluxDB 1.x API requires authentication for all requests.
InfluxDB Cloud uses InfluxDB API tokens to authenticate requests.
For more information, see the following:
- [Token authentication](#section/Authentication/TokenAuthentication)
- [Basic authentication](#section/Authentication/BasicAuthentication)
- [Querystring
authentication](#section/Authentication/QuerystringAuthentication)
<!-- ReDoc-Inject: <security-definitions> -->
x-traitTag: true
- name: Query
- name: Write
x-tagGroups: []
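The `InfluxQLResponse` schema above nests value rows under `results → series`, with column names and row values in parallel arrays. A sketch of flattening that shape into per-row dicts; the sample payload is illustrative, shaped per the schema rather than taken from a live server:

```python
# Illustrative payload shaped like the InfluxQLResponse schema.
sample = {
    "results": [
        {
            "statement_id": 0,
            "series": [
                {
                    "name": "test_measurement",
                    "columns": ["time", "test_field"],
                    "values": [
                        [1603740794286107366, 1],
                        [1603740870053205649, 2],
                    ],
                }
            ],
        }
    ]
}

def rows(resp):
    """Yield one dict per value row, keyed by the series columns."""
    for result in resp.get("results", []):
        for series in result.get("series", []):
            cols = series["columns"]
            for value in series["values"]:
                yield dict(zip(cols, value))

flat = list(rows(sample))
# flat[0] → {"time": 1603740794286107366, "test_field": 1}
```

Note that a result object may carry an `error` string instead of `series`; a production client would check for that key before iterating.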

@@ -1,10 +0,0 @@
title: InfluxDB OSS API Service
version: 2.4.0
description: |
The InfluxDB v2 API provides a programmatic interface for all interactions with InfluxDB. Access the InfluxDB API using the `/api/v2/` endpoint.
This documentation is generated from the
[InfluxDB OpenAPI specification](https://github.com/influxdata/openapi/blob/influxdb-oss-v2.4.0/contracts/ref/oss.yml).
license:
name: MIT
url: 'https://opensource.org/licenses/MIT'

@@ -1,12 +0,0 @@
title: InfluxDB OSS v1 compatibility API documentation
version: 2.4.0 v1 compatibility
description: |
The InfluxDB 1.x compatibility /write and /query endpoints work with InfluxDB 1.x client libraries and third-party integrations like Grafana and others.
If you want to use the latest InfluxDB /api/v2 API instead, see the [InfluxDB v2 API documentation](/influxdb/latest/api/).
This documentation is generated from the
[InfluxDB OpenAPI specification](https://github.com/influxdata/openapi/blob/influxdb-oss-v2.4.0/contracts/swaggerV1Compat.yml).
license:
name: MIT
url: 'https://opensource.org/licenses/MIT'

File diff suppressed because it is too large.
@@ -1,429 +0,0 @@
openapi: 3.0.0
info:
title: InfluxDB OSS v1 compatibility API documentation
version: 2.4.0 v1 compatibility
description: |
The InfluxDB 1.x compatibility /write and /query endpoints work with InfluxDB 1.x client libraries and third-party integrations like Grafana and others.
If you want to use the latest InfluxDB /api/v2 API instead, see the [InfluxDB v2 API documentation](/influxdb/latest/api/).
This documentation is generated from the
[InfluxDB OpenAPI specification](https://github.com/influxdata/openapi/blob/influxdb-oss-v2.4.0/contracts/swaggerV1Compat.yml).
servers:
- url: /
paths:
/write:
post:
operationId: PostWriteV1
tags:
- Write
summary: Write time series data into InfluxDB in a V1-compatible format
requestBody:
description: Line protocol body
required: true
content:
text/plain:
schema:
type: string
parameters:
- $ref: '#/components/parameters/TraceSpan'
- $ref: '#/components/parameters/AuthUserV1'
- $ref: '#/components/parameters/AuthPassV1'
- in: query
name: db
schema:
type: string
required: true
description: Bucket to write to. If none exists, InfluxDB creates a bucket with a default 3-day retention policy.
- in: query
name: rp
schema:
type: string
description: Retention policy name.
- in: query
name: precision
schema:
type: string
description: Write precision.
- in: header
name: Content-Encoding
description: When present, its value indicates to the database that compression is applied to the line protocol body.
schema:
type: string
description: Specifies that the line protocol in the body is encoded with gzip or not encoded with identity.
default: identity
enum:
- gzip
- identity
responses:
'204':
description: Write data is correctly formatted and accepted for writing to the bucket.
'400':
description: Line protocol poorly formed and no points were written. Response can be used to determine the first malformed line in the body line-protocol. All data in body was rejected and not written.
content:
application/json:
schema:
$ref: '#/components/schemas/LineProtocolError'
'401':
description: Token does not have sufficient permissions to write to this organization and bucket or the organization and bucket do not exist.
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
'403':
description: No token was sent, but one is required.
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
'413':
description: Write has been rejected because the payload is too large. Error message returns max size supported. All data in body was rejected and not written.
content:
application/json:
schema:
$ref: '#/components/schemas/LineProtocolLengthError'
'429':
description: Token is temporarily over quota. The Retry-After header describes when to try the write again.
headers:
Retry-After:
description: A non-negative decimal integer indicating the seconds to delay after the response is received.
schema:
type: integer
format: int32
'503':
description: Server is temporarily unavailable to accept writes. The Retry-After header describes when to try the write again.
headers:
Retry-After:
description: A non-negative decimal integer indicating the seconds to delay after the response is received.
schema:
type: integer
format: int32
default:
description: Internal server error
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
/query:
post:
operationId: PostQueryV1
tags:
- Query
summary: Query InfluxDB in a V1-compatible format
requestBody:
description: InfluxQL query to execute.
content:
text/plain:
schema:
type: string
parameters:
- $ref: '#/components/parameters/TraceSpan'
- $ref: '#/components/parameters/AuthUserV1'
- $ref: '#/components/parameters/AuthPassV1'
- in: header
name: Accept
schema:
type: string
description: Specifies how query results should be encoded in the response. **Note:** With `application/csv`, query results include epoch timestamps instead of RFC3339 timestamps.
default: application/json
enum:
- application/json
- application/csv
- text/csv
- application/x-msgpack
- in: header
name: Accept-Encoding
description: The Accept-Encoding request HTTP header advertises which content encoding, usually a compression algorithm, the client is able to understand.
schema:
type: string
description: Specifies that the query response in the body should be encoded with gzip or not encoded with identity.
default: identity
enum:
- gzip
- identity
- in: header
name: Content-Type
schema:
type: string
enum:
- application/vnd.influxql
- in: query
name: db
schema:
type: string
required: true
description: Bucket to query.
- in: query
name: rp
schema:
type: string
description: Retention policy name.
- in: query
name: q
description: Defines the InfluxQL query to run.
schema:
type: string
responses:
'200':
description: Query results
headers:
Content-Encoding:
description: The Content-Encoding entity header is used to compress the media-type. When present, its value indicates which encodings were applied to the entity-body.
schema:
type: string
description: Specifies that the response in the body is encoded with gzip or not encoded with identity.
default: identity
enum:
- gzip
- identity
Trace-Id:
description: The Trace-Id header reports the request's trace ID, if one was generated.
schema:
type: string
description: Specifies the request's trace ID.
content:
application/csv:
schema:
$ref: '#/components/schemas/InfluxQLCSVResponse'
text/csv:
schema:
$ref: '#/components/schemas/InfluxQLCSVResponse'
application/json:
schema:
$ref: '#/components/schemas/InfluxQLResponse'
application/x-msgpack:
schema:
type: string
format: binary
'429':
description: Token is temporarily over quota. The Retry-After header describes when to try the read again.
headers:
Retry-After:
description: A non-negative decimal integer indicating the seconds to delay after the response is received.
schema:
type: integer
format: int32
default:
description: Error processing query
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
components:
parameters:
TraceSpan:
in: header
name: Zap-Trace-Span
description: OpenTracing span context
example:
trace_id: '1'
span_id: '1'
baggage:
key: value
required: false
schema:
type: string
AuthUserV1:
in: query
name: u
required: false
schema:
type: string
description: Username.
AuthPassV1:
in: query
name: p
required: false
schema:
type: string
description: User token.
schemas:
InfluxQLResponse:
properties:
results:
type: array
oneOf:
- required:
- statement_id
- error
- required:
- statement_id
- series
items:
type: object
properties:
statement_id:
type: integer
error:
type: string
series:
type: array
items:
type: object
properties:
name:
type: string
tags:
type: object
additionalProperties:
type: string
partial:
type: boolean
columns:
type: array
items:
type: string
values:
type: array
items:
type: array
items: {}
InfluxQLCSVResponse:
type: string
example: |
name,tags,time,test_field,test_tag
test_measurement,,1603740794286107366,1,tag_value
test_measurement,,1603740870053205649,2,tag_value
test_measurement,,1603741221085428881,3,tag_value
Error:
properties:
code:
description: Code is the machine-readable error code.
readOnly: true
type: string
enum:
- internal error
- not found
- conflict
- invalid
- unprocessable entity
- empty value
- unavailable
- forbidden
- too many requests
- unauthorized
- method not allowed
message:
readOnly: true
description: Message is a human-readable message.
type: string
required:
- code
- message
LineProtocolError:
properties:
code:
description: Code is the machine-readable error code.
readOnly: true
type: string
enum:
- internal error
- not found
- conflict
- invalid
- empty value
- unavailable
message:
readOnly: true
description: Message is a human-readable message.
type: string
op:
readOnly: true
description: Op describes the logical code operation during error. Useful for debugging.
type: string
err:
readOnly: true
description: Err is a stack of errors that occurred during processing of the request. Useful for debugging.
type: string
line:
readOnly: true
description: First line within sent body containing malformed data
type: integer
format: int32
required:
- code
- message
- op
- err
LineProtocolLengthError:
properties:
code:
description: Code is the machine-readable error code.
readOnly: true
type: string
enum:
- invalid
message:
readOnly: true
description: Message is a human-readable message.
type: string
maxLength:
readOnly: true
description: Max length in bytes for a body of line-protocol.
type: integer
format: int32
required:
- code
- message
- maxLength
securitySchemes:
TokenAuthentication:
type: apiKey
name: Authorization
in: header
description: |
Use the [Token authentication](#section/Authentication/TokenAuthentication)
scheme to authenticate to the InfluxDB API.
In your API requests, send an `Authorization` header.
For the header value, provide the word `Token` followed by a space and an InfluxDB API token.
The word `Token` is case-sensitive.
### Syntax
`Authorization: Token YOUR_INFLUX_TOKEN`
For examples and more information, see the following:
- [`/authorizations`](#tag/Authorizations) endpoint.
- [Authorize API requests](/influxdb/cloud/api-guide/api_intro/#authentication).
- [Manage API tokens](/influxdb/cloud/security/tokens/).
BasicAuthentication:
type: http
scheme: basic
description: |
Use the HTTP [Basic authentication](#section/Authentication/BasicAuthentication)
scheme with clients that support the InfluxDB 1.x convention of username and password (that don't support the `Authorization: Token` scheme).
For examples and more information, see how to [authenticate with a username and password](/influxdb/cloud/reference/api/influxdb-1x/).
QuerystringAuthentication:
type: apiKey
in: query
name: u=&p=
description: |
Use the [Querystring authentication](#section/Authentication/QuerystringAuthentication)
scheme with InfluxDB 1.x API parameters to provide credentials through the query string.
For examples and more information, see how to [authenticate with a username and password](/influxdb/cloud/reference/api/influxdb-1x/).
security:
- TokenAuthentication: []
- BasicAuthentication: []
- QuerystringAuthentication: []
tags:
- name: Authentication
description: |
The InfluxDB 1.x API requires authentication for all requests.
InfluxDB Cloud uses InfluxDB API tokens to authenticate requests.
For more information, see the following:
- [Token authentication](#section/Authentication/TokenAuthentication)
- [Basic authentication](#section/Authentication/BasicAuthentication)
- [Querystring authentication](#section/Authentication/QuerystringAuthentication)
<!-- ReDoc-Inject: <security-definitions> -->
x-traitTag: true
- name: Query
- name: Write
x-tagGroups: []
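Both `/write` and `/query` in the contracts above can answer `429` (token over quota) or `503` (server unavailable) with a `Retry-After` header giving the seconds to delay before retrying. A sketch of that client-side retry loop; `send` is any callable returning a `(status, headers)` pair, and the fake transport below stands in for a real HTTP call so the example runs offline:

```python
import time

def send_with_retry(send, max_attempts=3, sleep=time.sleep):
    """Retry on 429/503, honoring the Retry-After header (seconds)."""
    for _ in range(max_attempts):
        status, headers = send()
        if status not in (429, 503):
            return status
        # Retry-After is a non-negative decimal integer of seconds.
        delay = int(headers.get("Retry-After", "1"))
        sleep(delay)
    return status

# Fake transport: over quota once, then the write is accepted.
responses = iter([(429, {"Retry-After": "2"}), (204, {})])
delays = []
status = send_with_retry(lambda: next(responses), sleep=delays.append)
# status → 204, after one 2-second backoff
```

Capping total attempts (as `max_attempts` does here) keeps a persistently throttled client from retrying forever.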

@@ -1,10 +0,0 @@
title: InfluxDB OSS API Service
version: 2.5.0
description: |
The InfluxDB v2 API provides a programmatic interface for all interactions with InfluxDB. Access the InfluxDB API using the `/api/v2/` endpoint.
This documentation is generated from the
[InfluxDB OpenAPI specification](https://github.com/influxdata/openapi/blob/influxdb-oss-v2.5.0/contracts/ref/oss.yml).
license:
name: MIT
url: 'https://opensource.org/licenses/MIT'

@@ -1,12 +0,0 @@
title: InfluxDB OSS v1 compatibility API documentation
version: 2.5.0 v1 compatibility
description: |
The InfluxDB 1.x compatibility /write and /query endpoints work with InfluxDB 1.x client libraries and third-party integrations like Grafana and others.
If you want to use the latest InfluxDB /api/v2 API instead, see the [InfluxDB v2 API documentation](/influxdb/latest/api/).
This documentation is generated from the
[InfluxDB OpenAPI specification](https://github.com/influxdata/openapi/blob/influxdb-oss-v2.5.0/contracts/swaggerV1Compat.yml).
license:
name: MIT
url: 'https://opensource.org/licenses/MIT'

File diff suppressed because it is too large.
@@ -1,432 +0,0 @@
openapi: 3.0.0
info:
title: InfluxDB OSS v1 compatibility API documentation
version: 2.5.0 v1 compatibility
description: |
The InfluxDB 1.x compatibility /write and /query endpoints work with InfluxDB 1.x client libraries and third-party integrations like Grafana and others.
If you want to use the latest InfluxDB /api/v2 API instead, see the [InfluxDB v2 API documentation](/influxdb/latest/api/).
This documentation is generated from the
[InfluxDB OpenAPI specification](https://github.com/influxdata/openapi/blob/influxdb-oss-v2.5.0/contracts/swaggerV1Compat.yml).
license:
name: MIT
url: https://opensource.org/licenses/MIT
servers:
- url: /
paths:
/write:
post:
operationId: PostWriteV1
tags:
- Write
summary: Write time series data into InfluxDB in a V1-compatible format
requestBody:
description: Line protocol body
required: true
content:
text/plain:
schema:
type: string
parameters:
- $ref: '#/components/parameters/TraceSpan'
- $ref: '#/components/parameters/AuthUserV1'
- $ref: '#/components/parameters/AuthPassV1'
- in: query
name: db
schema:
type: string
required: true
description: Bucket to write to. If none exists, InfluxDB creates a bucket with a default 3-day retention policy.
- in: query
name: rp
schema:
type: string
description: Retention policy name.
- in: query
name: precision
schema:
type: string
description: Write precision.
- in: header
name: Content-Encoding
description: When present, its value indicates to the database that compression is applied to the line protocol body.
schema:
type: string
description: Specifies that the line protocol in the body is encoded with gzip or not encoded with identity.
default: identity
enum:
- gzip
- identity
responses:
'204':
description: Write data is correctly formatted and accepted for writing to the bucket.
'400':
description: Line protocol poorly formed and no points were written. Response can be used to determine the first malformed line in the body line-protocol. All data in body was rejected and not written.
content:
application/json:
schema:
$ref: '#/components/schemas/LineProtocolError'
'401':
description: Token does not have sufficient permissions to write to this organization and bucket or the organization and bucket do not exist.
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
'403':
description: No token was sent and one is required.
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
'413':
description: Write has been rejected because the payload is too large. Error message returns max size supported. All data in body was rejected and not written.
content:
application/json:
schema:
$ref: '#/components/schemas/LineProtocolLengthError'
'429':
description: Token is temporarily over quota. The Retry-After header describes when to try the write again.
headers:
Retry-After:
description: A non-negative decimal integer indicating the seconds to delay after the response is received.
schema:
type: integer
format: int32
'503':
description: Server is temporarily unavailable to accept writes. The Retry-After header describes when to try the write again.
headers:
Retry-After:
description: A non-negative decimal integer indicating the seconds to delay after the response is received.
schema:
type: integer
format: int32
default:
description: Internal server error
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
/query:
post:
operationId: PostQueryV1
tags:
- Query
summary: Query InfluxDB in a V1 compatible format
requestBody:
description: InfluxQL query to execute.
content:
text/plain:
schema:
type: string
parameters:
- $ref: '#/components/parameters/TraceSpan'
- $ref: '#/components/parameters/AuthUserV1'
- $ref: '#/components/parameters/AuthPassV1'
- in: header
name: Accept
schema:
type: string
description: Specifies how query results should be encoded in the response. **Note:** With `application/csv`, query results include epoch timestamps instead of RFC3339 timestamps.
default: application/json
enum:
- application/json
- application/csv
- text/csv
- application/x-msgpack
- in: header
name: Accept-Encoding
description: The Accept-Encoding request HTTP header advertises which content encoding, usually a compression algorithm, the client is able to understand.
schema:
type: string
description: Specifies that the query response in the body should be encoded with gzip or not encoded with identity.
default: identity
enum:
- gzip
- identity
- in: header
name: Content-Type
schema:
type: string
enum:
- application/vnd.influxql
- in: query
name: db
schema:
type: string
required: true
description: Bucket to query.
- in: query
name: rp
schema:
type: string
description: Retention policy name.
- in: query
name: q
description: Defines the InfluxQL query to run.
schema:
type: string
responses:
'200':
description: Query results
headers:
Content-Encoding:
description: The Content-Encoding entity header is used to compress the media-type. When present, its value indicates which encodings were applied to the entity-body
schema:
type: string
description: Specifies that the response in the body is encoded with gzip or not encoded with identity.
default: identity
enum:
- gzip
- identity
Trace-Id:
description: The Trace-Id header reports the request's trace ID, if one was generated.
schema:
type: string
description: Specifies the request's trace ID.
content:
application/csv:
schema:
$ref: '#/components/schemas/InfluxQLCSVResponse'
text/csv:
schema:
$ref: '#/components/schemas/InfluxQLCSVResponse'
application/json:
schema:
$ref: '#/components/schemas/InfluxQLResponse'
application/x-msgpack:
schema:
type: string
format: binary
'429':
description: Token is temporarily over quota. The Retry-After header describes when to try the read again.
headers:
Retry-After:
description: A non-negative decimal integer indicating the seconds to delay after the response is received.
schema:
type: integer
format: int32
default:
description: Error processing query
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
components:
parameters:
TraceSpan:
in: header
name: Zap-Trace-Span
description: OpenTracing span context
example:
trace_id: '1'
span_id: '1'
baggage:
key: value
required: false
schema:
type: string
AuthUserV1:
in: query
name: u
required: false
schema:
type: string
description: Username.
AuthPassV1:
in: query
name: p
required: false
schema:
type: string
description: User token.
schemas:
InfluxQLResponse:
properties:
results:
type: array
oneOf:
- required:
- statement_id
- error
- required:
- statement_id
- series
items:
type: object
properties:
statement_id:
type: integer
error:
type: string
series:
type: array
items:
type: object
properties:
name:
type: string
tags:
type: object
additionalProperties:
type: string
partial:
type: boolean
columns:
type: array
items:
type: string
values:
type: array
items:
type: array
items: {}
InfluxQLCSVResponse:
type: string
example: |
name,tags,time,test_field,test_tag
test_measurement,,1603740794286107366,1,tag_value
test_measurement,,1603740870053205649,2,tag_value
test_measurement,,1603741221085428881,3,tag_value
Error:
properties:
code:
description: Code is the machine-readable error code.
readOnly: true
type: string
enum:
- internal error
- not found
- conflict
- invalid
- unprocessable entity
- empty value
- unavailable
- forbidden
- too many requests
- unauthorized
- method not allowed
message:
readOnly: true
description: Message is a human-readable message.
type: string
required:
- code
- message
LineProtocolError:
properties:
code:
description: Code is the machine-readable error code.
readOnly: true
type: string
enum:
- internal error
- not found
- conflict
- invalid
- empty value
- unavailable
message:
readOnly: true
description: Message is a human-readable message.
type: string
op:
readOnly: true
description: Op describes the logical code operation during error. Useful for debugging.
type: string
err:
readOnly: true
description: Err is a stack of errors that occurred during processing of the request. Useful for debugging.
type: string
line:
readOnly: true
description: First line within sent body containing malformed data
type: integer
format: int32
required:
- code
- message
- op
- err
LineProtocolLengthError:
properties:
code:
description: Code is the machine-readable error code.
readOnly: true
type: string
enum:
- invalid
message:
readOnly: true
description: Message is a human-readable message.
type: string
maxLength:
readOnly: true
description: Max length in bytes for a body of line-protocol.
type: integer
format: int32
required:
- code
- message
- maxLength
securitySchemes:
TokenAuthentication:
type: apiKey
name: Authorization
in: header
description: |
Use the [Token authentication](#section/Authentication/TokenAuthentication)
scheme to authenticate to the InfluxDB API.
In your API requests, send an `Authorization` header.
For the header value, provide the word `Token` followed by a space and an InfluxDB API token.
The word `Token` is case-sensitive.
### Syntax
`Authorization: Token YOUR_INFLUX_TOKEN`
For examples and more information, see the following:
- [`/authorizations`](#tag/Authorizations) endpoint.
- [Authorize API requests](/influxdb/cloud/api-guide/api_intro/#authentication).
- [Manage API tokens](/influxdb/cloud/security/tokens/).
BasicAuthentication:
type: http
scheme: basic
description: |
Use the HTTP [Basic authentication](#section/Authentication/BasicAuthentication)
scheme with clients that support the InfluxDB 1.x convention of username and password (that don't support the `Authorization: Token` scheme):
For examples and more information, see how to [authenticate with a username and password](/influxdb/cloud/reference/api/influxdb-1x/).
QuerystringAuthentication:
type: apiKey
in: query
name: u=&p=
description: |
Use the [Querystring authentication](#section/Authentication/QuerystringAuthentication)
scheme with InfluxDB 1.x API parameters to provide credentials through the query string.
For examples and more information, see how to [authenticate with a username and password](/influxdb/cloud/reference/api/influxdb-1x/).
security:
- TokenAuthentication: []
- BasicAuthentication: []
- QuerystringAuthentication: []
tags:
- name: Authentication
description: |
The InfluxDB 1.x API requires authentication for all requests.
InfluxDB Cloud uses InfluxDB API tokens to authenticate requests.
For more information, see the following:
- [Token authentication](#section/Authentication/TokenAuthentication)
- [Basic authentication](#section/Authentication/BasicAuthentication)
- [Querystring authentication](#section/Authentication/QuerystringAuthentication)
<!-- ReDoc-Inject: <security-definitions> -->
x-traitTag: true
- name: Query
- name: Write
x-tagGroups: []
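The `429` and `503` responses in the spec above carry a `Retry-After` header holding a non-negative decimal integer number of seconds. A sketch of honoring it client-side; the fallback delay and the cap are arbitrary choices for illustration, not part of the API:

```python
def seconds_to_wait(response_headers, default=1.0, max_wait=60.0):
    """Return how long to sleep before retrying a 429/503 response.

    Falls back to `default` when Retry-After is missing or malformed;
    caps the delay at `max_wait` (both values are arbitrary defaults).
    """
    value = response_headers.get("Retry-After")
    try:
        delay = float(int(value))
    except (TypeError, ValueError):
        delay = default
    return min(max(delay, 0.0), max_wait)
```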

View File

@ -1,10 +0,0 @@
title: InfluxDB OSS API Service
version: 2.6.0
description: |
The InfluxDB v2 API provides a programmatic interface for all interactions with InfluxDB. Access the InfluxDB API using the `/api/v2/` endpoint.
This documentation is generated from the
[InfluxDB OpenAPI specification](https://github.com/influxdata/openapi/blob/influxdb-oss-v2.6.0/contracts/ref/oss.yml).
license:
name: MIT
url: 'https://opensource.org/licenses/MIT'

View File

@ -1,12 +0,0 @@
title: InfluxDB OSS v1 compatibility API documentation
version: 2.6.0 v1 compatibility
description: |
The InfluxDB 1.x compatibility /write and /query endpoints work with InfluxDB 1.x client libraries and third-party integrations like Grafana and others.
If you want to use the latest InfluxDB /api/v2 API instead, see the [InfluxDB v2 API documentation](/influxdb/latest/api/).
This documentation is generated from the
[InfluxDB OpenAPI specification](https://github.com/influxdata/openapi/blob/influxdb-oss-v2.6.0/contracts/swaggerV1Compat.yml).
license:
name: MIT
url: 'https://opensource.org/licenses/MIT'

File diff suppressed because it is too large

View File

@ -1,431 +0,0 @@
openapi: 3.0.0
info:
title: InfluxDB OSS v1 compatibility API documentation
version: 2.6.0 v1 compatibility
description: |
The InfluxDB 1.x compatibility /write and /query endpoints work with InfluxDB 1.x client libraries and third-party integrations like Grafana and others.
If you want to use the latest InfluxDB /api/v2 API instead, see the [InfluxDB v2 API documentation](/influxdb/latest/api/).
This documentation is generated from the
[InfluxDB OpenAPI specification](https://github.com/influxdata/openapi/blob/influxdb-oss-v2.6.0/contracts/swaggerV1Compat.yml).
license:
name: MIT
url: https://opensource.org/licenses/MIT
servers:
- url: /
paths:
/write:
post:
operationId: PostWriteV1
tags:
- Write
summary: Write time series data into InfluxDB in a V1-compatible format
requestBody:
description: Line protocol body
required: true
content:
text/plain:
schema:
type: string
parameters:
- $ref: '#/components/parameters/TraceSpan'
- $ref: '#/components/parameters/AuthUserV1'
- $ref: '#/components/parameters/AuthPassV1'
- in: query
name: db
schema:
type: string
required: true
description: Bucket to write to. If none exists, InfluxDB creates a bucket with a default 3-day retention policy.
- in: query
name: rp
schema:
type: string
description: Retention policy name.
- in: query
name: precision
schema:
type: string
description: Write precision.
- in: header
name: Content-Encoding
description: When present, its value indicates to the database that compression is applied to the line protocol body.
schema:
type: string
description: Specifies that the line protocol in the body is encoded with gzip or not encoded with identity.
default: identity
enum:
- gzip
- identity
responses:
'204':
description: Write data is correctly formatted and accepted for writing to the bucket.
'400':
description: Line protocol poorly formed and no points were written. Response can be used to determine the first malformed line in the body line-protocol. All data in body was rejected and not written.
content:
application/json:
schema:
$ref: '#/components/schemas/LineProtocolError'
'401':
description: Token does not have sufficient permissions to write to this organization and bucket or the organization and bucket do not exist.
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
'403':
description: No token was sent and one is required.
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
'413':
description: Write has been rejected because the payload is too large. Error message returns max size supported. All data in body was rejected and not written.
content:
application/json:
schema:
$ref: '#/components/schemas/LineProtocolLengthError'
'429':
description: Token is temporarily over quota. The Retry-After header describes when to try the write again.
headers:
Retry-After:
description: A non-negative decimal integer indicating the seconds to delay after the response is received.
schema:
type: integer
format: int32
'503':
description: Server is temporarily unavailable to accept writes. The Retry-After header describes when to try the write again.
headers:
Retry-After:
description: A non-negative decimal integer indicating the seconds to delay after the response is received.
schema:
type: integer
format: int32
default:
description: Internal server error
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
/query:
post:
operationId: PostQueryV1
tags:
- Query
summary: Query InfluxDB in a V1 compatible format
requestBody:
description: InfluxQL query to execute.
content:
text/plain:
schema:
type: string
parameters:
- $ref: '#/components/parameters/TraceSpan'
- $ref: '#/components/parameters/AuthUserV1'
- $ref: '#/components/parameters/AuthPassV1'
- in: header
name: Accept
schema:
type: string
description: Specifies how query results should be encoded in the response. **Note:** With `application/csv`, query results include epoch timestamps instead of RFC3339 timestamps.
default: application/json
enum:
- application/json
- application/csv
- text/csv
- application/x-msgpack
- in: header
name: Accept-Encoding
description: The Accept-Encoding request HTTP header advertises which content encoding, usually a compression algorithm, the client is able to understand.
schema:
type: string
description: Specifies that the query response in the body should be encoded with gzip or not encoded with identity.
default: identity
enum:
- gzip
- identity
- in: header
name: Content-Type
schema:
type: string
enum:
- application/vnd.influxql
- in: query
name: db
schema:
type: string
required: true
description: Bucket to query.
- in: query
name: rp
schema:
type: string
description: Retention policy name.
- in: query
name: q
description: Defines the InfluxQL query to run.
schema:
type: string
responses:
'200':
description: Query results
headers:
Content-Encoding:
description: The Content-Encoding entity header is used to compress the media-type. When present, its value indicates which encodings were applied to the entity-body
schema:
type: string
description: Specifies that the response in the body is encoded with gzip or not encoded with identity.
default: identity
enum:
- gzip
- identity
Trace-Id:
description: The Trace-Id header reports the request's trace ID, if one was generated.
schema:
type: string
description: Specifies the request's trace ID.
content:
application/csv:
schema:
$ref: '#/components/schemas/InfluxQLCSVResponse'
text/csv:
schema:
$ref: '#/components/schemas/InfluxQLCSVResponse'
application/json:
schema:
$ref: '#/components/schemas/InfluxQLResponse'
application/x-msgpack:
schema:
type: string
format: binary
'429':
description: Token is temporarily over quota. The Retry-After header describes when to try the read again.
headers:
Retry-After:
description: A non-negative decimal integer indicating the seconds to delay after the response is received.
schema:
type: integer
format: int32
default:
description: Error processing query
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
components:
parameters:
TraceSpan:
in: header
name: Zap-Trace-Span
description: OpenTracing span context
example:
trace_id: '1'
span_id: '1'
baggage:
key: value
required: false
schema:
type: string
AuthUserV1:
in: query
name: u
required: false
schema:
type: string
description: Username.
AuthPassV1:
in: query
name: p
required: false
schema:
type: string
description: User token.
schemas:
InfluxQLResponse:
properties:
results:
type: array
oneOf:
- required:
- statement_id
- error
- required:
- statement_id
- series
items:
type: object
properties:
statement_id:
type: integer
error:
type: string
series:
type: array
items:
type: object
properties:
name:
type: string
tags:
type: object
additionalProperties:
type: string
partial:
type: boolean
columns:
type: array
items:
type: string
values:
type: array
items:
type: array
items: {}
InfluxQLCSVResponse:
type: string
example: |
name,tags,time,test_field,test_tag
test_measurement,,1603740794286107366,1,tag_value
test_measurement,,1603740870053205649,2,tag_value
test_measurement,,1603741221085428881,3,tag_value
Error:
properties:
code:
description: Code is the machine-readable error code.
readOnly: true
type: string
enum:
- internal error
- not found
- conflict
- invalid
- unprocessable entity
- empty value
- unavailable
- forbidden
- too many requests
- unauthorized
- method not allowed
message:
readOnly: true
description: Message is a human-readable message.
type: string
required:
- code
- message
LineProtocolError:
properties:
code:
description: Code is the machine-readable error code.
readOnly: true
type: string
enum:
- internal error
- not found
- conflict
- invalid
- empty value
- unavailable
message:
readOnly: true
description: Message is a human-readable message.
type: string
op:
readOnly: true
description: Op describes the logical code operation during error. Useful for debugging.
type: string
err:
readOnly: true
description: Err is a stack of errors that occurred during processing of the request. Useful for debugging.
type: string
line:
readOnly: true
description: First line within sent body containing malformed data
type: integer
format: int32
required:
- code
- message
- op
- err
LineProtocolLengthError:
properties:
code:
description: Code is the machine-readable error code.
readOnly: true
type: string
enum:
- invalid
message:
readOnly: true
description: Message is a human-readable message.
type: string
maxLength:
readOnly: true
description: Max length in bytes for a body of line-protocol.
type: integer
format: int32
required:
- code
- message
- maxLength
securitySchemes:
TokenAuthentication:
type: apiKey
name: Authorization
in: header
description: |
Use the [Token authentication](#section/Authentication/TokenAuthentication)
scheme to authenticate to the InfluxDB API.
In your API requests, send an `Authorization` header.
For the header value, provide the word `Token` followed by a space and an InfluxDB API token.
The word `Token` is case-sensitive.
### Syntax
`Authorization: Token YOUR_INFLUX_TOKEN`
For examples and more information, see the following:
- [`/authorizations`](#tag/Authorizations) endpoint.
- [Authorize API requests](/influxdb/cloud/api-guide/api_intro/#authentication).
- [Manage API tokens](/influxdb/cloud/security/tokens/).
BasicAuthentication:
type: http
scheme: basic
description: |
Use the HTTP [Basic authentication](#section/Authentication/BasicAuthentication)
scheme with clients that support the InfluxDB 1.x convention of username and password (that don't support the `Authorization: Token` scheme):
For examples and more information, see how to [authenticate with a username and password](/influxdb/cloud/reference/api/influxdb-1x/).
QuerystringAuthentication:
type: apiKey
in: query
name: u=&p=
description: |
Use the [Querystring authentication](#section/Authentication/QuerystringAuthentication)
scheme with InfluxDB 1.x API parameters to provide credentials through the query string.
For examples and more information, see how to [authenticate with a username and password](/influxdb/cloud/reference/api/influxdb-1x/).
security:
- TokenAuthentication: []
- BasicAuthentication: []
- QuerystringAuthentication: []
tags:
- name: Authentication
description: |
The InfluxDB 1.x API requires authentication for all requests.
InfluxDB Cloud uses InfluxDB API tokens to authenticate requests.
For more information, see the following:
- [Token authentication](#section/Authentication/TokenAuthentication)
- [Basic authentication](#section/Authentication/BasicAuthentication)
- [Querystring authentication](#section/Authentication/QuerystringAuthentication)
<!-- ReDoc-Inject: <security-definitions> -->
x-traitTag: true
- name: Query
- name: Write
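The `InfluxQLResponse` schema above nests rows as `results → series → values`, with column names carried separately in `columns`. A minimal sketch that flattens such a payload into per-row dictionaries; the sample payload is illustrative, not a real server response:

```python
def influxql_rows(payload):
    """Flatten an InfluxQL JSON response into (measurement, row_dict) pairs."""
    rows = []
    for result in payload.get("results", []):
        for series in result.get("series", []):
            columns = series.get("columns", [])
            for values in series.get("values", []):
                # Pair each value with its column name.
                rows.append((series.get("name"), dict(zip(columns, values))))
    return rows

# Illustrative payload shaped like the InfluxQLResponse schema.
sample = {
    "results": [{
        "statement_id": 0,
        "series": [{
            "name": "cpu",
            "columns": ["time", "usage"],
            "values": [[1603740794286107366, 0.64]],
        }],
    }],
}
rows = influxql_rows(sample)
```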

View File

@ -1,13 +0,0 @@
- name: Using the InfluxDB HTTP API
tags:
- Quick start
- Authentication
- Supported operations
- Headers
- Pagination
- Response codes
- Data I/O endpoints
- Security and access endpoints
- System information endpoints
- name: All endpoints
tags: []

View File

@ -25,19 +25,12 @@ hrefTargetBlank = true
smartDashes = false
[taxonomies]
"influxdb/v2.7/tag" = "influxdb/v2.7/tags"
"influxdb/v2.6/tag" = "influxdb/v2.6/tags"
"influxdb/v2.5/tag" = "influxdb/v2.5/tags"
"influxdb/v2.4/tag" = "influxdb/v2.4/tags"
"influxdb/v2.3/tag" = "influxdb/v2.3/tags"
"influxdb/v2.2/tag" = "influxdb/v2.2/tags"
"influxdb/v2.1/tag" = "influxdb/v2.1/tags"
"influxdb/v2.0/tag" = "influxdb/v2.0/tags"
"influxdb/v2/tag" = "influxdb/v2/tags"
"influxdb/cloud/tag" = "influxdb/cloud/tags"
"influxdb/cloud-serverless/tag" = "influxdb/cloud-serverless/tags"
"influxdb/cloud-dedicated/tag" = "influxdb/cloud-dedicated/tags"
"influxdb/clustered/tag" = "influxdb/clustered/tags"
"flux/v0.x/tag" = "flux/v0.x/tags"
"flux/v0/tag" = "flux/v0/tags"
[markup]
[markup.goldmark]

View File

@ -21,19 +21,12 @@ hrefTargetBlank = true
smartDashes = false
[taxonomies]
"influxdb/v2.7/tag" = "influxdb/v2.7/tags"
"influxdb/v2.6/tag" = "influxdb/v2.6/tags"
"influxdb/v2.5/tag" = "influxdb/v2.5/tags"
"influxdb/v2.4/tag" = "influxdb/v2.4/tags"
"influxdb/v2.3/tag" = "influxdb/v2.3/tags"
"influxdb/v2.2/tag" = "influxdb/v2.2/tags"
"influxdb/v2.1/tag" = "influxdb/v2.1/tags"
"influxdb/v2.0/tag" = "influxdb/v2.0/tags"
"influxdb/v2/tag" = "influxdb/v2/tags"
"influxdb/cloud/tag" = "influxdb/cloud/tags"
"influxdb/cloud-serverless/tag" = "influxdb/cloud-serverless/tags"
"influxdb/cloud-dedicated/tag" = "influxdb/cloud-dedicated/tags"
"influxdb/clustered/tag" = "influxdb/clustered/tags"
"flux/v0.x/tag" = "flux/v0.x/tags"
"flux/v0/tag" = "flux/v0/tags"
[markup]
[markup.goldmark]

View File

@ -1,62 +0,0 @@
---
title: Chronograf 1.10 documentation
description: >
Chronograf is InfluxData's open source web application.
Use Chronograf with the other components of the TICK stack to visualize your
monitoring data and easily create alerting and automation rules.
menu:
chronograf_1_10:
name: Chronograf v1.10
weight: 1
---
Chronograf is InfluxData's open source web application.
Use Chronograf with the other components of the [TICK stack](https://www.influxdata.com/products/) to visualize your monitoring data and easily create alerting and automation rules.
## Key features
### Infrastructure monitoring
* View all hosts and their statuses in your infrastructure
* View the configured applications on each host
* Monitor your applications with Chronograf's [pre-created dashboards](/chronograf/v1.10/guides/using-precreated-dashboards/)
### Alert management
Chronograf offers a UI for [Kapacitor](https://github.com/influxdata/kapacitor), InfluxData's data processing framework for creating alerts, running ETL jobs, and detecting anomalies in your data.
* Generate threshold, relative, and deadman alerts on your data
* Easily enable and disable existing alert rules
* View all active alerts on an alert dashboard
* Send alerts to the supported event handlers, including Slack, PagerDuty, HipChat, and [more](/chronograf/v1.10/guides/configuring-alert-endpoints/)
### Data visualization
* Monitor your application data with Chronograf's [pre-created dashboards](/chronograf/v1.10/guides/using-precreated-dashboards/)
* Create your own customized dashboards complete with various graph types and [template variables](/chronograf/v1.10/guides/dashboard-template-variables/)
* Investigate your data with Chronograf's data explorer and query templates
### Database management
* Create and delete databases and retention policies
* View currently-running queries and stop inefficient queries from overloading your system
* Create, delete, and assign permissions to users (Chronograf supports [InfluxDB OSS](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#authorization) and InfluxDB Enterprise user management)
### Query management
* View a list of databases, queries and their status
* Kill a query
* Download a list of queries in your instance to a CSV file
### Multi-organization and multi-user support
{{% note %}}
**Note:** To use this feature, OAuth 2.0 authentication must be configured.
Once configured, the Chronograf Admin tab on the Admin menu is visible.
For details, see [Managing Chronograf security](/chronograf/v1.10/administration/managing-security/).
{{% /note %}}
* Create organizations and assign users to those organizations
* Restrict access to administrative functions
* Allow users to set up and maintain unique dashboards for their organizations

View File

@ -1,45 +0,0 @@
---
title: About the Chronograf project
description: Learn about Chronograf, the user interface (UI) for InfluxDB.
menu:
chronograf_1_10:
name: About the project
weight: 10
---
Chronograf is the user interface component of the [InfluxData time series platform](https://www.influxdata.com/time-series-platform/). It makes monitoring and alerting for your infrastructure easy to set up and maintain. It is simple to use and includes templates and libraries that let you rapidly build dashboards with real-time visualizations of your data.
Follow the links below for more information.
{{< children >}}
Chronograf is released under the GNU Affero General Public License. This Free Software Foundation license is fairly new,
and differs from the more widely known and understood GPL.
Our goal with using AGPL is to preserve the concept of copyleft with Chronograf.
With traditional GPL, copyleft was associated with the concept of distribution of software.
The problem is that nowadays, distribution of software is rare: things tend to run in the cloud. AGPL fixes this “loophole”
in GPL by saying that if you use the software over a network, you are bound by the copyleft. Other than that,
the license is virtually the same as GPL v3.
To say this another way: if you modify the core source code of Chronograf, the goal is that you have to contribute
those modifications back to the community.
Note, however, that dashboards and alerts created with Chronograf are NOT required to be published.
The copyleft applies only to the source code of Chronograf itself.
If this explanation isn't good enough for you and your use case, we dual license Chronograf under our
[standard commercial license](https://www.influxdata.com/legal/slsa/).
[Contact sales for more information](https://www.influxdata.com/contact-sales/).
## Third Party Software
InfluxData products contain third party software, which means the copyrighted, patented, or otherwise legally protected
software of third parties that is incorporated in InfluxData products.
Third party suppliers make no representation nor warranty with respect to such third party software or any portion thereof.
Third party suppliers assume no liability for any claim that might arise with respect to such third party software,
nor for a customer's use of or inability to use the third party software.
The [list of third party software components, including references to associated license and other materials](https://github.com/influxdata/chronograf/blob/master/LICENSE_OF_DEPENDENCIES.md),
is maintained on a version by version basis.

View File

@ -1,13 +0,0 @@
---
title: InfluxData Contributor License Agreement (CLA)
description: >
Before contributing to the Chronograf project, submit the InfluxData Contributor License Agreement.
menu:
chronograf_1_10:
weight: 30
parent: About the project
params:
url: https://www.influxdata.com/legal/cla/
---
Before you can contribute to the Chronograf project, you need to submit the [InfluxData Contributor License Agreement (CLA)](https://www.influxdata.com/legal/cla/) available on the InfluxData main site.

View File

@ -1,13 +0,0 @@
---
title: Contribute to Chronograf
description: Contribute to the Chronograf project.
menu:
chronograf_1_10:
name: Contribute
weight: 20
parent: About the project
params:
url: https://github.com/influxdata/chronograf/blob/master/CONTRIBUTING.md
---
See [Contributing to Chronograf](https://github.com/influxdata/chronograf/blob/master/CONTRIBUTING.md) in the Chronograf GitHub project to learn how you can contribute to the Chronograf project.

View File

@ -1,13 +0,0 @@
---
title: Open source license for Chronograf
description: Find the open source license for Chronograf.
menu:
chronograf_1_10:
Name: Open source license
weight: 40
parent: About the project
params:
url: https://github.com/influxdata/chronograf/blob/master/LICENSE
---
The [open source license for Chronograf](https://github.com/influxdata/chronograf/blob/master/LICENSE) is available in the Chronograf GitHub project.

View File

@ -1,14 +0,0 @@
---
title: Administering Chronograf
description: >
Upgrade and configure Chronograf, plus manage connections, users, security, and organizations.
menu:
chronograf_1_10:
name: Administration
weight: 40
---
Follow the links below for more information.
{{< children >}}

View File

@ -1,22 +0,0 @@
---
title: Connecting Chronograf to InfluxDB Enterprise clusters
description: Work with InfluxDB Enterprise clusters through the Chronograf UI.
menu:
chronograf_1_10:
name: Connecting Chronograf to InfluxDB Enterprise
weight: 40
parent: Administration
---
The connection details form requires additional information when connecting Chronograf to an [InfluxDB Enterprise cluster](/{{< latest "enterprise_influxdb" >}}/).
When you enter the InfluxDB HTTP bind address in the `Connection String` input, Chronograf automatically checks if that InfluxDB instance is a data node.
If it is a data node, Chronograf automatically adds the `Meta Service Connection URL` input to the connection details form.
Enter the HTTP bind address of one of your cluster's meta nodes into that input and Chronograf takes care of the rest.
![Cluster connection details](/img/chronograf/1-6-faq-cluster-connection.png)
Note that the example above assumes that you do not have authentication enabled.
If you have authentication enabled, the form requires username and password information.
For details about monitoring InfluxDB Enterprise clusters, see [Monitoring InfluxDB Enterprise clusters](/chronograf/v1.10/guides/monitoring-influxenterprise-clusters).

View File

@ -1,682 +0,0 @@
---
title: Chronograf configuration options
description: >
Options available in the Chronograf configuration file and environment variables.
menu:
chronograf_1_10:
name: Configuration options
weight: 30
parent: Administration
---
Chronograf is configured using the configuration file (`/etc/default/chronograf`) and environment variables. If you do not uncomment a configuration option, the system uses its default setting. The configuration settings in this document are set to their default settings. For more information, see [Configure Chronograf](/chronograf/v1.10/administration/configuration/).
* [Usage](#usage)
* [Chronograf service options](#chronograf-service-options)
- [InfluxDB connection options](#influxdb-connection-options)
- [Kapacitor connection options](#kapacitor-connection-options)
- [TLS (Transport Layer Security) options](#tls-transport-layer-security-options)
- [etcd options](#etcd-options)
- [Other service options](#other-service-options)
* [Authentication options](#authentication-options)
* [General authentication options](#general-authentication-options)
* [GitHub-specific OAuth 2.0 authentication options](#github-specific-oauth-20-authentication-options)
* [Google-specific OAuth 2.0 authentication options](#google-specific-oauth-20-authentication-options)
* [Auth0-specific OAuth 2.0 authentication options](#auth0-specific-oauth-20-authentication-options)
* [Heroku-specific OAuth 2.0 authentication options](#heroku-specific-oauth-20-authentication-options)
* [Generic OAuth 2.0 authentication options](#generic-oauth-20-authentication-options)
## Usage
Start the Chronograf service, and include any options after `chronograf`, where `[OPTIONS]` are options separated by spaces:
```sh
chronograf [OPTIONS]
```
**Linux examples**
- To start `chronograf` without options:
```sh
sudo systemctl start chronograf
```
- To start `chronograf` and set options for develop mode and to disable reporting:
```sh
sudo systemctl start chronograf --develop --reporting-disabled
```
**macOS examples**
- To start `chronograf` without options:
```sh
chronograf
```
- To start `chronograf` and add shortcut options for develop mode and to disable reporting:
```sh
chronograf -d -r
```
{{% note %}}
***Note:*** Command line options take precedence over corresponding environment variables.
{{% /note %}}
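As a quick illustration of this precedence, a value set in the environment is overridden by the matching command line flag (the port numbers below are arbitrary example values):

```sh
# The environment variable sets one port...
export PORT=8888

# ...but the command line flag takes precedence, so Chronograf listens on 9999
chronograf --port=9999
```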
## Chronograf service options
#### `--host=`
The IP that the `chronograf` service listens on.
Default value: `0.0.0.0`
Example: `--host=0.0.0.0`
Environment variable: `$HOST`
#### `--port=`
The port that the `chronograf` service listens on for insecure connections.
Default: `8888`
Environment variable: `$PORT`
#### `--bolt-path=` | `-b`
The file path to the BoltDB file.
Default value: `./chronograf-v1.db`
Environment variable: `$BOLT_PATH`
#### `--canned-path=` | `-c`
The path to the directory of [canned dashboards](/chronograf/v1.10/guides/using-precreated-dashboards) files. Canned dashboards (also known as pre-created dashboards or application layouts) cannot be edited. They're delivered with Chronograf and available depending on which Telegraf input plugins you have enabled.
Default value: `/usr/share/chronograf/canned`
Environment variable: `$CANNED_PATH`
#### `--resources-path=`
Path to directory of sources (.src files), Kapacitor connections (.kap files), organizations (.org files), and dashboards (.dashboard files).
{{% note %}}
**Note:** If you have a dashboard with the `.json` extension, rename it with the `.dashboard` extension in this directory to ensure the dashboard is loaded.
{{% /note %}}
Default value: `/usr/share/chronograf/resources`
Environment variable: `$RESOURCES_PATH`
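For example, to load a dashboard that was exported with a `.json` extension, renaming it in place is enough (the filename below is an example value; the path assumes the default resources path):

```sh
# Rename an exported dashboard so Chronograf loads it from the resources path
mv /usr/share/chronograf/resources/my-dashboard.json \
   /usr/share/chronograf/resources/my-dashboard.dashboard
```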
#### `--basepath=` | `-p`
The URL path prefix under which all `chronograf` routes will be mounted.
Environment variable: `$BASE_PATH`
#### `--status-feed-url=`
URL of JSON feed to display as a news feed on the client Status page.
Default value: `https://www.influxdata.com/feed/json`
Environment variable: `$STATUS_FEED_URL`
#### `--version` | `-v`
Displays the version of the Chronograf service.
Example:
```sh
$ chronograf -v
2018/01/03 14:11:19 Chronograf {{< latest-patch >}} (git: b74ae387)
```
## InfluxDB connection options
{{% note %}}
InfluxDB connection details specified via command line when starting Chronograf do not persist when Chronograf is shut down.
To persist connection details, [include them in a `.src` file](/chronograf/v1.10/administration/creating-connections/#manage-influxdb-connections-using-src-files) located in your [`--resources-path`](#resources-path).
**Only InfluxDB 1.x connections are configurable in a `.src` file.**
Configure InfluxDB 2.x and Cloud connections with CLI flags or in the
[Chronograf UI](/chronograf/v1.10/administration/creating-connections/#manage-influxdb-connections-using-the-chronograf-ui).
{{% /note %}}
### `--influxdb-url`
The location of your InfluxDB instance, including the protocol, IP address, and port.
Example: `--influxdb-url http://localhost:8086`
Environment variable: `$INFLUXDB_URL`
### `--influxdb-username`
The [username] for your InfluxDB instance.
Environment variable: `$INFLUXDB_USERNAME`
### `--influxdb-password`
The [password] for your InfluxDB instance.
Environment variable: `$INFLUXDB_PASSWORD`
### `--influxdb-org`
InfluxDB 2.x or InfluxDB Cloud organization name.
Environment variable: `$INFLUXDB_ORG`
### `--influxdb-token`
InfluxDB 2.x or InfluxDB Cloud [authentication token](/influxdb/cloud/security/tokens/).
Environment variable: `$INFLUXDB_TOKEN`
## Kapacitor connection options
{{% note %}}
Kapacitor connection details specified via command line when starting Chronograf do not persist when Chronograf is shut down.
To persist connection details, [include them in a `.kap` file](/chronograf/v1.10/administration/creating-connections/#manage-kapacitor-connections-using-kap-files) located in your [`--resources-path`](#resources-path).
{{% /note %}}
### `--kapacitor-url=`
The location of your Kapacitor instance, including `http://`, IP address, and port.
Example: `--kapacitor-url=http://0.0.0.0:9092`.
Environment variable: `$KAPACITOR_URL`
### `--kapacitor-username=`
The username for your Kapacitor instance.
Environment variable: `$KAPACITOR_USERNAME`
### `--kapacitor-password=`
The password for your Kapacitor instance.
Environment variable: `$KAPACITOR_PASSWORD`
### TLS (Transport Layer Security) options
See [Configuring TLS (Transport Layer Security) and HTTPS](/chronograf/v1.10/administration/managing-security/#configure-tls-transport-layer-security-and-https) for more information.
#### `--cert=`
The file path to PEM-encoded public key certificate.
Environment variable: `$TLS_CERTIFICATE`
#### `--key=`
The file path to private key associated with given certificate.
Environment variable: `$TLS_PRIVATE_KEY`
### etcd options
#### `--etcd-endpoints=` | `-e`
List of etcd endpoints.
##### CLI example
```sh
## Single parameter
--etcd-endpoints=localhost:2379
## Multiple parameters
--etcd-endpoints=localhost:2379 \
--etcd-endpoints=192.168.1.61:2379 \
--etcd-endpoints=192.168.1.100:2379
```
Environment variable: `$ETCD_ENDPOINTS`
##### Environment variable example
```sh
## Single parameter
ETCD_ENDPOINTS=localhost:2379
## Multiple parameters
ETCD_ENDPOINTS=localhost:2379,192.168.1.61:2379,192.168.1.100:2379
```
#### `--etcd-username=`
Username to log into etcd.
Environment variable: `$ETCD_USERNAME`
#### `--etcd-password=`
Password to log into etcd.
Environment variable: `$ETCD_PASSWORD`
#### `--etcd-dial-timeout=`
Total time to wait before timing out while connecting to etcd endpoints.
0 means no timeout.
The default is 1s.
Environment variable: `$ETCD_DIAL_TIMEOUT`
#### `--etcd-request-timeout=`
Total time to wait before timing out an etcd view or update request.
0 means no timeout.
The default is 1s.
Environment variable: `$ETCD_REQUEST_TIMEOUT`
#### `--etcd-cert=`
Path to etcd PEM-encoded TLS public key certificate.
Environment variable: `$ETCD_CERTIFICATE`
#### `--etcd-key=`
Path to private key associated with specified etcd certificate.
Environment variable: `$ETCD_PRIVATE_KEY`
#### `--etcd-root-ca`
Path to root CA certificate for TLS verification.
Environment variable: `$ETCD_ROOT_CA`
### Other service options
#### `--custom-auto-refresh`
Add custom auto-refresh intervals to the list of available auto-refresh intervals in Chronograf dashboards.
Provide a semi-colon-delimited list of key-value pairs where the key is the interval
name that appears in the auto-refresh dropdown menu and the value is the auto-refresh interval in milliseconds.
Example: `--custom-auto-refresh "500ms=500;1s=1000"`
Environment variable: `$CUSTOM_AUTO_REFRESH`
#### `--custom-link <display_name>:<link_address>`
Custom link added to Chronograf User menu options. Useful for providing links to internal company resources for your Chronograf users. Can be used when any OAuth 2.0 authentication is enabled. To add another custom link, repeat the custom link option.
Example: `--custom-link InfluxData:http://www.influxdata.com/`
#### `--develop` | `-d`
Run the `chronograf` service in developer mode.
#### `--help` | `-h`
Displays the command line help for `chronograf`.
#### `--host-page-disabled` | `-H`
Disables rendering and serving of the Hosts List page (/sources/$sourceId/hosts).
Environment variable: `$HOST_PAGE_DISABLED`
#### `--log-level=` | `-l`
Set the logging level.
Valid values: `debug` | `info` | `error`
Default value: `info`
Example: `--log-level=debug`
Environment variable: `$LOG_LEVEL`
#### `--reporting-disabled` | `-r`
Disables reporting of usage statistics.
Usage statistics reported once every 24 hours include: `OS`, `arch`, `version`, `cluster_id`, and `uptime`.
Environment variable: `$REPORTING_DISABLED`
## Authentication options
### General authentication options
#### `--auth-duration=`
The total duration (in hours) of cookie life for authentication.
Default value: `720h`
Authentication expires on browser close when `--auth-duration=0`.
Environment variable: `$AUTH_DURATION`
#### `--inactivity-duration=`
The duration that a token is valid without any new activity.
Default value: `5m`
Environment variable: `$INACTIVITY_DURATION`
#### `--public-url=`
The public URL required to access Chronograf using a web browser. For example, if you access Chronograf using the default URL, the public URL value would be `http://localhost:8888`.
Required for Google OAuth 2.0 authentication. Used for Auth0 and some generic OAuth 2.0 authentication providers.
Environment variable: `$PUBLIC_URL`
#### `--token-secret=` | `-t`
The secret for signing tokens.
Environment variable: `$TOKEN_SECRET`
### GitHub-specific OAuth 2.0 authentication options
See [Configuring GitHub authentication](/chronograf/v1.10/administration/managing-security/#configure-github-authentication) for more information.
#### `--github-url`
{{< req "Required if using Github Enterprise" >}}
GitHub base URL. Default is `https://github.com`.
Environment variable: `$GH_URL`
#### `--github-client-id` | `-i`
The GitHub client ID value for OAuth 2.0 support.
Environment variable: `$GH_CLIENT_ID`
#### `--github-client-secret` | `-s`
The GitHub Client Secret value for OAuth 2.0 support.
Environment variable: `$GH_CLIENT_SECRET`
#### `--github-organization` | `-o`
[Optional] Specify a GitHub organization membership required for a user.
##### CLI example
```sh
## Single parameter
--github-organization=org1
## Multiple parameters
--github-organization=org1 \
--github-organization=org2 \
--github-organization=org3
```
Environment variable: `$GH_ORGS`
##### Environment variable example
```sh
## Single parameter
GH_ORGS=org1
## Multiple parameters
GH_ORGS=org1,org2,org3
```
### Google-specific OAuth 2.0 authentication options
See [Configuring Google authentication](/chronograf/v1.10/administration/managing-security/#configure-google-authentication) for more information.
#### `--google-client-id=`
The Google Client ID value required for OAuth 2.0 support.
Environment variable: `$GOOGLE_CLIENT_ID`
#### `--google-client-secret=`
The Google Client Secret value required for OAuth 2.0 support.
Environment variable: `$GOOGLE_CLIENT_SECRET`
#### `--google-domains=`
[Optional] Restricts authorization to users from specified Google email domains.
##### CLI example
```sh
## Single parameter
--google-domains=delorean.com
## Multiple parameters
--google-domains=delorean.com \
--google-domains=savetheclocktower.com
```
Environment variable: `$GOOGLE_DOMAINS`
##### Environment variable example
```sh
## Single parameter
GOOGLE_DOMAINS=delorean.com
## Multiple parameters
GOOGLE_DOMAINS=delorean.com,savetheclocktower.com
```
### Auth0-specific OAuth 2.0 authentication options
See [Configuring Auth0 authentication](/chronograf/v1.10/administration/managing-security/#configure-auth0-authentication) for more information.
#### `--auth0-domain=`
The subdomain of your Auth0 client; available on the configuration page for your Auth0 client.
Example: https://myauth0client.auth0.com
Environment variable: `$AUTH0_DOMAIN`
#### `--auth0-client-id=`
The Auth0 Client ID value required for OAuth 2.0 support.
Environment variable: `$AUTH0_CLIENT_ID`
#### `--auth0-client-secret=`
The Auth0 Client Secret value required for OAuth 2.0 support.
Environment variable: `$AUTH0_CLIENT_SECRET`
#### `--auth0-organizations=`
[Optional] The Auth0 organization membership required to access Chronograf.
Organizations are set using an "organization" key in the user's `app_metadata`.
Lists are comma-separated and are only available when using environment variables.
##### CLI example
```sh
## Single parameter
--auth0-organizations=org1
## Multiple parameters
--auth0-organizations=org1 \
--auth0-organizations=org2 \
--auth0-organizations=org3
```
Environment variable: `$AUTH0_ORGS`
##### Environment variable example
```sh
## Single parameter
AUTH0_ORGS=org1
## Multiple parameters
AUTH0_ORGS=org1,org2,org3
```
### Heroku-specific OAuth 2.0 authentication options
See [Configuring Heroku authentication](/chronograf/v1.10/administration/managing-security/#configure-heroku-authentication) for more information.
### `--heroku-client-id=`
The Heroku Client ID for OAuth 2.0 support.
**Environment variable:** `$HEROKU_CLIENT_ID`
### `--heroku-secret=`
The Heroku Secret for OAuth 2.0 support.
**Environment variable:** `$HEROKU_SECRET`
### `--heroku-organization=`
The Heroku organization memberships required for access to Chronograf.
##### CLI example
```sh
## Single parameter
--heroku-organization=org1
## Multiple parameters
--heroku-organization=org1 \
--heroku-organization=org2 \
--heroku-organization=org3
```
**Environment variable:** `$HEROKU_ORGS`
##### Environment variable example
```sh
## Single parameter
HEROKU_ORGS=org1
## Multiple parameters
HEROKU_ORGS=org1,org2,org3
```
### Generic OAuth 2.0 authentication options
See [Configure OAuth 2.0](/chronograf/v1.10/administration/managing-security/#configure-oauth-2-0) for more information.
#### `--generic-name=`
The generic OAuth 2.0 name presented on the login page.
Environment variable: `$GENERIC_NAME`
#### `--generic-client-id=`
The generic OAuth 2.0 Client ID value.
Can be used for a custom OAuth 2.0 service.
Environment variable: `$GENERIC_CLIENT_ID`
#### `--generic-client-secret=`
The generic OAuth 2.0 Client Secret value.
Environment variable: `$GENERIC_CLIENT_SECRET`
#### `--generic-scopes=`
The scopes requested by the provider of the web client.
Default value: `user:email`
##### CLI example
```sh
## Single parameter
--generic-scopes=api
## Multiple parameters
--generic-scopes=api \
--generic-scopes=openid \
--generic-scopes=read_user
```
Environment variable: `$GENERIC_SCOPES`
##### Environment variable example
```sh
## Single parameter
GENERIC_SCOPES=api
## Multiple parameters
GENERIC_SCOPES=api,openid,read_user
```
#### `--generic-domains=`
The email domain required for user email addresses.
Example: `--generic-domains=example.com`
##### CLI example
```sh
## Single parameter
--generic-domains=delorean.com
## Multiple parameters
--generic-domains=delorean.com \
--generic-domains=savetheclocktower.com
```
Environment variable: `$GENERIC_DOMAINS`
##### Environment variable example
```sh
## Single parameter
GENERIC_DOMAINS=delorean.com
## Multiple parameters
GENERIC_DOMAINS=delorean.com,savetheclocktower.com
```
#### `--generic-auth-url`
The authorization endpoint URL for the OAuth 2.0 provider.
Environment variable: `$GENERIC_AUTH_URL`
#### `--generic-token-url`
The token endpoint URL for the OAuth 2.0 provider.
Environment variable: `$GENERIC_TOKEN_URL`
#### `--generic-api-url`
The URL that returns OpenID UserInfo-compatible information.
Environment variable: `$GENERIC_API_URL`
#### `--oauth-no-pkce`
Disable OAuth PKCE (Proof Key for Code Exchange).
Environment variable: `$OAUTH_NO_PKCE`

View File

@ -1,70 +0,0 @@
---
title: Configure Chronograf
description: >
Configure Chronograf, including security, multiple users, and multiple organizations.
menu:
chronograf_1_10:
name: Configure
weight: 20
parent: Administration
---
Configure Chronograf by passing command line options when starting the Chronograf service, or set custom default configuration options in the filesystem so they don't have to be passed in when starting Chronograf.
- [Start the Chronograf service](#start-the-chronograf-service)
- [Set custom default Chronograf configuration options](#set-custom-default-chronograf-configuration-options)
- [Set up security, organizations, and users](#set-up-security-organizations-and-users)
## Start the Chronograf service
Use one of the following commands to start Chronograf:
- **If you installed Chronograf using an official Debian or RPM package and are running a distro with `systemd` (for example, Ubuntu 15 or later):**
```sh
systemctl start chronograf
```
- **If you installed Chronograf using an official Debian or RPM package:**
```sh
service chronograf start
```
- **If you built Chronograf from source:**
```bash
$GOPATH/bin/chronograf
```
## Set custom default Chronograf configuration options
Custom default Chronograf configuration settings can be defined in `/etc/default/chronograf`.
This file consists of key-value pairs. See keys (environment variables) for [Chronograf configuration options](/chronograf/v1.10/administration/config-options), and set values for the keys you want to configure.
```conf
HOST=0.0.0.0
PORT=8888
TLS_CERTIFICATE=/path/to/cert.pem
TOKEN_SECRET=MySup3rS3cretT0k3n
LOG_LEVEL=info
```
{{% note %}}
**Note:** `/etc/default/chronograf` is only created when installing the `.deb` or `.rpm` package.
{{% /note %}}
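A typical workflow is to add or change a key in the file and then restart the service so the new environment takes effect (this assumes a `systemd`-based install; the setting below is an example value):

```sh
# Append a setting to the defaults file (example value)
echo 'LOG_LEVEL=debug' | sudo tee -a /etc/default/chronograf

# Restart the service so the new setting takes effect
sudo systemctl restart chronograf
```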
## Set up security, organizations, and users
To set up security for Chronograf, configure:
* [OAuth 2.0 authentication](/chronograf/v1.10/administration/managing-security/#configure-oauth-2-0)
* [TLS (Transport Layer Security) for HTTPS](/chronograf/v1.10/administration/managing-security/#configure-tls-transport-layer-security-and-https)
After you configure OAuth 2.0 authentication, you can set up multiple organizations, roles, and users. For details, check out the following topics:
* [Managing organizations](/chronograf/v1.10/administration/managing-organizations/)
* [Managing Chronograf users](/chronograf/v1.10/administration/managing-chronograf-users/)
<!-- TODO ## Configuring Chronograf for InfluxDB Enterprise clusters) -->

View File

@ -1,72 +0,0 @@
---
title: Create a Chronograf HA configuration
description: Create a Chronograf high-availability (HA) cluster using etcd.
menu:
chronograf_1_10:
weight: 10
parent: Administration
---
To create a Chronograf high-availability (HA) configuration using an etcd cluster as a shared data store, do the following:
1. [Install and start etcd](#install-and-start-etcd)
2. Set up a load balancer for Chronograf
3. [Start Chronograf](#start-chronograf)
Have an existing Chronograf configuration store that you want to use with a Chronograf HA configuration? Learn how to [migrate your Chronograf configuration](/chronograf/v1.10/administration/migrate-to-high-availability/) to a shared data store.
## Architecture
{{< svg "/static/img/chronograf/1-8-ha-architecture.svg" >}}
## Install and start etcd
1. Download the latest etcd release [from GitHub](https://github.com/etcd-io/etcd/releases/).
(For detailed installation instructions specific to your operating system, see [Install and deploy etcd](http://play.etcd.io/install).)
2. Extract the `etcd` binary and place it in your system PATH.
3. Start etcd.
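On Linux, steps 1 through 3 can be sketched as follows (the release version and download URL are example assumptions; check the releases page for the current version):

```sh
# Download and extract an etcd release (version is an example)
ETCD_VER=v3.5.9
curl -LO "https://github.com/etcd-io/etcd/releases/download/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz"
tar xzf "etcd-${ETCD_VER}-linux-amd64.tar.gz"

# Place the binary on your system PATH
sudo mv "etcd-${ETCD_VER}-linux-amd64/etcd" /usr/local/bin/

# Start etcd with default settings (client endpoint on localhost:2379)
etcd
```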
## Start Chronograf
Run the following command to start Chronograf using `etcd` as the storage layer. The syntax depends on whether you're using command line flags or the `ETCD_ENDPOINTS` environment variable.
##### Define etcd endpoints with command line flags
```sh
# Syntax
chronograf --etcd-endpoints=<etcd-host>
# Examples
# Add a single etcd endpoint when starting Chronograf
chronograf --etcd-endpoints=localhost:2379
# Add multiple etcd endpoints when starting Chronograf
chronograf \
--etcd-endpoints=localhost:2379 \
--etcd-endpoints=192.168.1.61:2379 \
--etcd-endpoints=192.168.1.100:2379
```
##### Define etcd endpoints with the ETCD_ENDPOINTS environment variable
```sh
# Provide etcd endpoints in a comma-separated list
export ETCD_ENDPOINTS=localhost:2379,192.168.1.61:2379,192.168.1.100:2379
# Start Chronograf
chronograf
```
##### Define etcd endpoints with TLS enabled
Use the `--etcd-cert` flag to specify the path to the etcd PEM-encoded public
certificate file and the `--etcd-key` flag to specify the path to the private key
associated with the etcd certificate.
```sh
chronograf --etcd-endpoints=localhost:2379 \
--etcd-cert=path/to/etcd-certificate.pem \
--etcd-key=path/to/etcd-private-key.key
```
For more information, see [Chronograf etcd configuration options](/chronograf/v1.10/administration/config-options#etcd-options).

View File

@ -1,289 +0,0 @@
---
title: Create InfluxDB and Kapacitor connections
description: Create and manage InfluxDB and Kapacitor connections in the UI.
menu:
chronograf_1_10:
name: Create InfluxDB and Kapacitor connections
weight: 50
parent: Administration
related:
- /influxdb/v2.0/tools/chronograf/
---
Connections to InfluxDB and Kapacitor can be configured through the Chronograf user interface (UI) or with JSON configuration files:
- [Manage InfluxDB connections using the Chronograf UI](#manage-influxdb-connections-using-the-chronograf-ui)
- [Manage InfluxDB connections using .src files](#manage-influxdb-connections-using-src-files)
- [Manage Kapacitor connections using the Chronograf UI](#manage-kapacitor-connections-using-the-chronograf-ui)
- [Manage Kapacitor connections using .kap files](#manage-kapacitor-connections-using-kap-files)
{{% note %}}
**Note:** Connection details are stored in Chronograf's internal database `chronograf-v1.db`.
You may administer the internal database when [restoring a Chronograf database](/chronograf/v1.10/administration/restoring-chronograf-db/)
or when [migrating a Chronograf configuration from BoltDB to etcd](/chronograf/v1.10/administration/migrate-to-high-availability/).
{{% /note %}}
## Manage InfluxDB connections using the Chronograf UI
To create an InfluxDB connection in the Chronograf UI:
1. Open Chronograf and click **Configuration** (wrench icon) in the navigation menu.
2. Click **Add Connection**.
![Chronograf connections landing page](/img/chronograf/1-6-connection-landing-page.png)
3. Provide the necessary connection credentials.
{{< tabs-wrapper >}}
{{% tabs %}}
[InfluxDB 1.x](#)
[InfluxDB Cloud or OSS 2.x ](#)
{{% /tabs %}}
{{% tab-content %}}
<img src="/img/chronograf/1-8-influxdb-v1-connection-config.png" style="width:100%; max-width:798px;"/>
- **Connection URL**: hostname or IP address and port of the InfluxDB 1.x instance
- **Connection Name**: Unique name for this connection.
- **Username**: InfluxDB 1.x username
_(Required only if [authorization is enabled](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/) in InfluxDB)_
- **Password**: InfluxDB password
_(Required only if [authorization is enabled](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/) in InfluxDB)_
- **Telegraf Database Name**: the database Chronograf uses to populate parts of the application, including the Host List page (default is `telegraf`)
- **Default Retention Policy**: default [retention policy](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#retention-policy-rp)
(if left blank, defaults to `autogen`)
- **Default connection**: use this connection as the default connection for data exploration, dashboards, and administrative actions
{{% /tab-content %}}
{{% tab-content %}}
<img src="/img/chronograf/1-8-influxdb-v2-connection-config.png" style="width:100%; max-width:798px;"/>
- **Enable the {{< req "InfluxDB v2 Auth" >}} option**
- **Connection URL**: [InfluxDB Cloud region URL](/influxdb/cloud/reference/regions/)
or [InfluxDB OSS 2.x URL](/influxdb/v2.0/reference/urls/)
```
http://localhost:8086
```
- **Connection Name**: Unique name for this connection.
- **Organization**: InfluxDB [organization](/influxdb/v2.0/organizations/)
- **Token**: InfluxDB [authentication token](/influxdb/v2.0/security/tokens/)
- **Telegraf Database Name:** InfluxDB [bucket](/influxdb/v2.0/organizations/buckets/)
Chronograf uses to populate parts of the application, including the Host List page (default is `telegraf`)
- **Default Retention Policy:** default [retention policy](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#retention-policy-rp)
_**(leave blank)**_
- **Default connection**: use this connection as the default connection for data exploration and dashboards
{{% note %}}
For more information about connecting Chronograf to an InfluxDB Cloud or OSS 2.x instance, see:
- [Use Chronograf with InfluxDB Cloud](/influxdb/cloud/tools/chronograf/)
- [Use Chronograf with InfluxDB OSS 2.x](/{{< latest "influxdb" "v2" >}}/tools/chronograf/)
{{% /note %}}
{{% /tab-content %}}
{{< /tabs-wrapper >}}
4. Click **Add Connection**
* If the connection is valid, the Dashboards window appears, allowing you to import dashboard templates you can use to display and analyze your data. For details, see [Creating dashboards](/chronograf/v1.10/guides/create-a-dashboard).
* If the connection cannot be created, the following error message appears:
"Unable to create source: Error contacting source."
If this occurs, ensure all connection credentials are correct and that the InfluxDB instance is running and accessible.
The following dashboards are available:
- Docker
- Kubernetes Node
- Riak
- Consul
- Kubernetes Overview
- Mesos
- IIS
- RabbitMQ
- System
- VMware vSphere Overview
- Apache
- Elasticsearch
- InfluxDB
- Memcached
- NSQ
- PostgreSQL
- Consul Telemetry
- HAProxy
- Kubernetes Pod
- NGINX
- Redis
- VMware vSphere VMs
- VMware vSphere Hosts
- PHPfpm
- Win System
- MySQL
- Ping
## Manage InfluxDB connections using .src files
Manually create `.src` files to store InfluxDB connection details.
`.src` files are simple JSON files that contain key-value paired connection details.
The location of `.src` files is defined by the [`--resources-path`](/chronograf/v1.10/administration/config-options/#resources-path)
command line option, which is, by default, the same as the [`--canned-path`](/chronograf/v1.10/administration/config-options/#canned-path-c).
A `.src` file contains the details for a single InfluxDB connection.
{{% note %}}
**Only InfluxDB 1.x connections are configurable in a `.src` file.**
Configure InfluxDB 2.x and Cloud connections with [CLI flags](/chronograf/v1.10/administration/config-options/#influxdb-connection-options)
or in the [Chronograf UI](#manage-influxdb-connections-using-the-chronograf-ui).
{{% /note %}}
Create a new file named `example.src` (the filename is arbitrary) and place it at Chronograf's `resources-path`.
All `.src` files should contain the following:
{{< keep-url >}}
```json
{
"id": "10000",
"name": "My InfluxDB",
"username": "test",
"password": "test",
"url": "http://localhost:8086",
"type": "influx",
"insecureSkipVerify": false,
"default": true,
"telegraf": "telegraf",
"organization": "example_org"
}
```
#### `id`
A unique, stringified non-negative integer.
Using a 4 or 5 digit number is recommended to avoid interfering with existing datasource IDs.
#### `name`
Any string you want to use as the display name of the source.
#### `username`
Username used to access the InfluxDB server or cluster.
*Only required if [authorization is enabled](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/) on the InfluxDB instance to which you're connecting.*
#### `password`
Password used to access the InfluxDB server or cluster.
*Only required if [authorization is enabled](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/) on the InfluxDB instance to which you're connecting.*
#### `url`
URL of the InfluxDB server or cluster.
#### `type`
Defines the type or distribution of InfluxDB to which you are connecting.
The following options are available:
| InfluxDB Distribution | `type` Value |
| --------------------- | ------------ |
| InfluxDB OSS | `influx` |
| InfluxDB Enterprise | `influx-enterprise` |
#### `insecureSkipVerify`
Skips the SSL certificate verification process.
Set to `true` if you are using a self-signed SSL certificate on your InfluxDB server or cluster.
#### `default`
Set to `true` if you want the connection to be the default data connection used upon first login.
#### `telegraf`
The name of the Telegraf database on your InfluxDB server or cluster.
#### `organization`
The ID of the organization you want the data source to be associated with.
### Environment variables in .src files
`.src` files support the use of environment variables to populate InfluxDB connection details.
Environment variables can be loaded using the `"{{ .VARIABLE_KEY }}"` syntax:
```json
{
"id": "10000",
"name": "My InfluxDB",
"username": "{{ .INFLUXDB_USER }}",
"password": "{{ .INFLUXDB_PASS }}",
"url": "{{ .INFLUXDB_URL }}",
"type": "influx",
"insecureSkipVerify": false,
"default": true,
"telegraf": "telegraf",
"organization": "example_org"
}
```
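For example, the variables referenced above can be exported before launching Chronograf (the values and resources path below are placeholders):

```sh
# Provide the values referenced in the .src file (placeholder values)
export INFLUXDB_USER=chronouser
export INFLUXDB_PASS=examplepassword
export INFLUXDB_URL=http://localhost:8086

# Start Chronograf; it loads .src files from the resources path
chronograf --resources-path=/usr/share/chronograf/resources
```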
## Manage Kapacitor connections using the Chronograf UI
Kapacitor is the data processing component of the TICK stack.
To use Kapacitor in Chronograf, create Kapacitor connections and configure alert endpoints.
To create a Kapacitor connection using the Chronograf UI:
1. Open Chronograf and click **Configuration** (wrench icon) in the navigation menu.
2. Next to an existing [InfluxDB connection](#manage-influxdb-connections-using-the-chronograf-ui), click **Add Kapacitor Connection** if there are no existing Kapacitor connections or select **Add Kapacitor Connection** in the **Kapacitor Connection** dropdown list.
![Add a new Kapacitor connection in Chronograf](/img/chronograf/1-6-connection-kapacitor.png)
3. In the **Connection Details** section, enter values for the following fields:
<img src="/img/chronograf/1-7-kapacitor-connection-config.png" style="width:100%; max-width:600px;">
* **Kapacitor URL**: Enter the hostname or IP address of the Kapacitor instance and the port. The field is prefilled with `http://localhost:9092`.
* **Name**: Enter the name for this connection.
* **Username**: Enter the username that will be shared for this connection.
*Only required if [authorization is enabled](/{{< latest "kapacitor" >}}/administration/security/#kapacitor-authentication-and-authorization) on the Kapacitor instance or cluster to which you're connecting.*
* **Password**: Enter the password.
*Only required if [authorization is enabled](/{{< latest "kapacitor" >}}/administration/security/#kapacitor-authentication-and-authorization) on the Kapacitor instance or cluster to which you're connecting.*
4. Click **Continue**. If the connection is valid, the message "Kapacitor Created! Configuring endpoints is optional." appears. To configure alert endpoints, see [Configuring alert endpoints](/chronograf/v1.10/guides/configuring-alert-endpoints/).
## Manage Kapacitor connections using .kap files
Manually create `.kap` files to store Kapacitor connection details.
`.kap` files are simple JSON files that contain key-value paired connection details.
The location of `.kap` files is defined by the `--resources-path` command line option, which is, by default, the same as the [`--canned-path`](/chronograf/v1.10/administration/config-options/#canned-path-c).
A `.kap` file contains the details for a single Kapacitor connection.
Create a new file named `example.kap` (the filename is arbitrary) and place it in Chronograf's `resources-path` directory.
All `.kap` files should contain the following:
```json
{
"id": "10000",
"srcID": "10000",
"name": "My Kapacitor",
"url": "http://localhost:9092",
"active": true,
"organization": "example_org"
}
```
#### `id`
A unique, stringified non-negative integer.
Using a 4 or 5 digit number is recommended to avoid interfering with existing datasource IDs.
#### `srcID`
The unique, stringified non-negative integer `id` of the InfluxDB server or cluster with which the Kapacitor service is associated.
#### `name`
Any string you want to use as the display name of the Kapacitor connection.
#### `url`
URL of the Kapacitor server.
#### `active`
If `true`, specifies that this is the Kapacitor connection that should be used when displaying Kapacitor-related information in Chronograf.
#### `organization`
The ID of the organization you want the Kapacitor connection to be associated with.
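Putting these fields together, here is a minimal sketch for staging and sanity-checking a `.kap` file before handing it to Chronograf (the resources directory in the final comment is the package default and may differ on your system):

```shell
# Write a minimal .kap file in the working directory.
cat > example.kap <<'EOF'
{
  "id": "10000",
  "srcID": "10000",
  "name": "My Kapacitor",
  "url": "http://localhost:9092",
  "active": true,
  "organization": "example_org"
}
EOF

# Confirm the file is well-formed JSON before deploying it.
python3 -m json.tool example.kap > /dev/null && echo "example.kap is valid JSON"

# Then copy it into Chronograf's resources path (default shown):
#   sudo cp example.kap /usr/share/chronograf/resources/
```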
### Environment variables in .kap files
`.kap` files support the use of environment variables to populate Kapacitor connection details.
Environment variables can be loaded using the `"{{ .VARIABLE_KEY }}"` syntax:
```json
{
"id": "10000",
"srcID": "10000",
"name": "My Kapacitor",
"url": "{{ .KAPACITOR_URL }}",
"active": true,
"organization": "example_org"
}
```
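Because substitution happens when Chronograf loads the file, the referenced variable must exist in Chronograf's own environment. A sketch, assuming the service is managed with systemd:

```shell
# Export the variable referenced by "{{ .KAPACITOR_URL }}" in the .kap file.
export KAPACITOR_URL="http://localhost:9092"   # assumed Kapacitor address

# Chronograf must be (re)started from an environment that contains this
# variable for the substitution to take effect, for example:
#   sudo systemctl restart chronograf
echo "$KAPACITOR_URL"
```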
---
title: Import and export Chronograf dashboards
description: Share dashboard JSON files between Chronograf instances, or add dashboards as resources to include in a deployment.
menu:
chronograf_1_10:
weight: 120
parent: Administration
---
Chronograf makes it easy to recreate robust dashboards without having to manually configure them from the ground up. Import and export dashboards between instances, or add dashboards as resources to include in a deployment.
- [Export a dashboard](#export-a-dashboard)
- [Load a dashboard as a resource](#load-a-dashboard-as-a-resource)
- [Import a dashboard](#import-a-dashboard)
- [Required user roles](#required-user-roles)
## Required user roles
All users can export a dashboard. To import a dashboard, a user must have an Admin or Editor role.
| Task vs Role | Admin | Editor | Viewer |
|------------------|:-----:|:------:|:------:|
| Export Dashboard | ✅ | ✅ | ✅ |
| Import Dashboard | ✅ | ✅ | ❌ |
## Export a dashboard
1. On the Dashboards page, hover over the dashboard you want to export, and then click the **Export**
button on the right.
<img src="/img/chronograf/1-6-dashboard-export.png" alt="Exporting a Chronograf dashboard" style="width:100%;max-width:912px"/>
This downloads a JSON file containing dashboard information including template variables, cells and cell information such as the query, cell-sizing, color scheme, visualization type, etc.
> No time series data is exported with a dashboard.
> Exports include only dashboard-related information as mentioned above.
## Load a dashboard as a resource
Automatically load the dashboard as a resource (useful for adding a dashboard to a deployment).
1. Rename the dashboard `.json` extension to `.dashboard`.
2. Use the [`resources-path` configuration option](/chronograf/v1.10/administration/config-options/#--resources-path) to save the dashboard in the `/resources` directory (by default, `/usr/share/chronograf/resources`).
## Import a dashboard
1. On your Dashboards page, click the **Import Dashboard** button.
2. Either drag and drop or select the JSON export file to import.
3. Click the **Upload Dashboard** button.
The newly imported dashboard is included in your list of dashboards.
![Importing a Chronograf dashboard](/img/chronograf/1-6-dashboard-import.gif)
### Reconcile unmatched sources
If the data sources defined in the imported dashboard file do not match any of your local sources,
reconcile each of the unmatched sources during the import process, and then click **Done**.
![Reconcile unmatched sources](/img/chronograf/1-6-dashboard-import-reconcile.png)
---
title: Manage Chronograf users
description: >
Manage users and roles, including SuperAdmin permission and organization-bound users.
menu:
chronograf_1_10:
name: Manage Chronograf users
weight: 90
parent: Administration
---
**On this page**
* [Manage Chronograf users and roles](#manage-chronograf-users-and-roles)
* [Organization-bound users](#organization-bound-users)
* [InfluxDB and Kapacitor users within Chronograf](#influxdb-and-kapacitor-users-within-chronograf)
* [Chronograf-owned resources](#chronograf-owned-resources)
* [Chronograf-accessed resources](#chronograf-accessed-resources)
* [Readers](#readers-rolereader)
* [Members](#members-rolemember)
* [Viewers](#viewers-roleviewer)
* [Editors](#editors-roleeditor)
* [Admins](#admins-roleadmin)
* [Cross-organization SuperAdmin permission](#cross-organization-superadmin-permission)
* [All New Users are SuperAdmins configuration option](#all-new-users-are-superadmins-configuration-option)
* [Create users](#create-users)
* [Update users](#update-users)
* [Remove users](#remove-users)
* [Navigate organizations](#navigate-organizations)
* [Log in and log out](#log-in-and-log-out)
* [Switch the current organization](#switch-the-current-organization)
* [Purgatory](#purgatory)
## Manage Chronograf users and roles
{{% note %}}
**Note:** Support for organizations and user roles is available in Chronograf 1.4 or later.
First, OAuth 2.0 authentication must be configured (if it is, you'll see the
Chronograf Admin tab on the Admin menu).
For more information, see [Managing security](/chronograf/v1.10/administration/managing-security/).
{{% /note %}}
Chronograf includes four organization-bound user roles and one cross-organization SuperAdmin permission. In an organization, admins (with the `admin` role) or users with SuperAdmin permission can create, update, and assign roles to a user or remove a role assignment.
### Organization-bound users
Chronograf users are assigned one of the following organization-bound user roles, listed here in order of increasing capabilities:
- [`reader`](#readers-rolereader)
- [`member`](#members-rolemember)
- [`viewer`](#viewers-roleviewer)
- [`editor`](#editors-roleeditor)
- [`admin`](#admins-roleadmin)
Each of these roles, described in detail below, has different capabilities for the following Chronograf-owned or Chronograf-accessed resources.
#### InfluxDB and Kapacitor users within Chronograf
Chronograf uses InfluxDB and Kapacitor connections to manage user access control to InfluxDB and Kapacitor resources within Chronograf. The permissions of the InfluxDB and Kapacitor user specified within such a connection determine the capabilities for any Chronograf user with access (i.e., viewers, editors, and administrators) to that connection. Administrators include either an admin (`admin` role) or a user of any role with SuperAdmin permission.
{{% note %}}
**Note:** Chronograf users are entirely separate from InfluxDB and Kapacitor users.
The Chronograf user and authentication system applies to the Chronograf user interface.
InfluxDB and Kapacitor users and their permissions are managed separately.
[Chronograf connections](/chronograf/v1.10/administration/creating-connections/)
determine which InfluxDB or Kapacitor users to use when connecting to each service.
{{% /note %}}
#### Chronograf-owned resources
Chronograf-owned resources include internal resources that are under the full control of Chronograf, including:
- Kapacitor connections
- InfluxDB connections
- Dashboards
- Canned layouts
- Chronograf organizations
- Chronograf users
- Chronograf Status Page content for News Feeds and Getting Started
#### Chronograf-accessed resources
Chronograf-accessed resources include external resources that can be accessed using Chronograf, but are under limited control by Chronograf. Chronograf users with the roles of `viewer`, `editor`, and `admin`, or users with SuperAdmin permission, have equal access to these resources:
- InfluxDB databases, users, queries, and time series data (if using InfluxDB Enterprise, InfluxDB roles can be accessed too)
- Kapacitor alerts and alert rules (called tasks in Kapacitor)
#### Readers (role:`reader`)
Readers are Chronograf users who are only able to view dashboards in read-only mode. They are not able to alter or manipulate dashboard queries. Readers are not able to view tasks, alerts, admin pages, logs or any artifacts other than dashboards.
#### Members (role:`member`)
Members are Chronograf users who have been added to organizations but do not have any functional capabilities. Members cannot access any resources within an organization and thus effectively cannot use Chronograf. Instead, a member can only access Purgatory, where the user can [switch into organizations](#navigate-organizations) based on assigned roles.
By default, new organizations have a default role of `member`. If the Default organization is Public, anyone who can authenticate becomes a member, but cannot use Chronograf until an administrator assigns a different role.
#### Viewers (role:`viewer`)
Viewers are Chronograf users with effectively read-only capabilities for Chronograf-owned resources within their current organization:
- View canned dashboards
- View canned layouts
- View InfluxDB connections
- Switch current InfluxDB connection to other available connections
- Access InfluxDB resources through the current connection
- View the name of the current Kapacitor connection associated with each InfluxDB connection
- Access Kapacitor resources through the current connection
- [Switch into organizations](#navigate-organizations) where the user has a role
For Chronograf-accessed resources, viewers can:
- InfluxDB
- Read and write time series data
- Create, view, edit, and delete databases and retention policies
- Create, view, edit, and delete InfluxDB users
- View and kill queries
- _InfluxDB Enterprise_: Create, view, edit, and delete InfluxDB Enterprise roles
- Kapacitor
- View alerts
- Create, edit, and delete alert rules
#### Editors (role:`editor`)
Editors are Chronograf users with limited capabilities for Chronograf-owned resources within their current organization:
- Create, view, edit, and delete dashboards
- View canned layouts
- Create, view, edit, and delete InfluxDB connections
- Switch current InfluxDB connection to other available connections
- Access InfluxDB resources through the current connection
- Create, view, edit, and delete Kapacitor connections associated with InfluxDB connections
- Switch current Kapacitor connection to other available connections
- Access Kapacitor resources through the current connection
- [Switch into organizations](#navigate-organizations) where the user has a role
For Chronograf-accessed resources, editors can:
- InfluxDB
- Read and write time series data
- Create, view, edit, and delete databases and retention policies
- Create, view, edit, and delete InfluxDB users
- View and kill queries
- _InfluxDB Enterprise_: Create, view, edit, and delete InfluxDB Enterprise roles
- Kapacitor
- View alerts
- Create, edit, and delete alert rules
#### Admins (role:`admin`)
Admins are Chronograf users with all capabilities for the following Chronograf-owned resources within their current organization:
- Create, view, update, and remove Chronograf users
- Create, view, edit, and delete dashboards
- View canned layouts
- Create, view, edit, and delete InfluxDB connections
- Switch current InfluxDB connection to other available connections
- Access InfluxDB resources through the current connection
- Create, view, edit, and delete Kapacitor connections associated with InfluxDB connections
- Switch current Kapacitor connection to other available connections
- Access Kapacitor resources through the current connection
- [Switch into organizations](#navigate-organizations) where the user has a role
For Chronograf-accessed resources, admins can:
- InfluxDB
- Read and write time series data
- Create, view, edit, and delete databases and retention policies
- Create, view, edit, and delete InfluxDB users
- View and kill queries
- _InfluxDB Enterprise_: Create, view, edit, and delete InfluxDB Enterprise roles
- Kapacitor
- View alerts
- Create, edit, and delete alert rules
### Cross-organization SuperAdmin permission
SuperAdmin permission is a Chronograf permission that allows any user, regardless of role, to perform all administrator functions both within organizations, as well as across organizations. A user with SuperAdmin permission has _unlimited_ capabilities, including for the following Chronograf-owned resources:
* Create, view, update, and remove organizations
* Create, view, update, and remove users within an organization
* Grant or revoke the SuperAdmin permission of another user
* [Switch into any organization](#navigate-organizations)
* Toggle the Public setting of the Default organization
* Toggle the global config setting for [All new users are SuperAdmin](#all-new-users-are-superadmins-configuration-option)
Important SuperAdmin behaviors:
* SuperAdmin permission grants any user (whether `member`, `viewer`, `editor`, or `admin`) the full capabilities of admins and the SuperAdmin capabilities listed above.
* When a Chronograf user with SuperAdmin permission creates a new organization or switches into an organization where that user has no role, that SuperAdmin user is automatically assigned the `admin` role by default.
* SuperAdmin users cannot revoke their own SuperAdmin permission.
* SuperAdmin users are the only ones who can change the SuperAdmin permission of other Chronograf users. Regular admins who do not have SuperAdmin permission can perform normal operations on SuperAdmin users (create that user within their organization, change roles, and remove them), but they will not see that these users have SuperAdmin permission, nor will any of their actions affect the SuperAdmin permission of these users.
* If a user has their SuperAdmin permission revoked, that user will retain their assigned roles within their organizations.
#### All New Users are SuperAdmins configuration option
By default, the **Config** setting for "**All new users are SuperAdmins**" is **On**. Any user with SuperAdmin permission can toggle this under the **Admin > Chronograf > Organizations** tab. If this setting is **On**, any new user (who is created or who authenticates) will _automatically_ have SuperAdmin permission. If this setting is **Off**, any new user (who is created or who authenticates) will _not_ have SuperAdmin permission unless they are explicitly granted it later by another user with SuperAdmin permission.
### Create users
Role required: `admin`
**To create a user:**
1. Open Chronograf in your web browser and select **Admin {{< icon "crown" >}}**.
2. Click the **Users** tab and then click **Create User**.
3. Add the following user information:
* **Username**: Enter the username as provided by the OAuth provider.
* **Role**: Select the Chronograf role.
* **Provider**: Enter the OAuth 2.0 provider to be used for authentication. Valid values are: `github`, `google`, `auth0`, `heroku`, or other names defined in the [`GENERIC_NAME` environment variable](/chronograf/v1.10/administration/config-options#generic-name).
* **Scheme**: Displays `oauth2`, which is the only supported authentication scheme in this release.
4. Click **Save** to finish creating the user.
### Update users
Role required: `admin`
Only a user's role can be updated. A user's username, provider, and scheme cannot be updated. (Effectively, to "update" a user's username, provider, or scheme, the user must be removed and added again with the desired values.)
**To change a user's role:**
1. Open Chronograf in your web browser and select **Admin (crown icon) > Chronograf**.
2. Click the **Users** tab to display the list of users within the current organization.
3. Select a new role for the user. The update is automatically persisted.
### Remove users
Role required: `admin`
**To remove a user:**
1. Open Chronograf in your web browser and select **Admin {{< icon "crown" >}}**.
2. Click the **Users** tab to display the list of users.
3. Hover your cursor over the user you want to remove and then click **Remove** and **Confirm**.
### Navigate organizations
Chronograf is always used in the context of an organization. When a user logs in to Chronograf, that user will access only the resources owned by their current organization. The only exception to this is that users with SuperAdmin permission will also be able to [manage organizations](/chronograf/v1.10/administration/managing-organizations/) in the Chronograf Admin page.
#### Log in and log out
Log in from the Chronograf homepage using any configured OAuth 2.0 provider.
Log out by hovering over the **User {{< icon "person" >}}** in the left navigation bar and clicking **Log out**.
#### Switch the current organization
A user's current organization and role is highlighted in the **Switch Organizations** list, which can be found by hovering over the **User {{< icon "person" >}}** in the left navigation bar.
When a user has a role in more than one organization, that user can switch into any other organization where they have a role by selecting the desired organization in the **Switch Organizations** list.
#### Purgatory
If at any time, a user is a `member` within their current organization and does not have SuperAdmin permission, that user will be redirected to a page called Purgatory. There, the user will see their current organization and role, as well as a message to contact an administrator for access.
On the same page, that user will see a list of all of their organizations and roles. The user can switch into any listed organization where their role is `viewer`, `editor`, or `admin` by clicking **Log in** next to the desired organization.
**Note:** In the rare case that a user is granted SuperAdmin permission while in Purgatory, they will be able to switch into any listed organization, as expected.
---
title: Manage InfluxDB users in Chronograf
description: >
Enable authentication and manage InfluxDB OSS and InfluxDB Enterprise users in Chronograf.
aliases:
- /chronograf/v1.10/administration/user-management/
menu:
chronograf_1_10:
name: Manage InfluxDB users
weight: 60
parent: Administration
---
The **Chronograf Admin** page provides InfluxDB user management for InfluxDB OSS and InfluxDB Enterprise users.
{{% note %}}
***Note:*** For details on Chronograf user authentication and management, see [Managing security](/chronograf/v1.10/administration/managing-security/).
{{% /note %}}
{{% note %}}
#### Disabled administrative features
If connected to **InfluxDB OSS v2.x** or **InfluxDB Cloud**, all InfluxDB administrative
features are disabled in Chronograf. Use the InfluxDB OSS v2.x or InfluxDB Cloud user
interfaces, CLIs, or APIs to complete administrative tasks.
{{% /note %}}
**On this page:**
* [Enable authentication](#enable-authentication)
* [InfluxDB OSS user management](#influxdb-oss-user-management)
* [InfluxDB Enterprise user management](#influxdb-enterprise-user-management-using-the-ui)
## Enable authentication
Follow the steps below to enable authentication.
The steps are the same for InfluxDB OSS instances and InfluxDB Enterprise clusters.
{{% note %}}
_**InfluxDB Enterprise clusters:**_
Repeat the first three steps for each data node in a cluster.
{{% /note %}}
### Step 1: Enable authentication.
Enable authentication in the InfluxDB configuration file.
For most Linux installations, the configuration file is located in `/etc/influxdb/influxdb.conf`.
In the `[http]` section of the InfluxDB configuration file (`influxdb.conf`), uncomment the `auth-enabled` option and set it to `true`, as shown here:
```toml
[http]
# Determines whether HTTP endpoint is enabled.
# enabled = true
# The bind address used by the HTTP service.
# bind-address = ":8086"
# Determines whether HTTP authentication is enabled.
auth-enabled = true
```
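To make the same edit non-interactively, a `sed` one-liner works. The sketch below runs against a stub copy of the `[http]` section so you can verify the substitution before applying it (with `sudo`) to `/etc/influxdb/influxdb.conf`:

```shell
# Stub copy of the relevant section; on a real host, target
# /etc/influxdb/influxdb.conf instead.
cat > influxdb-http.conf <<'EOF'
[http]
# auth-enabled = false
EOF

# Uncomment the option and set it to true.
sed -i 's/^# *auth-enabled = false/auth-enabled = true/' influxdb-http.conf

grep 'auth-enabled' influxdb-http.conf
```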
### Step 2: Restart the InfluxDB service.
Restart the InfluxDB service for your configuration changes to take effect:
```
~# sudo systemctl restart influxdb
```
### Step 3: Create an admin user.
Because authentication is enabled, you need to create an [admin user](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#user-types-and-privileges) before you can do anything else in the database.
Run the `curl` command below to create an admin user, replacing:
* `localhost` with the IP or hostname of your InfluxDB OSS instance or one of your InfluxDB Enterprise data nodes
* `chronothan` with your own username
* `supersecret` with your own password (note that the password requires single quotes)
{{< keep-url >}}
```
~# curl -XPOST "http://localhost:8086/query" --data-urlencode "q=CREATE USER chronothan WITH PASSWORD 'supersecret' WITH ALL PRIVILEGES"
```
A successful `CREATE USER` query returns a blank result:
```
{"results":[{"statement_id":0}]} <--- Success!
```
### Step 4: Edit the InfluxDB source in Chronograf.
If you've already [connected your database to Chronograf](/chronograf/v1.10/introduction/installation/#connect-chronograf-to-your-influxdb-instance-or-influxdb-enterprise-cluster), update the connection configuration in Chronograf with your new username and password.
Edit existing InfluxDB database sources by navigating to the Chronograf configuration page and clicking on the name of the source.
## InfluxDB OSS User Management
On the **Chronograf Admin** page:
* View, create, and delete admin and non-admin users
* Change user passwords
* Assign admin and remove admin permissions to or from a user
![InfluxDB OSS user management](/img/chronograf/1-6-admin-usermanagement-oss.png)
InfluxDB users are either admin users or non-admin users.
See InfluxDB's [authentication and authorization](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#user-types-and-privileges) documentation for more information about those user types.
{{% note %}}
Chronograf currently does not support assigning InfluxDB database `READ` or `WRITE` access to non-admin users.
As a workaround, grant `READ`, `WRITE`, or `ALL` (`READ` and `WRITE`) permissions to non-admin users with the following curl commands, replacing anything inside `< >` with your own values:
#### Grant `READ` permission:
```sh
curl --request POST "http://<InfluxDB-IP>:8086/query?u=<username>&p=<password>" \
--data-urlencode "q=GRANT READ ON <database-name> TO <non-admin-username>"
```
#### Grant `WRITE` permission:
```sh
curl --request POST "http://<InfluxDB-IP>:8086/query?u=<username>&p=<password>" \
--data-urlencode "q=GRANT WRITE ON <database-name> TO <non-admin-username>"
```
#### Grant `ALL` permission:
```sh
curl --request POST "http://<InfluxDB-IP>:8086/query?u=<username>&p=<password>" \
--data-urlencode "q=GRANT ALL ON <database-name> TO <non-admin-username>"
```
In all cases, a successful `GRANT` query returns a blank result:
```sh
{"results":[{"statement_id":0}]} # <--- Success!
```
Remove `READ`, `WRITE`, or `ALL` permissions from non-admin users by replacing `GRANT` with `REVOKE` and `TO` with `FROM` in the curl commands above.
{{% /note %}}
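The `GRANT`/`REVOKE` pairing above can be captured in a small helper that builds the query string passed to `--data-urlencode` (the function name and placeholder values are illustrative, not part of Chronograf or InfluxDB):

```shell
# Build an InfluxQL privilege statement; note that GRANT uses TO
# while REVOKE uses FROM.
build_privilege_query() {
  local action="$1" perm="$2" db="$3" user="$4"
  if [ "$action" = "GRANT" ]; then
    echo "GRANT $perm ON $db TO $user"
  else
    echo "REVOKE $perm ON $db FROM $user"
  fi
}

build_privilege_query GRANT READ mydb chronothan    # GRANT READ ON mydb TO chronothan
build_privilege_query REVOKE READ mydb chronothan   # REVOKE READ ON mydb FROM chronothan
```

Pass the resulting string as the `q` parameter in the curl commands shown above.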
## InfluxDB Enterprise user management using the UI
To create, manage, and delete users, click **Admin {{< icon "crown" >}}** in the left navigation bar.
To create a user, do the following:
1. Select the **Users** tab.
2. Click **+ Create User**.
3. Add a user name.
4. Add a password.
5. Click **Create**.
6. Assign a role to the user in the `Roles` section. To create a role see [Roles](#roles).
7. Click **Apply Changes**.
To make changes to a user, click the username, make any changes, and click **Apply Changes**. To delete a user, click **Delete User**.
### User types
Admin users have the following permissions by default:
* [CreateDatabase](#createdatabase)
* [CreateUserAndRole](#createuserandrole)
* [DropData](#dropdata)
* [DropDatabase](#dropdatabase)
* [ManageContinuousQuery](#managecontinuousquery)
* [ManageQuery](#managequery)
* [ManageShard](#manageshard)
* [ManageSubscription](#managesubscription)
* [Monitor](#monitor)
* [ReadData](#readdata)
* [WriteData](#writedata)
Non-admin users have no permissions by default.
Assign permissions and roles to both admin and non-admin users.
### Permissions
#### AddRemoveNode
Permission to add or remove nodes from a cluster.
**Relevant `influxd-ctl` arguments**:
[`add-data`](/{{< latest "enterprise_influxdb" >}}/administration/cluster-commands/#add-data),
[`add-meta`](/{{< latest "enterprise_influxdb" >}}/administration/cluster-commands/#add-meta),
[`join`](/{{< latest "enterprise_influxdb" >}}/administration/cluster-commands/#join),
[`remove-data`](/{{< latest "enterprise_influxdb" >}}/administration/cluster-commands/#remove-data),
[`remove-meta`](/{{< latest "enterprise_influxdb" >}}/administration/cluster-commands/#remove-meta), and
[`leave`](/{{< latest "enterprise_influxdb" >}}/administration/cluster-commands/#leave)
**Pages in Chronograf that require this permission**: NA
#### CopyShard
Permission to copy shards.
**Relevant `influxd-ctl` arguments**:
[`copy-shard`](/{{< latest "enterprise_influxdb" >}}/administration/cluster-commands/#copy-shard)
**Pages in Chronograf that require this permission**: NA
#### CreateDatabase
Permission to create databases, create [retention policies](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#retention-policy-rp), alter retention policies, and view retention policies.
**Relevant InfluxQL queries**:
[`CREATE DATABASE`](/{{< latest "influxdb" "v1" >}}/query_language/manage-database/#create-database),
[`CREATE RETENTION POLICY`](/{{< latest "influxdb" "v1" >}}/query_language/manage-database/#create-retention-policies-with-create-retention-policy),
[`ALTER RETENTION POLICY`](/{{< latest "influxdb" "v1" >}}/query_language/manage-database/#modify-retention-policies-with-alter-retention-policy), and
[`SHOW RETENTION POLICIES`](/{{< latest "influxdb" "v1" >}}/query_language/explore-schema/#show-retention-policies)
**Pages in Chronograf that require this permission**: Dashboards, Data Explorer, and Databases on the Admin page
#### CreateUserAndRole
Permission to manage users and roles; create users, drop users, grant admin status to users, grant permissions to users, revoke admin status from users, revoke permissions from users, change user passwords, view user permissions, and view users and their admin status.
**Relevant InfluxQL queries**:
[`CREATE USER`](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#user-management-commands),
[`DROP USER`](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#general-admin-and-non-admin-user-management),
[`GRANT ALL PRIVILEGES`](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#user-management-commands),
[`GRANT [READ,WRITE,ALL]`](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#non-admin-user-management),
[`REVOKE ALL PRIVILEGES`](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#user-management-commands),
[`REVOKE [READ,WRITE,ALL]`](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#non-admin-user-management),
[`SET PASSWORD`](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#general-admin-and-non-admin-user-management),
[`SHOW GRANTS`](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#non-admin-user-management), and
[`SHOW USERS`](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#user-management-commands)
**Pages in Chronograf that require this permission**: Data Explorer, Dashboards, Users and Roles on the Admin page
#### DropData
Permission to drop data, in particular [series](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#series) and [measurements](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#measurement).
**Relevant InfluxQL queries**:
[`DROP SERIES`](/{{< latest "influxdb" "v1" >}}/query_language/manage-database/#drop-series-from-the-index-with-drop-series),
[`DELETE`](/{{< latest "influxdb" "v1" >}}/query_language/manage-database/#delete-series-with-delete), and
[`DROP MEASUREMENT`](/{{< latest "influxdb" "v1" >}}/query_language/manage-database/#delete-measurements-with-drop-measurement)
**Pages in Chronograf that require this permission**: NA
#### DropDatabase
Permission to drop databases and retention policies.
**Relevant InfluxQL queries**:
[`DROP DATABASE`](/{{< latest "influxdb" "v1" >}}/query_language/manage-database/#delete-a-database-with-drop-database) and
[`DROP RETENTION POLICY`](/{{< latest "influxdb" "v1" >}}/query_language/manage-database/#delete-retention-policies-with-drop-retention-policy)
**Pages in Chronograf that require this permission**: Data Explorer, Dashboards, Databases on the Admin page
#### KapacitorAPI
Permission to access the API for InfluxKapacitor Enterprise.
This does not include configuration-related API calls.
**Pages in Chronograf that require this permission**: NA
#### KapacitorConfigAPI
Permission to access the configuration-related API calls for InfluxKapacitor Enterprise.
**Pages in Chronograf that require this permission**: NA
#### ManageContinuousQuery
Permission to create, drop, and view [continuous queries](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#continuous-query-cq).
**Relevant InfluxQL queries**:
[`CreateContinuousQueryStatement`](/{{< latest "influxdb" "v1" >}}/query_language/continuous_queries/),
[`DropContinuousQueryStatement`](/{{< latest "influxdb" "v1" >}}/query_language/continuous_queries/#deleting-continuous-queries), and
[`ShowContinuousQueriesStatement`](/{{< latest "influxdb" "v1" >}}/query_language/continuous_queries/#listing-continuous-queries)
**Pages in Chronograf that require this permission**: Data Explorer, Dashboards
#### ManageQuery
Permission to view and kill queries.
**Relevant InfluxQL queries**:
[`SHOW QUERIES`](/{{< latest "influxdb" "v1" >}}/troubleshooting/query_management/#list-currently-running-queries-with-show-queries) and
[`KILL QUERY`](/{{< latest "influxdb" "v1" >}}/troubleshooting/query_management/#stop-currently-running-queries-with-kill-query)
**Pages in Chronograf that require this permission**: Queries on the Admin page
#### ManageShard
Permission to copy, delete, and view [shards](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#shard).
**Relevant InfluxQL queries**:
[`DropShardStatement`](/{{< latest "influxdb" "v1" >}}/query_language/manage-database/#delete-a-shard-with-drop-shard),
[`ShowShardGroupsStatement`](/{{< latest "influxdb" "v1" >}}/query_language/spec/#show-shard-groups), and
[`ShowShardsStatement`](/{{< latest "influxdb" "v1" >}}/query_language/spec/#show-shards)
**Pages in Chronograf that require this permission**: NA
#### ManageSubscription
Permission to create, drop, and view [subscriptions](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#subscription).
**Relevant InfluxQL queries**:
[`CREATE SUBSCRIPTION`](/{{< latest "influxdb" "v1" >}}/query_language/spec/#create-subscription),
[`DROP SUBSCRIPTION`](/{{< latest "influxdb" "v1" >}}/query_language/spec/#drop-subscription), and
[`SHOW SUBSCRIPTIONS`](/{{< latest "influxdb" "v1" >}}/query_language/spec/#show-subscriptions)
**Pages in Chronograf that require this permission**: Alerting
#### Monitor
Permission to view cluster statistics and diagnostics.
**Relevant InfluxQL queries**:
[`SHOW DIAGNOSTICS`](/{{< latest "influxdb" "v1" >}}/administration/server_monitoring/#show-diagnostics) and
[`SHOW STATS`](/{{< latest "influxdb" "v1" >}}/administration/server_monitoring/#show-stats)
**Pages in Chronograf that require this permission**: Data Explorer, Dashboards
#### ReadData
Permission to read data.
**Relevant InfluxQL queries**:
[`SHOW FIELD KEYS`](/{{< latest "influxdb" "v1" >}}/query_language/explore-schema/#show-field-keys),
[`SHOW MEASUREMENTS`](/{{< latest "influxdb" "v1" >}}/query_language/explore-schema/#show-measurements),
[`SHOW SERIES`](/{{< latest "influxdb" "v1" >}}/query_language/explore-schema/#show-series),
[`SHOW TAG KEYS`](/{{< latest "influxdb" "v1" >}}/query_language/explore-schema/#show-tag-keys),
[`SHOW TAG VALUES`](/{{< latest "influxdb" "v1" >}}/query_language/explore-schema/#show-tag-values), and
[`SHOW RETENTION POLICIES`](/{{< latest "influxdb" "v1" >}}/query_language/explore-schema/#show-retention-policies)
**Pages in Chronograf that require this permission**: Admin, Alerting, Dashboards, Data Explorer, Host List
#### WriteData
Permission to write data.
**Relevant InfluxQL queries**: NA
**Pages in Chronograf that require this permission**: NA
### Roles
Roles are groups of permissions. Assign roles to one or more users.
To create a role, do the following:
1. Click **{{< icon "crown" "v2" >}} Admin** in the left navigation bar. You will be taken to the `InfluxDB Admin` page.
2. Select the **Roles** tab.
3. Click **+ Create Role**.
4. Give the role a name.
5. Click **Create**.
6. Assign users to the role in the `Users` section.
7. Add permissions to the role in the `Permissions` section. You will see a list of databases and all permissions for each database. Select the permissions you want for the role.
8. Click **Apply Changes**.
The role with all permissions will appear in the list.
---
title: Manage Chronograf organizations
description: Create, configure, map, and remove organizations in Chronograf.
menu:
chronograf_1_10:
name: Manage Chronograf organizations
weight: 80
parent: Administration
---
**On this page:**
* [About Chronograf organizations](#about-chronograf-organizations)
* [Use the default organization](#use-the-default-organization)
* [Create organizations](#create-organizations)
* [Configure organizations](#configure-organizations)
* [Map organizations](#map-organizations)
* [Remove organizations](#remove-organizations)
## About Chronograf organizations
{{% note %}}
**Note:** Support for organizations and user roles is available in Chronograf 1.4 or later.
First, OAuth 2.0 authentication must be configured (if it is, you'll see the Chronograf Admin tab on the Admin menu).
For more information, see [managing security](/chronograf/v1.10/administration/managing-security/).
{{% /note %}}
For information about the new user roles and SuperAdmin permission, see [Managing Chronograf users](/chronograf/v1.10/administration/managing-chronograf-users/).
A Chronograf organization is a collection of Chronograf users who share common Chronograf-owned resources, including dashboards, InfluxDB connections, and Kapacitor connections. Organizations can be used to represent companies, functional units, projects, or teams. Chronograf users can be members of multiple organizations.
{{% note %}}
**Note:** Only users with SuperAdmin permission can manage organizations. Admins, editors, viewers, and members cannot manage organizations unless they have SuperAdmin permission.
{{% /note %}}
## Use the default organization
{{% note %}}
**Note:** The default organization can be used to support Chronograf as configured in versions earlier than 1.4.
Upon upgrading, any Chronograf resources that existed prior to 1.4 automatically become owned by the Default organization.
{{% /note %}}
Upon installation, the default organization is ready for use and allows Chronograf to be used as-is.
## Create organizations
Your company, organizational units, teams, and projects may require the creation of additional organizations, beyond the Default organization. Additional organizations can be created as described below.
**To create an organization:**
**Required permission:** SuperAdmin
1) In the Chronograf navigation bar, click **Admin** (crown icon) > **Chronograf** to open the **Chronograf Admin** page.
2) In the **All Orgs** tab, click **Create Organization**.
3) Under **Name**, click on **"Untitled Organization"** and enter the new organization name.
4) Under **Default Role**, select the default role for new users within that organization. Valid options include `member` (default), `viewer`, `editor`, and `admin`.
5) Click **Save**.
## Configure organizations
**Required permission:** SuperAdmin
You can configure existing and new organizations in the **Organizations** tab of the **Chronograf Admin** page as follows:
* **Name**: The name of the organization. Click on the organization name to change it.
> ***Note:*** You can change the Default organization's name, but that organization will always be the default organization.
* **Public**: [Default organization only] Indicates whether a user can authenticate without being explicitly added to the organization. When **Public** is toggled to **Off**, new users cannot authenticate into your Chronograf instance unless they have been explicitly added to the organization by an administrator.
> ***Note:*** All organizations other than the Default organization require users to be explicitly added by an administrator.
* **Default Role**: The role granted to new users by default when added to an organization. Valid options are `member` (default), `viewer`, `editor`, and `admin`.
See the following pages for more information about managing Chronograf users and security:
* [Manage Chronograf users](/chronograf/v1.10/administration/managing-chronograf-users/)
* [Manage security](/chronograf/v1.10/administration/managing-security/)
## Map organizations
**To create an organization mapping:**
**Required permission:** SuperAdmin
1) In the Chronograf navigation bar, select **Admin** (crown icon) > **Chronograf** to open the **Chronograf Admin** page.
2) Click the **Org Mappings** tab to view a list of organization mappings.
3) To add an organization mapping, click the **Create Mapping** button. A new row is added to the listing.
4) In the new row, enter the following:
   - **Scheme**: Select `oauth2`.
- **Provider**: Enter the provider. Valid values include `Google` and `GitHub`.
- **Provider Org**: [Optional] Enter the email domain(s) you want to accept.
- **Organization**: Select the organization that can use this authentication provider.
**To remove an organization mapping:**
**Required permission:** SuperAdmin
1) In the Chronograf navigation bar, select **Admin** (crown icon) > **Chronograf** to open the **Chronograf Admin** page.
2) Click the **Org Mappings** tab to view a list of organization mappings.
3) To remove an organization mapping, click the **Delete** button at the end of the mapping row you want to remove, and then confirm the action.
## Remove organizations
When an organization is removed:
* Users within that organization are removed from that organization and will be logged out of the application.
* All users with roles in that organization are updated to no longer have a role in that organization.
* All resources owned by that organization are deleted.
**To remove an organization:**
**Required permission:** SuperAdmin
1) In the navigation bar of the Chronograf application, select **Admin** (crown icon) > **Chronograf** to open the **Chronograf Admin** page.
2) Click the **All Orgs** tab to view a list of organizations.
3) To the right of the organization that you want to remove, click the **Remove** button (trashcan icon) and then confirm by clicking the **Save** button.
---
title: Manage Chronograf security
description: Manage Chronograf security with OAuth 2.0 providers.
aliases: /chronograf/v1.10/administration/security-best-practices/
menu:
chronograf_1_10:
name: Manage Chronograf security
weight: 70
parent: Administration
---
To enhance security, configure Chronograf to authenticate and authorize with [OAuth 2.0](https://oauth.net/) and use TLS/HTTPS.
(Basic authentication with username and password is also available.)
- [Configure Chronograf to authenticate with OAuth 2.0](#configure-chronograf-to-authenticate-with-oauth-20)
1. [Generate a Token Secret](#generate-a-token-secret)
2. [Set configurations for your OAuth provider](#set-configurations-for-your-oauth-provider)
3. [Configure authentication duration](#configure-authentication-duration)
- [Configure Chronograf to authenticate with a username and password](#configure-chronograf-to-authenticate-with-a-username-and-password)
- [Configure TLS (Transport Layer Security) and HTTPS](#configure-tls-transport-layer-security-and-https)
## Configure Chronograf to authenticate with OAuth 2.0
{{% note %}}
After configuring OAuth 2.0, the Chronograf Admin tab becomes visible.
You can then set up [multiple organizations](/chronograf/v1.10/administration/managing-organizations/)
and [users](/chronograf/v1.10/administration/managing-influxdb-users/).
{{% /note %}}
Configure Chronograf to use an OAuth 2.0 provider and JWT (JSON Web Token) to authenticate users and enable role-based access controls.
(For more details on OAuth and JWT, see [RFC 6749](https://tools.ietf.org/html/rfc6749) and [RFC 7519](https://tools.ietf.org/html/rfc7519).)
{{% note %}}
#### OAuth PKCE
OAuth configurations in **Chronograf 1.9+** use [OAuth PKCE](https://oauth.net/2/pkce/) to
mitigate the threat of having the authorization code intercepted during the OAuth token exchange.
OAuth integrations that do not currently support PKCE are not affected.
**To disable OAuth PKCE** and revert to the previous token exchange, use the
[`--oauth-no-pkce` Chronograf configuration option](/chronograf/v1.10/administration/config-options/#--oauth-no-pkce)
or set the `OAUTH_NO_PCKE` environment variable to `true`.
{{% /note %}}
### Generate a Token Secret
To configure any of the supported OAuth 2.0 providers to work with Chronograf,
you must configure the `TOKEN_SECRET` environment variable (or command line option).
Chronograf will use this secret to generate the JWT Signature for all access tokens.
1. Generate a high-entropy pseudo-random string.
For example, to do this with OpenSSL, run this command:
```sh
openssl rand -base64 256 | tr -d '\n'
```
2. Set the environment variable:
```
TOKEN_SECRET=<mysecret>
```
{{% note %}}
***InfluxDB Enterprise clusters:*** If you are running multiple Chronograf servers in a high availability configuration,
set the `TOKEN_SECRET` environment variable on each server to ensure that users can stay logged in.
{{% /note %}}
### JWKS Signature Verification (optional)
If the OAuth provider implements OpenID Connect with RS256 signatures, you need to enable this feature with the `USE_ID_TOKEN` variable
and provide a JSON Web Key Set (JWKS) document (holding the certificate chain) to validate the RSA signatures against.
This certificate chain is regularly rolled over (when the certificates expire), so it is fetched from the `JWKS_URL` on demand.
**Example:**
```sh
export USE_ID_TOKEN=true
export JWKS_URL=https://example.com/adfs/discovery/keys
```
### Set configurations for your OAuth provider
To enable OAuth 2.0 authorization and authentication in Chronograf,
you must set configuration options that are specific for the OAuth 2.0 authentication provider you want to use.
Configuration steps for the following supported authentication providers are provided in these sections below:
* [GitHub](#configure-github-authentication)
* [Google](#configure-google-authentication)
* [Auth0](#configure-auth0-authentication)
* [Heroku](#configure-heroku-authentication)
* [Okta](#configure-okta-authentication)
* [Gitlab](#configure-gitlab-authentication)
* [Azure Active Directory](#configure-azure-active-directory-authentication)
* [Bitbucket](#configure-bitbucket-authentication)
* [Configure Chronograf to use any OAuth 2.0 provider](#configure-chronograf-to-use-any-oauth-20-provider)
{{% note %}}
If you haven't already, you must first [generate a token secret](#generate-a-token-secret) before proceeding.
{{% /note %}}
---
#### Configure GitHub authentication
1. Follow the steps to [Register a new OAuth application](https://github.com/settings/applications/new)
on GitHub to obtain your Client ID and Client Secret.
On the GitHub application registration page, enter the following values:
- **Homepage URL**: the full Chronograf server name and port.
     For example, to run the application locally with default settings, set this URL to `http://localhost:8888`.
- **Authorization callback URL**: the **Homepage URL** plus the callback URL path `/oauth/github/callback`
(for example, `http://localhost:8888/oauth/github/callback`).
2. Set the Chronograf environment variables with the credentials provided by GitHub:
```sh
export GH_CLIENT_ID=<github-client-id>
export GH_CLIENT_SECRET=<github-client-secret>
# If using Github Enterprise
export GH_URL=https://github.custom-domain.com
```
3. If you haven't already, set the Chronograf environment with your token secret:
```sh
export TOKEN_SECRET=Super5uperUdn3verGu355!
```
Alternatively, set environment variables using the equivalent command line options:
- [`--github-url`](/chronograf/v1.10/administration/config-options/#--github-url)
- [`--github-client-id`](/chronograf/v1.10/administration/config-options/#--github-client-id-i)
- [`--github-client-secret`](/chronograf/v1.10/administration/config-options/#--github-client-secret-s)
- [`--token_secret=`](/chronograf/v1.10/administration/config-options/#--token-secret-t)
For details on the command line options and environment variables, see [GitHub OAuth 2.0 authentication options](/chronograf/v1.10/administration/config-options#github-specific-oauth-20-authentication-options).
##### GitHub organizations (optional)
To require GitHub organization membership for authenticating users, set the `GH_ORGS` environment variable with the name of your organization.
```sh
export GH_ORGS=biffs-gang
```
If the user is not a member of the specified GitHub organization, then the user will not be granted access.
To support multiple organizations, use a comma-delimited list.
```sh
export GH_ORGS=hill-valley-preservation-sociey,the-pinheads
```
{{% note %}}
When logging in for the first time, make sure to grant access to the organization you configured.
The OAuth application can only see membership in organizations it has been granted access to.
{{% /note %}}
##### Example GitHub OAuth configuration
```bash
# Github Enterprise base URL
export GH_URL=https://github.mydomain.com
# GitHub Client ID
export GH_CLIENT_ID=b339dd4fddd95abec9aa
# GitHub Client Secret
export GH_CLIENT_SECRET=260041897d3252c146ece6b46ba39bc1e54416dc
# Secret used to generate JWT tokens
export TOKEN_SECRET=Super5uperUdn3verGu355!
# Restrict to specific GitHub organizations
export GH_ORGS=biffs-gang
```
#### Configure Google authentication
1. Follow the steps in [Obtain OAuth 2.0 credentials](https://developers.google.com/identity/protocols/OpenIDConnect#getcredentials)
   to obtain the required Google OAuth 2.0 credentials, including a Google Client ID and Client Secret.
2. Verify that Chronograf is publicly accessible using a fully-qualified domain name so that Google can properly redirect users back to the application.
3. Set the Chronograf environment variables for the Google OAuth 2.0 credentials and **Public URL** used to access Chronograf:
```sh
export GOOGLE_CLIENT_ID=812760930421-kj6rnscmlbv49pmkgr1jq5autblc49kr.apps.googleusercontent.com
export GOOGLE_CLIENT_SECRET=wwo0m29iLirM6LzHJWE84GRD
export PUBLIC_URL=http://localhost:8888
```
4. If you haven't already, set the Chronograf environment with your token secret:
```sh
export TOKEN_SECRET=Super5uperUdn3verGu355!
```
Alternatively, the environment variables discussed above can be set using their corresponding command line options:
* [`--google-client-id=`](/chronograf/v1.10/administration/config-options/#google-client-id)
* [`--google-client-secret=`](/chronograf/v1.10/administration/config-options/#google-client-secret)
* [`--public-url=`](/chronograf/v1.10/administration/config-options/#public-url)
* [`--token_secret=`](/chronograf/v1.10/administration/config-options/#token-secret-t)
For details on Chronograf command line options and environment variables, see [Google OAuth 2.0 authentication options](/chronograf/v1.10/administration/config-options#google-specific-oauth-20-authentication-options).
##### Optional Google domains
Configure Google authentication to restrict access to Chronograf to specific domains.
Set the `GOOGLE_DOMAINS` environment variable or the [`--google-domains`](/chronograf/v1.10/administration/config-options/#google-domains) command line option.
Separate multiple domains using commas.
For example, to permit access only from `biffspleasurepalace.com` and `savetheclocktower.com`, set the environment variable as follows:
```sh
export GOOGLE_DOMAINS=biffspleasurepalace.com,savetheclocktower.com
```
#### Configure Auth0 authentication
See [OAuth 2.0](https://auth0.com/docs/protocols/oauth2) for details about the Auth0 implementation.
1. Set up your Auth0 account to obtain the necessary credentials.
1. From the Auth0 user dashboard, click **Create Application**.
2. Choose **Regular Web Applications** as the type of application and click **Create**.
3. In the **Settings** tab, set **Token Endpoint Authentication** to **None**.
4. Set **Allowed Callback URLs** to `https://www.example.com/oauth/auth0/callback` (substituting `example.com` with the [`PUBLIC_URL`](/chronograf/v1.10/administration/config-options/#general-authentication-options) of your Chronograf instance)
5. Set **Allowed Logout URLs** to `https://www.example.com` (substituting `example.com` with the [`PUBLIC_URL`](/chronograf/v1.10/administration/config-options/#general-authentication-options) of your Chronograf instance)
<!-- ["OIDC Conformant"](https://auth0.com/docs/api-auth/intro#how-to-use-the-new-flows). -->
2. Set the Chronograf environment variables based on your Auth0 client credentials:
* `AUTH0_DOMAIN` (Auth0 domain)
* `AUTH0_CLIENT_ID` (Auth0 Client ID)
* `AUTH0_CLIENT_SECRET` (Auth0 client Secret)
* `PUBLIC_URL` (Public URL, used in callback URL and logout URL above)
3. If you haven't already, set the Chronograf environment with your token secret:
```sh
export TOKEN_SECRET=Super5uperUdn3verGu355!
```
Alternatively, the environment variables discussed above can be set using their corresponding command line options:
* [`--auth0-domain`](/chronograf/v1.10/administration/config-options/#auth0-specific-oauth-20-authentication-options)
* [`--auth0-client-id`](/chronograf/v1.10/administration/config-options/#auth0-specific-oauth-20-authentication-options)
* [`--auth0-client-secret`](/chronograf/v1.10/administration/config-options/#auth0-specific-oauth-20-authentication-options)
* [`--public-url`](/chronograf/v1.10/administration/config-options/#general-authentication-options)
##### Auth0 organizations (optional)
Auth0 can be customized to the operator's requirements, so it has no official concept of an "organization."
Organizations are supported in Chronograf using a lightweight `app_metadata` key that can be inserted into Auth0 user profiles automatically or manually.
To assign a user to an organization, add an `organization` key to the user `app_metadata` field with the value corresponding to the user's organization.
For example, you can assign the user Marty McFly to the "time-travelers" organization by setting `app_metadata` to `{"organization": "time-travelers"}`.
This can be done either manually by an operator or automatically through the use of an [Auth0 Rule](https://auth0.com/docs/rules) or a [pre-user registration Auth0 Hook](https://auth0.com/docs/hooks/concepts/pre-user-registration-extensibility-point).
Next, you will need to set the Chronograf [`AUTH0_ORGS`](/chronograf/v1.10/administration/config-options/#auth0-organizations) environment variable to a comma-separated list of the allowed organizations.
For example, if you have one group of users with an `organization` key set to `biffs-gang` and another group with an `organization` key set to `time-travelers`, you can permit access to both with this environment variable: `AUTH0_ORGS=biffs-gang,time-travelers`.
An `--auth0-organizations` command line option is also available, but it is limited to a single organization and does not accept a comma-separated list like its environment variable equivalent.
#### Configure Heroku authentication
1. Obtain a client ID and application secret for Heroku by following the guide posted [here](https://devcenter.heroku.com/articles/oauth#register-client).
2. Set the Chronograf environment variables based on your Heroku client credentials:
```sh
export HEROKU_CLIENT_ID=<client-id-from-heroku>
export HEROKU_SECRET=<client-secret-from-heroku>
```
3. If you haven't already, set the Chronograf environment with your token secret:
```sh
export TOKEN_SECRET=Super5uperUdn3verGu355!
```
##### Heroku organizations (optional)
To restrict access to members of specific Heroku organizations,
use the `HEROKU_ORGS` environment variable (or associated command line option).
Multiple values must be comma-separated.
For example, to permit access from the `hill-valley-preservation-society` organization and `the-pinheads` organization,
use the following environment variable:
```sh
export HEROKU_ORGS=hill-valley-preservation-sociey,the-pinheads
```
#### Configure Okta authentication
1. Create an Okta web application by following the steps in the Okta documentation: [Implement the Authorization Code Flow](https://developer.okta.com/docs/guides/implement-auth-code/overview/).
1. In the **General Settings** section, find the **Allowed grant types** listing and select
only the **Client acting on behalf of a user:** **Authorization Code** option.
   2. In the **LOGIN** section, set the **Login redirect URIs** and **Initiate login URI** to `http://localhost:8888/oauth/okta/callback` (the default callback URL for Chronograf).
2. Set the following Chronograf environment variables:
```bash
GENERIC_NAME=okta
# The client ID is provided in the "Client Credentials" section of the Okta dashboard.
GENERIC_CLIENT_ID=<okta_client_ID>
# The client secret is in the "Client Credentials" section of the Okta dashboard.
GENERIC_CLIENT_SECRET=<okta_client_secret>
GENERIC_AUTH_URL=https://dev-553212.oktapreview.com/oauth2/default/v1/authorize
GENERIC_TOKEN_URL=https://dev-553212.oktapreview.com/oauth2/default/v1/token
GENERIC_API_URL=https://dev-553212.oktapreview.com/oauth2/default/v1/userinfo
PUBLIC_URL=http://localhost:8888
TOKEN_SECRET=secretsecretsecret
GENERIC_SCOPES=openid,profile,email
```
3. If you haven't already, set the Chronograf environment with your token secret:
```sh
export TOKEN_SECRET=Super5uperUdn3verGu355!
```
#### Configure GitLab authentication
1. In your GitLab profile, [create a new OAuth2 authentication service](https://docs.gitlab.com/ee/integration/oauth_provider.html#adding-an-application-through-the-profile).
1. Provide a name for your application, then enter your publicly accessible Chronograf URL with the `/oauth/gitlab/callback` path as your GitLab **callback URL**.
(For example, `http://<your_chronograf_server>:8888/oauth/gitlab/callback`.)
2. Click **Submit** to save the service details.
3. Make sure your application has **openid** and **read_user** scopes.
2. Copy the provided **Application Id** and **Secret** and set the following environment variables:
   > In the examples below, note the use of `gitlab-server-example.com` and `chronograf-server-example.com` in URLs.
> These should be replaced by the actual URLs used to access each service.
```bash
GENERIC_NAME="gitlab"
GENERIC_CLIENT_ID=<gitlab_application_id>
GENERIC_CLIENT_SECRET=<gitlab_secret>
GENERIC_AUTH_URL="https://gitlab.com/oauth/authorize"
GENERIC_TOKEN_URL="https://gitlab.com/oauth/token"
TOKEN_SECRET=<mytokensecret>
GENERIC_SCOPES="api,openid,read_user"
PUBLIC_URL="http://<chronograf-host>:8888"
GENERIC_API_URL="https://gitlab.com/api/v3/user"
```
The equivalent command line options are:
```bash
--generic-name=gitlab
--generic-client-id=<gitlab_application_id>
--generic-client-secret=<gitlab_secret>
--generic-auth-url=https://gitlab.com/oauth/authorize
--generic-token-url=https://gitlab.com/oauth/token
--token-secret=<mytokensecret>
--generic-scopes=openid,read_user
--generic-api-url=https://gitlab.com/api/v3/user
--public-url=http://<chronograf-host>:8888/
```
#### Configure Azure Active Directory authentication
1. [Create an Azure Active Directory application](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal#create-an-azure-active-directory-application).
Note the following information: `<APPLICATION-ID>`, `<TENANT-ID>`, and `<APPLICATION-KEY>`.
You'll need these to define your Chronograf environment.
2. Be sure to register a reply URL in your Azure application settings.
This should match the calling URL from Chronograf.
Otherwise, you will get an error stating no reply address is registered for the application.
For example, if Chronograf is configured with a `GENERIC_NAME` value of AzureAD, the reply URL would be `http://localhost:8888/oauth/AzureAD/callback`.
3. After completing the application provisioning within Azure AD, complete the configuration in Chronograf.
   Using the metadata from your Azure AD instance, set the following environment variables in `/etc/default/chronograf`:
```txt
GENERIC_TOKEN_URL=https://login.microsoftonline.com/<<TENANT-ID>>/oauth2/token
TENANT=<<TENANT-ID>>
GENERIC_NAME=AzureAD
GENERIC_API_KEY=userPrincipalName
GENERIC_SCOPES=openid
GENERIC_CLIENT_ID=<<APPLICATION-ID>>
GENERIC_AUTH_URL=https://login.microsoftonline.com/<<TENANT-ID>>/oauth2/authorize?resource=https://graph.windows.net
GENERIC_CLIENT_SECRET=<<APPLICATION-KEY>>
TOKEN_SECRET=secret
GENERIC_API_URL=https://graph.windows.net/<<TENANT-ID>>/me?api-version=1.6
PUBLIC_URL=http://localhost:8888
```
Note: If you've configured TLS/SSL, modify the `PUBLIC_URL` to ensure you're using HTTPS.
#### Configure Bitbucket authentication
1. Complete the instructions to [Use OAuth on Bitbucket Cloud](https://support.atlassian.com/bitbucket-cloud/docs/use-oauth-on-bitbucket-cloud/), and include the following information:
- **Callback URL**: <http://localhost:8888/oauth/bitbucket/callback>
- **Permissions**: Account read, email
2. Set the following Chronograf environment variables for Bitbucket in `/etc/default/chronograf`:
```sh
export TOKEN_SECRET=...
export GENERIC_CLIENT_ID=...
export GENERIC_CLIENT_SECRET=...
export GENERIC_AUTH_URL=https://bitbucket.org/site/oauth2/authorize
export GENERIC_TOKEN_URL=https://bitbucket.org/site/oauth2/access_token
export GENERIC_API_URL=https://api.bitbucket.org/2.0/user
export GENERIC_SCOPES=account
export PUBLIC_URL=http://localhost:8888
export GENERIC_NAME=bitbucket
```
#### Configure Chronograf to use any OAuth 2.0 provider
Chronograf can be configured to work with any OAuth 2.0 provider, including those defined above, by using the generic configuration options below.
Additionally, the generic provider implements OpenID Connect (OIDC) as implemented by Active Directory Federation Services (AD FS).
When using the generic configuration, some or all of the following environment variables (or corresponding command line options) are required (depending on your OAuth 2.0 provider):
* `GENERIC_CLIENT_ID`: Application client [identifier](https://tools.ietf.org/html/rfc6749#section-2.2) issued by the provider
* `GENERIC_CLIENT_SECRET`: Application client [secret](https://tools.ietf.org/html/rfc6749#section-2.3.1) issued by the provider
* `GENERIC_AUTH_URL`: Provider's authorization [endpoint](https://tools.ietf.org/html/rfc6749#section-3.1) URL
* `GENERIC_TOKEN_URL`: Provider's token [endpoint](https://tools.ietf.org/html/rfc6749#section-3.2) URL used by the Chronograf client to obtain an access token
* `USE_ID_TOKEN`: Enable OpenID [id_token](https://openid.net/specs/openid-connect-core-1_0.html#rfc.section.3.1.3.3) processing
* `JWKS_URL`: Provider's JWKS [endpoint](https://tools.ietf.org/html/rfc7517#section-4.7) used by the client to validate RSA signatures
* `GENERIC_API_URL`: Provider's [OpenID UserInfo endpoint](https://connect2id.com/products/server/docs/api/userinfo) URL used by Chronograf to request user data
* `GENERIC_API_KEY`: JSON lookup key for [OpenID UserInfo](https://connect2id.com/products/server/docs/api/userinfo) (known to be required for Microsoft Azure, with the value `userPrincipalName`)
* `GENERIC_SCOPES`: [Scopes](https://tools.ietf.org/html/rfc6749#section-3.3) of user data required for your instance of Chronograf, such as user email and OAuth provider organization
- Multiple values must be space-delimited, e.g. `user:email read:org`
- These may vary by OAuth 2.0 provider
- Default value: `user:email`
* `PUBLIC_URL`: Full public URL used to access Chronograf from a web browser, i.e. where Chronograf is hosted
- Used by Chronograf, for example, to construct the callback URL
* `TOKEN_SECRET`: Used to validate OAuth [state](https://tools.ietf.org/html/rfc6749#section-4.1.1) response. (see above)
##### Optional environment variables
The following environment variables (and corresponding command line options) are also available for optional use:
* `GENERIC_DOMAINS`: Email domain(s) that a user's email address must belong to in order to authenticate.
* `GENERIC_NAME`: Value used in the callback URL in conjunction with `PUBLIC_URL`, e.g. `<PUBLIC_URL>/oauth/<GENERIC_NAME>/callback`
- This value is also used in the text for the Chronograf Login button
- Default value is `generic`
- So, for example, if `PUBLIC_URL` is `https://localhost:8888` and `GENERIC_NAME` is its default value, then the callback URL would be `https://localhost:8888/oauth/generic/callback`, and the Chronograf Login button would read `Log in with Generic`
- While using Chronograf, this value should be supplied in the `Provider` field when adding a user or creating an organization mapping.
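The callback URL composition described above can be sketched in shell (the values here are placeholders, not values read from Chronograf itself):

```sh
# Sketch: how the OAuth callback URL is composed from PUBLIC_URL and
# GENERIC_NAME (placeholder values; substitute your own deployment's URL)
PUBLIC_URL="https://localhost:8888"
GENERIC_NAME="generic"
CALLBACK_URL="${PUBLIC_URL}/oauth/${GENERIC_NAME}/callback"
echo "$CALLBACK_URL"
```

With these placeholder values, the script prints `https://localhost:8888/oauth/generic/callback`, which is the URL you would register with your OAuth provider.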
##### Example: OIDC with AD FS
See [Enabling OpenID Connect with AD FS 2016](https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/development/enabling-openid-connect-with-ad-fs) for a walk through of the server configuration.
Exports for Chronograf (e.g. in `/etc/default/chronograf`):
```sh
PUBLIC_URL="https://example.com:8888"
GENERIC_CLIENT_ID="chronograf"
GENERIC_CLIENT_SECRET="KW-TkvH7vzYeJMAKj-3T1PdHx5bxrZnoNck2KlX8"
GENERIC_AUTH_URL="https://example.com/adfs/oauth2/authorize"
GENERIC_TOKEN_URL="https://example.com/adfs/oauth2/token"
GENERIC_SCOPES="openid"
GENERIC_API_KEY="upn"
USE_ID_TOKEN="true"
JWKS_URL="https://example.com/adfs/discovery/keys"
TOKEN_SECRET="ZNh2N9toMwUVQxTVEe2ZnnMtgkh3xqKZ"
```
{{% note %}}
Do not use special characters for the `GENERIC_CLIENT_ID` as AD FS may split strings at the special character, resulting in an identifier mismatch.
{{% /note %}}
{{% note %}}
#### Troubleshoot OAuth errors
##### ERRO[0053]
A **ERRO[0053]** error indicates that a primary email is not found for the specified user.
A user must have a primary email.
```
ERRO[0053] Unable to get OAuth Group malformed email address, expected "..." to contain @ symbol
```
{{% /note %}}
### Configure authentication duration
By default, user authentication remains valid for 30 days using a cookie stored in the web browser.
To configure a different authorization duration, set a duration using the `AUTH_DURATION` environment variable.
**Example:**
To set the authentication duration to 1 hour, use the following shell command:
```sh
export AUTH_DURATION=1h
```
The duration uses the Go (golang) [time duration format](https://golang.org/pkg/time/#ParseDuration), so the largest time unit is `h` (hours).
For example, to set the duration to 45 days (1,080 hours), use:
```sh
export AUTH_DURATION=1080h
```
To require re-authentication every time the browser is closed, set `AUTH_DURATION` to `0`.
This makes the cookie transient (aka "in-memory").
## Configure Chronograf to authenticate with a username and password
Chronograf can be configured to authenticate users by username and password ("basic authentication").
Turn on basic authentication to restrict HTTP access to Chronograf to selected users.
{{% warn %}}
[OAuth 2.0](#configure-chronograf-to-authenticate-with-oauth-20) is the preferred method for authentication.
Only use basic authentication in cases where an OAuth 2.0 integration is not possible.
{{% /warn %}}
When using basic authentication, *all users have SuperAdmin status*; Chronograf authorization rules are not enforced.
For more information, see [Cross-organization SuperAdmin status](/chronograf/v1.10/administration/managing-chronograf-users/#cross-organization-superadmin-status).
To enable basic authentication, run Chronograf with the `--htpasswd` flag or use the `HTPASSWD` environment variable.
```sh
chronograf --htpasswd <path to .htpasswd file>
```
The `.htpasswd` file contains users and their passwords, and should be created with a password file utility such as `htpasswd` (included in the `apache2-utils` package).
For more information about how to restrict access with basic authentication, see NGINX documentation on [Restricting Access with HTTP Basic Authentication](https://docs.nginx.com/nginx/admin-guide/security-controls/configuring-http-basic-authentication/).
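As a sketch (the file path, username, and password below are placeholders), you can generate an `.htpasswd` entry with the `htpasswd` utility, or fall back to `openssl`, which produces a compatible APR1-MD5 hash:

```shell
# Create /tmp/chronograf.htpasswd with one user named "admin".
# Prefer htpasswd (apache2-utils); fall back to openssl if it isn't installed.
if command -v htpasswd >/dev/null 2>&1; then
  htpasswd -c -b /tmp/chronograf.htpasswd admin examplePassword
else
  printf 'admin:%s\n' "$(openssl passwd -apr1 examplePassword)" > /tmp/chronograf.htpasswd
fi

# Start Chronograf against the password file:
# chronograf --htpasswd /tmp/chronograf.htpasswd
```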
## Configure TLS (Transport Layer Security) and HTTPS
Chronograf supports the TLS (Transport Layer Security) cryptographic protocol to provide server authentication, data confidentiality, and data integrity.
Using TLS secures traffic between the Chronograf server and web browsers and enables the use of HTTPS.
InfluxData recommends using HTTPS to communicate securely with Chronograf applications.
If you are not using a TLS termination proxy, you can run your Chronograf server with TLS connections.
Chronograf includes command line and environment variable options for configuring TLS certificates and key files.
When configured, users can use HTTPS to securely communicate with your Chronograf applications.
{{% note %}}
HTTPS helps prevent nefarious agents from stealing the JWT and using it to spoof a valid user against the server.
{{% /note %}}
### Configure TLS for Chronograf
The Chronograf server has command line and environment variable options to specify the certificate and key files.
The server reads and parses a public/private key pair from these files.
The files must contain PEM-encoded data.
All Chronograf command line options have corresponding environment variables.
To configure Chronograf to support TLS, do the following:
1. Specify the certificate file using the `TLS_CERTIFICATE` environment variable or the `--cert` CLI option.
2. Specify the key file using the `TLS_PRIVATE_KEY` environment variable or `--key` CLI option.
{{% note %}}
If both the TLS certificate and key are in the same file, specify them using the `TLS_CERTIFICATE` environment variable (or the `--cert` CLI option).
{{% /note %}}
3. _(Optional)_ To specify which TLS cipher suites to allow, use the `TLS_CIPHERS` environment variable or the `--tls-ciphers` CLI option.
Chronograf supports all cipher suites in the
[Go `crypto/tls` package](https://golang.org/pkg/crypto/tls/#pkg-constants)
and, by default, allows them all.
4. _(Optional)_ To specify the minimum and maximum TLS versions to allow, use the
`TLS_MIN_VERSION` and `TLS_MAX_VERSION` environment variables or the
`--tls-min-version` and `--tls-max-version` CLI options.
By default, the minimum TLS version allowed is `tls1.2` and the maximum version is
unlimited.
#### Example with CLI options
```sh
chronograf \
--cert=my.crt \
--key=my.key \
--tls-ciphers=TLS_RSA_WITH_AES_256_CBC_SHA,TLS_AES_128_GCM_SHA256 \
--tls-min-version=tls1.2 \
--tls-max-version=tls1.3
```
#### Example with environment variables
```sh
TLS_CERTIFICATE=my.crt \
TLS_PRIVATE_KEY=my.key \
TLS_CIPHERS=TLS_RSA_WITH_AES_256_CBC_SHA,TLS_AES_128_GCM_SHA256 \
TLS_MIN_VERSION=tls1.2 \
TLS_MAX_VERSION=tls1.3 \
chronograf
```
#### Docker example with environment variables
```sh
docker run \
-v /host/path/to/certs:/certs \
-e TLS_CERTIFICATE=/certs/my.crt \
-e TLS_PRIVATE_KEY=/certs/my.key \
-e TLS_CIPHERS=TLS_RSA_WITH_AES_256_CBC_SHA,TLS_AES_128_GCM_SHA256 \
-e TLS_MIN_VERSION=tls1.2 \
-e TLS_MAX_VERSION=tls1.3 \
chronograf:{{< current-version >}}
```
### Test with self-signed certificates
To test your setup, you can use a self-signed certificate.
{{% warn %}}
Don't use self-signed certificates in production environments.
{{% /warn %}}
To create a certificate and key in one file with OpenSSL:
```sh
openssl req -x509 -newkey rsa:4096 -sha256 -nodes -keyout testing.pem -out testing.pem -subj "/CN=localhost" -days 365
```
Next, set the environment variable `TLS_CERTIFICATE`:
```sh
export TLS_CERTIFICATE=$PWD/testing.pem
```
Run Chronograf:
```sh
./chronograf
INFO[0000] Serving chronograf at https://[::]:8888 component=server
```
In the first log message you should see `https` rather than `http`.
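To double-check what the certificate contains, you can inspect it with `openssl`. The following sketch regenerates a throwaway certificate in `/tmp` so the commands are self-contained:

```shell
# Generate a throwaway self-signed certificate (key and certificate in one PEM file).
openssl req -x509 -newkey rsa:2048 -sha256 -nodes \
  -keyout /tmp/testing.pem -out /tmp/testing.pem \
  -subj "/CN=localhost" -days 365

# Print the subject and validity dates; the subject should show CN = localhost.
openssl x509 -in /tmp/testing.pem -noout -subject -dates
```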

---
title: Migrate to a Chronograf HA configuration
description: >
Migrate a Chronograf single instance configuration using BoltDB to a Chronograf high-availability (HA) cluster configuration using etcd.
menu:
chronograf_1_10:
weight: 10
parent: Administration
---
Use [`chronoctl`](/chronograf/v1.10/tools/chronoctl/) to migrate your Chronograf configuration store from BoltDB to a shared `etcd` data store used for Chronograf high-availability (HA) clusters.
{{% note %}}
#### Update resource IDs
Migrating Chronograf to a shared data source creates new source IDs for each resource.
Update external links to Chronograf dashboards to reflect new source IDs.
{{% /note %}}
1. Stop the Chronograf server by killing the `chronograf` process.
2. To prevent data loss, we **strongly recommend** that you back up your Chronograf data store before migrating to a Chronograf cluster.
3. [Install and start etcd](/chronograf/v1.10/administration/create-high-availability/#install-and-start-etcd).
4. Run the following command, specifying the local BoltDB file and the `etcd` endpoint beginning with `etcd://`.
(We recommend adding the prefix `bolt://` to an absolute path.
Do not use the prefix to specify a relative path to the BoltDB file.)
```sh
chronoctl migrate \
--from bolt:///path/to/chronograf-v1.db \
--to etcd://localhost:2379
```
##### Provide etcd authentication credentials
If authentication is enabled on `etcd`, use the standard URI basic
authentication format to define a username and password. For example:
```sh
etcd://username:password@localhost:2379
```
##### Provide etcd TLS credentials
If TLS is enabled on `etcd`, provide your TLS certificate credentials using
the following query parameters in your etcd URL:
- **cert**: Path to client certificate file or PEM file
- **key**: Path to client key file
- **ca**: Path to trusted CA certificates
```sh
etcd://127.0.0.1:2379?cert=/tmp/client.crt&key=/tmp/client.key&ca=/tmp/ca.crt
```
5. Update links to Chronograf (for example, from external sources) to reflect your new URLs:
- **from BoltDB:**
http://localhost:8888/sources/1/status
- **to etcd:**
http://localhost:8888/sources/373921399246786560/status
6. Set up a load balancer for Chronograf.
7. [Start Chronograf](/chronograf/v1.10/administration/create-high-availability/#start-chronograf).

---
title: Prebuilt dashboards in Chronograf
description: Import prebuilt dashboards into Chronograf based on Telegraf plugins.
menu:
chronograf_1_10:
name: Prebuilt dashboards in Chronograf
weight: 50
parent: Administration
---
Chronograf lets you import a variety of prebuilt dashboards that visualize metrics collected by specific [Telegraf input plugins](/{{< latest "telegraf" >}}/plugins). The following Telegraf-related dashboard templates are available.
For details on how to import dashboards while adding a connection in Chronograf, see [Creating connections](/chronograf/v1.10/administration/creating-connections/#manage-influxdb-connections-using-the-chronograf-ui).
## Docker
The Docker dashboard displays the following information:
- nCPU
- Total Memory
- Containers
- System Memory Usage
- System Load
- Disk I/O
- Filesystem Usage
- Block I/O per Container
- CPU Usage per Container
- Memory Usage % per Container
- Memory Usage per Container
- Net I/O per Container
### Plugins
- [`docker` plugin](/{{< latest "telegraf" >}}/plugins/#input-docker)
- [`disk` plugin](/{{< latest "telegraf" >}}/plugins/#input-disk)
- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#input-mem)
- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#input-diskio)
- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#input-system)
- [`cpu` plugin](/{{< latest "telegraf" >}}/plugins/#input-cpu)
## Kubernetes Node
The Kubernetes Node dashboard displays the following information:
- Total Nodes
- Total Pod Count
- Total Containers
- K8s - Node Millicores
- K8s - Node Memory Bytes
- K8s - Pod Millicores
- K8s - Pod Memory Bytes
- K8s - Pod TX Bytes/Second
- K8s - Pod RX Bytes/Second
- K8s - Kubelet Millicores
- K8s - Kubelet Memory Bytes
### Plugins
- [kubernetes](/{{< latest "telegraf" >}}/plugins/#input-kubernetes)
## Kubernetes Overview
The Kubernetes Overview dashboard displays the following information:
- Total Nodes
- Total Pod Count
- Total Containers
- K8s - Node Millicores
- K8s - Node Memory Bytes
- K8s - Pod Millicores
- K8s - Pod Memory Bytes
- K8s - Pod TX Bytes/Second
- K8s - Pod RX Bytes/Second
- K8s - Kubelet Millicores
- K8s - Kubelet Memory Bytes
### Plugins
- [kubernetes](/{{< latest "telegraf" >}}/plugins/#input-kubernetes)
## Kubernetes Pod
The Kubernetes Pod dashboard displays the following information:
- Total Nodes
- Total Pod Count
- Total Containers
- K8s - Pod Millicores
- K8s - Pod Memory Bytes
- K8s - Pod Millicores
- K8s - Pod Memory Bytes
- K8s - Pod TX Bytes/Second
### Plugins
- [kubernetes](/{{< latest "telegraf" >}}/plugins/#input-kubernetes)
## Riak
The Riak dashboard displays the following information:
- Riak - Total Memory Bytes
- Riak - Object Byte Size
- Riak - Number of Siblings/Minute
- Riak - Latency (ms)
- Riak - Reads and Writes/Minute
- Riak - Active Connections
- Riak - Read Repairs/Minute
### Plugins
- [`riak` plugin](/{{< latest "telegraf" >}}/plugins/#input-riak)
## Consul
The Consul dashboard displays the following information:
- Consul - Number of Critical Health Checks
- Consul - Number of Warning Health Checks
### Plugins
- [`consul` plugin](/{{< latest "telegraf" >}}/plugins/#input-consul)
## Consul Telemetry
The Consul Telemetry dashboard displays the following information:
- Consul Agent - Number of Go Routines
- Consul Agent - Runtime Alloc Bytes
- Consul Agent - Heap Objects
- Consul - Number of Agents
- Consul - Leadership Election
- Consul - HTTP Request Time (ms)
- Consul - Leadership Change
- Consul - Number of Serf Events
### Plugins
[`consul` plugin](/{{< latest "telegraf" >}}/plugins/#input-consul)
## Mesos
The Mesos dashboard displays the following information:
- Mesos Active Slaves
- Mesos Tasks Active
- Mesos Tasks
- Mesos Outstanding Offers
- Mesos Available/Used CPUs
- Mesos Available/Used Memory
- Mesos Master Uptime
### Plugins
- [`mesos` plugin](/{{< latest "telegraf" >}}/plugins/#input-mesos)
## RabbitMQ
The RabbitMQ dashboard displays the following information:
- RabbitMQ - Overview
- RabbitMQ - Published/Delivered per Second
- RabbitMQ - Acked/Unacked per Second
### Plugins
- [`rabbitmq` plugin](/{{< latest "telegraf" >}}/plugins/#input-rabbitmq)
## System
The System dashboard displays the following information:
- System Uptime
- CPUs
- RAM
- Memory Used %
- Load
- I/O
- Network
- Processes
- Swap
### Plugins
- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#input-system)
- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#input-mem)
- [`cpu` plugin](/{{< latest "telegraf" >}}/plugins/#input-cpu)
- [`disk` plugin](/{{< latest "telegraf" >}}/plugins/#input-disk)
- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#input-diskio)
- [`net` plugin](/{{< latest "telegraf" >}}/plugins/#input-net)
- [`processes` plugin](/{{< latest "telegraf" >}}/plugins/#input-processes)
- [`swap` plugin](/{{< latest "telegraf" >}}/plugins/#input-swap)
## VMware vSphere Overview
The VMware vSphere Overview dashboard gives an overview of your VMware vSphere Clusters and uses metrics from the `vsphere_cluster_*` and `vsphere_vm_*` set of measurements. It displays the following information:
- Cluster Status
- Uptime for :clustername:
- CPU Usage for :clustername:
- RAM Usage for :clustername:
- Datastores - Usage Capacity
- Network Usage for :clustername:
- Disk Throughput for :clustername:
- VM Status
- VM CPU Usage MHz for :clustername:
- VM Mem Usage for :clustername:
- VM Network Usage for :clustername:
- VM CPU % Ready for :clustername:
### Plugins
- [`vsphere` plugin](/{{< latest "telegraf" >}}/plugins/#input-vmware-vsphere)
## Apache
The Apache dashboard displays the following information:
- System Uptime
- CPUs
- RAM
- Memory Used %
- Load
- I/O
- Network
- Workers
- Scoreboard
- Apache Uptime
- CPU Load
- Requests per Sec
- Throughput
- Response Codes
- Apache Log
### Plugins
- [`apache` plugin](/{{< latest "telegraf" >}}/plugins/#input-apache)
- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#input-system)
- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#input-mem)
- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#input-diskio)
- [`net` plugin](/{{< latest "telegraf" >}}/plugins/#input-net)
- [`logparser` plugin](/{{< latest "telegraf" >}}/plugins/#input-logparser)
## ElasticSearch
The ElasticSearch dashboard displays the following information:
- ElasticSearch - Query Throughput
- ElasticSearch - Open Connections
- ElasticSearch - Query Latency
- ElasticSearch - Fetch Latency
- ElasticSearch - Suggest Latency
- ElasticSearch - Scroll Latency
- ElasticSearch - Indexing Latency
- ElasticSearch - JVM GC Collection Counts
- ElasticSearch - JVM GC Latency
- ElasticSearch - JVM Heap Usage
### Plugins
- [`elasticsearch` plugin](/{{< latest "telegraf" >}}/plugins/#input-elasticsearch)
## InfluxDB
The InfluxDB dashboard displays the following information:
- System Uptime
- System Load
- Network
- Memory Usage
- CPU Utilization %
- Filesystems Usage
- # Measurements
- nCPU
- # Series
- # Measurements per DB
- # Series per DB
- InfluxDB Memory Heap
- InfluxDB Active Requests
- InfluxDB - HTTP Requests/Min
- InfluxDB GC Activity
- InfluxDB - Written Points/Min
- InfluxDB - Query Executor Duration
- InfluxDB - Write Errors
- InfluxDB - Client Errors
- # CQ/Minute
### Plugins
- [`influxdb` plugin](/{{< latest "telegraf" >}}/plugins/#input-influxdb)
- [`cpu` plugin](/{{< latest "telegraf" >}}/plugins/#input-cpu)
- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#input-system)
- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#input-mem)
- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#input-diskio)
- [`net` plugin](/{{< latest "telegraf" >}}/plugins/#input-net)
## Memcached
The Memcached dashboard displays the following information:
- Memcached - Current Connections
- Memcached - Get Hits/Second
- Memcached - Get Misses/Second
- Memcached - Delete Hits/Second
- Memcached - Delete Misses/Second
- Memcached - Incr Hits/Second
- Memcached - Incr Misses/Second
- Memcached - Current Items
- Memcached - Total Items
- Memcached - Bytes Stored
- Memcached - Bytes Read/Sec
- Memcached - Bytes Written/Sec
- Memcached - Evictions/10 Seconds
### Plugins
- [`memcached` plugin](/{{< latest "telegraf" >}}/plugins/#input-memcached)
## NSQ
The NSQ dashboard displays the following information:
- NSQ - Channel Client Count
- NSQ - Channel Messages Count
- NSQ - Topic Count
- NSQ - Server Count
- NSQ - Topic Messages
- NSQ - Topic Messages on Disk
- NSQ - Topic Ingress
- NSQ - Topic Egress
### Plugins
- [`nsq` plugin](/{{< latest "telegraf" >}}/plugins/#input-nsq)
## PostgreSQL
The PostgreSQL dashboard displays the following information:
- System Uptime
- nCPU
- System Load
- Total Memory
- Memory Usage
- Filesystems Usage
- CPU Usage
- System Load
- I/O
- Network
- Processes
- Swap
- PostgreSQL rows out/sec
- PostgreSQL rows in/sec
- PostgreSQL - Buffers
- PostgreSQL commit/rollback per sec
- Postgres deadlocks/conflicts
### Plugins
- [`postgresql` plugin](/{{< latest "telegraf" >}}/plugins/#input-postgresql)
- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#input-system)
- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#input-mem)
- [`cpu` plugin](/{{< latest "telegraf" >}}/plugins/#input-cpu)
- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#input-diskio)
## HAProxy
The HAProxy dashboard displays the following information:
- HAProxy - Number of Servers
- HAProxy - Sum HTTP 2xx
- HAProxy - Sum HTTP 4xx
- HAProxy - Sum HTTP 5xx
- HAProxy - Frontend HTTP Requests/Second
- HAProxy - Frontend Sessions/Second
- HAProxy - Frontend Session Usage %
- HAProxy - Frontend Security Denials/Second
- HAProxy - Frontend Request Errors/Second
- HAProxy - Frontend Bytes/Second
- HAProxy - Backend Average Response Time
- HAProxy - Backend Connection Errors/Second
- HAProxy - Backend Queued Requests/Second
- HAProxy - Backend Average Requests Queue Time (ms)
- HAProxy - Backend Error Responses/Second
### Plugins
- [`haproxy` plugin](/{{< latest "telegraf" >}}/plugins/#input-haproxy)
## NGINX
The NGINX dashboard displays the following information:
- NGINX - Client Connection
- NGINX - Client Errors
- NGINX - Client Requests
- NGINX - Active Client State
### Plugins
- [`nginx` plugin](/{{< latest "telegraf" >}}/plugins/#input-nginx)
## Redis
The Redis dashboard displays the following information:
- Redis - Connected Clients
- Redis - Blocked Clients
- Redis - CPU
- Redis - Memory
### Plugins
- [`redis` plugin](/{{< latest "telegraf" >}}/plugins/#input-redis)
## VMware vSphere VMs
The VMWare vSphere VMs dashboard gives an overview of your VMware vSphere virtual machines and includes metrics from the `vsphere_vm_*` set of measurements. It displays the following information:
- Uptime for :vmname:
- CPU Usage for :vmname:
- RAM Usage for :vmname:
- CPU Usage Average for :vmname:
- RAM Usage Average for :vmname:
- CPU Ready Average % for :vmname:
- Network Usage for :vmname:
- Total Disk Latency for :vmname:
### Plugins
- [`vsphere` plugin](/{{< latest "telegraf" >}}/plugins/#input-vsphere)
## VMware vSphere Hosts
The VMWare vSphere Hosts dashboard displays the following information:
- Uptime for :esxhostname:
- CPU Usage for :esxhostname:
- RAM Usage for :esxhostname:
- CPU Usage Average for :esxhostname:
- RAM Usage Average for :esxhostname:
- CPU Ready Average % for :esxhostname:
- Network Usage for :esxhostname:
- Total Disk Latency for :esxhostname:
### Plugins
- [`vsphere` plugin](/{{< latest "telegraf" >}}/plugins/#input-vsphere)
## PHPfpm
The PHPfpm dashboard displays the following information:
- PHPfpm - Accepted Connections
- PHPfpm - Processes
- PHPfpm - Slow Requests
- PHPfpm - Max Children Reached
### Plugins
- [`phpfpm` plugin](/{{< latest "telegraf" >}}/plugins/#input-phpfpm)
## Win System
The Win System dashboard displays the following information:
- System - CPU Usage
- System - Available Bytes
- System - TX Bytes/Second
- System - RX Bytes/Second
- System - Load
### Plugins
- [`win_services` plugin](/{{< latest "telegraf" >}}/plugins/#input-windows-services)
## MySQL
The MySQL dashboard displays the following information:
- System Uptime
- nCPU
- MySQL uptime
- Total Memory
- System Load
- Memory Usage
- InnoDB Buffer Pool Size
- InnoDB Buffer Usage
- Max Connections
- Open Connections
- I/O
- Network
- MySQL Connections/User
- MySQL Received Bytes/Sec
- MySQL Sent Bytes/Sec
- MySQL Connections
- MySQL Queries/Sec
- MySQL Slow Queries
- InnoDB Data
### Plugins
- [`mySQL` plugin](/{{< latest "telegraf" >}}/plugins/#input-mysql)
- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#input-system)
- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#input-mem)
## Ping
The Ping dashboard displays the following information:
- Ping - Packet Loss Percent
- Ping - Response Times (ms)
### Plugins
- [`ping` plugin](/{{< latest "telegraf" >}}/plugins/#input-ping)

---
title: Restore a Chronograf database
description: >
If you're rolling back to a previous version of Chronograf, restore your internal database.
menu:
chronograf_1_10:
weight: 110
parent: Administration
---
Chronograf uses [Bolt](https://github.com/boltdb/bolt) to store Chronograf-specific key-value data.
Generally speaking, you should never have to manually administer your internal Chronograf database.
However, rolling back to a previous version of Chronograf does require restoring
the data and data-structure specific to that version.
Chronograf's internal database, `chronograf-v1.db`, is stored at your specified
[`--bolt-path`](/chronograf/v1.10/administration/config-options/#bolt-path-b) which,
by default, is the current working directory where the `chronograf` binary is executed.
In the upgrade process, an unmodified backup of your Chronograf data is stored inside the
`backup` directory before any necessary migrations are run.
This is done as a convenience in case issues arise with the data migrations
or the upgrade process in general.
The `backup` directory contains a copy of your previous `chronograf-v1.db` file.
Each backup file is appended with the corresponding Chronograf version.
For example, if you moved from Chronograf 1.4.4.2 to {{< latest-patch >}}, there will be a
file called `backup/chronograf-v1.db.1.4.4.2`.
_**Chronograf backup directory structure**_
{{% filesystem-diagram %}}
- chronograf-working-dir/
- chronograf-v1.db
- backup/
- chronograf-v1.db.1.4.4.0
- chronograf-v1.db.1.4.4.1
- chronograf-v1.db.1.4.4.2
- ...
{{% /filesystem-diagram %}}
## Roll back to a previous version
If there is an issue during the upgrade process or you simply need to roll
back to an earlier version of Chronograf, you must restore the data file
associated with that specific version, then downgrade and restart Chronograf.
The process is as follows:
### 1. Locate your desired backup file
Inside your `backup` directory, locate the database file with the appended Chronograf
version that corresponds to the version to which you are rolling back.
For example, if rolling back to 1.4.4.2, find `backup/chronograf-v1.db.1.4.4.2`.
### 2. Stop your Chronograf server
Stop the Chronograf server by killing the `chronograf` process.
### 3. Replace your current database with the backup
Remove the current database file and replace it with the desired backup file:
```bash
# Remove the current database
rm chronograf-v1.db
# Replace it with the desired backup file
cp backup/chronograf-v1.db.1.4.4.2 chronograf-v1.db
```
### 4. Install the desired Chronograf version
Install the desired Chronograf version.
Chronograf releases can be viewed and downloaded either from the
[InfluxData downloads](https://portal.influxdata.com/downloads)
page or from the [Chronograf releases](https://github.com/influxdata/chronograf/releases)
page on GitHub.
### 5. Start the Chronograf server
Restart the Chronograf server.
Chronograf will use the `chronograf-v1.db` in the current working directory.
## Rerun update migrations
This process can also be used to rerun Chronograf update migrations.
Go through steps 1-5, but on [step 3](#3-replace-your-current-database-with-the-backup)
select the backup you want to use as a base for the migrations.
When Chronograf starts again, it will automatically run the data migrations
required for the installed version.

---
title: Upgrade Chronograf
description: Upgrade to the latest version of Chronograf.
menu:
chronograf_1_10:
name: Upgrade
weight: 10
parent: Administration
---
If you're upgrading from Chronograf 1.3.x, first install {{< latest-patch version="1.7" >}}, and then install {{< latest-patch >}}.
If you're upgrading from Chronograf 1.4 or later, [download and install](https://portal.influxdata.com/downloads) the most recent version of Chronograf, and then restart Chronograf.
{{% note %}}
Installing a new version of Chronograf automatically clears the localStorage settings.
{{% /note %}}
After upgrading, see [Getting Started](/chronograf/v1.10/introduction/getting-started/) to get up and running.

---
title: Guides for Chronograf
description: Step-by-step instructions for using Chronograf's features.
menu:
chronograf_1_10:
name: Guides
weight: 30
---
Follow the links below to explore Chronograf's features.
{{< children >}}

---
title: Advanced Kapacitor usage
description: >
Use Kapacitor with Chronograf to manage alert history, TICKscripts, and Flux tasks.
menu:
chronograf_1_10:
weight: 100
parent: Guides
related:
- /{{< latest "kapacitor" >}}/introduction/getting-started/
- /{{< latest "kapacitor" >}}/working/kapa-and-chrono/
- /{{< latest "kapacitor" >}}/working/flux/
---
Chronograf provides a user interface for [Kapacitor](/{{< latest "kapacitor" >}}/),
InfluxData's processing framework for creating alerts, running ETL jobs, and detecting anomalies in your data.
Learn how Kapacitor interacts with Chronograf.
- [Manage Kapacitor alerts](#manage-kapacitor-alerts)
- [Manage Kapacitor tasks](#manage-kapacitor-tasks)
## Manage Kapacitor alerts
Chronograf provides information about Kapacitor alerts on the Alert History page.
Chronograf writes Kapacitor alert data to InfluxDB as time series data.
It stores the data in the `alerts` [measurement](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#measurement)
in the `chronograf` database.
By default, this data is subject to an infinite [retention policy](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#retention-policy-rp) (RP).
If you expect to have a large number of alerts or do not want to store your alert
history forever, consider shortening the [duration](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#duration)
of the default retention policy.
### Modify the retention policy of the chronograf database
Use the Chronograf **Admin page** to modify the retention policy in the `chronograf` database.
In the Databases tab:
1. Click **{{< icon "crown" "v2" >}} InfluxDB Admin** in the left navigation bar.
2. Hover over the retention policy list of the `chronograf` database and click **Edit**
next to the retention policy to update.
3. Update the **Duration** of the retention policy.
The minimum supported duration is one hour (`1h`) and the maximum is infinite (`INF` or `∞`).
_See [supported duration units](/{{< latest "influxdb" "v1" >}}/query_language/spec/#duration-units)._
4. Click **Save**.
If you set the retention policy's duration to one hour (`1h`), InfluxDB
automatically deletes any alerts that occurred before the past hour.
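The same change can also be made without the UI by issuing an InfluxQL statement directly against InfluxDB. This is a sketch that assumes the retention policy uses the default name `autogen`; substitute your retention policy name if it differs:

```sql
-- Shorten the retention policy on the chronograf database to one hour.
ALTER RETENTION POLICY "autogen" ON "chronograf" DURATION 1h
```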
## Manage Kapacitor tasks
- [Manage Kapacitor TICKscripts](#manage-kapacitor-tickscripts)
- [Manage Kapacitor Flux tasks](#manage-kapacitor-flux-tasks)
### Manage Kapacitor TICKscripts
Chronograf lets you view and manage all Kapacitor TICKscripts for a selected Kapacitor subscription using the **TICKscripts** page.
1. To manage Kapacitor TICKscripts in Chronograf, click
**{{< icon "alert" "v2">}} Alerting** in the left navigation bar and select **TICKscripts**.
Do one or more of the following:
- View Kapacitor TICKscript tasks. You can view up to 100 TICKscripts at a time. If you have more than 100 TICKscripts, the list will be paginated at the bottom of the page. You can also filter your TICKscripts by name.
- View TICKscript task type.
- Enable and disable TICKscript tasks.
- Create new TICKscript tasks.
- Update TICKscript tasks.
- Rename a TICKscript. Note, renaming a TICKscript updates the `var name` variable within the TICKscript.
- Delete TICKscript tasks.
- Create alerts using the Alert Rule Builder. See [Configure Chronograf alert rules](/chronograf/v1.10/guides/create-alert-rules/#configure-chronograf-alert-rules).
2. Click **Exit** when finished.
### Manage Kapacitor Flux tasks
**Kapacitor 1.6+** supports Flux tasks.
Chronograf lets you view and manage Flux tasks for a selected Kapacitor subscription using the **Flux Tasks** page.
To manage Kapacitor Flux tasks in Chronograf, click
**{{< icon "alert" "v2">}} Alerting** in the left navigation bar and select the **Flux Tasks** option. Do one or more of the following:
- View and filter Kapacitor Flux tasks by name.
- View Kapacitor Flux task activity.
- Enable and disable Kapacitor Flux tasks.
- Delete Kapacitor Flux tasks.
For more information on Flux tasks and Kapacitor see [Use Flux tasks with Kapacitor](/{{< latest "kapacitor" >}}/working/flux/).

---
title: Analyze logs with Chronograf
description: Analyze log information using Chronograf.
menu:
chronograf_1_10:
weight: 120
parent: Guides
---
Chronograf gives you the ability to view, search, filter, visualize, and analyze log information from a variety of sources.
This helps you recognize and diagnose patterns, then quickly dive into the logged events that led up to an incident.
- [Set up logging](#set-up-logging)
- [View logs in Chronograf](#view-logs-in-chronograf)
- [Configure the log viewer](#configure-the-log-viewer)
- [Show or hide the log status histogram](#show-or-hide-the-log-status-histogram)
- [Logs in dashboards](#logs-in-dashboards)
## Set up logging
Log data is a first-class citizen in InfluxDB and is populated using the available log-related [Telegraf input plugins](/{{< latest "telegraf" >}}/plugins/#input-plugins):
- [Docker Log](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/docker_log/README.md)
- [Graylog](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/graylog/README.md)
- [Logparser](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/logparser/README.md)
- [Logstash](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/logstash/README.md)
- [Syslog](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/syslog/README.md)
- [Tail](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/tail/README.md)
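For example, a minimal Telegraf configuration using the Syslog input might look like the following (the listen address, output URL, and database name are assumptions; adjust them for your environment):

```toml
# Listen for RFC 5424 syslog messages over TCP on port 6514.
[[inputs.syslog]]
  server = "tcp://:6514"

# Write the collected log data to InfluxDB.
[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "telegraf"
```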
## View logs in Chronograf
Chronograf has a dedicated log viewer accessed by clicking the **Log Viewer** button in the left navigation.
{{< img-hd src="/img/chronograf/1-6-logs-nav-log-viewer.png" alt="Log viewer in the left nav" />}}
The log viewer provides a detailed histogram showing the time-based distribution of log entries color-coded by log severity.
It also includes a live stream of logs that can be searched, filtered, and paused to analyze specific time ranges.
Logs are pulled from the `syslog` measurement.
_Other log inputs and alternate log measurement options will be available in future updates._
{{< img-hd src="/img/chronograf/1-7-log-viewer-overview.png" alt="Chronograf log viewer" />}}
### Search and filter logs
Search for logs using keywords or regular expressions.
Logs can also be filtered by clicking values in the log table such as `severity` or `facility`.
Any tag values included with the log entry can be used as a filter.
You can also use search operators to filter your results. For example, if you want to find results with a severity of critical that don't mention RSS, you can enter: `severity == crit` and `-RSS`.
![Searching and filtering logs](/img/chronograf/1-7-log-viewer-search-filter.gif)
{{% note %}}
**Note:** The log search field is case-sensitive.
{{% /note %}}
To remove filters, click the `×` next to the tag key by which you no longer want to filter.
### Select specific times
In the log viewer, you can select time ranges from which to view logs.
By default, logs are streamed and displayed relative to "now," but it is possible to view logs from a past window of time.
Timeframe selection allows you to go to a specific event and see logs for a time window both preceding and following that event. The default window is one minute, meaning the graph shows logs from thirty seconds before to thirty seconds after the target time. Click the dropdown menu to change the window.
![Selecting time ranges](/img/chronograf/1-7-log-viewer-specific-time.gif)
## Configure the log viewer
The log viewer can be customized to fit your specific needs.
Open the log viewer configuration options by clicking the gear button in the top right corner of the log viewer. Once done, click **Save** to apply the changes.
{{< img-hd src="/img/chronograf/1-6-logs-log-viewer-config-options.png" alt="Log viewer configuration options" />}}
### Severity colors
Every log severity is assigned a color which is used in the display of log entries.
To customize colors, select a color from the available color dropdown.
### Table columns
Columns in the log viewer are auto-populated with all fields and tags associated with your log data.
Each column can be reordered, renamed, and hidden or shown.
### Severity format
"Severity Format" specifies how the severity of log entries is displayed in your log table.
Below are the options and how they appear in the log table:
| Severity Format | Display |
| --------------- |:------- |
| Dot | <img src="/img/chronograf/1-6-logs-serverity-fmt-dot.png" alt="Log severity format 'Dot'" style="display:inline;max-height:24px;"/> |
| Dot + Text | <img src="/img/chronograf/1-6-logs-serverity-fmt-dot-text.png" alt="Log severity format 'Dot + Text'" style="display:inline;max-height:24px;"/> |
| Text | <img src="/img/chronograf/1-6-logs-serverity-fmt-text.png" alt="Log severity format 'Text'" style="display:inline;max-height:24px;"/> |
### Truncate or wrap log messages
By default, text in Log Viewer columns is truncated if it exceeds the column width. You can choose to wrap the text instead to display the full content of each cell.
Select the **Truncate** or **Wrap** option to determine how text appears when it exceeds the width of the cell.
To copy the complete, un-truncated log message, select the message cell and click **Copy**.
## Show or hide the log status histogram
The Chronograf Log Viewer displays a histogram of log status.
**To hide the log status histogram**, click the **{{< icon "hide" "v2" >}} icon** in
the top right corner of the histogram.
**To show the log status histogram**, click the **{{< icon "bar-chart" "v2" >}} icon**
in the top right corner of the log output.
## Logs in dashboards
An incredibly powerful way to analyze log data is by creating dashboards that include log data.
This is possible by using the [Table visualization type](/chronograf/v1.10/guides/visualization-types/#table) to display log data in your dashboard.
![Correlating logs with other metrics](/img/chronograf/1-7-log-viewer-dashboard.gif)
This type of visualization allows you to quickly identify anomalies in other metrics and see logs associated with those anomalies.
---
title: Use annotations in Chronograf views
description: >
Add contextual information to Chronograf dashboards with annotations.
menu:
chronograf_1_10:
name: Use annotations
weight: 50
parent: Guides
---
## Use annotations in the Chronograf interface
Annotations in Chronograf are notes of explanation or comments added to graph views by editors or administrators. Annotations can provide Chronograf users with useful contextual information about single points in time or time intervals. Users can use annotations to correlate the effects of important events, such as system changes or outages across multiple metrics, with Chronograf data.
When an annotation is added, a solid white line appears on all graph views for that point in time or an interval of time.
### Annotations example
The following screenshot of five graph views displays annotations for a single point in time and a time interval.
The text and timestamp for the single point in time can be seen above the annotation line in the graph view on the lower right.
The annotation displays "`Deploy v3.8.1-2`" and the time "`2018/28/02 15:59:30:00`".
![Annotations on multiple graph views](/img/chronograf/1-6-annotations-example.png)
**To add an annotation using the Chronograf user interface:**
1. Click the **Edit** button ("pencil" icon) on the graph view.
2. Click **Add Annotation** to add an annotation.
3. Move the cursor to the desired point in time and click, or drag the cursor across an interval, to set an annotation.
4. Click **Edit** again and then click **Edit Annotation**.
5. Click the cursor on the annotation point or interval. The annotation text box appears above the annotation point or interval.
6. Click on `Name Me` in the annotation and type a note or comment.
7. Click **Done Editing**.
8. Your annotation is now available in all graph views.
{{% note %}}
Annotations are not associated with specific dashboards and appear in all dashboards.
Annotations are managed per InfluxDB data source.
When a dashboard is deleted, annotations persist until the InfluxDB data source
they are associated with is removed.
{{% /note %}}
---
title: Clone dashboards and cells
description: >
Clone a dashboard or a cell and use the copy as a starting point to create new dashboard or cells.
menu:
chronograf_1_10:
weight: 70
parent: Guides
---
This guide explains how to clone (duplicate) a dashboard or a cell and use the copy as a template for creating new dashboards or cells.
## Clone dashboards
Dashboards in Chronograf can be cloned (copied) to create a new dashboard based on the original. Rather than building a new dashboard from scratch, you can clone a dashboard and make changes to the copy.
### To clone a dashboard
On the **Dashboards** page, hover your cursor over the listing of the dashboard that you want to clone and click the **Clone** button that appears.
![Click the Clone button](/img/chronograf/1-6-clone-dashboard.png)
The cloned dashboard opens and displays the name of the original dashboard with `(clone)` after it.
![Cloned dashboard](/img/chronograf/1-6-clone-dashboard-clone.png)
You can now change the dashboard name and customize the dashboard.
## Clone cells
Cells in Chronograf dashboards can be cloned or copied to quickly create a cell copy that can be edited for another use.
### To clone a cell
1. On the dashboard cell that you want to make a copy of, click the **Clone** icon and then confirm by clicking **Clone Cell**.
![Click the Clone icon](/img/chronograf/1-6-clone-cell-click-button.png)
2. The cloned cell appears in the dashboard displaying the name of the original cell with `(clone)` after it.
![Cloned cell](/img/chronograf/1-6-clone-cell-cell-copy.png)
You can now change the cell name and customize the cell.
{{% note %}}
#### Cells can only be cloned to the current dashboard
Dashboard cells can only be cloned within the current dashboard and cannot be cloned to another dashboard.
To clone a cell to another dashboard:
1. Hover over the cell you want to clone, click the **{{< icon "pencil" "v1" >}}**
icon, and then select **Configure**.
2. Copy the cell query.
3. Open the dashboard you want to clone the cell to.
4. Click **{{< icon "add-cell" "v2" >}} Add Cell** to create a new cell.
5. Paste your copied query into the new cell.
6. Duplicate all the visualizations settings from your cloned cell.
{{% /note %}}
---
title: Configure Chronograf alert endpoints
aliases:
- /chronograf/v1.10/guides/configure-kapacitor-event-handlers/
description: Send alert messages with Chronograf alert endpoints.
menu:
chronograf_1_10:
name: Configure alert endpoints
weight: 70
parent: Guides
---
Chronograf alert endpoints can be configured using the Chronograf user interface to create Kapacitor-based event handlers that send alert messages.
You can use Chronograf to send alert messages to specific URLs as well as to applications.
This guide offers step-by-step instructions for configuring Chronograf alert endpoints.
## Kapacitor event handlers supported in Chronograf
Chronograf integrates with [Kapacitor](/{{< latest "kapacitor" >}}/), InfluxData's data processing platform, to send alert messages to event handlers.
Chronograf supports the following event handlers:
- [Alerta](#alerta)
- [BigPanda](#bigpanda)
- [Kafka](#kafka)
- [OpsGenie](#opsgenie)
- [OpsGenie2](#opsgenie2)
- [PagerDuty](#pagerduty)
- [PagerDuty2](#pagerduty2)
- [Pushover](#pushover)
- [Sensu](#sensu)
- [ServiceNow](#servicenow)
- [Slack](#slack)
- [SMTP](#smtp)
- [Talk](#talk)
- [Teams](#teams)
- [Telegram](#telegram)
- [VictorOps](#victorops)
- [Zenoss](#zenoss)
To configure a Kapacitor event handler in Chronograf, [install Kapacitor](/{{< latest "kapacitor" >}}/introduction/installation/) and [connect it to Chronograf](/{{< latest "kapacitor" >}}/working/kapa-and-chrono/#add-a-kapacitor-instance).
The **Configure Kapacitor** page includes the event handler configuration options.
## Alert endpoint configurations
Alert endpoint configurations appear on the Chronograf Configure Kapacitor page.
You must have a connected Kapacitor instance to access the configurations.
For more information, see [Kapacitor installation instructions](/{{< latest "kapacitor" >}}/introduction/installation/) and how to [connect a Kapacitor instance](/{{< latest "kapacitor" >}}/working/kapa-and-chrono/#add-a-kapacitor-instance) to Chronograf.
Note that the configuration options in the **Configure alert endpoints** section are not all-inclusive.
Some event handlers allow users to customize event handler configurations per [alert rule](/chronograf/v1.10/guides/create-a-kapacitor-alert/).
For example, Chronograf's Slack integration allows users to specify a default channel in the **Configure alert endpoints** section and a different channel for individual alert rules.
### Alerta
**To configure an Alerta alert endpoint:**
1. In the **Configure Alert Endpoints** of the **Configure Kapacitor Connection** page, click the **Alerta** tab.
2. Enter the following:
- **Environment**: Alerta environment. Can be a template and has access to the same data as the AlertNode.Details property. Default is set from the configuration.
- **Origin**: Alerta origin. If empty, uses the origin from the configuration.
- **Token**: Default Alerta authentication token.
- **Token Prefix**: Default token prefix. If you receive invalid token errors, you may need to change this to “Key”.
- **User**: Alerta user.
- **Configuration Enabled**: Check to enable configuration.
3. Click **Save Changes** to save the configuration settings.
4. Click **Send Test Alert** to verify the configuration.
### BigPanda
**To configure a BigPanda alert endpoint:**
1. In the **Configure Alert Endpoints** of the **Configure Kapacitor Connection** page,
click the **BigPanda** tab.
2. Enter the following:
- **URL**: BigPanda [alerts API URL](https://docs.bigpanda.io/reference#alerts-how-it-works).
Default is `https://api.bigpanda.io/data/v2/alerts`.
- **Token**: BigPanda [API Authorization token (API key)](https://docs.bigpanda.io/docs/api-key-management).
- **Application Key**: BigPanda [App Key](https://docs.bigpanda.io/reference#integrating-monitoring-systems).
- **Insecure Skip Verify**: Required if using a self-signed TLS certificate. Select to skip TLS certificate chain and host
verification when connecting over HTTPS.
- **Configuration Enabled**: Select to enable configuration.
3. Click **Save Changes** to save the configuration settings.
4. Click **Send Test Alert** to verify the configuration.
### Kafka
**To configure a Kafka alert endpoint:**
1. In the **Configure Alert Endpoints** of the **Configure Kapacitor Connection** page,
click the **Kafka** tab.
2. Enter the following:
- **ID**: Unique identifier for a Kafka cluster. Default is `localhost`.
- **Brokers**: List of Kafka broker addresses, using the `host:port` format.
- **Timeout**: Maximum amount of time to wait before flushing an incomplete batch. Default is `10s`.
- **Batch Size**: Number of messages batched before sending to Kafka. Default is `100`.
- **Batch Timeout**: Timeout period for the batch. Default is `1s`.
- **Use SSL**: Select to enable SSL communication.
- **SSL CA**: Path to the SSL CA (certificate authority) file.
- **SSL Cert**: Path to the SSL host certificate.
- **SSL Key**: Path to the SSL certificate private key file.
- **Insecure Skip Verify**: Required if using a self-signed TLS certificate. Select to skip TLS certificate chain and host
verification when connecting over HTTPS.
- **Configuration Enabled**: Check to enable configuration.
3. Click **Save Changes** to save the configuration settings.
4. Click **Send Test Alert** to verify the configuration.
To enable Kafka services using TICKscript, see [Kafka event handler (Kapacitor)](/{{< latest "kapacitor" >}}/event_handlers/kafka/).
### OpsGenie
{{% warn %}}
**Note:** Support for OpsGenie Events API 1.0 is deprecated (as [noted by OpsGenie](https://docs.opsgenie.com/docs/migration-guide-for-alert-rest-api)).
As of June 30, 2018, the OpsGenie Events API 1.0 is disabled.
Use the [OpsGenie2](#opsgenie2) alert endpoint.
{{% /warn %}}
### OpsGenie2
Send an incident alert to OpsGenie teams and recipients using the Chronograf alert endpoint.
**To configure an OpsGenie alert endpoint:**
1. In the **Configure Alert Endpoints** of the **Configure Kapacitor Connection** page,
click the **OpsGenie** tab.
2. Enter the following information:
- **API Key**: API key (or GenieKey).
To find the API key, sign into your [OpsGenie account](https://app.opsgenie.com/auth/login)
and select the **Settings** menu option in the **Admin** menu.
- **Teams**: List of [OpsGenie teams](https://docs.opsgenie.com/docs/teams) to be alerted.
- **Recipients**: List of [OpsGenie team members](https://docs.opsgenie.com/docs/teams#section-team-members) to receive alerts.
- **Select recovery action**: Actions to take when an alert recovers:
- Add a note to the alert
- Close the alert
- **Configuration Enabled**: Select to enable configuration.
3. Click **Save Changes** to save the configuration settings.
4. Click **Send Test Alert** to verify the configuration.
See [Alert API](https://docs.opsgenie.com/docs/alert-api) in the OpsGenie documentation for details on the OpsGenie Alert API.
See [OpsGenie V2 event handler](/{{< latest "kapacitor" >}}/event_handlers/opsgenie/v2/) in the Kapacitor documentation for details about the OpsGenie V2 event handler.
See the [AlertNode (Kapacitor TICKscript node) - OpsGenie v2](/{{< latest "kapacitor" >}}/nodes/alert_node/#opsgenie-v2) in the Kapacitor documentation for details about enabling OpsGenie services using TICKscripts.
### PagerDuty
{{% warn %}}
The original PagerDuty alert endpoint is deprecated.
Use the [PagerDuty2](#pagerduty2) alert endpoint.
{{% /warn %}}
### PagerDuty2
**To configure a PagerDuty alert endpoint:**
1. In the **Configure Alert Endpoints** of the **Configure Kapacitor Connection** page,
click the **PagerDuty** tab.
2. Enter the following:
- **Routing Key**: GUID of your PagerDuty Events API V2 integration, listed as "Integration Key" on the Events API V2 integration's detail page. See [Create a new service](https://support.pagerduty.com/docs/services-and-integrations#section-create-a-new-service) in the PagerDuty documentation details on getting an "Integration Key" (`routing_key`).
- **PagerDuty URL**: URL used to POST a JSON body representing the event. This value should not be changed. Valid value is `https://events.pagerduty.com/v2/enqueue`.
- **Configuration Enabled**: Select to enable this configuration.
3. Click **Save Changes** to save the configuration settings.
4. Click **Send Test Alert** to verify the configuration.
See the [PagerDuty Events API V2 Overview](https://v2.developer.pagerduty.com/docs/events-api-v2)
for details on the PagerDuty Events API and recognized event types (`trigger`, `acknowledge`, and `resolve`).
To enable a new "Generic API" service using TICKscript, see [AlertNode (Kapacitor TICKscript node) - PagerDuty v2](/{{< latest "kapacitor" >}}/nodes/alert_node/#pagerduty-v2).
### Pushover
**To configure a Pushover alert endpoint:**
1. In the **Configure Alert Endpoints** of the **Configure Kapacitor Connection** page,
click the **Pushover** tab.
2. Enter the following:
- **User Key**: Pushover USER_TOKEN.
- **Token**: Pushover API token.
- **Pushover URL**: Pushover API URL.
Default is `https://api.pushover.net/1/messages.json`.
- **Configuration Enabled**: Check to enable configuration.
3. Click **Save Changes** to save the configuration settings.
4. Click **Send Test Alert** to verify the configuration.
### Sensu
**To configure a Sensu alert endpoint:**
1. In the **Configure Alert Endpoints** of the **Configure Kapacitor Connection** page,
click the **Sensu** tab.
2. Enter the following:
- **Source**: Event source. Default is `Kapacitor`.
- **Address**: URL of [Sensu HTTP API](https://docs.sensu.io/sensu-go/latest/migrate/#architecture).
- **Configuration Enabled**: Select to enable configuration.
3. Click **Save Changes** to save the configuration settings.
4. Click **Send Test Alert** to verify the configuration.
### ServiceNow
**To configure a ServiceNow alert endpoint:**
1. In the **Configure Alert Endpoints** of the **Configure Kapacitor Connection** page,
click the **ServiceNow** tab.
2. Enter the following:
- **URL**: ServiceNow API URL. Default is `https://instance.service-now.com/api/global/em/jsonv2`.
- **Source**: Event source.
- **Username**: ServiceNow username.
- **Password**: ServiceNow password.
- **Configuration Enabled**: Select to enable configuration.
3. Click **Save Changes** to save the configuration settings.
4. Click **Send Test Alert** to verify the configuration.
### Slack
**To configure a Slack alert endpoint:**
1. In the **Configure Alert Endpoints** of the **Configure Kapacitor Connection** page,
click the **Slack** tab.
2. Enter the following:
- **Nickname this Configuration**: Unique name for a Slack endpoint if you
have more than one Slack alert endpoint.
- **Slack WebHook URL**: _(Optional)_ Slack webhook URL _(see [Slack webhooks](https://api.slack.com/messaging/webhooks))_
- **Slack Channel**: _(Optional)_ Slack channel or user to send messages to.
Prefix with `#` to send to a channel.
Prefix with `@` to send directly to a user.
If not specified, Kapacitor sends alert messages to the channel or user
specified in the [alert rule](/chronograf/v1.10/guides/create-a-kapacitor-alert/)
or configured in the **Slack Webhook**.
- **Configuration Enabled**: Check to enable configuration.
3. Click **Save Changes** to save the configuration settings.
4. Click **Send Test Alert** to verify the configuration.
**To add another Slack configuration:**
1. In the **Configure Alert Endpoints** of the **Configure Kapacitor Connection** page,
click the **Slack** tab.
2. Click **{{< icon "plus" "v2" >}} Add Another Config**.
3. Complete steps 2-4 [above](#slack).
### SMTP
**To configure an SMTP alert endpoint:**
1. In the **Configure Alert Endpoints** of the **Configure Kapacitor Connection** page,
click the **SMTP** tab.
2. Enter the following:
- **SMTP Host**: SMTP host. Default is `localhost`.
- **SMTP Port**: SMTP port. Default is `25`.
- **From Email**: Email address to send messages from.
- **To Email**: Email address to send messages to.
- **User**: SMTP username.
- **Password**: SMTP password.
- **Configuration Enabled**: Select to enable configuration.
3. Click **Save Changes** to save the configuration settings.
4. Click **Send Test Alert** to verify the configuration.
### Talk
**To configure a Talk alert endpoint:**
1. In the **Configure Alert Endpoints** of the **Configure Kapacitor Connection** page,
click the **Talk** tab.
2. Enter the following:
- **URL**: Talk API URL.
- **Author Name**: Message author name.
- **Configuration Enabled**: Select to enable configuration.
3. Click **Save Changes** to save the configuration settings.
4. Click **Send Test Alert** to verify the configuration.
### Teams
**To configure a Microsoft Teams alert endpoint:**
1. In the **Configure Alert Endpoints** of the **Configure Kapacitor Connection** page,
click the **Teams** tab.
2. Enter the following:
- **Channel URL**: Microsoft Teams channel URL.
- **Configuration Enabled**: Select to enable configuration.
3. Click **Save Changes** to save the configuration settings.
4. Click **Send Test Alert** to verify the configuration.
### Telegram
**To configure a Telegram alert endpoint:**
1. [Set up a Telegram bot and credentials](/{{< latest "kapacitor" >}}/guides/event-handler-setup/#telegram-setup).
2. In the **Configure Alert Endpoints** of the **Configure Kapacitor Connection** page,
click the **Telegram** tab.
3. Enter the following:
- **Token**: Telegram bot token.
- **Chat ID**: Telegram chat ID.
- **Select the alert message format**: Telegram message format
- Markdown _(default)_
- HTML
- **Disable link previews**: Disable [link previews](https://telegram.org/blog/link-preview) in Telegram messages.
- **Disable notifications**: Disable notifications on iOS devices and sounds on Android devices.
Android users will continue to receive notifications.
- **Configuration Enabled**: Select to enable configuration.
### VictorOps
**To configure a VictorOps alert endpoint:**
1. In the **Configure Alert Endpoints** of the **Configure Kapacitor Connection** page,
click the **VictorOps** tab.
2. Enter the following:
- **API Key**: VictorOps API key.
- **Routing Key**: VictorOps [routing key](https://help.victorops.com/knowledge-base/routing-keys/).
- **VictorOps URL**: VictorOps alert API URL.
Default is `https://alert.victorops.com/integrations/generic/20131114/alert`.
- **Configuration Enabled**: Select to enable configuration.
3. Click **Save Changes** to save the configuration settings.
4. Click **Send Test Alert** to verify the configuration.
### Zenoss
**To configure a Zenoss alert endpoint:**
1. In the **Configure Alert Endpoints** of the **Configure Kapacitor Connection** page,
click the **Zenoss** tab.
2. Enter the following:
- **URL**: Zenoss [router endpoint URL](https://help.zenoss.com/zsd/RM/configuring-resource-manager/enabling-access-to-browser-interfaces/creating-and-changing-public-endpoints).
Default is `https://tenant.zenoss.io:8080/zport/dmd/evconsole_router`.
- **Username**: Zenoss username. Leave blank for no authentication.
- **Password**: Zenoss password. Leave blank for no authentication.
- **Action (Router Name)**: Zenoss [router name](https://help.zenoss.com/dev/collection-zone-and-resource-manager-apis/anatomy-of-an-api-request#AnatomyofanAPIrequest-RouterURL).
Default is `EventsRouter`.
- **Router Method**: [EventsRouter method](https://help.zenoss.com/dev/collection-zone-and-resource-manager-apis/codebase/routers/router-reference/eventsrouter).
Default is `add_event`.
- **Event Type**: Event type. Default is `rpc`.
- **Event TID**: Temporary request transaction ID. Default is `1`.
- **Collector Name**: Zenoss collector name. Default is `Kapacitor`.
- **Kapacitor to Zenoss Severity Mapping**: Map Kapacitor severities to [Zenoss severities](https://help.zenoss.com/docs/using-collection-zones/event-management/event-severity-levels).
- **OK**: Clear _(default)_
- **Info**: Info _(default)_
- **Warning**: Warning _(default)_
- **Critical**: Critical _(default)_
- **Configuration Enabled**: Select to enable configuration.
3. Click **Save Changes** to save the configuration settings.
4. Click **Send Test Alert** to verify the configuration.
---
title: Create Chronograf dashboards
description: Visualize your data with custom Chronograf dashboards.
menu:
chronograf_1_10:
name: Create dashboards
weight: 30
parent: Guides
---
Chronograf offers a complete dashboard solution for visualizing your data and monitoring your infrastructure:
- View [pre-created dashboards](/chronograf/v1.10/guides/using-precreated-dashboards) from the Host List page.
Dashboards are available depending on which Telegraf input plugins you have enabled.
These pre-created dashboards cannot be cloned or edited.
- Create custom dashboards from scratch by building queries in the Data Explorer, as described [below](#build-a-dashboard).
- [Export a dashboard](/chronograf/latest/administration/import-export-dashboards/#export-a-dashboard) you create.
- Import a dashboard:
- When you want to [import an exported dashboard](/chronograf/latest/administration/import-export-dashboards/#import-a-dashboard).
- When you want to add or update a connection in Chronograf. See [Dashboard templates](#dashboard-templates) for details.
By the end of this guide, you'll be aware of the tools available to you for creating dashboards similar to this example:
![Chronograf dashboard](/img/chronograf/1-6-g-dashboard-possibilities.png)
## Requirements
To perform the tasks in this guide, you must have a working Chronograf instance that is connected to an InfluxDB source.
Data is accessed using the Telegraf [system](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/system) input plugins.
For more information, see [Configuring Chronograf](/chronograf/v1.10/administration/configuration).
## Build a dashboard
1. #### Create a new dashboard
Click **Dashboards** in the navigation bar and then click the **{{< icon "plus" "v2" >}} Create Dashboard** button.
A new dashboard is created and ready to begin adding cells.
2. #### Name your dashboard
Click **Name This Dashboard** and type a new name. For example, "ChronoDash".
3. #### Enter cell editor mode
In the first cell, titled "Untitled Cell", click **{{< icon "plus" "v2" >}} Add Data**
to open the cell editor mode.
{{< img-hd src="/img/chronograf/1-9-dashboard-cell-add-data.png" alt="Add data to a Chronograf cell" />}}
4. #### Create your query
Click the **Add a Query** button to create an [InfluxQL](/{{< latest "influxdb" "v1" >}}/query_language/) query.
In query editor mode, use the builder to select from your existing data and
allow Chronograf to format the query for you.
Alternatively, manually enter and edit a query.
Chronograf allows you to move seamlessly between using the builder and
manually editing the query; when possible, the interface automatically
populates the builder with the information from your raw query.
For our example, the query builder is used to generate a query that shows
the average idle CPU usage grouped by host (in this case, there are three hosts).
By default, Chronograf applies the [`MEAN()` function](/{{< latest "influxdb" "v1" >}}/query_language/functions/#mean)
to the data, groups averages into auto-generated time intervals (`:interval:`),
and shows data for the past hour (`:dashboardTime:`).
Those defaults are configurable using the query builder or by manually editing the query.
In addition, the time range (`:dashboardTime:` and `:upperDashboardTime:`) are
[configurable on the dashboard](#configure-your-dashboard).
![Build your query](/img/chronograf/1-6-g-dashboard-builder.png)
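As a sketch, the query the builder generates for this example is similar to the following (the measurement and field names assume the Telegraf `cpu` input plugin, and the exact query text may differ):

```sql
SELECT mean("usage_idle") AS "mean_usage_idle"
FROM "telegraf"."autogen"."cpu"
WHERE time > :dashboardTime: GROUP BY time(:interval:), "host"
```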
5. #### Choose your visualization type
Chronograf supports many different [visualization types](/chronograf/v1.10/guides/visualization-types/). To choose a visualization type, click **Visualization** and select **Step-Plot Graph**.
![Visualization type](/img/chronograf/1-6-g-dashboard-visualization.png)
6. #### Save your cell
Click **Save** (the green checkmark icon) to save your cell.
{{% note %}}
_**Note:**_ If you navigate away from this page without clicking Save, your work will not be saved.
{{% /note %}}
## Configure your dashboard
### Customize cells
- You can change the name of the cell from "Untitled Cell" by returning to the cell editor mode, clicking on the name, and renaming it. Remember to save your changes.
- **Move** your cell around by clicking its top bar and dragging it around the page
- **Resize** your cell by clicking and dragging its bottom right corner
### Explore cell data
- **Zoom** in on your cell by clicking and dragging your mouse over the area of interest
- **Pan** over your cell data by pressing the shift key and clicking and dragging your mouse over the graph
- **Reset** your cell by double-clicking your mouse in the cell window
{{% note %}}
**Note:** These tips only apply to the line, stacked, step-plot, and line+stat
[visualization types](/chronograf/v1.10/guides/visualization-types/).
{{% /note %}}
### Configure dashboard-wide settings
- Change the dashboard's *selected time* at the top of the page - the default
time is **Local**, which uses your browser's local time. Select **UTC** to use
Coordinated Universal Time.
{{% note %}}
**Note:** If your organization spans multiple time zones, we recommend using UTC
(Coordinated Universal Time) to ensure that everyone sees metrics and events for the same time.
{{% /note %}}
- Change the dashboard's *auto-refresh interval* at the top of the page - the default interval selected is **Every 10 seconds**.
{{% note %}}
**Note:** A dashboard's refresh rate persists in local storage, so the default
refresh rate is only used when a refresh rate isn't found in local storage.
{{% /note %}}
{{% note %}}
**To add custom auto-refresh intervals**, use the [`--custom-auto-refresh` configuration
option](/chronograf/v1.10/administration/config-options/#--custom-auto-refresh)
or `$CUSTOM_AUTO_REFRESH` environment variable when starting Chronograf.
{{% /note %}}
- Modify the dashboard's *time range* at the top of the page - the default range
is **Past 15 minutes**.
## Dashboard templates
Select from a variety of dashboard templates to import and customize based on which Telegraf plugins you have enabled, such as the following examples:
###### Kubernetes dashboard template
{{< img-hd src="/img/chronograf/1-7-protoboard-kubernetes.png" alt="Kubernetes Chronograf dashboard template" />}}
###### MySQL dashboard template
{{< img-hd src="/img/chronograf/1-7-protoboard-mysql.png" alt="MySQL Chronograf dashboard template" />}}
###### System metrics dashboard template
{{< img-hd src="/img/chronograf/1-7-protoboard-system.png" alt="System metrics Chronograf dashboard template" />}}
###### vSphere dashboard template
{{< img-hd src="/img/chronograf/1-7-protoboard-vsphere.png" alt="vSphere Chronograf dashboard template" />}}
### Import dashboard templates
1. From the Configuration page, click **Add Connection** or select an existing connection to edit it.
2. In the **InfluxDB Connection** window, enter or verify your connection details and click **Add** or **Update Connection**.
3. In the **Dashboards** window, select from the available dashboard templates to import based on which Telegraf plugins you have enabled.
{{< img-hd src="/img/chronograf/1-7-protoboard-select.png" alt="Select dashboard template" />}}
4. Click **Create (x) Dashboards**.
5. Edit, clone, or configure the dashboards as needed.
## Extra Tips
### Full screen mode
View your dashboard in full screen mode by clicking on the full screen icon (**{{< icon "fullscreen" "v2" >}}**) in the top right corner of your dashboard.
To exit full screen mode, press the Esc key.
### Template variables
Dashboards support template variables.
See the [Dashboard Template Variables](/chronograf/v1.10/guides/dashboard-template-variables/) guide for more information.
---
title: Create Chronograf alert rules
description: >
Trigger alerts by building Kapacitor alert rules in the Chronograf user interface (UI).
aliases:
- /chronograf/v1.10/guides/create-a-kapacitor-alert/
menu:
chronograf_1_10:
name: Create alert rules
weight: 60
parent: Guides
---
Chronograf provides a user interface for [Kapacitor](/{{< latest "kapacitor" >}}/), InfluxData's processing framework for creating alerts, running ETL (extract, transform, load) jobs, and detecting anomalies in your data.
Chronograf alert rules correspond to Kapacitor tasks that trigger alerts whenever certain conditions are met.
Behind the scenes, these tasks are stored as [TICKscripts](/{{< latest "kapacitor" >}}/tick/) that can be edited manually or through Chronograf.
Common alerting use cases that can be managed using Chronograf include:
* Thresholds with static ceilings, floors, and ranges.
* Relative thresholds based on unit or percentage changes.
* Deadman switches.
Complex alerts and other tasks can be defined directly in Kapacitor as TICKscripts, but can be viewed and managed within Chronograf.
To learn about managing Kapacitor TICKscripts in Chronograf, see [Manage Kapacitor TICKscripts](/{{< latest "chronograf" >}}/guides/advanced-kapacitor/#manage-kapacitor-tickscripts).
## Requirements
[Get started with Chronograf](/{{< latest "chronograf" >}}/introduction/getting-started/) offers step-by-step instructions for each of the following requirements:
* Download and install the entire TICKstack (Telegraf, InfluxDB, Chronograf, and Kapacitor).
* [Create a Kapacitor connection in Chronograf](/{{< latest "chronograf" >}}/introduction/installation/#connect-chronograf-to-kapacitor).
## Manage Chronograf alert rules
Chronograf lets you create and manage Kapacitor alert rules. To manage alert rules:
1. Click on **{{< icon "alert" "v2">}} Alerting** in the left navigation bar and select **Alert Rules**.
2. Do one of the following:
- [Create an alert rule](#create-an-alert-rule)
- [View alert history](#view-alert-history)
- [Enable and disable alert rules](#enable-and-disable-alert-rules)
- [Delete alert rules](#delete-alert-rules)
## Create an alert rule
From the **Alert Rules** page in Chronograf:
1. Click **+ Build Alert Rule**.
2. Name the alert rule.
3. Choose the alert type:
   - `Threshold` - alert if data crosses a boundary.
   - `Relative` - alert if data changes relative to data in a different time range.
   - `Deadman` - alert if InfluxDB receives no relevant data for a specified time duration.
4. Select the time series data to use in the alert rule.
   - Navigate through databases, measurements, tags, and fields to select all relevant data.
5. Define the rule conditions. Condition options are determined by the alert type.
6. Select and configure the alert handler.
   - The alert handler determines where the system sends the alert (the event handler).
   - Chronograf supports several event handlers and each handler has unique configurable options.
   - Multiple alert handlers can be added to send alerts to multiple endpoints.
7. Configure the alert message.
   - The alert message is the text that accompanies an alert.
   - Alert messages are templates that have access to alert data.
   - Available templates appear below the message text field.
   - As you type your alert message, clicking a data template inserts it at the end of the entered text.
8. Click **Save Rule**.
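As a sketch, an alert message built from these data templates might look like the following (the field name `value` is illustrative; the templates available for your rule are listed in the UI):

```
{{ .ID }} is {{ .Level }}: value is {{ index .Fields "value" }} at {{ .Time }}
```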
## Enable and disable alert rules
To enable and disable alerts, click on **{{< icon "alert" "v2">}} Alerting** in the left navigation bar and select **Alert Rules**.
- To enable an alert rule, locate the alert rule and click the box **Task Enabled**. A blue dot shows the task is enabled. A message appears to confirm the rule was successfully enabled.
- To disable an alert rule, click the box **Task Enabled**. The blue dot disappears and a message confirms the alert was successfully disabled.
## Delete alert rules
To delete an alert, click on **{{< icon "alert" "v2">}} Alerting** in the left navigation bar and select **Alert Rules**.
1. Locate the alert you want to delete, and then hover over the "Task Enabled" box. A **Delete** button appears to the right.
2. Click **Delete** to delete the rule.
**Note:** Deleting a rule cannot be undone; the rule is removed permanently.
## View alert history
Chronograf lets you view your alert history on the **Alert History** page.
To view a history of your alerts, click on
**{{< icon "alert" "v2">}} Alerting** in the left navigation bar and select **Alert History**.
Do one of the following:
- View a history of all triggered alerts.
- Filter alert history by type.
- View alert history for a specified time range.
---
title: Use dashboard template variables
description: >
Chronograf dashboard template variables let you update cell queries without editing queries,
making it easy to interact with your dashboard cells and explore your data.
aliases:
- /chronograf/v1.10/introduction/templating/
- /chronograf/v1.10/templating/
menu:
chronograf_1_10:
weight: 90
parent: Guides
---
Chronograf dashboard template variables let you update cell queries without editing queries,
making it easy to interact with your dashboard cells and explore your data.
- [Use template variables](#use-template-variables)
- [Predefined template variables](#predefined-template-variables)
- [Create custom template variables](#create-custom-template-variables)
- [Template variable types](#template-variable-types)
- [Reserved variable names](#reserved-variable-names)
- [Advanced template variable usage](#advanced-template-variable-usage)
## Use template variables
When creating Chronograf dashboards, use either [predefined template variables](#predefined-template-variables)
or [custom template variables](#create-custom-template-variables) in your cell queries and titles.
After you set up variables, they are available to select in your dashboard user interface (UI).
- [Use template variables in cell queries](#use-template-variables-in-cell-queries)
- [InfluxQL](#influxql)
- [Flux](#flux)
- [Use template variables in cell titles](#use-template-variables-in-cell-titles)
![Use template variables](/img/chronograf/1-6-template-vars-use.gif)
### Use template variables in cell queries
Both InfluxQL and Flux support template variables.
#### InfluxQL
In an InfluxQL query, surround template variables names with colons (`:`) as follows:
```sql
SELECT :variable_name: FROM "telegraf"."autogen".:measurement: WHERE time < :dashboardTime:
```
##### Quoting template variables in InfluxQL
For **predefined meta queries** such as "Field Keys" and "Tag Values", **do not add quotes** (single or double) to your queries. Chronograf will add quotes as follows:
```sql
SELECT :variable_name: FROM "telegraf"."autogen".:measurement: WHERE time < :dashboardTime:
```
For **custom queries**, **CSV**, or **map queries**, quote the values in the query following standard [InfluxQL](/{{< latest "influxdb" "v1" >}}/query_language/) syntax:
- For numerical values, **do not quote**.
- For string values, choose to quote the values in the variable definition (or not). See [String examples](#string-examples) below.
{{% note %}}
**Tips for quoting strings:**
- When using custom meta queries that return strings, you typically quote the variable values in your dashboard query, since InfluxQL results are returned without quotes.
- This flexibility in quoting methods is particularly useful when template variable strings are used in regular expression syntax, where quotes may cause query syntax errors.
{{% /note %}}
##### String examples
Add single quotes when you define template variables, or in your queries, but not both.
###### Add single quotes in variable definition
If you define a custom CSV variable named `host` using single quotes:
```sh
'host1','host2','host3'
```
Do not include quotes in your query:
```sql
SELECT mean("usage_user") AS "mean_usage_user" FROM "telegraf"."autogen"."cpu"
WHERE "host" = :host: and time > :dashboardTime
```
###### Add single quotes in query
If you define a custom CSV variable named `host` without quotes:
```sh
host1,host2,host3
```
Add single quotes in your query:
```sql
SELECT mean("usage_user") AS "mean_usage_user" FROM "telegraf"."autogen"."cpu"
WHERE "host" = ':host:' and time > :dashboardTime
```
#### Flux
In Flux, template variables are stored in a `v` record.
Use dot or bracket notation to reference the variable key inside of the `v` record:
```js
from(bucket: v.bucket)
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r._field == v["Field key"])
|> aggregateWindow(every: v.windowPeriod, fn: v.aggregateFunction)
```
### Use template variables in cell titles
To dynamically change the title of a dashboard cell,
use the `:variable-name:` syntax.
For example, for a variable named `field` with a value of `temp` and a variable
named `location` with a value of `San Antonio`, use the following syntax:
```
:field: data for :location:
```
Displays as:
{{< img-hd src="/img/chronograf/1-9-template-var-title.png" alt="Use template variables in cell titles" />}}
## Predefined template variables
Chronograf includes predefined template variables controlled by elements in the Chronograf UI.
Use predefined template variables in your cell queries.
InfluxQL and Flux include their own sets of predefined template variables:
{{< tabs-wrapper >}}
{{% tabs %}}
[InfluxQL](#)
[Flux](#)
{{% /tabs %}}
{{% tab-content %}}
- [`:dashboardTime:`](#dashboardtime)
- [`:upperDashboardTime:`](#upperdashboardtime)
- [`:interval:`](#interval)
### dashboardTime
The `:dashboardTime:` template variable is controlled by the "time" dropdown in your Chronograf dashboard.
<img src="/img/chronograf/1-6-template-vars-time-dropdown.png" style="width:100%;max-width:549px;" alt="Dashboard time selector"/>
If using relative times, it represents the time offset specified in the dropdown (-5m, -15m, -30m, etc.) and assumes time is relative to "now".
If using absolute times defined by the date picker, `:dashboardTime:` is populated with the lower threshold.
```sql
SELECT "usage_system" AS "System CPU Usage"
FROM "telegraf".."cpu"
WHERE time > :dashboardTime:
```
{{% note %}}
To use the date picker to specify a past time range, construct the query using `:dashboardTime:`
as the start time and [`:upperDashboardTime:`](#upperdashboardtime) as the stop time.
{{% /note %}}
### upperDashboardTime
The `:upperDashboardTime:` template variable is defined by the upper time limit specified using the date picker.
<img src="/img/chronograf/1-6-template-vars-date-picker.png" style="width:100%;max-width:762px;" alt="Dashboard date picker"/>
It will inherit `now()` when using relative time frames, or the upper time limit when using absolute time frames.
```sql
SELECT "usage_system" AS "System CPU Usage"
FROM "telegraf".."cpu"
WHERE time > :dashboardTime: AND time < :upperDashboardTime:
```
### interval
The `:interval:` template variable is defined by the interval dropdown in the Chronograf dashboard.
<img src="/img/chronograf/1-6-template-vars-interval-dropdown.png" style="width:100%;max-width:549px;" alt="Dashboard interval selector"/>
In cell queries, it should be used in the `GROUP BY time()` clause that accompanies aggregate functions:
```sql
SELECT mean("usage_system") AS "Average System CPU Usage"
FROM "telegraf".."cpu"
WHERE time > :dashboardTime:
GROUP BY time(:interval:)
```
{{% /tab-content %}}
{{% tab-content %}}
- [`v.timeRangeStart`](#vtimerangestart)
- [`v.timeRangeStop`](#vtimerangestop)
- [`v.windowPeriod`](#vwindowperiod)
{{% note %}}
#### Backward compatible Flux template variables
**Chronograf 1.9+** supports the InfluxDB 2.0 variable pattern of storing
[predefined template variables](#predefined-template-variables) and [custom template variables](#create-custom-template-variables)
in a `v` record and using dot or bracket notation to reference variables.
For backward compatibility, Chronograf 1.9+ still supports the following predefined
variables that do not use the `v.` syntax:
- [`dashboardTime`](/chronograf/v1.8/guides/dashboard-template-variables/?t=Flux#dashboardtime-flux)
- [`upperDashboardTime`](/chronograf/v1.8/guides/dashboard-template-variables/?t=Flux#upperdashboardtime-flux)
- [`autoInterval`](/chronograf/v1.8/guides/dashboard-template-variables/?t=Flux#autointerval)
{{% /note %}}
### v.timeRangeStart
The `v.timeRangeStart` template variable is controlled by the "time" dropdown in your Chronograf dashboard.
<img src="/img/chronograf/1-6-template-vars-time-dropdown.png" style="width:100%;max-width:549px;" alt="Dashboard time selector"/>
If using relative time, this variable represents the time offset specified in the dropdown (-5m, -15m, -30m, etc.) and assumes time is relative to "now".
If using absolute time defined by the date picker, `v.timeRangeStart` is populated with the start time.
```js
from(bucket: "telegraf/autogen")
|> range(start: v.timeRangeStart)
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
```
{{% note %}}
To use the date picker to specify a time range in the past without "now", use
`v.timeRangeStart` as the start time and [`v.timeRangeStop`](#vtimerangestop)
as the stop time.
{{% /note %}}
### v.timeRangeStop
The `v.timeRangeStop` template variable is defined by the upper time limit specified using the date picker.
<img src="/img/chronograf/1-6-template-vars-date-picker.png" style="width:100%;max-width:762px;" alt="Dashboard date picker"/>
For relative time frames, this variable inherits `now()`. For absolute time frames, this variable inherits the upper time limit.
```js
from(bucket: "telegraf/autogen")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
```
### v.windowPeriod
The `v.windowPeriod` template variable is controlled by the display width of the
dashboard cell and is calculated by the duration of time that each pixel covers.
Use the `v.windowPeriod` variable to downsample data so that a maximum of one point displays per pixel.
```js
from(bucket: "telegraf/autogen")
|> range(start: v.timeRangeStart)
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
|> aggregateWindow(every: v.windowPeriod, fn: mean)
```
{{% /tab-content %}}
{{< /tabs-wrapper >}}
## Create custom template variables
Chronograf lets you create custom template variables powered by meta queries or CSV uploads that return an array of possible values.
To create a template variable:
1. Click on **Template Variables** at the top of your dashboard, then **+ Add Variable**.
2. Select a data source from the **Data Source** dropdown menu.
3. Provide a name for the variable.
4. Select the [variable type](#template-variable-types).
The type defines the method for retrieving the array of possible values.
5. View the list of potential values and select a default.
If using the CSV or Map types, upload or input the CSV with the desired values in the appropriate format, then select a default value.
6. Click **Create**.
Once created, the template variable can be used in any of your cell's queries or titles
and a dropdown for the variable will be included at the top of your dashboard.
## Template Variable Types
Chronograf supports the following template variable types:
- [Databases](#databases)
- [Measurements](#measurements)
- [Field Keys](#field-keys)
- [Tag Keys](#tag-keys)
- [Tag Values](#tag-values)
- [CSV](#csv)
- [Map](#map)
- [InfluxQL Meta Query](#influxql-meta-query)
- [Flux Query](#flux-query)
- [Text](#text)
### Databases
Database template variables allow you to select from multiple target [databases](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#database).
_**Database meta query**_
Database template variables use the following meta query to return an array of all databases in your InfluxDB instance.
```sql
SHOW DATABASES
```
_**Example database variable in a cell query**_
```sql
SELECT "purchases" FROM :databaseVar:."autogen"."customers"
```
#### Database variable use cases
Use database template variables when visualizing multiple databases with similar or identical data structures.
Variables let you quickly switch between visualizations for each of your databases.
### Measurements
Vary the target [measurement](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#measurement).
_**Measurement meta query**_
Measurement template variables use the following meta query to return an array of all measurements in a given database.
```sql
SHOW MEASUREMENTS ON database_name
```
_**Example measurement variable in a cell query**_
```sql
SELECT * FROM "animals"."autogen".:measurementVar:
```
#### Measurement variable use cases
Measurement template variables allow you to quickly switch between measurements in a single cell or multiple cells in your dashboard.
### Field Keys
Vary the target [field key](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#field-key).
_**Field key meta query**_
Field key template variables use the following meta query to return an array of all field keys in a given measurement from a given database.
```sql
SHOW FIELD KEYS ON database_name FROM measurement_name
```
_**Example field key var in a cell query**_
```sql
SELECT :fieldKeyVar: FROM "animals"."autogen"."customers"
```
#### Field key variable use cases
Field key template variables are great if you want to quickly switch between field key visualizations in a given measurement.
### Tag Keys
Vary the target [tag key](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#tag-key).
_**Tag key meta query**_
Tag key template variables use the following meta query to return an array of all tag keys in a given measurement from a given database.
```sql
SHOW TAG KEYS ON database_name FROM measurement_name
```
_**Example tag key variable in a cell query**_
```sql
SELECT "purchases" FROM "animals"."autogen"."customers" GROUP BY :tagKeyVar:
```
#### Tag key variable use cases
Tag key template variables are great if you want to quickly switch between tag key visualizations in a given measurement.
### Tag Values
Vary the target [tag value](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#tag-value).
_**Tag value meta query**_
Tag value template variables use the following meta query to return an array of all values associated with a given tag key in a specified measurement and database.
```sql
SHOW TAG VALUES ON database_name FROM measurement_name WITH KEY tag_key
```
_**Example tag value variable in a cell query**_
```sql
SELECT "purchases" FROM "animals"."autogen"."customers" WHERE "species" = :tagValueVar:
```
#### Tag value variable use cases
Tag value template variables are great if you want to quickly switch between tag value visualizations in a given measurement.
### CSV
Vary part of a query with a customized list of comma-separated values (CSV).
_**Example CSVs:**_
```csv
value1, value2, value3, value4
```
```csv
value1
value2
value3
value4
```
{{% note %}}
String field values [require single quotes in InfluxQL](/{{< latest "influxdb" "v1" >}}/troubleshooting/frequently-asked-questions/#when-should-i-single-quote-and-when-should-i-double-quote-in-queries).
```csv
'string1','string2','string3','string4'
```
{{% /note %}}
_**Example CSV variable in a cell query**_
```sql
SELECT "purchases" FROM "animals"."autogen"."customers" WHERE "petname" = :csvVar:
```
#### CSV variable use cases
CSV template variables are great when the array of values necessary for your variable can't be pulled from InfluxDB using a meta query.
They allow you to use custom variable values.
### Map
Vary part of a query with a customized list of key-value pairs in CSV format.
The key of each key-value pair is used to populate the template variable dropdown in your dashboard.
The value is used when processing cells' queries.
_**Example CSV:**_
```csv
key1,value1
key2,value2
key3,value3
key4,value4
```
<img src="/img/chronograf/1-6-template-vars-map-dropdown.png" style="width:100%;max-width:140px;" alt="Map variable dropdown"/>
{{% note %}}
Wrap string field values in single quotes ([required by InfluxQL](/{{< latest "influxdb" "v1" >}}/troubleshooting/frequently-asked-questions/#when-should-i-single-quote-and-when-should-i-double-quote-in-queries)).
Variable keys do not require quotes.
```csv
key1,'value1'
key2,'value2'
key3,'value3'
key4,'value4'
```
{{% /note %}}
_**Example Map variable in a cell query**_
```sql
SELECT "purchases" FROM "animals"."autogen"."customers" WHERE "customer" = :mapVar:
```
#### Map variable use cases
Map template variables are good when you need to map or alias simple names or keys to longer or more complex values.
For example, you may want to create a `:customer:` variable that populates your cell queries with a long, numeric customer ID (`11394850823894034209`).
With a map variable, you can alias simple names to complex values, so your list of customers would look something like:
```
Apple,11394850823894034209
Amazon,11394850823894034210
Google,11394850823894034211
Microsoft,11394850823894034212
```
The customer names would populate your template variable dropdown rather than the customer IDs.
### InfluxQL Meta Query
Vary part of a query with a customized meta query that pulls a specific array of values from InfluxDB.
InfluxQL meta query variables let you pull a highly customized array of potential
values and offer advanced functionality such as [filtering values based on other template variables](#filter-template-variables-with-other-template-variables).
<img src="/img/chronograf/1-6-template-vars-custom-meta-query.png" style="width:100%;max-width:667px;" alt="Custom meta query"/>
_**Example custom meta query variable in a cell query**_
```sql
SELECT "purchases" FROM "animals"."autogen"."customers" WHERE "customer" = :customMetaVar:
```
#### InfluxQL meta query variable use cases
Use custom InfluxQL meta query template variables when predefined template variable types aren't able to return the values you want.
### Flux Query
Flux query template variables let you define variable values using Flux queries.
**Variable values are extracted from the `_value` column returned by your Flux query.**
#### Flux query variable use cases
Flux query template variables are great when the values necessary for your
variable can't be queried with InfluxQL or if you need the flexibility of Flux
to return your desired list of variable values.
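As a sketch, a Flux query variable that lists `host` tag values could use the `v1.tagValues` function, which returns results in the `_value` column (the bucket name here is an assumption):

```js
import "influxdata/influxdb/v1"

// Each returned host name appears in the _value column,
// so each becomes a selectable variable value.
v1.tagValues(bucket: "telegraf/autogen", tag: "host")
```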
### Text
Vary a part of a query with a single string of text.
There is only one value per text variable, but this value is easily altered.
#### Text variable use cases
Text template variables allow you to dynamically alter queries, such as adding or altering `WHERE` clauses, for multiple cells at once.
You could also use a text template variable to alter a regular expression used in multiple queries.
They are great when troubleshooting incidents that affect multiple visualized metrics.
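For example, a hypothetical text variable named `hostText` holding a single host name could be reused across several cell queries:

```sql
SELECT mean("usage_user") AS "mean_usage_user" FROM "telegraf"."autogen"."cpu"
WHERE "host" = ':hostText:' AND time > :dashboardTime:
```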
## Reserved variable names
The following variable names are reserved and cannot be used when creating template variables.
Chronograf accepts [template variables as URL query parameters](#define-template-variables-in-the-url)
as well as many other parameters that control the display of graphs in your dashboard.
These names are either [predefined variables](#predefined-template-variables) or would
conflict with existing URL query parameters.
- `:database:`
- `:measurement:`
- `:dashboardTime:`
- `:upperDashboardTime:`
- `:interval:`
- `:upper:`
- `:lower:`
- `:zoomedUpper:`
- `:zoomedLower:`
- `:refreshRate:`
## Advanced template variable usage
### Filter template variables with other template variables
[Custom InfluxQL meta query template variables](#influxql-meta-query) let you filter the array of potential variable values using other existing template variables.
For example, let's say you want to list all the field keys associated with a measurement, but want to be able to change the measurement:
1. Create a template variable named `:measurementVar:` _(the name "measurement" is [reserved](#reserved-variable-names))_ that uses the [Measurements](#measurements) variable type to pull in all measurements from the `telegraf` database.
<img src="/img/chronograf/1-6-template-vars-measurement-var.png" style="width:100%;max-width:667px;" alt="measurementVar"/>
2. Create a template variable named `:fieldKey:` that uses the [InfluxQL meta query](#influxql-meta-query) variable type.
The following meta query pulls a list of field keys based on the existing `:measurementVar:` template variable.
```sql
SHOW FIELD KEYS ON telegraf FROM :measurementVar:
```
<img src="/img/chronograf/1-6-template-vars-fieldkey.png" style="width:100%;max-width:667px;" alt="fieldKey"/>
3. Create a new dashboard cell that uses the `fieldKey` and `measurementVar` template variables in its query.
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[InfluxQL](#)
[Flux](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```sql
SELECT :fieldKey: FROM "telegraf"..:measurementVar: WHERE time > :dashboardTime:
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```js
from(bucket: "telegraf/autogen")
|> range(start: v.timeRangeStart)
|> filter(fn: (r) =>
r._measurement == v.measurementVar and
r._field == v.fieldKey
)
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
The resulting dashboard will work like this:
![Custom meta query filtering](/img/chronograf/1-6-custom-meta-query-filtering.gif)
### Define template variables in the URL
Chronograf uses URL query parameters (also known as query string parameters) to set both display options and template variables in the URL.
This makes it easy to share links to dashboards so they load in a specific state with specific template variable values selected.
URL query parameters are appended to the end of the URL with a question mark (`?`)
indicating the beginning of query parameters.
Chain multiple query parameters together using an ampersand (`&`).
To declare a template variable or a date range as a URL query parameter, it must follow the following pattern:
#### Pattern for template variable query parameters
```bash
# Spaces for clarity only
& tempVars %5B variableName %5D = variableValue
```
`&`
Indicates the beginning of a new query parameter in a series of multiple query parameters.
`tempVars`
Informs Chronograf that the query parameter being passed is a template variable.
_**Required for all template variable query parameters.**_
`%5B`, `%5D`
URL-encoded `[` and `]` respectively that enclose the template variable name.
`variableName`
Name of the template variable.
`variableValue`
Value of the template variable.
{{% note %}}
When template variables are modified in the dashboard, the corresponding
URL query parameters are automatically updated.
{{% /note %}}
#### Example template variable query parameter
```
.../?&tempVars%5BmeasurementVar%5D=cpu
```
#### Including multiple template variables in the URL
To chain multiple template variables as URL query parameters, include the full [pattern](#pattern-for-template-variable-query-parameters) for _**each**_ template variable.
```bash
# Spaces for clarity only
.../? &tempVars%5BmeasurementVar%5D=cpu &tempVars%5BfieldKey%5D=usage_system
```
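As an illustration, such a link can be assembled in a shell script (the base URL, variable names, and values below are hypothetical):

```bash
# Percent-encode the square brackets around each variable name:
# [ becomes %5B and ] becomes %5D.
base="http://localhost:8888/sources/1/dashboards/1"
url="${base}?tempVars%5BmeasurementVar%5D=cpu&tempVars%5BfieldKey%5D=usage_system"
echo "$url"
```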
---
title: Create a live leaderboard for game scores
description: This example uses Chronograf to build a leaderboard for gamers to be able to see player scores in realtime.
menu:
chronograf_1_10:
name: Live leaderboard of game scores
weight: 20
parent: Guides
draft: true
---
**If you do not have a running Kapacitor instance, check out [Getting started with Kapacitor](/kapacitor/v1.4/introduction/getting-started/) to get Kapacitor up and running on localhost.**
Today we are game developers.
We host several game servers, each running an instance of the game code, with about a hundred players per game.
We need to build a leaderboard so that spectators can see player scores in realtime.
We would also like to have historical data on leaders in order to do postgame
analysis on who was leading for how long, etc.
We will use Kapacitor stream processing to do the heavy lifting for us.
The game servers can send a [UDP](https://en.wikipedia.org/wiki/User_Datagram_Protocol) packet whenever a player's score changes,
or every 10 seconds if the score hasn't changed.
### Setup
{{% note %}}
**Note:** Copies of the code snippets used here can be found in the [scores](https://github.com/influxdata/kapacitor/tree/master/examples/scores) example in Kapacitor project on GitHub.
{{% /note %}}
First, we need to configure Kapacitor to receive the stream of scores.
In this example, the scores update too frequently to store all of the score data in an InfluxDB database, so the score data will be sent directly to Kapacitor.
As with InfluxDB, you can configure a UDP listener.
Add the following settings to the `[[udp]]` section in your Kapacitor configuration file (`kapacitor.conf`).
```
[[udp]]
enabled = true
bind-address = ":9100"
database = "game"
retention-policy = "autogen"
```
Using this configuration, Kapacitor will listen on port `9100` for UDP packets in [Line Protocol](/{{< latest "influxdb" "v1" >}}/write_protocols/line_protocol_tutorial/) format.
Incoming data will be scoped to the `game.autogen` database and retention policy.
Restart Kapacitor so that the UDP listener service starts.
Here is a simple bash script to generate random score data so we can test it without
messing with the real game servers.
```bash
#!/bin/bash
# default options: can be overridden with corresponding arguments.
host=${1-localhost}
port=${2-9100}
games=${3-10}
players=${4-100}
games=$(seq $games)
players=$(seq $players)
# Spam score updates over UDP
while true
do
for game in $games
do
game="g$game"
for player in $players
do
player="p$player"
score=$(($RANDOM % 1000))
echo "scores,player=$player,game=$game value=$score" > /dev/udp/$host/$port
done
done
sleep 0.1
done
```
Place the above script into a file `scores.sh` and run it:
```bash
chmod +x ./scores.sh
./scores.sh
```
Now we are spamming Kapacitor with our fake score data.
We can just leave that running since Kapacitor will drop
the incoming data until it has a task that wants it.
### Defining the Kapacitor task
What does a leaderboard need to do?
1. Get the most recent score per player per game.
1. Calculate the top X player scores per game.
1. Publish the results.
1. Store the results.
To complete step one we need to buffer the incoming stream and return the most recent score update per player per game.
Our [TICKscript](/kapacitor/v1.4/tick/) will look like this:
```javascript
var topPlayerScores = stream
|from()
.measurement('scores')
// Get the most recent score for each player per game.
// Not likely that a player is playing two games but just in case.
.groupBy('game', 'player')
|window()
// keep a buffer of the last 11s of scores
// just in case a player score hasn't updated in a while
.period(11s)
// Emit the current score per player every second.
.every(1s)
// Align the window boundaries to be on the second.
.align()
|last('value')
```
Place this script in a file called `top_scores.tick`.
Now our `topPlayerScores` variable contains each player's most recent score.
Next, to calculate the top scores per game, we just need to group by game and select the top scores.
Let's keep the top 15 scores per game.
Add these lines to the `top_scores.tick` file.
```javascript
// Calculate the top 15 scores per game
var topScores = topPlayerScores
|groupBy('game')
|top(15, 'last', 'player')
```
The `topScores` variable now contains the top 15 player scores per game.
That's all we need to build our leaderboard.
Kapacitor can expose the scores over HTTP via the [HTTPOutNode](/kapacitor/v1.4/nodes/http_out_node/).
We will call our task `top_scores`; with the following addition the most recent scores will be available at
`http://localhost:9092/kapacitor/v1/tasks/top_scores/top_scores`.
```javascript
// Expose top scores over the HTTP API at the 'top_scores' endpoint.
// Now your app can just request the top scores from Kapacitor
// and always get the most recent result.
//
// http://localhost:9092/kapacitor/v1/tasks/top_scores/top_scores
topScores
|httpOut('top_scores')
```
Finally, we want to store the top scores over time so we can do in-depth analysis to ensure the best game play.
But we do not want to store the scores every second as that is still too much data.
First we will sample the data and store scores only every 10 seconds.
Also let's do some basic analysis ahead of time since we already have a stream of all the data.
For now we will just do basic gap analysis, storing the gap between the top player and the 15th player.
Add these lines to `top_scores.tick` to complete our task.
```javascript
// Sample the top scores and keep a score once every 10s
var topScoresSampled = topScores
|sample(10s)
// Store top fifteen player scores in InfluxDB.
topScoresSampled
|influxDBOut()
.database('game')
.measurement('top_scores')
// Calculate the max and min of the top scores.
var max = topScoresSampled
|max('top')
var min = topScoresSampled
|min('top')
// Join the max and min streams back together and calculate the gap.
max
|join(min)
.as('max', 'min')
// Calculate the difference between the max and min scores.
// Rename the max and min fields to more friendly names 'topFirst', 'topLast'.
|eval(lambda: "max.max" - "min.min", lambda: "max.max", lambda: "min.min")
.as('gap', 'topFirst', 'topLast')
// Store the fields: gap, topFirst and topLast in InfluxDB.
|influxDBOut()
.database('game')
.measurement('top_scores_gap')
```
Since we are writing data back to InfluxDB, create a `game` database for our results.
{{< keep-url >}}
```
curl -G 'http://localhost:8086/query?' --data-urlencode 'q=CREATE DATABASE game'
```
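To confirm the database was created, you can list the databases on your InfluxDB instance (assuming it is running locally on the default port):

{{< keep-url >}}
```shell
# List databases to confirm that "game" now exists.
curl -G 'http://localhost:8086/query' --data-urlencode 'q=SHOW DATABASES'
```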
Here is the complete task TICKscript if you don't want to copy and paste as much :)
```javascript
dbrp "game"."autogen"
// Define a result that contains the most recent score per player.
var topPlayerScores = stream
|from()
.measurement('scores')
// Get the most recent score for each player per game.
// Not likely that a player is playing two games but just in case.
.groupBy('game', 'player')
|window()
// keep a buffer of the last 11s of scores
// just in case a player score hasn't updated in a while
.period(11s)
// Emit the current score per player every second.
.every(1s)
// Align the window boundaries to be on the second.
.align()
|last('value')
// Calculate the top 15 scores per game
var topScores = topPlayerScores
|groupBy('game')
|top(15, 'last', 'player')
// Expose top scores over the HTTP API at the 'top_scores' endpoint.
// Now your app can just request the top scores from Kapacitor
// and always get the most recent result.
//
// http://localhost:9092/kapacitor/v1/tasks/top_scores/top_scores
topScores
|httpOut('top_scores')
// Sample the top scores and keep a score once every 10s
var topScoresSampled = topScores
|sample(10s)
// Store top fifteen player scores in InfluxDB.
topScoresSampled
|influxDBOut()
.database('game')
.measurement('top_scores')
// Calculate the max and min of the top scores.
var max = topScoresSampled
|max('top')
var min = topScoresSampled
|min('top')
// Join the max and min streams back together and calculate the gap.
max
|join(min)
.as('max', 'min')
// calculate the difference between the max and min scores.
|eval(lambda: "max.max" - "min.min", lambda: "max.max", lambda: "min.min")
.as('gap', 'topFirst', 'topLast')
// store the fields: gap, topFirst, and topLast in InfluxDB.
|influxDBOut()
.database('game')
.measurement('top_scores_gap')
```
Define and enable our task to see it in action:
```bash
kapacitor define top_scores -tick top_scores.tick
kapacitor enable top_scores
```
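You can also inspect the task's status, its defined TICKscript, and per-node statistics with `kapacitor show` (the exact output depends on your setup):

```shell
# Show the task definition, status, and node statistics.
kapacitor show top_scores
```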
First, let's check that the HTTP output is working.
```bash
curl 'http://localhost:9092/kapacitor/v1/tasks/top_scores/top_scores'
```
You should see a JSON result containing the top 15 players and their scores per game.
Hit the endpoint several times to confirm that the scores update once a second.
Now, let's check InfluxDB to see our historical data.
{{< keep-url >}}
```bash
curl \
-G 'http://localhost:8086/query?db=game' \
--data-urlencode 'q=SELECT * FROM top_scores WHERE time > now() - 5m GROUP BY game'
curl \
-G 'http://localhost:8086/query?db=game' \
--data-urlencode 'q=SELECT * FROM top_scores_gap WHERE time > now() - 5m GROUP BY game'
```
Great!
The hard work is done.
All that remains is configuring the game server to send score updates to Kapacitor and update the spectator dashboard to pull scores from Kapacitor.
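For example, the game server could post score updates to Kapacitor's InfluxDB-compatible write endpoint using line protocol. This is a hypothetical update; the game, player, and score values are placeholders:

{{< keep-url >}}
```shell
# Write one score update to Kapacitor's write endpoint.
# The task's dbrp is "game"."autogen", so target that database and retention policy.
curl -XPOST 'http://localhost:9092/kapacitor/v1/write?db=game&rp=autogen' \
  --data-binary 'scores,game=g1,player=mancini value=3571'
```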

---
title: Monitor InfluxDB Enterprise clusters
description: Use Chronograf dashboards with an InfluxDB OSS server to measure and monitor InfluxDB Enterprise clusters.
aliases:
- /chronograf/v1.10/guides/monitor-an-influxenterprise-cluster/
menu:
chronograf_1_10:
weight: 80
parent: Guides
---
[InfluxDB Enterprise](/{{< latest "enterprise_influxdb" >}}/) offers high availability and a highly scalable clustering solution for your time series data needs.
Use Chronograf to assess your cluster's health and to monitor the infrastructure behind your project.
This guide offers step-by-step instructions for using Chronograf, [InfluxDB](/{{< latest "influxdb" "v1" >}}/), and [Telegraf](/{{< latest "telegraf" >}}/) to monitor data nodes in your InfluxDB Enterprise cluster.
## Requirements
You have a fully-functioning InfluxDB Enterprise cluster with authentication enabled.
See the InfluxDB Enterprise documentation for
[detailed setup instructions](/{{< latest "enterprise_influxdb" >}}/production_installation/).
This guide uses an InfluxDB Enterprise cluster with three meta nodes and three data nodes; the steps are also applicable to other cluster configurations.
InfluxData recommends using a separate server to store your monitoring data.
It is possible to store the monitoring data in your cluster and [connect the cluster to Chronograf](/chronograf/v1.10/troubleshooting/frequently-asked-questions/#how-do-i-connect-chronograf-to-an-influxenterprise-cluster), but, in general, your monitoring data should live on a separate server.
You're working on an Ubuntu installation.
Chronograf and the other components of the TICK stack are supported on several operating systems and hardware architectures. Check out the [downloads page](https://portal.influxdata.com/downloads) for links to the binaries of your choice.
## Architecture overview
Before we begin, here's an overview of the final monitoring setup:
![Architecture diagram](/img/chronograf/1-6-cluster-diagram.png)
The diagram above shows an InfluxDB Enterprise cluster that consists of three meta nodes (M) and three data nodes (D).
Each data node has its own [Telegraf](/{{< latest "telegraf" >}}/) instance (T).
Each Telegraf instance is configured to collect node CPU, disk, and memory data using the Telegraf [system stats](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/system) input plugin.
The Telegraf instances are also configured to send those data to a single [InfluxDB OSS](/{{< latest "influxdb" "v1" >}}/) instance that lives on a separate server.
When Telegraf sends data to InfluxDB, it automatically [tags](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#tag) the data with the hostname of the relevant data node.
The InfluxDB OSS instance that stores the Telegraf data is connected to Chronograf.
Chronograf uses the hostnames in the Telegraf data to populate the Host List page and provide other hostname-specific information in the user interface.
## Setup description
### InfluxDB OSS setup
#### Step 1: Download and install InfluxDB
InfluxDB can be downloaded from the [InfluxData downloads page](https://portal.influxdata.com/downloads).
#### Step 2: Enable authentication
For security purposes, enable authentication in the InfluxDB [configuration file (influxdb.conf)](/{{< latest "influxdb" "v1" >}}/administration/config/), which is located in `/etc/influxdb/influxdb.conf`.
In the `[http]` section of the configuration file, uncomment the `auth-enabled` option and set it to `true`:
```
[http]
# Determines whether HTTP endpoint is enabled.
# enabled = true
# The bind address used by the HTTP service.
# bind-address = ":8086"
# Determines whether HTTP authentication is enabled.
auth-enabled = true #💥
```
#### Step 3: Start InfluxDB
Next, start the InfluxDB process:
```
~# sudo systemctl start influxdb
```
#### Step 4: Create an admin user
Create an [admin user](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#user-types-and-privileges) on your InfluxDB instance.
Because you enabled authentication, you must perform this step before moving on to the next section.
Run the command below to create an admin user, replacing `chronothan` and `supersecret` with your own username and password.
Note that the password requires single quotes.
{{< keep-url >}}
```
~# curl -XPOST "http://localhost:8086/query" --data-urlencode "q=CREATE USER chronothan WITH PASSWORD 'supersecret' WITH ALL PRIVILEGES"
```
A successful `CREATE USER` query returns a blank result:
```
{"results":[{"statement_id":0}]} <--- Success!
```
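To confirm that authentication and the new admin user are working, run an authenticated query. Replace `chronothan` and `supersecret` with the credentials you chose:

{{< keep-url >}}
```shell
# List users on the instance; requires the credentials created above.
curl -G "http://localhost:8086/query?u=chronothan&p=supersecret" \
  --data-urlencode "q=SHOW USERS"
```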
### Telegraf setup
Perform the following steps on each data node in your cluster.
You'll return to your InfluxDB instance at the end of this section.
#### Step 1: Download and install Telegraf
Telegraf can be downloaded from the [InfluxData downloads page](https://portal.influxdata.com/downloads).
#### Step 2: Configure Telegraf
Configure Telegraf to write monitoring data to your InfluxDB OSS instance.
The Telegraf configuration file is located in `/etc/telegraf/telegraf.conf`.
First, in the `[[outputs.influxdb]]` section, set the `urls` option to the IP address and port of your InfluxDB OSS instance.
InfluxDB runs on port `8086` by default.
This step ensures that Telegraf writes data to your InfluxDB OSS instance.
```
[[outputs.influxdb]]
## The full HTTP or UDP endpoint URL for your InfluxDB instance.
## Multiple urls can be specified as part of the same cluster,
## this means that only ONE of the urls will be written to each interval.
# urls = ["udp://localhost:8089"] # UDP endpoint example
urls = ["http://xxx.xx.xxx.xxx:8086"] #💥
```
Next, in the same `[[outputs.influxdb]]` section, uncomment and set the `username` and `password` options to the username and password that you created in the [previous section](#step-4-create-an-admin-user).
Telegraf must know your username and password to successfully write data to your InfluxDB OSS instance.
```
[[outputs.influxdb]]
## The full HTTP or UDP endpoint URL for your InfluxDB instance.
## Multiple urls can be specified as part of the same cluster,
## this means that only ONE of the urls will be written to each interval.
# urls = ["udp://localhost:8089"] # UDP endpoint example
urls = ["http://xxx.xx.xxx.xxx:8086"] # required
[...]
## Write timeout (for the InfluxDB client), formatted as a string.
## If not provided, will default to 5s. 0s means no timeout (not recommended).
timeout = "5s"
username = "chronothan" #💥
password = "supersecret" #💥
```
The [Telegraf System input plugin](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/system) is enabled by default and requires no additional configuration.
The input plugin automatically collects general statistics on system load, uptime, and the number of users logged in.
Enabled input plugins are configured in the `INPUT PLUGINS` section of the configuration file; for example, here's the section that controls the CPU data collection:
```
###############################################################################
# INPUT PLUGINS #
###############################################################################
# Read metrics about cpu usage
[[inputs.cpu]]
## Whether to report per-cpu stats or not
percpu = true
## Whether to report total system cpu stats or not
totalcpu = true
## If true, collect raw CPU time metrics.
collect_cpu_time = false
```
#### Step 3: Restart the Telegraf service
Restart the Telegraf service so that your configuration changes take effect:
**macOS**
```sh
telegraf --config telegraf.conf
```
**Linux (sysvinit and upstart installations)**
```sh
sudo service telegraf restart
```
**Linux (systemd installations)**
```sh
systemctl restart telegraf
```
Repeat steps one through three for each data node in your cluster.
#### Step 4: Confirm the Telegraf setup
To verify Telegraf is successfully collecting and writing data, use one of the following methods to query your InfluxDB OSS instance:
**InfluxDB CLI (`influx`)**
```sh
$ influx
> SHOW TAG VALUES FROM cpu WITH KEY=host
```
**`curl`**
Replace the `chronothan` and `supersecret` values with your actual username and password.
{{< keep-url >}}
```
~# curl -G "http://localhost:8086/query?db=telegraf&u=chronothan&p=supersecret&pretty=true" --data-urlencode "q=SHOW TAG VALUES FROM cpu WITH KEY=host"
```
The expected output is similar to the JSON code block below.
In this case, the `telegraf` database has three different [tag values](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#tag-value) for the `host` [tag key](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#tag-key): `data-node-01`, `data-node-02`, and `data-node-03`.
Those values match the hostnames of the three data nodes in the cluster; this means Telegraf is successfully writing monitoring data from those hosts to the InfluxDB OSS instance!
```
{
"results": [
{
"statement_id": 0,
"series": [
{
"name": "cpu",
"columns": [
"key",
"value"
],
"values": [
[
"host",
"data-node-01"
],
[
"host",
"data-node-02"
],
[
"host",
"data-node-03"
]
]
}
]
}
]
}
```
### Chronograf Setup
#### Step 1: Download and install Chronograf
Download and install Chronograf on the same server as the InfluxDB instance.
This is not a requirement; you may host Chronograf on a separate server.
Chronograf can be downloaded from the [InfluxData downloads page](https://portal.influxdata.com/downloads).
#### Step 2: Start Chronograf
```
~# sudo systemctl start chronograf
```
### Step 3: Connect Chronograf to the InfluxDB OSS instance
To access Chronograf, go to http://localhost:8888.
The welcome page includes instructions for connecting Chronograf to your InfluxDB OSS instance.
![Connect Chronograf to InfluxDB](/img/chronograf/1-6-cluster-welcome.png)
For the `Connection String`, enter the hostname or IP of your InfluxDB OSS instance, and be sure to include the default port: `8086`.
Next, name your data source; this can be anything you want.
Finally, enter your username and password and click `Add Source`.
### Step 4: Explore the monitoring data in Chronograf
Chronograf works with the Telegraf data in your InfluxDB OSS instance.
The `Host List` page shows your data node's hostnames, their statuses, CPU usage, load, and their configured applications.
In this case, you've only enabled the system stats input plugin so `system` is the single application that appears in the `Apps` column.
![Host List page](/img/chronograf/1-6-cluster-hostlist.png)
Click `system` to see the Chronograf canned dashboard for that application.
Keep an eye on your data nodes by viewing that dashboard for each hostname:
![Pre-created dashboard](/img/chronograf/1-6-cluster-predash.gif)
Next, check out the Data Explorer to create a customized graph with the monitoring data.
In the image below, the Chronograf query editor is used to visualize the idle CPU usage data for each data node:
![Data Explorer](/img/chronograf/1-6-cluster-de.png)
Create more customized graphs and save them to a dashboard on the Dashboard page in Chronograf.
See the [Creating Chronograf dashboards](/chronograf/v1.10/guides/create-a-dashboard/) guide for more information.
That's it! You've successfully configured Telegraf to collect and write data, InfluxDB to store those data, and Chronograf to use those data for monitoring and visualization purposes.

---
title: View Chronograf dashboards in presentation mode
description: View dashboards in full screen using presentation mode.
menu:
chronograf_1_10:
name: View dashboards in presentation mode
weight: 130
parent: Guides
---
Presentation mode allows you to view Chronograf in full screen, hiding the left and top navigation menus so only the cells appear. This mode might be helpful, for example, for stationary screens dedicated to monitoring visualizations.
## Enter presentation mode manually
To enter presentation mode manually, click the icon in the upper right:
<img src="/img/chronograf/1-6-presentation-mode.png" style="width:100%; max-width:500px"/>
To exit presentation mode, press `ESC`.
## Use the URL query parameter
To load the dashboard in presentation mode, add the URL query parameter `present=true` to your dashboard URL. For example, your URL might look like this:
`http://example.com:8888/sources/1/dashboards/2?present=true`
Note that if you use this option, you won't be able to exit presentation mode using `ESC`.

---
title: Explore data in Chronograf
description: Query and visualize data in the Data Explorer.
menu:
chronograf_1_10:
name: Explore data in Chronograf
weight: 130
parent: Guides
---
Explore and visualize your data in the **Data Explorer**. For both InfluxQL and Flux, Chronograf allows you to move seamlessly between using the builder or templates and manually editing the query; when possible, the interface automatically populates the builder with the information from your raw query. Choose between [visualization types](/chronograf/v1.10/guides/visualization-types/) for your query.
To open the **Data Explorer**, click the **Explore** icon in the navigation bar:
<img src="/img/chronograf/1-7-data-explorer-icon.png" style="width:100%; max-width:400px; margin:2em 0; display: block;">
## Select local time or UTC (Coordinated Universal Time)
- In the upper-right corner of the page, select the time to view metrics and events by clicking one of the following:
- **UTC** for Coordinated Universal Time
- **Local** for the local time reported by your browser
{{% note %}}
**Note:** If your organization spans multiple time zones, we recommend using UTC (Coordinated Universal Time) to ensure that everyone sees metrics and events for the same time.
{{% /note %}}
## Explore data with InfluxQL
InfluxQL is a SQL-like query language you can use to interact with data in InfluxDB. For detailed tutorials and reference material, see our [InfluxQL documentation](/{{< latest "influxdb" "v1" >}}/query_language/).
{{% note %}}
#### Limited InfluxQL support in InfluxDB Cloud and OSS 2.x
Chronograf interacts with **InfluxDB Cloud** and **InfluxDB OSS 2.x** through the
[v1 compatibility API](/influxdb/cloud/reference/api/influxdb-1x/).
The v1 compatibility API provides limited InfluxQL support.
For more information, see [InfluxQL support](/influxdb/cloud/query-data/influxql/#influxql-support).
{{% /note %}}
1. Open the Data Explorer and click **Add a Query**.
2. To the right of the source dropdown above the graph placeholder, select **InfluxQL** as the source type.
3. Use the builder to select from your existing data and allow Chronograf to format the query for you. Alternatively, manually enter and edit a query.
4. You can also select from the dropdown list of **Metaquery Templates** that manage databases, retention policies, users, and more.
_See [Metaquery templates](#metaquery-templates)._
5. To display the templated values in the query, click **Show template values**.
6. Click **Submit Query**.
## Explore data with Flux
Flux is InfluxData's new functional data scripting language designed for querying, analyzing, and acting on time series data. The **Script Builder** lets you build a complete Flux script scoped to a selected time range. View new tag keys and tag values based on already selected tag keys and tag values. Search for key names and values. To learn more about Flux, see [Getting started with Flux](/{{< latest "influxdb" "v2" >}}/query-data/get-started).
1. Open the Data Explorer by clicking **Explore** in the left navigation bar.
2. Select **Flux** as the source type.
3. Click **Script Builder**.
4. The **Schema**, **Script**, and **Flux Functions** panes appear.
- Use the **Schema** pane to explore your available data. Click the **{{< icon "plus" >}}** sign next to a bucket name to expand its content.
- Use the **Script** pane to enter and view your Flux script.
- Use the **Flux Functions** pane to view details about the available Flux functions.
5. To get started building a new script, click **Script Builder**. Using the Flux script builder, you can select a bucket, measurements and tags, fields and an aggregate function. Click **{{< icon "plus" >}} Load More** to expand any truncated lists. You can also choose a variety of time ranges on your schema data.
6. When you are finished creating your script, click **Submit**.
7. Click **Script Editor** to view and edit your query.
8. If you make changes to the script using the **Script Builder**, you will receive a message when clicking **Submit** warning you
that submitting changes will override the script in the Flux editor, and that the script cannot be recovered.
## Visualize your query
Select the **Visualization** tab at the top of the **Data Explorer**. For details about all of the available visualization options, see [Visualization types in Chronograf](/chronograf/v1.10/guides/visualization-types/).
## Add queries to dashboards
To add your query and graph to a dashboard:
1. Click **Send to Dashboard** in the upper right.
2. In the **Target Dashboard(s)** dropdown, select at least one existing dashboard to send the cell to, or select **Send to a New Dashboard**.
3. Enter a name for the new cell and, if you created a new dashboard, the new dashboard.
4. If using an **InfluxQL** data source and you have multiple queries in the Data Explorer,
select which queries to send:
- **Active Query**: Send the currently viewed query.
- **All Queries**: Send all queries.
5. Click **Send to Dashboard(s)**.
## Metaquery templates
Metaquery templates provide templated InfluxQL queries that manage databases, retention policies, users, and more.
Choose from the following options in the **Metaquery Templates** dropdown list:
###### Manage databases
- [Show Databases](/{{< latest "influxdb" "v1" >}}/query_language/explore-schema/#show-databases)
- [Create Database](/{{< latest "influxdb" "v1" >}}/query_language/manage-database/#create-database)
- [Drop Database](/{{< latest "influxdb" "v1" >}}/query_language/manage-database/#delete-a-database-with-drop-database)
###### Measurements, Tags, and Fields
- [Show Measurements](/{{< latest "influxdb" "v1" >}}/query_language/explore-schema/#show-measurements)
- [Show Tag Keys](/{{< latest "influxdb" "v1" >}}/query_language/explore-schema/#show-tag-keys)
- [Show Tag Values](/{{< latest "influxdb" "v1" >}}/query_language/explore-schema/#show-tag-values)
- [Show Field Keys](/{{< latest "influxdb" "v1" >}}/query_language/explore-schema/#show-field-keys)
###### Cardinality
- [Show Field Key Cardinality](/{{< latest "influxdb" "v1" >}}/query_language/spec/#show-field-key-cardinality)
- [Show Measurement Cardinality](/{{< latest "influxdb" "v1" >}}/query_language/spec/#show-measurement-cardinality)
- [Show Series Cardinality](/{{< latest "influxdb" "v1" >}}/query_language/spec/#show-series-cardinality)
- [Show Tag Key Cardinality](/{{< latest "influxdb" "v1" >}}/query_language/spec/#show-tag-key-cardinality)
- [Show Tag Values Cardinality](/{{< latest "influxdb" "v1" >}}/query_language/spec/#show-tag-values-cardinality)
###### Manage retention policies
- [Show Retention Policies](/{{< latest "influxdb" "v1" >}}/query_language/explore-schema/#show-retention-policies)
- [Create Retention Policy](/{{< latest "influxdb" "v1" >}}/query_language/manage-database/#create-retention-policies-with-create-retention-policy)
- [Drop Retention Policy](/{{< latest "influxdb" "v1" >}}/query_language/manage-database/#delete-retention-policies-with-drop-retention-policy)
###### Manage continuous queries
- [Show Continuous Queries](/{{< latest "influxdb" "v1" >}}/query_language/continuous_queries/#listing-continuous-queries)
- [Create Continuous Query](/{{< latest "influxdb" "v1" >}}/query_language/continuous_queries/#syntax)
- [Drop Continuous Query](/{{< latest "influxdb" "v1" >}}/query_language/continuous_queries/#deleting-continuous-queries)
###### Manage users and permissions
- [Show Users](/{{< latest "influxdb" "v1" >}}/query_language/spec/#show-users)
- [Show Grants](/{{< latest "influxdb" "v1" >}}/query_language/spec/#show-grants)
- [Create User](/{{< latest "influxdb" "v1" >}}/query_language/spec/#create-user)
- [Create Admin User](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#admin-user-management)
- [Drop User](/{{< latest "influxdb" "v1" >}}/query_language/spec/#drop-user)
###### Delete data
- [Drop Measurement](/{{< latest "influxdb" "v1" >}}/query_language/manage-database/#delete-measurements-with-drop-measurement)
- [Drop Series](/{{< latest "influxdb" "v1" >}}/query_language/manage-database/#drop-series-from-the-index-with-drop-series)
- [Delete](/{{< latest "influxdb" "v1" >}}/query_language/manage-database/#delete-series-with-delete)
###### Analyze queries
- [Explain](/{{< latest "influxdb" "v1" >}}/query_language/spec/#explain)
- [Explain Analyze](/{{< latest "influxdb" "v1" >}}/query_language/spec/#explain-analyze)
###### Inspect InfluxDB internal metrics
- [Show Stats](/{{< latest "influxdb" "v1" >}}/query_language/spec/#show-stats)
- [Show Diagnostics](/{{< latest "influxdb" "v1" >}}/query_language/spec/#show-diagnostics)
- [Show Subscriptions](/{{< latest "influxdb" "v1" >}}/query_language/spec/#show-subscriptions)
- [Show Queries](/{{< latest "influxdb" "v1" >}}/troubleshooting/query_management/#list-currently-running-queries-with-show-queries)
- [Show Shards](/{{< latest "influxdb" "v1" >}}/query_language/spec/#show-shards)
- [Show Shard Groups](/{{< latest "influxdb" "v1" >}}/query_language/spec/#show-shard-groups)

---
title: Edit TICKscripts in Chronograf
description: View and edit TICKscript logs in Chronograf.
menu:
chronograf_1_10:
weight: 20
parent: Guides
draft: true
---
TICKscript logs data to a log file for debugging purposes.
Notes:
* TICKscript logs data to a log file for debugging purposes. We have a bunch of hosts which post data to an external endpoint. The payload is logged before being sent.
* A feature to show the list of hosts, and an ability to see the logs for each of them.

---
title: Use pre-created dashboards in Chronograf
description: >
Display metrics for popular third-party applications with preconfigured dashboards in Chronograf.
menu:
chronograf_1_10:
name: Use pre-created dashboards
weight: 10
parent: Guides
---
## Overview
Pre-created dashboards are delivered with Chronograf depending on which Telegraf input plugins you have enabled and are available from the Host List page. These dashboards, which are built in and not editable, include cells with data visualizations for metrics that are relevant to data sources you are likely to be using.
{{% note %}}
Note that these pre-created dashboards cannot be cloned or customized. They appear only as part of the Host List view and are associated with metrics gathered from a single host. Dashboard templates are also available; they deliver a solid starting point for customizing your own unique dashboards based on the enabled Telegraf plugins and operate across one or more hosts. For details, see [Dashboard templates](/chronograf/v1.10/guides/create-a-dashboard/#dashboard-templates).
{{% /note %}}
## Requirements
The pre-created dashboards automatically appear in the Host List page to the right of hosts based on which Telegraf input plugins you have enabled. Check the list below for applications that you are interested in using and make sure that you have the required Telegraf input plugins enabled.
## Use pre-created dashboards
Pre-created dashboards are delivered in Chronograf installations and are ready to be used when you have the required Telegraf input plugins enabled.
**To view a pre-created dashboard:**
1. Open Chronograf in your web browser and click **Host List** in the navigation bar.
2. Select an application listed under **Apps**. By default, the system `app` should be listed next to a host listing. Other apps appear depending on the Telegraf input plugins that you have enabled.
The selected application appears showing pre-created cells, based on available measurements.
## Create or edit dashboards
Find a list of apps (pre-created dashboards) available to use with Chronograf below. For each app, you'll find:
- Required Telegraf input plugins for the app
- JSON files included in the app
- Cell titles included in each JSON file
The JSON files for apps are included in the `/usr/share/chronograf/canned` directory. Find information about the configuration option `--canned-path` on the [Chronograf configuration options](/chronograf/v1.10/administration/config-options/) page.
Enable and disable apps in your Telegraf configuration file (by default, `/etc/telegraf/telegraf.conf`). See [Configuring Telegraf](/telegraf/v1.13/administration/configuration/) for details.
## Apps (pre-created dashboards):
* [apache](#apache)
* [consul](#consul)
* [docker](#docker)
* [elasticsearch](#elasticsearch)
* [haproxy](#haproxy)
* [iis](#iis)
* [influxdb](#influxdb)
* [kubernetes](#kubernetes)
* [memcached](#memcached-memcached)
* [mesos](#mesos)
* [mongodb](#mongodb)
* [mysql](#mysql)
* [nginx](#nginx)
* [nsq](#nsq)
* [phpfpm](#phpfpm)
* [ping](#ping)
* [postgresql](#postgresql)
* [rabbitmq](#rabbitmq)
* [redis](#redis)
* [riak](#riak)
* [system](#system)
* [varnish](#varnish)
* [win_system](#win-system)
## apache
**Required Telegraf plugin:** [Apache input plugin](/{{< latest "telegraf" >}}/plugins/#input-apache)
`apache.json`
* "Apache Bytes/Second"
* "Apache - Requests/Second"
* "Apache - Total Accesses"
## consul
**Required Telegraf plugin:** [Consul input plugin](/{{< latest "telegraf" >}}/plugins/#input-consul)
`consul_http.json`
* "Consul - HTTP Request Time (ms)"
`consul_election.json`
* "Consul - Leadership Election"
`consul_cluster.json`
* "Consul - Number of Agents"
`consul_serf_events.json`
* "Consul - Number of serf events"
## docker
**Required Telegraf plugin:** [Docker input plugin](/{{< latest "telegraf" >}}/plugins/#input-docker)
`docker.json`
* "Docker - Container CPU %"
* "Docker - Container Memory (MB)"
* "Docker - Containers"
* "Docker - Images"
* "Docker - Container State"
`docker_blkio.json`
* "Docker - Container Block IO"
`docker_net.json`
* "Docker - Container Network"
## elasticsearch
**Required Telegraf plugin:** [Elasticsearch input plugin](/{{< latest "telegraf" >}}/plugins/#input-elasticsearch)
`elasticsearch.json`
* "ElasticSearch - Query Throughput"
* "ElasticSearch - Open Connections"
* "ElasticSearch - Query Latency"
* "ElasticSearch - Fetch Latency"
* "ElasticSearch - Suggest Latency"
* "ElasticSearch - Scroll Latency"
* "ElasticSearch - Indexing Latency"
* "ElasticSearch - JVM GC Collection Counts"
* "ElasticSearch - JVM GC Latency"
* "ElasticSearch - JVM Heap Usage"
## haproxy
**Required Telegraf plugin:** [HAProxy input plugin](/{{< latest "telegraf" >}}/plugins/#input-haproxy)
`haproxy.json`
* "HAProxy - Number of Servers"
* "HAProxy - Sum HTTP 2xx"
* "HAProxy - Sum HTTP 4xx"
* "HAProxy - Sum HTTP 5xx"
* "HAProxy - Frontend HTTP Requests/Second"
* "HAProxy - Frontend Sessions/Second"
* "HAProxy - Frontend Session Usage %"
* "HAProxy - Frontend Security Denials/Second"
* "HAProxy - Frontend Request Errors/Second"
* "HAProxy - Frontend Bytes/Second"
* "HAProxy - Backend Average Response Time (ms)"
* "HAProxy - Backend Connection Errors/Second"
* "HAProxy - Backend Queued Requests/Second"
* "HAProxy - Backend Average Request Queue Time (ms)"
* "HAProxy - Backend Error Responses/Second"
## iis
**Required Telegraf plugin:** [Windows Performance Counters input plugin](/{{< latest "telegraf" >}}/plugins/#input-windows-performance-counters)
`win_websvc.json`
* "IIS - Service"
## influxdb
**Required Telegraf plugin:** [InfluxDB input plugin](/{{< latest "telegraf" >}}/plugins/#input-influxdb)
`influxdb_database.json`
* "InfluxDB - Cardinality"
`influxdb_httpd.json`
* "InfluxDB - Write HTTP Requests"
* "InfluxDB - Query Requests"
* "InfluxDB - Client Failures"
`influxdb_queryExecutor.json`
* "InfluxDB - Query Performance"
`influxdb_write.json`
* "InfluxDB - Write Points"
* "InfluxDB - Write Errors"
## kubernetes
`kubernetes_node.json`
* "K8s - Node Millicores"
* "K8s - Node Memory Bytes"
`kubernetes_pod_container.json`
* "K8s - Pod Millicores"
* "K8s - Pod Memory Bytes"
`kubernetes_pod_network.json`
* "K8s - Pod TX Bytes/Second"
* "K8s - Pod RX Bytes/Second"
`kubernetes_system_container.json`
* "K8s - Kubelet Millicores"
* "K8s - Kubelet Memory Bytes"
## memcached
**Required Telegraf plugin:** [Memcached input plugin](/{{< latest "telegraf" >}}/plugins/#input-memcached)
`memcached.json`
* "Memcached - Current Connections"
* "Memcached - Get Hits/Second"
* "Memcached - Get Misses/Second"
* "Memcached - Delete Hits/Second"
* "Memcached - Delete Misses/Second"
* "Memcached - Incr Hits/Second"
* "Memcached - Incr Misses/Second"
* "Memcached - Current Items"
* "Memcached - Total Items"
* "Memcached - Bytes Stored"
* "Memcached - Bytes Written/Sec"
* "Memcached - Evictions/10 Seconds"
## mesos
**Required Telegraf plugin:** [Mesos input plugin](/{{< latest "telegraf" >}}/plugins/#input-mesos)
`mesos.json`
* "Mesos Active Slaves"
* "Mesos Tasks Active"
* "Mesos Tasks"
* "Mesos Outstanding offers"
* "Mesos Available/Used CPUs"
* "Mesos Available/Used Memory"
* "Mesos Master Uptime"
## mongodb
**Required Telegraf plugin:** [MongoDB input plugin](/{{< latest "telegraf" >}}/plugins/#input-mongodb)
`mongodb.json`
* "MongoDB - Read/Second"
* "MongoDB - Writes/Second"
* "MongoDB - Active Connections"
* "MongoDB - Reads/Writes Waiting in Queue"
* "MongoDB - Network Bytes/Second"
## mysql
**Required Telegraf plugin:** [MySQL input plugin](/{{< latest "telegraf" >}}/plugins/#input-mysql)
`mysql.json`
* "MySQL - Reads/Second"
* "MySQL - Writes/Second"
* "MySQL - Connections/Second"
* "MySQL - Connection Errors/Second"
## nginx
**Required Telegraf plugin:** [NGINX input plugin](/{{< latest "telegraf" >}}/plugins/#input-nginx)
`nginx.json`
* "NGINX - Client Connections"
* "NGINX - Client Errors"
* "NGINX - Client Requests"
* "NGINX - Active Client State"
## nsq
**Required Telegraf plugin:** [NSQ input plugin](/{{< latest "telegraf" >}}/plugins/#input-nsq)
`nsq_channel.json`
* "NSQ - Channel Client Count"
* "NSQ - Channel Messages Count"
`nsq_server.json`
* "NSQ - Topic Count"
* "NSQ - Server Count"
`nsq_topic.json`
* "NSQ - Topic Messages"
* "NSQ - Topic Messages on Disk"
* "NSQ - Topic Ingress"
* "NSQ topic egress"
## phpfpm
**Required Telegraf plugin:** [PHPfpm input plugin](/{{< latest "telegraf" >}}/plugins/#input-php-fpm)
`phpfpm.json`
* "phpfpm - Accepted Connections"
* "phpfpm - Processes"
* "phpfpm - Slow Requests"
* "phpfpm - Max Children Reached"
## ping
**Required Telegraf plugin:** [Ping input plugin](/{{< latest "telegraf" >}}/plugins/#input-ping)
`ping.json`
* "Ping - Packet Loss Percent"
* "Ping - Response Times (ms)"
## postgresql
**Required Telegraf plugin:** [PostgreSQL input plugin](/{{< latest "telegraf" >}}/plugins/#input-postgresql)
`postgresql.json`
* "PostgreSQL - Rows"
* "PostgreSQL - QPS"
* "PostgreSQL - Buffers"
* "PostgreSQL - Conflicts/Deadlocks"
## rabbitmq
**Required Telegraf plugin:** [RabbitMQ input plugin](/{{< latest "telegraf" >}}/plugins/#input-rabbitmq)
`rabbitmq.json`
* "RabbitMQ - Overview"
* "RabbitMQ - Published/Delivered per second"
* "RabbitMQ - Acked/Unacked per second"
## redis
**Required Telegraf plugin:** [Redis input plugin](/{{< latest "telegraf" >}}/plugins/#input-redis)
`redis.json`
* "Redis - Connected Clients"
* "Redis - Blocked Clients"
* "Redis - CPU"
* "Redis - Memory"
## riak
**Required Telegraf plugin:** [Riak input plugin](/{{< latest "telegraf" >}}/plugins/#input-riak)
`riak.json`
* "Riak - Total Memory Bytes"
* "Riak - Object Byte Size"
* "Riak - Number of Siblings/Minute"
* "Riak - Latency (ms)"
* "Riak - Reads and Writes/Minute"
* "Riak - Active Connections"
* "Riak - Read Repairs/Minute"
## system
The `system` application includes metrics that require all of the plugins listed below. If any of these plugins isn't enabled, the metrics associated with that plugin won't display data.
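As a reference, the input plugins required by the `system` application can be enabled in `telegraf.conf` with stanzas like the following. This is a minimal sketch, not a complete configuration; adjust options such as the `procstat` pattern (shown here watching a hypothetical `influxd` process) for your environment:

```toml
[[inputs.cpu]]
  percpu = true
  totalcpu = true

[[inputs.disk]]
[[inputs.diskio]]
[[inputs.mem]]
[[inputs.net]]
[[inputs.netstat]]
[[inputs.processes]]
[[inputs.system]]

[[inputs.procstat]]
  # procstat needs a process to watch; this pattern is an example
  pattern = "influxd"
```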
### cpu
**Required Telegraf plugin:** [CPU input plugin](/{{< latest "telegraf" >}}/plugins/#input-cpu)
`cpu.json`
* "CPU Usage"
### disk
`disk.json`
**Required Telegraf plugin:** [Disk input plugin](/{{< latest "telegraf" >}}/plugins/#input-disk)
* "System - Disk used %"
### diskio
**Required Telegraf plugin:** [DiskIO input plugin](/{{< latest "telegraf" >}}/plugins/#input-diskio)
`diskio.json`
* "System - Disk MB/s"
### mem
**Required Telegraf plugin:** [Mem input plugin](/{{< latest "telegraf" >}}/plugins/#input-mem)
`mem.json`
* "System - Memory Gigabytes Used"
### net
**Required Telegraf plugin:** [Net input plugin](/{{< latest "telegraf" >}}/plugins/#input-net)
`net.json`
* "System - Network Mb/s"
* "System - Network Error Rate"
### netstat
**Required Telegraf plugin:** [Netstat input plugin](/{{< latest "telegraf" >}}/plugins/#input-netstat)
`netstat.json`
* "System - Open Sockets"
* "System - Sockets Created/Second"
### processes
**Required Telegraf plugin:** [Processes input plugin](/{{< latest "telegraf" >}}/plugins/#input-processes)
`processes.json`
* "System - Total Processes"
### procstat
**Required Telegraf plugin:** [Procstat input plugin](/{{< latest "telegraf" >}}/plugins/#input-procstat)
`procstat.json`
* "Processes - Resident Memory (MB)"
* "Processes CPU Usage %"
### system
**Required Telegraf plugin:** [System input plugin](/{{< latest "telegraf" >}}/plugins/#input-system)
`load.json`
* "System Load"
## varnish
**Required Telegraf plugin:** [Varnish](/{{< latest "telegraf" >}}/plugins/#input-varnish)
`varnish.json`
* "Varnish - Cache Hits/Misses"
## win_system
**Required Telegraf plugin:** [Windows Performance Counters input plugin](/{{< latest "telegraf" >}}/plugins/#input-windows-performance-counters)
`win_cpu.json`
* "System - CPU Usage"
`win_mem.json`
* "System - Available Bytes"
`win_net.json`
* "System - TX Bytes/Second"
* "System - RX Bytes/Second"
`win_system.json`
* "System - Load"
---
title: Visualization types in Chronograf
description: >
  Chronograf provides multiple visualization types to visualize your data in a format that makes the most sense for your use case.
menu:
chronograf_1_10:
name: Visualization types
weight: 40
parent: Guides
---
Chronograf's dashboard views support the following visualization types, which can be selected in the **Visualization Type** selection view of the [Data Explorer](/chronograf/v1.10/guides/querying-data).
![Visualization Type selector](/img/chronograf/1-6-viz-types-selector.png)
Each of the available visualization types and available user controls are described below.
* [Line Graph](#line-graph)
* [Stacked Graph](#stacked-graph)
* [Step-Plot Graph](#step-plot-graph)
* [Single Stat](#single-stat)
* [Line Graph + Single Stat](#line-graph-single-stat)
* [Bar Graph](#bar-graph)
* [Gauge](#gauge)
* [Table](#table)
* [Note](#note)
For information on adding and displaying annotations in graph views, see [Adding annotations to Chronograf views](/chronograf/v1.10/guides/annotations/).
### Line Graph
The **Line Graph** view displays a time series in a line graph.
![Line Graph selector](/img/chronograf/1-6-viz-line-graph-selector.png)
#### Line Graph Controls
![Line Graph Controls](/img/chronograf/1-6-viz-line-graph-controls.png)
Use the **Line Graph Controls** to specify the following:
* **Title**: y-axis title. To use a custom title, enter it here.
- **auto**: Enable or disable auto-setting.
* **Min**: Minimum y-axis value.
- **auto**: Enable or disable auto-setting.
* **Max**: Maximum y-axis value.
- **auto**: Enable or disable auto-setting.
* **Y-Value's Prefix**: Prefix to be added to y-value.
* **Y-Value's Suffix**: Suffix to be added to y-value.
* **Y-Value's Format**: Toggle between **K/M/B** (Thousand/Million/Billion) and **K/M/G** (Kilo/Mega/Giga).
* **Scale**: Toggle between **Linear** and **Logarithmic**.
* **Static Legend**: Toggle between **Show** and **Hide**.
#### Line Graph example
![Line Graph example](/img/chronograf/1-6-viz-line-graph-example.png)
### Stacked Graph
The **Stacked Graph** view displays multiple time series bars as segments stacked on top of each other.
![Stacked Graph selector](/img/chronograf/1-6-viz-stacked-graph-selector.png)
#### Stacked Graph Controls
![Stacked Graph Controls](/img/chronograf/1-6-viz-stacked-graph-controls.png)
Use the **Stacked Graph Controls** to specify the following:
* **Title**: y-axis title. To use a custom title, enter it here.
- **auto**: Enable or disable auto-setting.
* **Min**: Minimum y-axis value.
- **auto**: Enable or disable auto-setting.
* **Max**: Maximum y-axis value.
- **auto**: Enable or disable auto-setting.
* **Y-Value's Prefix**: Prefix to be added to y-value.
* **Y-Value's Suffix**: Suffix to be added to y-value.
* **Y-Value's Format**: Toggle between **K/M/B** (Thousand/Million/Billion) and **K/M/G** (Kilo/Mega/Giga).
* **Scale**: Toggle between **Linear** and **Logarithmic**.
* **Static Legend**: Toggle between **Show** and **Hide**.
#### Stacked Graph example
![Stacked Graph example](/img/chronograf/1-6-viz-stacked-graph-example.png)
### Step-Plot Graph
The **Step-Plot Graph** view displays a time series in a staircase graph.
![Step-Plot Graph selector](/img/chronograf/1-6-viz-step-plot-graph-selector.png)
#### Step-Plot Graph Controls
![Step-Plot Graph Controls](/img/chronograf/1-6-viz-step-plot-graph-controls.png)
Use the **Step-Plot Graph Controls** to specify the following:
* **Title**: y-axis title. To use a custom title, enter it here.
- **auto**: Enable or disable auto-setting.
* **Min**: Minimum y-axis value.
- **auto**: Enable or disable auto-setting.
* **Max**: Maximum y-axis value.
- **auto**: Enable or disable auto-setting.
* **Y-Value's Prefix**: Prefix to be added to y-value.
* **Y-Value's Suffix**: Suffix to be added to y-value.
* **Y-Value's Format**: Toggle between **K/M/B** (Thousand/Million/Billion) and **K/M/G** (Kilo/Mega/Giga).
* **Scale**: Toggle between **Linear** and **Logarithmic**.
#### Step-Plot Graph example
![Step-Plot Graph example](/img/chronograf/1-6-viz-step-plot-graph-example.png)
### Bar Graph
The **Bar Graph** view displays the specified time series using a bar chart.
To select this view, click the Bar Graph selector icon.
![Bar Graph selector](/img/chronograf/1-6-viz-bar-graph-selector.png)
#### Bar Graph Controls
![Bar Graph Controls](/img/chronograf/1-6-viz-bar-graph-controls.png)
Use the **Bar Graph Controls** to specify the following:
* **Title**: y-axis title. To use a custom title, enter it here.
- **auto**: Enable or disable auto-setting.
* **Min**: Minimum y-axis value.
- **auto**: Enable or disable auto-setting.
* **Max**: Maximum y-axis value.
- **auto**: Enable or disable auto-setting.
* **Y-Value's Prefix**: Prefix to be added to y-value.
* **Y-Value's Suffix**: Suffix to be added to y-value.
* **Y-Value's Format**: Toggle between **K/M/B** (Thousand/Million/Billion) and **K/M/G** (Kilo/Mega/Giga).
* **Scale**: Toggle between **Linear** and **Logarithmic**.
#### Bar Graph example
![Bar Graph example](/img/chronograf/1-6-viz-bar-graph-example.png)
### Line Graph + Single Stat
The **Line Graph + Single Stat** view displays the specified time series in a line graph and overlays the single most recent value as a large numeric value.
To select this view, click the **Line Graph + Single Stat** view option.
![Line Graph + Single Stat selector](/img/chronograf/1-6-viz-line-graph-single-stat-selector.png)
#### Line Graph + Single Stat Controls
![Line Graph + Single Stat Controls](/img/chronograf/1-6-viz-line-graph-single-stat-controls.png)
Use the **Line Graph + Single Stat Controls** to specify the following:
* **Title**: y-axis title. To use a custom title, enter it here.
- **auto**: Enable or disable auto-setting.
* **Min**: Minimum y-axis value.
- **auto**: Enable or disable auto-setting.
* **Max**: Maximum y-axis value.
- **auto**: Enable or disable auto-setting.
* **Y-Value's Prefix**: Prefix to be added to y-value.
* **Y-Value's Suffix**: Suffix to be added to y-value.
* **Y-Value's Format**: Toggle between **K/M/B** (Thousand/Million/Billion) and **K/M/G** (Kilo/Mega/Giga).
* **Scale**: Toggle between **Linear** and **Logarithmic**.
#### Line Graph + Single Stat example
![Line Graph + Single Stat example](/img/chronograf/1-6-viz-line-graph-single-stat-example.png)
### Single Stat
The **Single Stat** view displays the most recent value of the specified time series as a numerical value.
![Single Stat view](/img/chronograf/1-6-viz-single-stat-selector.png)
If a cell's query includes a [`GROUP BY` tag](/{{< latest "influxdb" "v1" >}}/query_language/explore-data/#group-by-tags) clause, Chronograf sorts the different [series](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#series) lexicographically and shows the most recent [field value](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#field-value) associated with the first series.
For example, if a query groups by the `name` [tag key](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#tag-key) and `name` has two [tag values](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#tag-value) (`chronelda` and `chronz`), Chronograf shows the most recent field value associated with the `chronelda` series.
If a cell's query includes more than one [field key](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#field-key) in the [`SELECT` clause](/{{< latest "influxdb" "v1" >}}/query_language/explore-data/#select-clause), Chronograf returns the most recent field value associated with the first field key in the `SELECT` clause.
For example, if a query's `SELECT` clause is `SELECT "chronogiraffe","chronelda"`, Chronograf shows the most recent field value associated with the `chronogiraffe` field key.
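Both behaviors can be seen in a single cell query like the following. The database, measurement, and field names are hypothetical examples matching the glossary names above; the Single Stat shows the most recent `chronogiraffe` value from the lexicographically first `name` series:

```sql
SELECT "chronogiraffe","chronelda"
FROM "mydb"."autogen"."example-measurement"
WHERE time > now() - 1h
GROUP BY "name"
```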
#### Single Stat Controls
Use the **Single Stat Controls** panel to specify one or more thresholds:
* **Add Threshold**: Button to add a new threshold.
* **Base Color**: Select a base, or background, color from the selection list.
* Color options: Ruby, Fire, Curacao, Tiger, Pineapple, Thunder, Honeydew, Rainforest, Viridian, Ocean, Pool, Laser (default), Planet, Star, Comet, Pepper, Graphite, White, and Castle.
* **Prefix**: Prefix. For example, `%`, `MPH`, etc.
* **Suffix**: Suffix. For example, `%`, `MPH`, etc.
* **Threshold Coloring**: Toggle between **Background** and **Text**.
### Gauge
The **Gauge** view displays the most recent value of the specified time series in a gauge view.
To select this view, click the Gauge selector icon.
![Gauge selector](/img/chronograf/1-6-viz-gauge-selector.png)
#### Gauge Controls
![Gauge Controls](/img/chronograf/1-6-viz-gauge-controls.png)
Use the **Gauge Controls** to specify the following:
* **Add Threshold**: Click button to add a threshold.
* **Min**: Minimum value for the threshold.
- Select color to display. Selection list options include: Laser (default), Ruby, Fire, Curacao, Tiger, Pineapple, Thunder, and Honeydew.
* **Max**: Maximum value for the threshold.
- Select color to display. Selection list options include: Laser (default), Ruby, Fire, Curacao, Tiger, Pineapple, Thunder, and Honeydew.
* **Prefix**: Prefix. For example, `%`, `MPH`, etc.
* **Suffix**: Suffix. For example, `%`, `MPH`, etc.
#### Gauge example
![Gauge example](/img/chronograf/1-6-viz-gauge-example.png)
### Table
The **Table** panel displays the results of queries in a tabular view, which is sometimes easier to analyze than graph views of data.
![Table selector](/img/chronograf/1-6-viz-table-selector.png)
#### Table Controls
![Table Controls](/img/chronograf/1-6-viz-table-controls.png)
Use the **Table Controls** to specify the following:
- **Default Sort Field**: Select the default sort field. Default is **time**.
- **Decimal Places**: Enter the number of decimal places. Default (empty field) is **unlimited**.
- **Time Axis**: Select **Vertical** or **Horizontal**.
- **Time Format**: Select the time format.
- Options include: `MM/DD/YYYY HH:mm:ss` (default), `MM/DD/YYYY HH:mm:ss.SSS`, `YYYY-MM-DD HH:mm:ss`, `HH:mm:ss`, `HH:mm:ss.SSS`, `MMMM D, YYYY HH:mm:ss`, `dddd, MMMM D, YYYY HH:mm:ss`, or `Custom`.
- **Lock First Column**: Lock the first column so that the listings are always visible. Threshold settings do not apply in the first column when locked.
- **Customize Field**:
  - **time**: Enter a new name to rename the column.
  - [additional]: Enter a name for each additional column.
  - Change the order of the columns by dragging them to the desired position.
- **Thresholds**
{{% note %}}
**Note:** Threshold settings apply to any cells with values, except when they appear in the first column and **Lock First Column** is enabled.
{{% /note %}}
- **Add Threshold** (button): Click to add a threshold.
- **Base Color**: Select a base, or background, color from the selection list.
- Color options: Ruby, Fire, Curacao, Tiger, Pineapple, Thunder, Honeydew, Rainforest, Viridian, Ocean, Pool, Laser (default), Planet, Star, Comet, Pepper, Graphite, White, and Castle.
#### Table view example
![Table example](/img/chronograf/1-6-viz-table-example.png)
### Note
The **Note** panel displays Markdown-formatted text with your graph.
![Note selector](/img/chronograf/1-7-viz-note-selector.png)
#### Note Controls
![Note Controls](/img/chronograf/1-7-viz-note-controls.png)
Enter your text in the **Add a Note** panel, using Markdown to format the text.
Enable the **Display note in cell when query returns no results** option to display the note text in the cell instead of `No Results`.
#### Note view example
![Note example](/img/chronograf/1-7-viz-note-example.png)
---
title: Write data to InfluxDB
description: >
Use Chronograf to write data to InfluxDB. Upload line protocol into the UI, use the
InfluxQL `INTO` clause, or use the Flux `to()` function to write data back to InfluxDB.
menu:
chronograf_1_10:
name: Write data to InfluxDB
parent: Guides
weight: 140
---
Use Chronograf to write data to InfluxDB.
Choose from the following methods:
- [Upload line protocol through the Chronograf UI](#upload-line-protocol-through-the-chronograf-ui)
- [Use the InfluxQL `INTO` clause in a query](#use-the-influxql-into-clause-in-a-query)
- [Use the Flux `to()` function in a query](#use-the-flux-to-function-in-a-query)
## Upload line protocol through the Chronograf UI
1. Select **{{< icon "data-explorer" "v2" >}} Explore** in the left navigation bar.
2. Click **Write Data** in the top right corner of the Data Explorer.
{{< img-hd src="/img/chronograf/1-9-write-data.png" alt="Write data to InfluxDB with Chronograf" />}}
3. Select the **database** _(if an InfluxQL data source is selected)_ or
**database and retention policy** _(if a Flux data source is selected)_ to write to.
{{< img-hd src="/img/chronograf/1-9-write-db-rp.png" alt="Select database and retention policy to write to" />}}
4. Select one of the following methods for uploading [line protocol](/{{< latest "influxdb" "v1" >}}/write_protocols/line_protocol_tutorial/):
- **Upload File**: Upload a file containing line protocol to write to InfluxDB.
Either drag and drop a file into the file uploader or click to use your
operating system's file selector and choose a file to upload.
- **Manual Entry**: Manually enter line protocol to write to InfluxDB.
5. Select the timestamp precision of your line protocol.
Chronograf supports the following units:
- `s` (seconds)
- `ms` (milliseconds)
- `u` (microseconds)
- `ns` (nanoseconds)
{{< img-hd src="/img/chronograf/1-9-write-precision.png" alt="Select write precision in Chronograf" />}}
6. Click **Write**.
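For reference, a manually entered [line protocol](/{{< latest "influxdb" "v1" >}}/write_protocols/line_protocol_tutorial/) record with a nanosecond-precision timestamp looks like the following. The measurement, tag, and field names are hypothetical:

```
example-measurement,host=server01 field1=1.0,field2=2i 1556813561098000000
```

If you select a coarser precision such as `s`, shorten the trailing timestamp accordingly (for example, `1556813561`).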
## Use the InfluxQL `INTO` clause in a query
To write data back to InfluxDB with an InfluxQL query, include the
[`INTO` clause](/{{< latest "influxdb" "v1" >}}/query_language/explore-data/#the-into-clause)
in your query:
1. Select **{{< icon "data-explorer" "v2" >}} Explore** in the left navigation bar.
2. Select **InfluxQL** as your data source type.
3. Write an InfluxQL query that includes the `INTO` clause. Specify the database,
retention policy, and measurement to write to. For example:
```sql
SELECT *
INTO "mydb"."autogen"."example-measurement"
FROM "example-db"."example-rp"."example-measurement"
GROUP BY *
```
4. Click **Submit Query**.
{{% note %}}
#### Use InfluxQL to write to InfluxDB 2.x or InfluxDB Cloud
To use InfluxQL to write to an **InfluxDB 2.x** or **InfluxDB Cloud** instance,
[configure database and retention policy mappings](/{{< latest "influxdb" >}}/upgrade/v1-to-v2/manual-upgrade/#create-dbrp-mappings)
and ensure the current [InfluxDB connection](/chronograf/v1.10/administration/creating-connections/#manage-influxdb-connections-using-the-chronograf-ui)
includes the appropriate connection credentials.
{{% /note %}}
## Use the Flux `to()` function in a query
To write data back to InfluxDB with a Flux query, include the Flux `to()` function
in your query:
1. Select **{{< icon "data-explorer" "v2" >}} Explore** in the left navigation bar.
2. Select **Flux** as your data source type.
{{% note %}}
To query InfluxDB with Flux, [enable Flux](/{{< latest "influxdb" "v1" >}}/flux/installation/)
in your InfluxDB configuration.
{{% /note %}}
3. Write a Flux query that includes the `to()` function.
Provide the database and retention policy to write to.
Use the `db-name/rp-name` syntax:
```js
from(bucket: "example-db/example-rp")
|> range(start: -30d)
|> filter(fn: (r) => r._measurement == "example-measurement")
|> to(bucket: "mydb/autogen")
```
4. Click **Run Script**.
---
title: Introduction to Chronograf
description: >
An introduction to Chronograf, the user interface and data visualization component for the InfluxData Platform. Includes documentation on getting started, installation, and downloading.
menu:
chronograf_1_10:
name: Introduction
weight: 20
---
Follow the links below to get acquainted with Chronograf:
{{< children >}}
---
title: Download Chronograf
menu:
chronograf_1_10:
name: Download
weight: 10
parent: Introduction
---
Download the latest Chronograf release at the [InfluxData download page](https://portal.influxdata.com/downloads).
Click **Are you interested in InfluxDB 1.x Open Source?** to expand the 1.x options. Scroll to the **Chronograf** section and select your desired Chronograf version and operating system. Execute the provided download commands.
---
title: Get started with Chronograf
description: >
Overview of data visualization, alerting, and infrastructure monitoring features available in Chronograf.
aliases:
- /chronograf/v1.10/introduction/getting_started/
menu:
chronograf_1_10:
name: Get started
weight: 30
parent: Introduction
---
## Overview
Chronograf allows you to quickly see data you have stored in InfluxDB so you can build robust queries and alerts. After your administrator has set up Chronograf as described in [Installing Chronograf](/chronograf/v1.10/introduction/installation), get started with key features using the guides below.
### Data visualization
* Investigate your data by building queries using the [Data Explorer](/chronograf/v1.10/guides/querying-data/).
* Use [pre-created dashboards](/chronograf/v1.10/guides/using-precreated-dashboards/) to monitor your application data or [create your own dashboards](/chronograf/v1.10/guides/create-a-dashboard/).
* Customize dashboards using [template variables](/chronograf/v1.10/guides/dashboard-template-variables/).
### Alerting
* [Create alert rules](/chronograf/v1.10/guides/create-alert-rules/) to generate threshold, relative, and deadman alerts on your data.
* [View all active alerts](/chronograf/v1.10/guides/create-alert-rules/#step-2-view-the-alerts) on an alert dashboard.
* Use [alert endpoints](/chronograf/v1.10/guides/configuring-alert-endpoints/) in Chronograf to send alert messages to specific URLs and applications.
### Infrastructure monitoring
* [View all hosts](/chronograf/v1.10/guides/monitoring-influxenterprise-clusters/#step-4-explore-the-monitoring-data-in-chronograf) and their statuses in your infrastructure.
* [Use pre-created dashboards](/chronograf/v1.10/guides/using-precreated-dashboards/) to monitor your applications.
---
title: Install Chronograf
description: Download and install Chronograf.
menu:
chronograf_1_10:
name: Install
weight: 20
parent: Introduction
---
This page describes how to download and install Chronograf.
### Content
* [TICK overview](#tick-overview)
* [Download and install](#download-and-install)
* [Connect to your InfluxDB instance or InfluxDB Enterprise cluster](#connect-chronograf-to-your-influxdb-instance-or-influxdb-enterprise-cluster)
* [Connect to Kapacitor](#connect-chronograf-to-kapacitor)
## TICK overview
Chronograf is the user interface for InfluxData's [TICK stack](https://www.influxdata.com/time-series-platform/).
## Download and install
The latest Chronograf builds are available on InfluxData's [Downloads page](https://portal.influxdata.com/downloads).
1. On the Downloads page, scroll to the bottom and click **Are you interested in InfluxDB 1.x Open Source?** to expand the 1.x options. Scroll to the **Chronograf** section and select your desired Chronograf version and operating system. Execute the provided download commands.
{{% note %}}
If your download includes a TAR package, save the underlying datastore `chronograf-v1.db` in a directory outside of where you start Chronograf. This preserves and references your existing datastore, including configurations and dashboards, when you download future versions.
{{% /note %}}
2. Install Chronograf, replacing `<version#>` with the appropriate version:
{{% tabs-wrapper %}}
{{% tabs %}}
[macOS](#)
[Ubuntu & Debian](#)
[RedHat & CentOS](#)
{{% /tabs %}}
{{% tab-content %}}
```sh
tar zxvf chronograf-{{< latest-patch >}}_darwin_amd64.tar.gz
```
{{% /tab-content %}}
{{% tab-content %}}
```sh
sudo dpkg -i chronograf_{{< latest-patch >}}_amd64.deb
```
{{% /tab-content %}}
{{% tab-content %}}
```sh
sudo yum localinstall chronograf-{{< latest-patch >}}.x86_64.rpm
```
{{% /tab-content %}}
{{% /tabs-wrapper %}}
3. Start Chronograf:
```sh
chronograf
```
## Connect Chronograf to your InfluxDB instance or InfluxDB Enterprise cluster
1. In a browser, navigate to [localhost:8888](http://localhost:8888).
2. Provide the following details:
- **Connection String**: InfluxDB hostname or IP, and port (default port is `8086`).
- **Connection Name**: Connection name.
- **Username** and **Password**: If you've enabled
[InfluxDB authentication](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization),
provide your InfluxDB username and password. Otherwise, leave blank.
{{% note %}}
To ensure distinct permissions can be applied, Chronograf user accounts and
credentials should be different than InfluxDB credentials.
For example, you may want to set up Chronograf to run as a service account
with read-only permissions to InfluxDB. For more information, see how to
[manage InfluxDB users in Chronograf](/chronograf/v1.10/administration/managing-influxdb-users/)
and [manage Chronograf users](/chronograf/v1.10/administration/managing-chronograf-users/).
{{% /note %}}
- **Telegraf Database Name**: _(Optional)_ Telegraf database name.
Default name is `telegraf`.
3. Click **Add Source**.
## Connect Chronograf to Kapacitor
1. In Chronograf, click the configuration (wrench) icon in the sidebar menu, then select **Add Config** in the **Active Kapacitor** column.
2. In the **Kapacitor URL** field, enter the hostname or IP of the machine that Kapacitor is running on. Be sure to include Kapacitor's default port: `9092`.
3. Enter a name for your connection.
4. Leave the **Username** and **Password** fields blank unless you've specifically enabled authorization in Kapacitor.
5. Click **Connect**.
{{% note %}}
**Note:** Using [Kapacitor](/kapacitor/v1.6/) is optional and not required to use Chronograf.
{{% /note %}}
---
title: Chronograf Tools
description: >
Chronograf provides command line tools designed to aid in managing and working with Chronograf from the command line.
menu:
chronograf_1_10:
name: Tools
weight: 40
---
Chronograf provides command line tools designed to aid in managing and working with Chronograf from the command line. The following command line interfaces (CLIs) are available:
{{< children hlevel="h2" >}}
---
title: chronoctl
description: >
The `chronoctl` command line interface (CLI) includes commands to interact with an instance of Chronograf's data store.
menu:
chronograf_1_10:
name: chronoctl
parent: Tools
weight: 10
---
The `chronoctl` command line interface (CLI) includes commands to interact with an instance of Chronograf's data store.
## Usage
```
chronoctl [command]
chronoctl [flags]
```
## Commands
| Command | Description |
|:------- |:----------- |
| [add-superadmin](/chronograf/v1.10/tools/chronoctl/add-superadmin/) | Create a new user with superadmin status |
| [list-users](/chronograf/v1.10/tools/chronoctl/list-users) | List all users in the Chronograf data store |
| [migrate](/chronograf/v1.10/tools/chronoctl/migrate) | Migrate your Chronograf configuration store |
---
title: chronoctl add-superadmin
description: >
The `add-superadmin` command creates a new user with superadmin status.
menu:
chronograf_1_10:
name: chronoctl add-superadmin
parent: chronoctl
weight: 20
---
The `add-superadmin` command creates a new user with superadmin status.
## Usage
```
chronoctl add-superadmin [flags]
```
## Flags
| Flag | | Description | Input type |
|:---- |:----------------- | :---------------------------------------------------------------------------------------------------- | :--------: |
| `-b` | `--bolt-path` | Full path to the BoltDB file (for example, `./chronograf-v1.db`). Env: `$BOLT_PATH`. Default: `chronograf-v1.db` | string |
| `-i` | `--id` | User ID for an existing user | uint64 |
| `-n` | `--name` | User's name. Must be an OAuth-able email address or username. | string |
| `-p` | `--provider` | Name of the auth provider (for example, Google, GitHub, Auth0, or generic) | string |
| `-s` | `--scheme` | Authentication scheme that matches the auth provider (default: `oauth2`) | string |
| `-o` | `--orgs` | Comma-separated list of organizations to add the user to (default: `default`) | string |
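For example, to grant superadmin status to a user authenticating through GitHub, you might run the following. The email address is a hypothetical placeholder:

```sh
chronoctl add-superadmin \
  --bolt-path ./chronograf-v1.db \
  --name chronothan@example.com \
  --provider github \
  --scheme oauth2 \
  --orgs default
```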
---
title: chronoctl list-users
description: >
The `list-users` command lists all users in the Chronograf data store.
menu:
chronograf_1_10:
name: chronoctl list-users
parent: chronoctl
weight: 30
---
The `list-users` command lists all users in the Chronograf data store.
## Usage
```
chronoctl list-users [flags]
```
## Flags
| Flag | | Description | Input type |
| :---- |:----------- | :------------------------------------------------------------ | :--------: |
| `-b` | `--bolt-path` | Full path to the BoltDB file (for example, `./chronograf-v1.db`). Env: `$BOLT_PATH`. Default: `chronograf-v1.db` | string |
---
title: chronoctl migrate
description: >
The `migrate` command allows you to migrate your Chronograf configuration store.
menu:
chronograf_1_10:
name: chronoctl migrate
parent: chronoctl
weight: 40
---
The `migrate` command lets you migrate your Chronograf configuration store.
By default, Chronograf is delivered with BoltDB as a data store. For information on migrating from BoltDB to an etcd cluster as a data store,
see [Migrating to a Chronograf HA configuration](/chronograf/v1.10/administration/migrate-to-high-availability).
## Usage
```
chronoctl migrate [flags]
```
## Flags
| Flag | | Description | Input type |
|:---- |:--- |:----------- |:----------: |
| `-f` | `--from` | Full path to a BoltDB file or etcd endpoint (for example, `bolt:///path/to/chronograf-v1.db` or `etcd://user:pass@localhost:2379`). Default: `chronograf-v1.db` | string |
| `-t` | `--to` | Full path to a BoltDB file or etcd endpoint (for example, `bolt:///path/to/chronograf-v1.db` or `etcd://user:pass@localhost:2379`). Default: `etcd://localhost:2379` | string |
#### Provide etcd authentication credentials
If authentication is enabled on `etcd`, use the standard URI basic
authentication format to define a username and password. For example:
```sh
etcd://username:password@localhost:2379
```
#### Provide etcd TLS credentials
If TLS is enabled on `etcd`, provide your TLS certificate credentials using
the following query parameters in your etcd URL:
- **cert**: Path to client certificate file or PEM file
- **key**: Path to client key file
- **ca**: Path to trusted CA certificates
```sh
etcd://127.0.0.1:2379?cert=/tmp/client.crt&key=/tst/client.key&ca=/tst/ca.crt
```
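Combining the two, a hypothetical target URL for `chronoctl migrate` can carry both basic-auth credentials and TLS parameters. All values below are placeholders:

```shell
# Placeholder credentials and certificate paths — substitute your own.
ETCD_USER="admin"; ETCD_PASS="secret"; ETCD_HOST="localhost:2379"
CERT="/tmp/client.crt"; KEY="/tmp/client.key"; CA="/tmp/ca.crt"

# etcd URL with basic-auth credentials plus TLS query parameters:
TO_URL="etcd://${ETCD_USER}:${ETCD_PASS}@${ETCD_HOST}?cert=${CERT}&key=${KEY}&ca=${CA}"
echo "$TO_URL"

# The migration itself would then be (not executed here):
# chronoctl migrate -f bolt:///path/to/chronograf-v1.db -t "$TO_URL"
```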


@ -1,145 +0,0 @@
---
title: chronograf CLI
description: >
The `chronograf` command line interface (CLI) includes options to manage many aspects of Chronograf security.
menu:
chronograf_1_10:
name: chronograf CLI
parent: Tools
weight: 10
---
The `chronograf` command line interface (CLI) includes options to manage Chronograf security.
## Usage
```
chronograf [flags]
```
## Chronograf service flags
| Flag | Description | Env. Variable |
|:-----------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------|:---------------------|
| `--host` | IP the Chronograf service listens on. By default, `0.0.0.0` | `$HOST` |
| `--port` | Port the Chronograf service listens on for insecure connections. By default, `8888` | `$PORT` |
| `-b`,`--bolt-path` | File path to the BoltDB file. By default, `./chronograf-v1.db` | `$BOLT_PATH` |
| `-c`,`--canned-path` | File path to the directory of canned dashboard files. By default, `/usr/share/chronograf/canned` | `$CANNED_PATH` |
| `--resources-path` | Path to directory of canned dashboards, sources, Kapacitor connections, and organizations. By default, `/usr/share/chronograf/resources` | `$RESOURCES_PATH` |
| `-p`, `--basepath` | URL path prefix under which all Chronograf routes will be mounted. | `$BASE_PATH` |
| `--status-feed-url` | URL of JSON feed to display as a news feed on the client status page. By default, `https://www.influxdata.com/feed/json` | `$STATUS_FEED_URL` |
| `-v`, `--version` | Displays the version of the Chronograf service | |
| `-h`, `--host-page-disabled` | Disables the hosts page | `$HOST_PAGE_DISABLED`|
## InfluxDB connection flags
| Flag | Description | Env. Variable |
| :-------------------- | :-------------------------------------------------------------------------------------- | :------------------- |
| `--influxdb-url` | InfluxDB URL, including the protocol, IP address, and port | `$INFLUXDB_URL` |
| `--influxdb-username` | InfluxDB username | `$INFLUXDB_USERNAME` |
| `--influxdb-password` | InfluxDB password | `$INFLUXDB_PASSWORD` |
| `--influxdb-org` | InfluxDB 2.x or InfluxDB Cloud organization name | `$INFLUXDB_ORG` |
| `--influxdb-token` | InfluxDB 2.x or InfluxDB Cloud [authentication token](/influxdb/cloud/security/tokens/) | `$INFLUXDB_TOKEN` |
## Kapacitor connection flags
| Flag | Description | Env. Variable |
|:-----------------------|:-------------------------------------------------------------------------------|:----------------------|
| `--kapacitor-url` | Location of your Kapacitor instance, including `http://`, IP address, and port | `$KAPACITOR_URL` |
| `--kapacitor-username` | Username for your Kapacitor instance | `$KAPACITOR_USERNAME` |
| `--kapacitor-password` | Password for your Kapacitor instance | `$KAPACITOR_PASSWORD` |
## TLS (Transport Layer Security) flags
| Flag | Description | Env. Variable |
|:--------- |:------------------------------------------------------------ |:--------------------|
| `--cert` | File path to PEM-encoded public key certificate | `$TLS_CERTIFICATE` |
| `--key` | File path to private key associated with given certificate | `$TLS_PRIVATE_KEY` |
| `--tls-ciphers` | Comma-separated list of supported cipher suites. Use `help` to print available ciphers. | `$TLS_CIPHERS` |
| `--tls-min-version` | Minimum version of the TLS protocol that will be negotiated. (default: 1.2) | `$TLS_MIN_VERSION` |
| `--tls-max-version` | Maximum version of the TLS protocol that will be negotiated. | `$TLS_MAX_VERSION` |
## Other service option flags
| Flag | Description | Env. Variable |
| :--------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------- | :-------------------- |
| `--custom-auto-refresh` | Add custom auto-refresh options using a semicolon-separated list of `label=milliseconds` pairs | `$CUSTOM_AUTO_REFRESH` |
| `--custom-link` | Add a custom link to Chronograf user menu options using `<display_name>:<link_address>` syntax. For multiple custom links, include multiple flags. | |
| `-d`, `--develop` | Run the Chronograf service in developer mode | |
| `-h`, `--help` | Display command line help for Chronograf | |
| `-l`, `--log-level` | Set the logging level. Valid values include `info` (default), `debug`, and `error` | `$LOG_LEVEL` |
| `-r`, `--reporting-disabled` | Disable reporting of usage statistics. Usage statistics reported once every 24 hours include: `OS`, `arch`, `version`, `cluster_id`, and `uptime`. | `$REPORTING_DISABLED` |
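As a sketch of the `--custom-link` syntax, the `<display_name>:<link_address>` flag is repeated once per link. The names and URLs below are placeholders, and the command is composed here without being executed:

```shell
# Placeholder links — any <display_name>:<link_address> pairs work.
LINK_1="Docs:https://docs.influxdata.com"
LINK_2="Support:https://support.example.com"

# Repeat --custom-link once per link; composed here, not executed:
CMD="chronograf --custom-link ${LINK_1} --custom-link ${LINK_2}"
echo "$CMD"
```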
## Authentication option flags
### General authentication flags
| Flag | Description | Env. Variable |
| :--------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :--------------- |
| `-t`, `--token-secret` | Secret for signing tokens | `$TOKEN_SECRET` |
| `--auth-duration` | Total duration, in hours, of cookie life for authentication. Default value is `720h`. | `$AUTH_DURATION` |
| `--public-url` | Public URL required to access Chronograf using a web browser. For example, if you access Chronograf using the default URL, the public URL value would be `http://localhost:8888`. Required for Google OAuth 2.0 authentication. Used for Auth0 and some generic OAuth 2.0 authentication providers. | `$PUBLIC_URL` |
| `--htpasswd` | Path to password file for use with HTTP basic authentication. See [NGINX documentation](https://docs.nginx.com/nginx/admin-guide/security-controls/configuring-http-basic-authentication/) for more on password files. | `$HTPASSWD` |
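For example, assuming Apache's `htpasswd` utility is available (shipped in `apache2-utils` or `httpd-tools`), a basic-auth setup might look like this sketch. The file path and username are placeholders, and the commands that touch the system are left commented:

```shell
# Placeholder path for the password file.
HTPASSWD_FILE="/etc/chronograf/.htpasswd"

# Create the password file with one user (prompts for a password):
# htpasswd -c "$HTPASSWD_FILE" jdoe

# Point Chronograf at the file:
# chronograf --htpasswd "$HTPASSWD_FILE"
echo "$HTPASSWD_FILE"
```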
### GitHub-specific OAuth 2.0 authentication flags
| Flag | Description | Env. Variable |
| :----------------------------- | :------------------------------------------------------------------------------------------------------------------------------------- | :------------------ |
| `--github-url` | GitHub base URL. Default is `https://github.com`. {{< req "Required if using GitHub Enterprise" >}} | `$GH_URL` |
| `-i`, `--github-client-id` | GitHub client ID value for OAuth 2.0 support | `$GH_CLIENT_ID` |
| `-s`, `--github-client-secret` | GitHub client secret value for OAuth 2.0 support | `$GH_CLIENT_SECRET` |
| `-o`, `--github-organization` | Restricts authorization to users from specified GitHub organizations. To add more than one organization, add multiple flags. Optional. | `$GH_ORGS` |
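A hedged sketch of supplying the GitHub OAuth settings through environment variables instead of flags. The secret and client values are placeholders standing in for credentials from a hypothetical GitHub OAuth app:

```shell
# Placeholder credentials — use values from your own GitHub OAuth app.
export TOKEN_SECRET="change-me"
export GH_CLIENT_ID="example-client-id"
export GH_CLIENT_SECRET="example-client-secret"
export GH_ORGS="example-org"

# With the variables set, the launch needs no OAuth flags (not executed here):
# chronograf
echo "$GH_CLIENT_ID"
```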
### Google-specific OAuth 2.0 authentication flags
| Flag | Description | Env. Variable |
|:-------------------------|:--------------------------------------------------------------------------------|:------------------------|
| `--google-client-id` | Google client ID value for OAuth 2.0 support | `$GOOGLE_CLIENT_ID` |
| `--google-client-secret` | Google client secret value for OAuth 2.0 support | `$GOOGLE_CLIENT_SECRET` |
| `--google-domains` | Restricts authorization to users from specified Google email domain. To add more than one domain, add multiple flags. Optional. | `$GOOGLE_DOMAINS` |
### Auth0-specific OAuth 2.0 authentication flags
| Flag | Description | Env. Variable |
|:------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------|
| `--auth0-domain` | Subdomain of your Auth0 client. Available on the configuration page for your Auth0 client. | `$AUTH0_DOMAIN` |
| `--auth0-client-id` | Auth0 client ID value for OAuth 2.0 support | `$AUTH0_CLIENT_ID` |
| `--auth0-client-secret` | Auth0 client secret value for OAuth 2.0 support | `$AUTH0_CLIENT_SECRET` |
| `--auth0-organizations` | Restricts authorization to users from specified Auth0 organizations. To add more than one organization, add multiple flags. Optional. Organizations are set using an organization key in the user's `app_metadata`. | `$AUTH0_ORGS` |
### Heroku-specific OAuth 2.0 authentication flags
| Flag | Description | Env. Variable |
|:------------------------|:-----------------------------------------------------------------------------------------|:--------------------|
| `--heroku-client-id` | Heroku client ID value for OAuth 2.0 support | `$HEROKU_CLIENT_ID` |
| `--heroku-secret` | Heroku secret for OAuth 2.0 support | `$HEROKU_SECRET` |
| `--heroku-organization` | Restricts authorization to users from specified Heroku organization. To add more than one organization, add multiple flags. Optional. | `$HEROKU_ORGS` |
### Generic OAuth 2.0 authentication flags
| Flag | Description | Env. Variable |
| :------------------------ | :----------------------------------------------------------------------------- | :----------------------- |
| `--generic-name` | Generic OAuth 2.0 name presented on the login page | `$GENERIC_NAME` |
| `--generic-client-id` | Generic OAuth 2.0 client ID value. Can be used for a custom OAuth 2.0 service. | `$GENERIC_CLIENT_ID` |
| `--generic-client-secret` | Generic OAuth 2.0 client secret value | `$GENERIC_CLIENT_SECRET` |
| `--generic-scopes` | Scopes requested by provider of web client | `$GENERIC_SCOPES` |
| `--generic-domains` | Email domain required for user email addresses | `$GENERIC_DOMAINS` |
| `--generic-auth-url` | Authorization endpoint URL for the OAuth 2.0 provider | `$GENERIC_AUTH_URL` |
| `--generic-token-url` | Token endpoint URL for the OAuth 2.0 provider | `$GENERIC_TOKEN_URL` |
| `--generic-api-url` | URL that returns OpenID UserInfo-compatible information | `$GENERIC_API_URL` |
| `--oauth-no-pkce` | Disable OAuth PKCE | `$OAUTH_NO_PKCE` |
### etcd flags
| Flag | Description | Env. Variable |
| :----------------------- | :--------------------------------------------------------------------------------------------------------- | :---------------------- |
| `-e`, `--etcd-endpoints` | etcd endpoint URL (include multiple flags for multiple endpoints) | `$ETCD_ENDPOINTS` |
| `--etcd-username` | etcd username | `$ETCD_USERNAME` |
| `--etcd-password` | etcd password | `$ETCD_PASSWORD` |
| `--etcd-dial-timeout` | Total time to wait before timing out while connecting to etcd endpoints (0 means no timeout, default: -1s) | `$ETCD_DIAL_TIMEOUT` |
| `--etcd-request-timeout` | Total time to wait before timing out the etcd view or update (0 means no timeout, default: -1s) | `$ETCD_REQUEST_TIMEOUT` |
| `--etcd-cert` | Path to PEM encoded TLS public key certificate for use with TLS | `$ETCD_CERTIFICATE` |
| `--etcd-key` | Path to private key associated with given certificate for use with TLS | `$ETCD_PRIVATE_KEY` |
| `--etcd-root-ca` | Path to root CA certificate for TLS verification | `$ETCD_ROOT_CA` |


@ -1,12 +0,0 @@
---
title: Troubleshoot Chronograf
description: Troubleshoot Chronograf.
menu:
chronograf_1_10:
name: Troubleshoot
weight: 50
---
Follow the link below to access Chronograf's FAQ.
{{< children hlevel="h2" >}}


@ -1,23 +0,0 @@
---
title: Chronograf frequently asked questions (FAQs)
description: Common issues with Chronograf
menu:
chronograf_1_10:
name: Frequently asked questions (FAQs)
weight: 10
parent: Troubleshoot
---
## How do I connect Chronograf to an InfluxDB Enterprise cluster?
The connection details form requires additional information when connecting Chronograf to an [InfluxDB Enterprise cluster](/{{< latest "enterprise_influxdb" >}}/).
When you enter the InfluxDB HTTP bind address in the `Connection String` input, Chronograf automatically checks if that InfluxDB instance is a data node.
If it is a data node, Chronograf automatically adds the `Meta Service Connection URL` input to the connection details form.
Enter the HTTP bind address of one of your cluster's meta nodes into that input and Chronograf takes care of the rest.
![Cluster connection details](/img/chronograf/1-6-faq-cluster-connection.png)
Note that the example above assumes that you do not have authentication enabled.
If you have authentication enabled, the form requires username and password information.
For more details about monitoring an InfluxDB Enterprise cluster, see the [Monitor an InfluxDB Enterprise Cluster](/chronograf/v1.10/guides/monitoring-influxenterprise-clusters/) guide.


@ -1,50 +0,0 @@
---
title: Chronograf 1.6 documentation
aliases:
- /influxdb/v1.6/chronograf/
menu:
chronograf_1_6:
name: Chronograf v1.6
weight: 1
---
Chronograf is InfluxData's open source web application.
Use Chronograf with the other components of the [TICK stack](https://www.influxdata.com/products/) to visualize your monitoring data and easily create alerting and automation rules.
![Chronograf Collage](/img/chronograf/1-6-chronograf-collage.png)
## Key features
### Infrastructure monitoring
* View all hosts and their statuses in your infrastructure
* View the configured applications on each host
* Monitor your applications with Chronograf's [pre-created dashboards](/chronograf/v1.6/guides/using-precreated-dashboards/)
### Alert management
Chronograf offers a UI for [Kapacitor](https://github.com/influxdata/kapacitor), InfluxData's data processing framework for creating alerts, running ETL jobs, and detecting anomalies in your data.
* Generate threshold, relative, and deadman alerts on your data
* Easily enable and disable existing alert rules
* View all active alerts on an alert dashboard
* Send alerts to the supported event handlers, including Slack, PagerDuty, HipChat, and [more](/chronograf/v1.6/guides/configuring-alert-endpoints/)
### Data visualization
* Monitor your application data with Chronograf's [pre-created dashboards](/chronograf/v1.6/guides/using-precreated-dashboards/)
* Create your own customized dashboards complete with various graph types and [template variables](/chronograf/v1.6/guides/dashboard-template-variables/)
* Investigate your data with Chronograf's data explorer and query templates
### Database management
* Create and delete databases and retention policies
* View currently-running queries and stop inefficient queries from overloading your system
* Create, delete, and assign permissions to users (Chronograf supports [InfluxDB OSS 1.x](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#authorization) and InfluxDB Enterprise user management)
### Multi-organization and multi-user support
* Create organizations and assign users to those organizations
* Restrict access to administrative functions
* Allow users to set up and maintain unique dashboards for their organizations


@ -1,45 +0,0 @@
---
title: About the Chronograf project
description: Chronograf is the UI component of the InfluxData time series platform. This section includes documentation about the Chronograf project, release notes and changelogs, what's new, contributing, and licenses.
menu:
chronograf_1_6:
name: About the project
weight: 10
---
Chronograf is the user interface component of the [InfluxData time series platform](https://www.influxdata.com/time-series-platform/). It makes monitoring and alerting for your infrastructure easy to set up and maintain. It is simple to use and includes templates and libraries that let you rapidly build dashboards with real-time visualizations of your data.
Follow the links below for more information.
{{< children hlevel="h2" >}}
Chronograf is released under the GNU Affero General Public License. This Free Software Foundation license is fairly new,
and differs from the more widely known and understood GPL.
Our goal with using AGPL, much like MongoDB, is to preserve the concept of copyleft with Chronograf.
With traditional GPL, copyleft was associated with the concept of distribution of software.
The problem is that nowadays, distribution of software is rare: things tend to run in the cloud. AGPL fixes this “loophole”
in GPL by saying that if you use the software over a network, you are bound by the copyleft. Other than that,
the license is virtually the same as GPL v3.
To say this another way: if you modify the core source code of Chronograf, the goal is that you have to contribute
those modifications back to the community.
Note, however, that you are NOT required to publish dashboards and alerts created with Chronograf.
The copyleft applies only to the source code of Chronograf itself.
If this explanation isn't good enough for you and your use case, we dual license Chronograf under our
[standard commercial license](https://www.influxdata.com/legal/slsa/).
[Contact sales for more information](https://www.influxdata.com/contact-sales/).
## Third Party Software
InfluxData products contain third party software, which means the copyrighted, patented, or otherwise legally protected
software of third parties that is incorporated in InfluxData products.
Third party suppliers make no representation nor warranty with respect to such third party software or any portion thereof.
Third party suppliers assume no liability for any claim that might arise with respect to such third party software,
nor for a customer's use of or inability to use the third party software.
The [list of third party software components, including references to associated license and other materials](https://github.com/influxdata/chronograf/blob/1.6.0.x/LICENSE_OF_DEPENDENCIES.md),
is maintained on a version-by-version basis.


@ -1,12 +0,0 @@
---
title: InfluxData Contributor License Agreement (CLA)
description: Before contributing to the Chronograf project, you need to submit the InfluxData Contributor License Agreement.
menu:
chronograf_1_6:
weight: 30
parent: About the project
params:
url: https://www.influxdata.com/legal/cla/
---
Before you can contribute to the Chronograf project, you need to submit the [InfluxData Contributor License Agreement (CLA)](https://www.influxdata.com/legal/cla/) available on the InfluxData main site.


@ -1,12 +0,0 @@
---
title: Contributing to Chronograf
menu:
chronograf_1_6:
name: Contributing
weight: 20
parent: About the project
params:
url: https://github.com/influxdata/chronograf/blob/master/CONTRIBUTING.md
---
See [Contributing to Chronograf](https://github.com/influxdata/chronograf/blob/master/CONTRIBUTING.md) in the Chronograf GitHub project to learn how you can contribute to the Chronograf project.


@ -1,12 +0,0 @@
---
title: Open source license for Chronograf
menu:
chronograf_1_6:
    name: Open source license
weight: 40
parent: About the project
params:
url: https://github.com/influxdata/chronograf/blob/master/LICENSE
---
The [open source license for Chronograf](https://github.com/influxdata/chronograf/blob/master/LICENSE) is available in the Chronograf GitHub project.


@ -1,716 +0,0 @@
---
title: Chronograf 1.6 release notes
description: Features, breaking features, user interface improvements, and bug fixes for the latest and earlier Chronograf releases for the InfluxData time series platform.
menu:
chronograf_1_6:
name: Release notes
weight: 10
parent: About the project
---
## v1.6.2 [2018-09-06]
### Features
* Add ability to copy expanded/untruncated log message.
* Add Close button for logs pop over.
* Add search attributes to Log Viewer.
### UI improvements
* Make infinite scroll UX in Log Viewer more crisp by decreasing results queried.
* Clear logs after searching.
* Add search expression to highlighting log lines.
### Bug Fixes
* Fix notifying user to press ESC to exit presentation mode.
* Fix socket leaks on Alert Rule pages.
## v1.6.1 [2018-08-02]
### Features
* Include source IDs, links, and names in dashboard exports
* Add ability to map sources when importing dashboards
### UI Improvements
* Make it easier to get mouse into hover legend
### Bug Fixes
* Ensure text template variables reflect query parameters
* Enable using a new, blank text template variable in a query
* Ensure cells with broken queries display “No Data”
* Fix use of template variables within InfluxQL regexes
* Pressing play on log viewer goes to "now"
* Fix display of log viewer histogram when a basepath is enabled
* Fix crosshairs and hover legend display in Alert Rule visualization
* Size loading spinners based on height of their container
## v1.6.0 [2018-07-18]
### Features
* Add support for template variables in cell titles.
* Add ability to export and import dashboards.
* Add ability to override template variables and time ranges via URL query.
* Add pprof routes to chronograf server.
* Add API to get/update Log Viewer UI config.
* Consume new Log Viewer config API in client to allow user to configure log viewer UI for their organization.
### UI Improvements
* Sort task table on Manage Alert page alphabetically.
* Redesign icons in side navigation.
* Remove Snip functionality in hover legend.
* Upgrade Data Explorer query text field with syntax highlighting and partial multi-line support.
* Truncate message preview in Alert Rules table.
* Improve performance of graph crosshairs.
* Hide dashboard cell menu until mouse over cell.
* Auto-Scale single-stat text to match cell dimensions.
### Bug Fixes
* Ensure cell queries use constraints from TimeSelector.
* Fix Gauge color selection bug.
* Fix erroneous icons in Date Picker widget.
* Fix allowing hyphens in basepath.
* Fix error in cell when tempVar returns no values.
* Change arrows in table columns so that ascending sort points up and descending points down.
* Fix crosshairs moving passed the edges of graphs.
* Change y-axis options to have valid defaults.
* Stop making requests for old sources after changing sources.
* Fix health check status code creating Firefox error.
* Change decimal places to enforce 2 places by default in cells.
## v1.5.0.1 [2018-06-04]
### Bug Fixes
* Fix Color Scale Dropdown
## v1.5.0.0 [2018-05-22]
### Features
* Add table view as a visualization type.
* Add default retention policy field as option in source configuration for use in querying hosts from Host List page and Host pages.
* Add support for PagerDuty v2 alert endpoints in UI.
* Add support for OpsGenie v2 alert endpoints in UI.
* Add support for Kafka alert endpoint in UI to configure and create alert handlers.
* Add support for disabling Kapacitor services.
* Add support for multiple Slack alert endpoint configurations in the UI.
### User interface improvements
* Notify user when a dashboard cell is added, removed, or cloned.
* Fix Template Variables Control Bar to top of dashboard page.
* Remove extra click when creating dashboard cell.
* Reduce font sizes in dashboards for increased space efficiency.
* Add overlay animation to Template Variables Manager.
* Display 'No results' on cells without results.
* Disable template variables for non-editing users.
* YAxisLabels in Dashboard Graph Builder not showing until graph is redrawn.
* Ensure table views have a consistent user experience between Google Chrome and Mozilla Firefox.
* Change AutoRefresh interval to paused.
* Get cloned cell name for notification from cloned cell generator function.
* Improve load time for Host page.
* Show Kapacitor batch point info in log panel.
### Bug fixes
* Allow user to select TICKscript editor with mouse-click.
* Change color when value is equal to or greater than the threshold value.
* Fix base path for Kapacitor logs.
* Fix logout when using `basepath` and simplify `basepath` usage (deprecates `PREFIX_ROUTES`).
* Fix graphs in alert rule builder for queries that include `groupBy`.
* Fix auto not showing in the group by dropdown and explorer getting disconnected.
* Display y-axis label on initial graph load.
* Fix not being able to change the source in the CEO display.
* Fix only the selected template variable value getting loaded.
* Fix Generic OAuth bug for GitHub Enterprise where the principal was incorrectly being checked for email being Primary and Verified.
* Fix missing icons when using basepath.
* Limit max-width of TICKScript editor.
* Fix naming of new TICKScripts.
* Fix data explorer query error reporting regression.
* Fix Kapacitor Logs fetch regression.
## v1.4.4.1 [2018-04-16]
### Bug fixes
* Snapshot all db struct types in migration files.
## v1.4.4.0 [2018-04-13]
### Features
* Add support for RS256/JWKS verification, support for `id_token` parsing (as in ADFS).
* Add ability to set a color palette for Line, Stacked, Step-Plot, and Bar graphs.
* Add ability to clone dashboards.
* Change `:interval:` to represent a raw InfluxQL duration value.
* Add paginated measurements API to server.
* Data Explorer measurements can be toggled open.
### UI improvements
* New dashboard cells appear at bottom of layout and assume the size of the most common cell.
* Standardize delete confirmation interactions.
* Standardize Save and Cancel interactions.
* Improve cell renaming.
### Bug fixes
* Always save template variables on first edit.
* Query annotations at auto-refresh interval.
* Display link to configure Kapacitor on Alerts Page if no configured Kapacitor.
* Fix saving of new TICKscripts.
* Fix appearance of cell y-axis titles.
* Only add `stateChangesOnly` to new rules.
* Fix 500s when deleting organizations.
* Fixes issues with providing regexp in query.
* Ensure correct basepath prefix in URL pathname when passing InfluxQL query param to Data Explorer.
* Fix type error bug in Kapacitor Alert Config page and persist deleting of team and recipient in OpsGenieConfig.
* Fixes errors caused by switching query tabs in CEO.
* Only send threshold value to parent on blur.
* Require that emails on GitHub & Generic OAuth2 principals be verified & primary, if those fields are provided.
* Send notification when retention policy (rp) creation returns a failure.
* Show valid time in custom time range when now is selected.
* Default to zero for gauges.
## v1.4.3.3 [2018-04-12]
### Bug Fixes
* Require that emails on GitHub & Generic OAuth2 principals be verified & primary if those fields are provided.
## v1.4.3.1 [2018-04-02]
### Bug fixes
* Fixes template variable editing not allowing saving.
* Save template variables on first edit.
* Fix template variables not loading all values.
## v1.4.3.0 [2018-03-28]
### Features
* Add unsafe SSL to Kapacitor UI configuration
* Add server flag to grant SuperAdmin status to users authenticating from a specific Auth0 organization
### UI Improvements
* Redesign system notifications
### Bug Fixes
* Fix Heroku OAuth 2.0 provider support
* Fix error reporting in Data Explorer
* Fix Okta OAuth 2.0 provider support
* Change hover text on delete mappings confirmation button to 'Delete'
* Automatically add graph type 'line' to any graph missing a type
* Fix hanging browser on docker host dashboard
* Fix Kapacitor Rules task enabled checkboxes to only toggle exactly as clicked
* Prevent Multi-Select Dropdown in InfluxDB Admin Users and Roles tabs from losing selection state
* Fix intermittent missing fill from graphs
* Support custom time range in annotations API wrapper
## v1.4.2.5 [2018-04-12]
### Bug Fixes
* Require that emails on GitHub & Generic OAuth2 principals be verified & primary if those fields are provided.
## v1.4.2.3 [2018-03-08]
### Bug fixes
* Include URL in Kapacitor connection creation requests.
## v1.4.2.1 [2018-02-28]
### Features
* Prevent execution of queries in cells that are not in view on the Dashboard page.
* Add an optional persistent legend which can toggle series visibility to dashboard cells.
* Allow user to annotate graphs via UI or API.
### UI improvements
* Add ability to set a prefix and suffix on Single Stat and Gauge cell types.
* Rename 'Create Alerts' page to 'Manage Tasks'; redesign page to improve clarity of purpose.
### Bug fixes
* Save only selected template variable values into dashboards for non-CSV template variables.
* Use Generic APIKey for OAuth2 group lookup.
* Fix bug in which resizing any cell in a dashboard causes a Gauge cell to resize.
* Don't sort Single Stat & Gauge thresholds when editing threshold values.
* Maintain y-axis labels in dashboard cells.
* Deprecate `--new-sources` in CLI.
## v1.4.1.5 [2018-04-12]
### Bug Fixes
* Require that emails on GitHub & Generic OAuth2 principals be verified & primary if those fields are provided.
## v1.4.1.3 [2018-02-14]
### Bug fixes
* Allow self-signed certificates for InfluxDB Enterprise meta nodes.
## v1.4.1.2 [2018-02-13]
### Bug fixes
* Respect `basepath` when fetching server API routes.
* Set default `tempVar` `:interval:` with Data Explorer CSV download call.
* Display series with value of `0` in a cell legend.
## v1.4.1.1 [2018-02-12]
### Features
- Allow multiple event handlers per rule.
- Add "Send Test Alert" button to test Kapacitor alert configurations.
- Link to Kapacitor config panel from Alert Rule builder.
- Add auto-refresh widget to Hosts List page.
- Upgrade to Go 1.9.4 and Node 6.12.3.
- Allow users to delete themselves.
- Add All Users page, visible only to SuperAdmins.
- Introduce `chronoctl` binary for user CRUD operations.
- Introduce mappings to allow control over new user organization assignments.
### UI improvements
- Clarify terminology regarding InfluxDB and Kapacitor connections.
- Separate saving TICKscript from exiting editor page.
- Enable Save (`⌘ + Enter`) and Cancel (`Escape`) hotkeys in Cell Editor Overlay.
- Enable customization of Single Stat "Base Color".
### Bug fixes
- Fix TICKscript Sensu alerts when no GROUP BY tags selected.
- Display 200 most-recent TICKscript log messages; prevent overlapping.
- Add `TO` to kapacitor SMTP config; improve config update error messages.
- Remove CLI options from `sysvinit` service file.
- Remove CLI options from `systemd` service file.
- Fix disappearance of text in Single Stat graphs during editing.
- Redirect to Alerts page after saving Alert Rule.
## v1.4.0.3 [2018-04-12]
### Bug Fixes
* Require that emails on GitHub & Generic OAuth2 principals be verified & primary if those fields are provided.
## v1.4.0.1 [2018-01-09]
### Features
* Add separate CLI flag for canned sources, Kapacitors, dashboards, and organizations.
* Add Telegraf interval configuration.
### Bug fixes
- Allow insecure (self-signed) certificates for Kapacitor and InfluxDB.
- Fix positioning of custom time indicator.
## v1.4.0.0 [2017-12-22]
### Features
* Add support for multiple organizations, multiple users with role-based access control, and private instances.
* Add Kapacitor logs to the TICKscript editor
* Add time shift feature to DataExplorer and Dashboards
* Add auto group by time to Data Explorer
* Support authentication for Enterprise Meta Nodes
* Add Boolean thresholds for kapacitor threshold alerts
* Update kapacitor alerts to cast to float before sending to influx
* Allow override of generic oauth2 keys for email
### UI improvements
* Introduce customizable Gauge visualization type for dashboard cells
* Improve performance of Hosts, Alert History, and TICKscript logging pages when there are many items to display
* Add filtering by name to Dashboard index page
* Improve performance of hoverline rendering
### Bug fixes
* Fix `.jsdep` step failing when LDFLAGS is exported
* Fix logscale producing console errors when only one point is in the graph
* Fix 'Cannot connect to source' false error flag on Dashboard page
* Add fractions of seconds to time field in csv export
* Fix Chronograf so it no longer requires Telegraf's CPU and system plugins for all Apps to appear on the HOST LIST page.
* Fix template variables in dashboard query building.
* Fix several Kapacitor alert creation panics.
* Add shadow-utils to RPM release packages
* Source extra command line options from defaults file
* After CREATE/DELETE queries, refresh list of databases in Data Explorer
* Visualize CREATE/DELETE queries with Table view in Data Explorer
* Include tag values alongside measurement name in Data Explorer result tabs
* Redesign cell display options panel
* Fix queries that include regex, numbers and wildcard
* Prevent apps on the hosts page from parsing tags with null values
* Fix updated Dashboard names not updating dashboard list
* Fix create dashboard button
* Fix default y-axis labels not displaying properly
* Gracefully scale Template Variables Manager overlay on smaller displays
* Prevent Influx Enterprise users from being deleted in a race condition
* Fix OAuth2 logout link not including the basepath
* Fix supplying a role link to sources that do not have a metaURL
* Fix hoverline intermittently not rendering
* Update MySQL pre-canned dashboard to calculate the query derivative correctly
## v1.3.10.0 [2017-10-24]
### Bug fixes
* Improve the copy in the retention policy edit page.
* Fix `Could not connect to source` bug on source creation with unsafe-ssl.
* Fix bad data when exporting `SHOW DATABASES` results to CSV.
* Fix not-equal-to highlighting in Kapacitor Rule Builder.
* Fix undescriptive error messages for database and retention policy creation.
* Fix drag and drop cancel button when writing data in the data explorer.
* Fix persistence of "SELECT AS" statements in queries.
### Features
* Every dashboard can now have its own time range.
* Add CSV download option in dashboard cells.
* Implicitly prepend source URLs with `http://`
* Add support for graph zooming and point display on the millisecond-level.
* Add manual refresh button for Dashboard, Data Explorer, and Host Pages.
### UI improvements
* Increase size of Cell Editor query tabs to reveal more of their query strings.
* Improve appearance of Admin Page tabs on smaller screens.
* Add cancel button to TICKscript editor.
* Redesign dashboard naming & renaming interaction.
* Redesign dashboard switching dropdown.
## v1.3.9.0 [2017-10-06]
### Bug fixes
* Fix Data Explorer disappearing query templates in dropdown.
* Fix missing alert for duplicate db name.
* Chronograf shows real status for Windows hosts when metrics are saved in non-default db.
* Fix false error warning for duplicate Kapacitor name.
* Fix unresponsive display options and query builder in dashboards.
### Features
* Add fill options to Data Explorer and dashboard queries.
* Support editing Kapacitor TICKscripts.
* Introduce the TICKscript editor UI.
* Add CSV download button to the Data Explorer.
* Add Data Explorer InfluxQL query and location query synchronization, so queries can be shared using a URL.
* Able to switch InfluxDB sources on a per graph basis.
### UI improvements
* Require a second click when deleting a dashboard cell.
* Sort database list in Schema Explorer alphabetically.
* Improve usability of dashboard cell context menus.
* Move dashboard cell renaming UI into Cell Editor Overlay.
* Prevent the legend from overlapping graphs at the bottom of the screen.
* Add a "Plus" icon to every button with an Add or Create action for clarity and consistency.
* Make hovering over series smoother.
* Reduce the number of pixels per cell to one point per 3 pixels.
* Remove tabs from Data Explorer.
* Improve appearance of placeholder text in inputs.
* Add ability to use "Default" values in Source Connection form.
* Display name & port in SourceIndicator tool tip.
## v1.3.8.3 [2017-09-29]
### Bug fixes
* Fix duration for single value and custom time ranges.
* Fix Data Explorer query templates dropdown disappearance.
* Fix missing alert for duplicate database names.
* Fix unresponsive display options and query builder in dashboards.
## v1.3.8.2 [2017-09-22]
### Bug fixes
* Fix duration for custom time ranges.
## v1.3.8.1 [2017-09-08]
### Bug fixes
* Fix return code on meta nodes when raft redirects to leader.
* Reduce points per graph to one point per 3 pixels.
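The "one point per 3 pixels" rule works together with the automatic `GROUP BY time()` behavior mentioned elsewhere in this changelog: given a graph's pixel width and the selected time range, a bucket interval can be derived so the query returns roughly one point per few pixels. A minimal sketch of that calculation (the function name and defaults are illustrative, not Chronograf's actual implementation):

```python
def auto_group_by_interval(range_seconds: int, graph_width_px: int,
                           px_per_point: int = 3) -> str:
    """Derive a GROUP BY time() interval yielding about one point
    per `px_per_point` horizontal pixels."""
    target_points = max(1, graph_width_px // px_per_point)
    interval = max(1, range_seconds // target_points)
    return f"time({interval}s)"

# One hour of data on a 900px-wide graph -> ~300 points, 12s buckets
print(auto_group_by_interval(3600, 900))  # → time(12s)
```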
## v1.3.8.0 [2017-09-07]
### Bug fixes
* Fix the limit of 100 alert rules on alert rules page.
* Fix graphs when y-values are constant.
* Fix crosshair not being removed when user leaves graph.
* Fix inability to add kapacitor from source page on fresh install.
* Fix DataExplorer crashing if a field property is not present in the queryConfig.
* Fix the max y value of stacked graphs preventing display of the upper bounds of the chart.
* Fix for delayed selection of template variables using URL query params.
### Features
* Add prefix, suffix, scale, and other y-axis formatting for cells in dashboards.
* Update the group by time when zooming in graphs.
* Add the ability to link directly to presentation mode in dashboards with the `present` Boolean query parameter in the URL.
* Add the ability to select a template variable via a URL parameter.
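Both URL features above compose as ordinary query parameters on a dashboard link. A sketch of building such a link (the `present` parameter is the one documented above; the `tempVars[...]` parameter name and the URL path are assumptions for illustration):

```python
from urllib.parse import urlencode

base = "http://localhost:8888/sources/1/dashboards/3"  # hypothetical dashboard URL
params = {
    "present": "true",              # open directly in presentation mode
    "tempVars[region]": "us-west",  # hypothetical template-variable selection
}
url = f"{base}?{urlencode(params)}"
print(url)
```

Note that `urlencode` percent-encodes the brackets, which the receiving router decodes back into the template-variable name.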
### UI improvements
* Use line-stacked graph type for memory information.
* Improve cell sizes in Admin Database tables.
* Polish appearance of optional alert parameters in Kapacitor rule builder.
* Add active state for Status page navbar icon.
* Improve UX of navigation to a sub-nav item in the navbar.
## v1.3.7.0 [2017-08-23]
### Bug fixes
* Chronograf now renders on Internet Explorer (IE) 11.
* Resolve Kapacitor config for PagerDuty via the UI.
* Fix Safari display issues in the Cell Editor display options.
* Fix uptime status on Windows hosts running Telegraf.
* Fix console error for 'placing prop on div'.
* Fix Write Data form upload button and add `onDragExit` handler.
* Fix missing cell type (and consequently single-stat).
* Fix regression and redesign drag & drop interaction.
* Prevent stats in the legend from wrapping line.
* Fix raw query editor in Data Explorer not using the selected time.
### Features
* Improve 'new-sources' server flag example by adding 'type' key.
* Add an input and validation to custom time range calendar dropdowns.
* Add support for selecting template variables with URL params.
### UI improvements
* Show "Add Graph" button on cells with no queries.
## v1.3.6.1 [2017-08-14]
**Upgrade Note** This release (1.3.6.1) fixes a possible data corruption issue with dashboard cells' graph types. If you upgraded to 1.3.6.0 and visited any dashboard, then after upgrading to this release (1.3.6.1) you will need to manually reset the graph type for every cell via the cell's caret --> Edit --> Display Options. If you upgraded directly to 1.3.6.1, you should not experience this issue.
### Bug fixes
* Fix inaccessible scroll bar in Data Explorer table.
* Fix non-persistence of dashboard graph types.
### Features
* Add y-axis controls to the API for layouts.
### UI improvements
* Increase screen real estate of Query Maker in the Cell Editor Overlay.
## v1.3.6.0 [2017-08-08]
### Bug fixes
* Fix domain not updating in visualizations when changing time range manually.
* Prevent console error spam from Dygraph's synchronize method when a dashboard has only one graph.
* Guarantee UUID for each Alert Table key to prevent dropping items when keys overlap.
### Features
* Add a few time range shortcuts to the custom time range menu.
* Add ability to edit a dashboard graph's y-axis bounds.
* Add ability to edit a dashboard graph's y-axis label.
### UI improvements
* Add spinner in write data modal to indicate data is being written.
* Fix bar graphs overlapping.
* Assign a series consistent coloring when it appears in multiple cells.
* Increase size of line protocol manual entry in Data Explorer's Write Data overlay.
* Improve error message when request for Status Page News Feed fails.
* Provide affirmative UI choice for 'auto' in DisplayOptions with new toggle-based component.
## v1.3.5.0 [2017-07-27]
### Bug fixes
* Fix z-index issue in dashboard cell context menu.
* Clarify BoltPath server flag help text by making example the default path.
* Fix cell name cancel not reverting to original name.
* Fix typo that may have affected PagerDuty node creation in Kapacitor.
* Prevent 'auto' GROUP BY as option in Kapacitor rule builder when applying a function to a field.
* Prevent clipped buttons in Rule Builder, Data Explorer, and Configuration pages.
* Fix JWT for the write path.
* Disentangle client Kapacitor rule creation from Data Explorer query creation.
### Features
* View server generated TICKscripts.
* Add the ability to select Custom Time Ranges in the Hostpages, Data Explorer, and Dashboards.
* Add shared secret JWT authorization to InfluxDB.
* Add Pushover alert support.
* Restore all supported Kapacitor services when creating rules, and add most optional message parameters.
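Shared-secret JWT authorization means Chronograf signs a short-lived token with a secret that InfluxDB also knows, then sends it in an `Authorization: Bearer` header on write and query requests. A stdlib-only sketch of constructing such an HS256 token (the `username` and `exp` claims follow InfluxDB's documented shared-secret scheme; everything else here is illustrative):

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding for each segment
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(username: str, secret: str, ttl: int = 60) -> str:
    """Build header.payload.signature, HMAC-SHA256 signed with the shared secret."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps({"username": username,
                                 "exp": int(time.time()) + ttl}).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

token = make_jwt("admin", "my-shared-secret")
# Sent as: Authorization: Bearer <token>
print(token.count("."))  # → 2 (a JWT has three dot-separated segments)
```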
### UI improvements
* Polish alerts table in status page to wrap text less.
* Specify that version is for Chronograf on Configuration page.
* Move custom time range indicator on cells into corner when in presentation mode.
* Highlight legend "Snip" toggle when active.
## v1.3.4.0 [2017-07-10]
### Bug fixes
* Disallow writing to \_internal in the Data Explorer.
* Add more than one color to Line+Stat graphs.
* Fix updating Retention Policies in single-node InfluxDB instances.
* Lock the width of Template Variable dropdown menus to the size of their longest option.
### Features
* Add Auth0 as a supported OAuth2 provider.
* Add ability to add custom links to User menu via server CLI or ENV vars.
* Allow users to configure custom links on startup that will appear under the User menu in the sidebar.
* Add support for Auth0 organizations.
* Allow users to configure InfluxDB and Kapacitor sources on startup.
### UI improvements
* Redesign Alerts History table on Status Page to have sticky headers.
* Refresh Template Variable values on Dashboard page load.
* Display current version of Chronograf at the bottom of Configuration page.
* Redesign Dashboards table and sort them alphabetically.
* Bring design of navigation sidebar in line with Branding Documentation.
## v1.3.3.0 [2017-06-19]
### Bug fixes
* Prevent legend from flowing over window bottom bound
* Prevent Kapacitor configurations from having the same name
* Limit Kapacitor configuration names to 33 characters to fix display bug
### Features
* Synchronize vertical crosshair at same time across all graphs in a dashboard
* Add automatic `GROUP BY (time)` functionality to dashboards
* Add a Status Page with Recent Alerts bar graph, Recent Alerts table, News Feed, and Getting Started widgets
### UI improvements
* When dashboard time range is changed, reset graphs that are zoomed in
* [Bar graph](/chronograf/v1.6/guides/visualization-types/#bar-graph) option added to dashboard
* Redesign source management table to be more intuitive
* Redesign [Line + Single Stat](/chronograf/v1.6/guides/visualization-types/#line-graph-single-stat) cells to appear more like a sparkline, and improve legibility
## v1.3.2.0 [2017-06-05]
### Bug fixes
* Update the query config's field ordering to always match the input query
* Allow users to add functions to existing Kapacitor rules
* Fix logout menu item regression
* Fix InfluxQL parsing with multiple tag values for a tag key
* Fix load localStorage and warning UX on fresh Chronograf install
* Show submenus when the alert notification is present
### Features
* Add UI to the Data Explorer for [writing data to InfluxDB](/chronograf/v1.6/administration/transition-web-admin-interface/#writing-data)
### UI improvements
* Make the enter and escape keys perform as expected when renaming dashboards
* Improve copy on the Kapacitor configuration page
* Reset graph zoom when the user selects a new time range
* Upgrade to new version of Influx Theme, and remove excess stylesheets
* Replace the user icon with a solid style
* Disable query save in cell editor mode if the query does not have a database, measurement, and field
* Improve UX of applying functions to fields in the query builder
## v1.3.1.0 [2017-05-22]
### Release notes
In versions 1.3.1+, installing a new version of Chronograf automatically clears the localStorage settings.
### Bug fixes
* Fix infinite spinner when `/chronograf` is a [basepath](/chronograf/v1.6/administration/config-options/#basepath-p)
* Remove the query templates dropdown from dashboard cell editor mode
* Fix the backwards sort arrows in table column headers
* Make the logout button consistent with design
* Fix the loading spinner on graphs
* Filter out any template variable values that are empty, whitespace, or duplicates
* Allow users to click the add query button after selecting singleStat as the visualization type
* Add a query for Windows uptime - thank you, @brianbaker!
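The template-variable cleanup described above (dropping empty, whitespace-only, and duplicate values) is easy to sketch; this is an illustration of the behavior, not Chronograf's source:

```python
def clean_template_values(values):
    """Drop empty/whitespace-only entries and duplicates, keeping first-seen order."""
    seen = set()
    cleaned = []
    for v in values:
        v = v.strip()
        if v and v not in seen:
            seen.add(v)
            cleaned.append(v)
    return cleaned

print(clean_template_values(["us-west", "  ", "", "us-west", "eu-central"]))
# → ['us-west', 'eu-central']
```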
### Features
* Add log [event handler](/chronograf/v1.6/guides/configuring-alert-endpoints/) - thank you, @mpchadwick!
* Update Go (golang) vendoring to dep and commit the vendor directory
* Add autocomplete functionality to [template variable](/chronograf/v1.6/guides/dashboard-template-variables/) dropdowns
### UI improvements
* Refactor scrollbars to support non-webkit browsers
* Increase the query builder's default height in cell editor mode and in the data explorer
* Make the [template variables](/chronograf/v1.6/guides/dashboard-template-variables/) manager more space efficient
* Add page spinners to pages that did not have them
* Denote which source is connected in the sources table
* Use milliseconds in the InfluxDB dashboard instead of nanoseconds
* Notify users when local settings are cleared
## v1.3.0 [2017-05-09]
### Bug fixes
* Fix the link to home when using the [`--basepath` option](/chronograf/v1.6/administration/config-options/#basepath-p)
* Remove the notification to login on the login page
* Support queries that perform math on functions
* Prevent the creation of blank template variables
* Ensure thresholds for Kapacitor Rule Alerts appear on page load
* Update the Kapacitor configuration page when the configuration changes
* Fix Authentication when using Chronograf with a set [basepath](/chronograf/v1.6/administration/config-options/#basepath-p)
* Show red indicator on Hosts Page for an offline host
* Support escaping from presentation mode in Safari
* Re-implement level colors on the alerts page
* Fix router bug introduced by upgrading to react-router v3.0
* Show legend on [Line+Stat](/chronograf/v1.6/guides/visualization-types/#line-graph-single-stat) visualization type
* Prevent queries with `:dashboardTime:` from breaking the query builder
### Features
* Add line-protocol proxy for InfluxDB/InfluxDB Enterprise Cluster data sources
* Add `:dashboardTime:` to support cell-specific time ranges on dashboards
* Add support for enabling and disabling [TICKscripts that were created outside Chronograf](/chronograf/v1.6/guides/advanced-kapacitor/#tickscript-management)
* Allow users to delete Kapacitor configurations
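`:dashboardTime:` behaves like other template variables: before a cell's query is sent to InfluxDB, the placeholder is replaced with the cell's (or dashboard's) lower time bound. A simplified sketch of that substitution (the replacement logic here is an illustration, not Chronograf's actual implementation):

```python
def render_query(query: str, dashboard_time: str) -> str:
    """Substitute the :dashboardTime: placeholder with a concrete time bound."""
    return query.replace(":dashboardTime:", dashboard_time)

q = "SELECT mean(usage_idle) FROM cpu WHERE time > :dashboardTime: GROUP BY time(1m)"
print(render_query(q, "now() - 15m"))
```

A cell that overrides the dashboard's range simply supplies a different value for the placeholder.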
### UI improvements
* Save user-provided relative time ranges in cells
* Improve how cell legends and options appear on dashboards
* Combine the measurements and tags columns in the Data Explorer and implement a new design for applying functions to fields.
* Normalize the terminology in Chronograf
* Make overlays full-screen
* Change the default global time range to past 1 hour
* Add the Source Indicator icon to the Configuration and Admin pages
> See Chronograf's [CHANGELOG](https://github.com/influxdata/chronograf/blob/master/CHANGELOG.md) on GitHub for information about the 1.2.0-beta releases.


@ -1,13 +0,0 @@
---
title: Administering Chronograf
description: This section documents Chronograf administration, including configuration, InfluxDB Enterprise clusters, Kapacitor and InfluxDB connections, user and organization management, security, and upgrading.
menu:
chronograf_1_6:
name: Administration
weight: 40
---
Follow the links below for more information.
{{< children hlevel="h2" >}}


@ -1,22 +0,0 @@
---
title: Connecting Chronograf to InfluxDB Enterprise clusters
description: Configuration steps for connecting Chronograf to InfluxDB Enterprise clusters and the InfluxData time series platform.
menu:
chronograf_1_6:
name: Connecting Chronograf to clusters
weight: 40
parent: Administration
---
The connection details form requires additional information when connecting Chronograf to an [InfluxDB Enterprise cluster](/{{< latest "enterprise_influxdb" >}}/).
When you enter the InfluxDB HTTP bind address in the `Connection String` input, Chronograf automatically checks if that InfluxDB instance is a data node.
If it is a data node, Chronograf automatically adds the `Meta Service Connection URL` input to the connection details form.
Enter the HTTP bind address of one of your cluster's meta nodes into that input and Chronograf takes care of the rest.
![Cluster connection details](/img/chronograf/1-6-faq-cluster-connection.png)
Note that the example above assumes that you do not have authentication enabled.
If you have authentication enabled, the form requires username and password information.
For details about monitoring InfluxDB Enterprise clusters, see [Monitoring InfluxDB Enterprise clusters](/chronograf/v1.6/guides/monitoring-influxenterprise-clusters/).
