Merge branch 'master' of github.com:influxdata/docs-v2 into feature/2689-get-started-with-api

* 'master' of github.com:influxdata/docs-v2: (63 commits)
  CLI fixes (#2844)
  Fix links (#2828)
  Fix/2759 1.x query api (#2841)
  fix: updating windows start instructions (#2842)
  Add static legend to visualization examples (#2837)
  Fix Windows CLI download link (#2840)
  Fix incorrect Telegraf output settings info (#2754)
  Move static legend to cloud-only (#2798)
  ThingWorx integration (#2558)
  edits
  Update content/influxdb/v2.0/process-data/task-options.md
  Update content/influxdb/v2.0/process-data/task-options.md
  Update task-options.md
  hr eg covers itt
  1.19.1 release notes (#2819)
  edit
  fixes issue #2672 https://github.com/influxdata/docs-v2/issues/2672
  Update content/kapacitor/v1.6/guides/anomaly_detection.md
  Update content/kapacitor/v1.5/guides/anomaly_detection.md
  Update content/kapacitor/v1.4/guides/anomaly_detection.md
  ...
pull/2853/head
Jason Stirnaman 2021-07-15 13:50:48 -05:00
commit 39e775be62
498 changed files with 110406 additions and 653 deletions

View File

@ -6,6 +6,9 @@ jobs:
environment:
HUGO_VERSION: "0.81.0"
S3DEPLOY_VERSION: "2.3.5"
# From https://github.com/bep/s3deploy/releases
S3DEPLOY_VERSION_HASH: "95de91ed207ba32abd0df71f9681c1ede952f8358f3510b980b02550254c941a"
steps:
- checkout
- restore_cache:
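Pinning `S3DEPLOY_VERSION_HASH` implies the install step verifies the downloaded release against it. A hedged sketch of such a check (the archive name and tooling are assumptions, not taken from this config):

```sh
# Hypothetical verification step; the actual CircleCI job may differ.
echo "${S3DEPLOY_VERSION_HASH}  s3deploy_${S3DEPLOY_VERSION}_Linux-64bit.tar.gz" \
  | sha256sum --check
```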

View File

@ -260,7 +260,7 @@ Find more info [here][{{< enterprise-link >}}]
```
### InfluxDB Cloud Content
For sections content that relate specifically to InfluxDB Cloud, use the `{{% cloud %}}` shortcode.
For sections of content that relate specifically to InfluxDB Cloud, use the `{{% cloud %}}` shortcode.
```md
{{% cloud %}}
@ -333,6 +333,18 @@ current product. Easier to maintain being you update the version number in the `
{{< latest-patch >}}
```
### API endpoint
Use the `{{< api-endpoint >}}` shortcode to generate a code block that contains
a colored request method and a specified API endpoint.
Provide the following arguments:
- **method**: HTTP request method (get, post, patch, put, or delete)
- **endpoint**: API endpoint
```md
{{< api-endpoint method="get" endpoint="/api/v2/tasks">}}
```
### Tabbed Content
Shortcodes are available for creating "tabbed" content (content that is changed by a user's selection).
The following three must be used:
@ -437,6 +449,13 @@ you can customize the text by passing a string argument with the shortcode.
**Output:** This is required
If using other named arguments like `key` or `color`, use the `text` argument to
customize the text of the required message.
```md
{{< req text="Required if ..." color="blue" type="key" >}}
```
#### Required elements in a list
When identifying required elements in a list, use `{{< req type="key" >}}` to generate
a "* Required" key before the list. For required elements in the list, include
@ -450,6 +469,18 @@ a "* Required" key before the list. For required elements in the list, include
- **This element is NOT required**
```
#### Change color of required text
Use the `color` argument to change the color of required text.
The following colors are available:
- blue
- green
- magenta
```md
{{< req color="magenta" text="This is required" >}}
```
### Keybinds
Use the `{{< keybind >}}` shortcode to include OS-specific keybindings/hotkeys.
The following parameters are available:
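The parameter list is cut off by the hunk boundary; a hedged usage sketch with assumed parameter names (`mac`, `linux`, `win`):
```md
{{< keybind mac="⌘+Enter" linux="Ctrl+Enter" win="Ctrl+Enter" >}}
```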
@ -489,7 +520,7 @@ flowchart TB
Use the `{{< filesystem-diagram >}}` shortcode to create a styled file system
diagram using a Markdown unordered list.
##### Example filestsytem diagram shortcode
##### Example filesystem diagram shortcode
```md
{{< filesystem-diagram >}}
- Dir1/
@ -637,6 +668,80 @@ list_code_example: |
```
~~~
#### Organize and include native code examples
To include text from a file in `/assets/text/`, use the
`{{< get-assets-text >}}` shortcode and provide the relative path and filename.
This is useful for maintaining and referencing sample code variants in their
native file formats.
1. Store code examples in their native formats at `/assets/text/`.
```md
/assets/text/example1/example.js
/assets/text/example1/example.py
```
2. Include the files, for example, in code tabs:
````md
{{% code-tabs-wrapper %}}
{{% code-tabs %}}
[Javascript](#js)
[Python](#py)
{{% /code-tabs %}}
{{% code-tab-content %}}
```js
{{< get-assets-text "example1/example.js" >}}
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```py
{{< get-assets-text "example1/example.py" >}}
```
{{% /code-tab-content %}}
{{% /code-tabs-wrapper %}}
````
#### Include specific files from the same directory
To include the text from one file in another file in the same
directory, use the `{{< get-leaf-text >}}` shortcode.
The directory that contains both files must be a
Hugo [*Leaf Bundle*](https://gohugo.io/content-management/page-bundles/#leaf-bundles),
a directory that doesn't have any child directories.
In the following example, `api` is a leaf bundle. `content` isn't.
```md
content
|
|--- api
| query.pdmc
| query.sh
| _index.md
```
##### query.pdmc
```md
# Query examples
```
##### query.sh
```md
curl https://localhost:8086/query
```
To include `query.sh` and `query.pdmc` in `api/_index.md`, use the following code:
````md
{{< get-leaf-text "query.pdmc" >}}
# Curl example
```sh
{{< get-leaf-text "query.sh" >}}
```
````
Avoid using the following file extensions when naming included text files since Hugo interprets these as markup languages:
`.ad`, `.adoc`, `.asciidoc`, `.htm`, `.html`, `.markdown`, `.md`, `.mdown`, `.mmark`, `.pandoc`, `.pdc`, `.org`, or `.rst`.
#### Reference a query example in children
To include a query example with the children in your list, update `data/query_examples.yml`
with the example code, input, and output, and use the `list_query_example`

api-docs/yarn.lock Executable file → Normal file
View File

@ -403,15 +403,6 @@ classnames@^2.2.6:
resolved "https://registry.yarnpkg.com/classnames/-/classnames-2.3.1.tgz#dfcfa3891e306ec1dad105d0e88f4417b8535e8e"
integrity sha512-OlQdbZ7gLfGarSqxesMesDa5uz7KFbID8Kpq/SxIoNGDqY8lSYs0D+hhtBXhcdB3rcbXArFr7vlHheLk1voeNA==
clipboard@^2.0.0:
version "2.0.8"
resolved "https://registry.yarnpkg.com/clipboard/-/clipboard-2.0.8.tgz#ffc6c103dd2967a83005f3f61976aa4655a4cdba"
integrity sha512-Y6WO0unAIQp5bLmk1zdThRhgJt/x3ks6f30s3oE3H1mgIEU33XyQjEf8gsf6DxC7NPX8Y1SsNWjUjL/ywLnnbQ==
dependencies:
good-listener "^1.2.2"
select "^1.1.2"
tiny-emitter "^2.0.0"
cliui@^6.0.0:
version "6.0.0"
resolved "https://registry.yarnpkg.com/cliui/-/cliui-6.0.0.tgz#511d702c0c4e41ca156d7d0e96021f23e13225b1"
@ -568,11 +559,6 @@ decko@^1.2.0:
resolved "https://registry.yarnpkg.com/decko/-/decko-1.2.0.tgz#fd43c735e967b8013306884a56fbe665996b6817"
integrity sha1-/UPHNelnuAEzBohKVvvmZZlraBc=
delegate@^3.1.2:
version "3.2.0"
resolved "https://registry.yarnpkg.com/delegate/-/delegate-3.2.0.tgz#b66b71c3158522e8ab5744f720d8ca0c2af59166"
integrity sha512-IofjkYBZaZivn0V8nnsMJGBr4jVLxHDheKSW88PyxS5QC4Vo9ZbZVvhzlSxY87fVq3STR6r+4cGepyHkcWOQSw==
des.js@^1.0.0:
version "1.0.1"
resolved "https://registry.yarnpkg.com/des.js/-/des.js-1.0.1.tgz#5382142e1bdc53f85d86d53e5f4aa7deb91e0843"
@ -718,13 +704,6 @@ globals@^11.1.0:
resolved "https://registry.yarnpkg.com/globals/-/globals-11.12.0.tgz#ab8795338868a0babd8525758018c2a7eb95c42e"
integrity sha512-WOBp/EEGUiIsJSp7wcv/y6MO+lV9UoncWqxuFfm8eBwzWNgyfBd6Gz+IeKQ9jCmyhoH99g15M3T+QaVHFjizVA==
good-listener@^1.2.2:
version "1.2.2"
resolved "https://registry.yarnpkg.com/good-listener/-/good-listener-1.2.2.tgz#d53b30cdf9313dffb7dc9a0d477096aa6d145c50"
integrity sha1-1TswzfkxPf+33JoNR3CWqm0UXFA=
dependencies:
delegate "^3.1.2"
grapheme-splitter@^1.0.4:
version "1.0.4"
resolved "https://registry.yarnpkg.com/grapheme-splitter/-/grapheme-splitter-1.0.4.tgz#9cf3a665c6247479896834af35cf1dbb4400767e"
@ -1208,11 +1187,9 @@ postcss-value-parser@^4.0.2:
integrity sha512-97DXOFbQJhk71ne5/Mt6cOu6yxsSfM0QGQyl0L25Gca4yGWEGJaig7l7gbCX623VqTBNGLRLaVUCnNkcedlRSQ==
prismjs@^1.20.0:
version "1.23.0"
resolved "https://registry.yarnpkg.com/prismjs/-/prismjs-1.23.0.tgz#d3b3967f7d72440690497652a9d40ff046067f33"
integrity sha512-c29LVsqOaLbBHuIbsTxaKENh1N2EQBOHaWv7gkHN4dgRbxSREqDnDbtFJYdpPauS4YCplMSNCABQ6Eeor69bAA==
optionalDependencies:
clipboard "^2.0.0"
version "1.24.0"
resolved "https://registry.yarnpkg.com/prismjs/-/prismjs-1.24.0.tgz#0409c30068a6c52c89ef7f1089b3ca4de56be2ac"
integrity sha512-SqV5GRsNqnzCL8k5dfAjCNhUrF3pR0A9lTDSCUZeh/LIshheXJEaP0hwLz2t4XHivd2J/v2HR+gRnigzeKe3cQ==
process-nextick-args@~2.0.0:
version "2.0.1"
@ -1440,11 +1417,6 @@ scheduler@^0.19.1:
loose-envify "^1.1.0"
object-assign "^4.1.1"
select@^1.1.2:
version "1.1.2"
resolved "https://registry.yarnpkg.com/select/-/select-1.1.2.tgz#0e7350acdec80b1108528786ec1d4418d11b396d"
integrity sha1-DnNQrN7ICxEIUoeG7B1EGNEbOW0=
set-blocking@^2.0.0:
version "2.0.0"
resolved "https://registry.yarnpkg.com/set-blocking/-/set-blocking-2.0.0.tgz#045f9782d011ae9a6803ddd382b24392b3d890f7"
@ -1633,11 +1605,6 @@ timers-browserify@^2.0.4:
dependencies:
setimmediate "^1.0.4"
tiny-emitter@^2.0.0:
version "2.1.0"
resolved "https://registry.yarnpkg.com/tiny-emitter/-/tiny-emitter-2.1.0.tgz#1d1a56edfc51c43e863cbb5382a72330e3555423"
integrity sha512-NB6Dk1A9xgQPMoGqC5CVXn123gWyte215ONT5Pp5a0yt4nlEoO1ZWeCwpncaekPHXO60i47ihFnZPiRPjRMq4Q==
to-arraybuffer@^1.0.0:
version "1.0.1"
resolved "https://registry.yarnpkg.com/to-arraybuffer/-/to-arraybuffer-1.0.1.tgz#7d229b1fcc637e466ca081180836a7aabff83f43"

View File

@ -128,14 +128,46 @@ Cookies.set('influx-docs-api-lib', selectedApiLib)
// Iterate through code blocks and update InfluxDB urls
// Requires objects with cloud and oss keys and url values
function updateUrls(prevUrls, newUrls) {
var preference = getPreference()
var prevUrlsParsed = {
cloud: {},
oss: {}
}
var newUrlsParsed = {
cloud: {},
oss: {}
}
Object.keys(prevUrls).forEach(function(k) {
try {
prevUrlsParsed[k] = new URL(prevUrls[k])
} catch {
prevUrlsParsed[k] = { host: prevUrls[k] }
}
})
Object.keys(newUrls).forEach(function(k) {
try {
newUrlsParsed[k] = new URL(newUrls[k])
} catch {
newUrlsParsed[k] = { host: newUrls[k] }
}
})
/**
* Match and replace <prev> host with <new> host
* then replace <prev> URL with <new> URL.
**/
var cloudReplacements = [
{ replace: prevUrlsParsed.cloud.host, with: newUrlsParsed.cloud.host },
{ replace: prevUrlsParsed.oss.host, with: newUrlsParsed.cloud.host },
{ replace: prevUrls.cloud, with: newUrls.cloud },
{ replace: prevUrls.oss, with: newUrls.cloud }
{ replace: prevUrls.oss, with: newUrls.cloud },
]
var ossReplacements = [
{ replace: prevUrlsParsed.cloud.host, with: newUrlsParsed.cloud.host },
{ replace: prevUrlsParsed.oss.host, with: newUrlsParsed.oss.host },
{ replace: prevUrls.cloud, with: newUrls.cloud},
{ replace: prevUrls.oss, with: newUrls.oss }
]
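A hypothetical call for illustration only (the URL values are placeholders, not the site's actual preference values):

```js
updateUrls(
  { cloud: 'https://cloud2.influxdata.com', oss: 'http://localhost:8086' },
  { cloud: 'https://us-west-2-1.aws.cloud2.influxdata.com', oss: 'http://localhost:8086' }
)
```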

View File

@ -59,7 +59,7 @@ $bold: 700;
color: $b-pool;
}
&:before {
content: '\e918';
content: "\e919";
font-family: 'icomoon';
margin-right: .65rem;
}
@ -73,7 +73,7 @@ $bold: 700;
border-radius: 4.5px;
transition: all .2s;
&:before {
content: "\e933";
content: "\e934";
display: inline-block;
font-size: .95rem;
margin-right: .5rem;

View File

@ -147,6 +147,10 @@
font-size: .9rem;
font-weight: $medium;
}
&.blue {color: $b-dodger;}
&.green {color: $gr-viridian;}
&.magenta {color: $p-comet;}
}
h2,h3,h4,h5,h6 {

View File

@ -9,7 +9,7 @@
position: relative;
flex-grow: 1;
&:after {
content: '\e905';
content: "\e905";
display: block;
font-family: 'icomoon';
position: absolute;

View File

@ -75,7 +75,7 @@
}
&:after {
content: '\e917';
content: "\e918";
font-family: 'icomoon';
position: absolute;
top: .45rem;

View File

@ -329,7 +329,7 @@ label:after {
border-radius: 0 0 $radius $radius;
&:before {
content: "\e923";
content: "\e924";
display: inline-block;
margin-right: .35rem;
font-family: "icomoon";

View File

@ -1,7 +1,7 @@
body.v1, body.platform{
.article .article--content {
blockquote {
padding: 1.65rem 2rem .1rem 2rem;
padding: 1.65rem 2rem;
margin: 1rem 0 2rem;
border-width: 0 0 0 4px;
border-style: solid;

View File

@ -35,7 +35,7 @@ a.btn {
}
&.download:before {
content: "\e91c";
content: "\e91d";
font-family: "icomoon";
margin-right: .5rem;
font-size: 1.1rem;

View File

@ -5,7 +5,7 @@
margin-top: -.5rem;
a a:after {
content: "\e919";
content: "\e91a";
font-family: "icomoon";
color: rgba($article-heading, .35);
vertical-align: bottom;

View File

@ -80,8 +80,11 @@ pre .api {
font-weight: bold;
font-size: .9rem;
&.get { background: $gr-rainforest; }
&.get { background: $gr-viridian; }
&.post { background: $b-ocean; }
&.patch { background: $y-topaz; }
&.delete { background: $r-ruby; }
&.put {background: $br-pulsar; }
}
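The new `put` class pairs with the `api-endpoint` shortcode documented earlier in this diff; a usage sketch (the endpoint path is hypothetical):

```md
{{< api-endpoint method="put" endpoint="/api/v2/buckets/BUCKET_ID" >}}
```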
////////////////////////////////////////////////////////////////////////////////

View File

@ -74,12 +74,12 @@
}
&.edit:before {
content: "\e92e";
content: "\e92f";
font-size: .75rem;
vertical-align: top;
}
&.issue:before {
content: "\e933";
content: "\e934";
font-size: .95rem;
}
}

View File

@ -55,4 +55,9 @@ li {
}
.list-note { font-size: .85rem }
h4,h5,h6 {
margin-top: 1em;
padding-top: 0;
}
}

View File

@ -47,6 +47,8 @@ table {
}
}
img { margin-bottom: 0; }
}
#flags:not(.no-shorthand), #global-flags {

View File

@ -1,10 +1,10 @@
@font-face {
font-family: 'icomoon';
src: url('fonts/icomoon.eot?a22byr');
src: url('fonts/icomoon.eot?a22byr#iefix') format('embedded-opentype'),
url('fonts/icomoon.ttf?a22byr') format('truetype'),
url('fonts/icomoon.woff?a22byr') format('woff'),
url('fonts/icomoon.svg?a22byr#icomoon') format('svg');
src: url('fonts/icomoon.eot?itn2ph');
src: url('fonts/icomoon.eot?itn2ph#iefix') format('embedded-opentype'),
url('fonts/icomoon.ttf?itn2ph') format('truetype'),
url('fonts/icomoon.woff?itn2ph') format('woff'),
url('fonts/icomoon.svg?itn2ph#icomoon') format('svg');
font-weight: normal;
font-style: normal;
font-display: block;
@ -25,11 +25,8 @@
-moz-osx-font-smoothing: grayscale;
}
.icon-crown:before {
content: "\e934";
}
.icon-book-pencil:before {
content: "\e965";
.icon-bar-chart:before {
content: "\e913";
}
.icon-influx-logo:before {
content: "\e900";
@ -89,107 +86,110 @@
content: "\e912";
}
.icon-heart1:before {
content: "\e913";
}
.icon-settings:before {
content: "\e914";
}
.icon-zoom-in:before {
.icon-settings:before {
content: "\e915";
}
.icon-zoom-out:before {
.icon-zoom-in:before {
content: "\e916";
}
.icon-chevron-down:before {
.icon-zoom-out:before {
content: "\e917";
}
.icon-chevron-left:before {
.icon-chevron-down:before {
content: "\e918";
}
.icon-chevron-right:before {
.icon-chevron-left:before {
content: "\e919";
}
.icon-chevron-up:before {
.icon-chevron-right:before {
content: "\e91a";
}
.icon-menu:before {
.icon-chevron-up:before {
content: "\e91b";
}
.icon-download:before {
.icon-menu:before {
content: "\e91c";
}
.icon-minus:before {
.icon-download:before {
content: "\e91d";
}
.icon-plus:before {
.icon-minus:before {
content: "\e91e";
}
.icon-add-cell:before {
.icon-plus:before {
content: "\e91f";
}
.icon-alert:before {
.icon-add-cell:before {
content: "\e920";
}
.icon-calendar:before {
.icon-alert:before {
content: "\e921";
}
.icon-checkmark:before {
.icon-calendar:before {
content: "\e922";
}
.icon-cog-thick:before {
.icon-checkmark:before {
content: "\e923";
}
.icon-dashboards:before {
.icon-cog-thick:before {
content: "\e924";
}
.icon-data-explorer:before {
.icon-dashboards:before {
content: "\e925";
}
.icon-ui-download:before {
.icon-data-explorer:before {
content: "\e926";
}
.icon-duplicate:before {
.icon-ui-download:before {
content: "\e927";
}
.icon-export:before {
.icon-duplicate:before {
content: "\e928";
}
.icon-fullscreen:before {
.icon-export:before {
content: "\e929";
}
.icon-influx-icon:before {
.icon-fullscreen:before {
content: "\e92a";
}
.icon-note:before {
.icon-influx-icon:before {
content: "\e92b";
}
.icon-organizations:before {
.icon-note:before {
content: "\e92c";
}
.icon-pause:before {
.icon-organizations:before {
content: "\e92d";
}
.icon-pencil:before {
.icon-pause:before {
content: "\e92e";
}
.icon-play:before {
.icon-pencil:before {
content: "\e92f";
}
.icon-ui-plus:before {
.icon-play:before {
content: "\e930";
}
.icon-refresh:before {
.icon-ui-plus:before {
content: "\e931";
}
.icon-remove:before {
.icon-refresh:before {
content: "\e932";
}
.icon-alert-circle:before {
.icon-remove:before {
content: "\e933";
}
.icon-trash:before {
.icon-alert-circle:before {
content: "\e934";
}
.icon-crown:before {
content: "\e935";
}
.icon-trash:before {
content: "\e936";
}
.icon-triangle:before {
content: "\e937";
}
@ -232,6 +232,9 @@
.icon-eye-open:before {
content: "\e957";
}
.icon-book-pencil:before {
content: "\e965";
}
.icon-heart:before {
content: "\e9da";
}

View File

@ -0,0 +1,36 @@
/**
* Use an InfluxDB Cloud username and token
* with Basic Authentication
* to query the InfluxDB 1.x compatibility API
*/
const https = require('https');
const querystring = require('querystring');
function queryWithUsername() {
const queryparams = {
db: 'mydb',
q: 'SELECT * FROM cpu_usage',
};
const options = {
host: 'localhost',
port: 8086,
path: '/query?' + querystring.stringify(queryparams),
auth: 'exampleuser@influxdata.com:YourAuthToken',
headers: {
'Content-type': 'application/json'
},
};
const request = https.get(options, (response) => {
let rawData = '';
// Collect response chunks as they arrive.
response.on('data', (chunk) => { rawData += chunk; });
response.on('end', () => {
console.log(rawData);
})
});
request.end();
}

View File

@ -0,0 +1,16 @@
#######################################
# Use an InfluxDB 1.x compatible username
# and password with Basic Authentication
# to query the InfluxDB 1.x compatibility API
#######################################
# Use default retention policy
#######################################
# Use the --user option with `--user <username>:<password>` syntax
# or the `--user <username>` interactive syntax to ensure your credentials are
# encoded in the header.
#######################################
curl --get "http://localhost:8086/query" \
--user "OneDotXUsername":"YourAuthToken" \
--data-urlencode "db=mydb" \
--data-urlencode "q=SELECT * FROM cpu_usage"

View File

@ -0,0 +1,37 @@
/**
* Use an InfluxDB 1.x compatible username and password
* to query the InfluxDB 1.x compatibility API
*
* Use Basic authentication
*/
const https = require('https');
const querystring = require('querystring');
function queryWithUsername() {
const queryparams = {
db: 'mydb',
q: 'SELECT * FROM cpu_usage',
};
const options = {
host: 'localhost',
port: 8086,
path: '/query?' + querystring.stringify(queryparams),
auth: 'OneDotXUsername:yourPasswordOrToken',
headers: {
'Content-type': 'application/json'
},
};
const request = https.get(options, (response) => {
let rawData = '';
// Collect response chunks as they arrive.
response.on('data', (chunk) => { rawData += chunk; });
response.on('end', () => {
console.log(rawData);
})
});
request.end();
}

View File

@ -0,0 +1,16 @@
#######################################
# Use an InfluxDB 1.x compatible username
# and password with Basic Authentication
# to query the InfluxDB 1.x compatibility API
#######################################
# Use default retention policy
#######################################
# Use the --user option with `--user <username>:<password>` syntax
# or the `--user <username>` interactive syntax to ensure your credentials are
# encoded in the header.
#######################################
curl --get "http://localhost:8086/query" \
--user "OneDotXUsername":"yourPasswordOrToken" \
--data-urlencode "db=mydb" \
--data-urlencode "q=SELECT * FROM cpu_usage"

View File

@ -0,0 +1,38 @@
/**
* Use an InfluxDB 1.x compatible username and password
* to query the InfluxDB 1.x compatibility API
*
* Use authentication query parameters:
* ?u=<username>&p=<password>
*
* Use default retention policy.
*/
const https = require('https');
const querystring = require('querystring');
function queryWithToken() {
const queryparams = {
db: 'mydb',
q: 'SELECT * FROM cpu_usage',
u: 'OneDotXUsername',
p: 'yourPasswordOrToken'
};
const options = {
host: 'localhost',
port: 8086,
path: "/query?" + querystring.stringify(queryparams)
};
const request = https.get(options, (response) => {
let rawData = '';
// Collect response chunks as they arrive.
response.on('data', (chunk) => { rawData += chunk; });
response.on('end', () => {
console.log(rawData);
})
});
request.end();
}

View File

@ -0,0 +1,14 @@
#######################################
# Use an InfluxDB 1.x compatible username and password
# to query the InfluxDB 1.x compatibility API
#######################################
# Use authentication query parameters:
# ?u=<username>&p=<password>
# Use default retention policy.
#######################################
curl --get "http://localhost:8086/query" \
--data-urlencode "u=OneDotXUsername" \
--data-urlencode "p=yourPasswordOrToken" \
--data-urlencode "db=mydb" \
--data-urlencode "q=SELECT * FROM cpu_usage"

View File

@ -0,0 +1,35 @@
/**
* Use a token in the Authorization header
* to query the InfluxDB 1.x compatibility API
*/
const https = require('https');
const querystring = require('querystring');
function queryWithToken() {
const queryparams = {
db: 'mydb',
q: 'SELECT * FROM cpu_usage',
};
const options = {
host: 'localhost',
port: 8086,
path: "/query?" + querystring.stringify(queryparams),
headers: {
'Authorization': 'Token YourAuthToken',
'Content-type': 'application/json'
},
};
const request = https.get(options, (response) => {
let rawData = '';
// Collect response chunks as they arrive.
response.on('data', (chunk) => { rawData += chunk; });
response.on('end', () => {
console.log(rawData);
})
});
request.end();
}

View File

@ -0,0 +1,10 @@
#######################################
# Use a token in the Authorization header
# to query the InfluxDB 1.x compatibility API
#######################################
curl --get "http://localhost:8086" \
--header "Authorization: Token YourAuthToken" \
--header 'Content-type: application/json' \
--data-urlencode "db=mydb" \
--data-urlencode "q=SELECT * FROM cpu_usage"

View File

@ -0,0 +1,6 @@
[
{"name": "time", "type": "timestamp"},
{"name": "host", "type": "tag"},
{"name": "usage_user", "type": "field", "dataType": "float"},
{"name": "usage_system", "type": "field", "dataType": "float"}
]

View File

@ -0,0 +1,3 @@
influx bucket create \
--name my_schema_bucket \
--schema-type explicit

View File

@ -0,0 +1,5 @@
{"name": "time", "type": "timestamp"}
{"name": "service", "type": "tag"}
{"name": "sensor", "type": "tag"}
{"name": "temperature", "type": "field", "dataType": "float"}
{"name": "humidity", "type": "field", "dataType": "float"}

View File

@ -0,0 +1,7 @@
[
{"name": "time", "type": "timestamp"},
{"name": "service", "type": "tag"},
{"name": "host", "type": "tag"},
{"name": "usage_user", "type": "field", "dataType": "float"},
{"name": "usage_system", "type": "field", "dataType": "float"}
]

View File

@ -0,0 +1,6 @@
name,type,data_type
time,timestamp,
host,tag,
service,tag,
fsRead,field,float
fsWrite,field,float
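These column files pair with the `influx bucket create --schema-type explicit` example above. A hedged sketch of attaching a measurement schema, assuming the `influx bucket-schema create` command and flags available in InfluxDB Cloud:

```sh
influx bucket-schema create \
  --bucket my_schema_bucket \
  --name cpu \
  --columns-file columns.csv \
  --columns-format csv
```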

View File

@ -1,7 +1,8 @@
---
title: Use dashboard template variables
description: >
Dashboard variables let you to alter specific components of cells' queries without having to edit the queries, making it easy to interact with your dashboard cells and explore your data.
Chronograf dashboard template variables let you update cell queries without editing queries,
making it easy to interact with your dashboard cells and explore your data.
aliases:
- /chronograf/v1.8/introduction/templating/
- /chronograf/v1.8/templating/
@ -11,10 +12,10 @@ menu:
parent: Guides
---
Chronograf's dashboard template variables let you update cell queries without editing queries, making it easy to interact with your dashboard cells and explore your data.
Chronograf dashboard template variables let you update cell queries without editing queries,
making it easy to interact with your dashboard cells and explore your data.
- [Use template variables](#use-template-variables)
- [Quoting template variables](#quoting-template-variables)
- [Predefined template variables](#predefined-template-variables)
- [Create custom template variables](#create-custom-template-variables)
- [Template variable types](#template-variable-types)
@ -23,21 +24,27 @@ Chronograf's dashboard template variables let you update cell queries without ed
## Use template variables
When creating Chronograf dashboards, use template variables in cell queries and titles.
When creating Chronograf dashboards, use either [predefined template variables](#predefined-template-variables) or [custom template variables](#create-custom-template-variables) in your cell queries and titles.
After you set up variables, variables are available to select in your dashboard user interface (UI).
In the query, surround template variable names with colons (`:`) as follows:
- [Use template variables in cell queries](#use-template-variables-in-cell-queries)
- [InfluxQL](#influxql)
- [Flux](#flux)
- [Use template variables in cell titles](#use-template-variables-in-cell-titles)
![Use template variables](/img/chronograf/1-6-template-vars-use.gif)
### Use template variables in cell queries
Both InfluxQL and Flux support template variables.
#### InfluxQL
In an InfluxQL query, surround template variable names with colons (`:`) as follows:
```sql
SELECT :variable_name: FROM "telegraf"."autogen".:measurement: WHERE time < :dashboardTime:
```
Use either [predefined template variables](#predefined-template-variables)
or [custom template variables](#create-custom-template-variables).
After you set up variables, variables are available to select in your dashboard user-interface (UI).
![Using template variables](/img/chronograf/1-6-template-vars-use.gif)
## Quoting template variables
##### Quoting template variables in InfluxQL
For **predefined meta queries** such as "Field Keys" and "Tag Values", **do not add quotes** (single or double) to your queries. Chronograf will add quotes as follows:
@ -45,7 +52,7 @@ For **predefined meta queries** such as "Field Keys" and "Tag Values", **do not
SELECT :variable_name: FROM "telegraf"."autogen".:measurement: WHERE time < :dashboardTime:
```
For **custom queries**, **CSV**, or **map queries**, quote the values in the query in accordance with standard [InfluxQL](/influxdb/v1.8/query_language/) syntax as follows:
For **custom queries**, **CSV**, or **map queries**, quote the values in the query following standard [InfluxQL](/{{< latest "influxdb" "v1" >}}/query_language/) syntax:
- For numerical values, **do not quote**.
- For string values, choose to quote the values in the variable definition (or not). See [String examples](#string-examples) below.
@ -53,14 +60,14 @@ For **custom queries**, **CSV**, or **map queries**, quote the values in the que
{{% note %}}
**Tips for quoting strings:**
- When using custom meta queries that return strings, you typically quote the variable values when using them in a dashboard query, because InfluxQL results are returned without quotes.
- If you use template variable strings in regular expression syntax (where quotes may cause query syntax errors), this flexibility in quoting methods is particularly useful.
{{% /note %}}
#### String examples
##### String examples
Add single quotes when you define template variables, or in your queries, but not both.
##### Example 1: Add single quotes in variable definition
###### Add single quotes in variable definition
If you define a custom CSV variable named `host` using single quotes:
@ -72,10 +79,10 @@ Do not include quotes in your query:
```sql
SELECT mean("usage_user") AS "mean_usage_user" FROM "telegraf"."autogen"."cpu"
WHERE "host" = :host: and time > :dashboardTime
WHERE "host" = :host: and time > :dashboardTime:
```
##### Example 2: Add single quotes in query
###### Add single quotes in query
If you define a custom CSV variable named `host` without quotes:
@ -90,14 +97,52 @@ SELECT mean("usage_user") AS "mean_usage_user" FROM "telegraf"."autogen"."cpu"
WHERE "host" = ':host:' and time > :dashboardTime
```
#### Flux
To use a template variable in a Flux query, include the variable key in your query.
{{% note %}}
Flux dashboard cell queries don't support **custom template variables**, but do
support [predefined template variables](#predefined-template-variables).
{{% /note %}}
```js
from(bucket: "example-bucket")
|> range(start: dashboardTime, stop: dashboardTime)
|> filter(fn: (r) => r._field == "example-field")
|> aggregateWindow(every: autoInterval, fn: last)
```
### Use template variables in cell titles
To dynamically change the title of a dashboard cell,
use the `:variable-name:` syntax.
For example, for a variable named `field` with a value of `temp`
and a variable named `location` with a value of `San Antonio`, use the following syntax:
```
:field: data for :location:
```
Displays as:
{{< img-hd src= "/img/chronograf/1-9-template-var-title.png" alt="Use template variables in cell titles" />}}
## Predefined template variables
Chronograf includes predefined template variables controlled by elements in the Chrongraf UI.
These template variables can be used in any of your cells' queries.
Chronograf includes predefined template variables controlled by elements in the Chronograf UI.
Use predefined template variables in your cell queries.
[`:dashboardTime:`](#dashboardtime)
[`:upperDashboardTime:`](#upperdashboardtime)
[`:interval:`](#interval)
InfluxQL and Flux include their own sets of predefined template variables:
{{< tabs-wrapper >}}
{{% tabs %}}
[InfluxQL](#)
[Flux](#)
{{% /tabs %}}
{{% tab-content %}}
- [`:dashboardTime:`](#dashboardtime)
- [`:upperDashboardTime:`](#upperdashboardtime)
- [`:interval:`](#interval)
### dashboardTime
The `:dashboardTime:` template variable is controlled by the "time" dropdown in your Chronograf dashboard.
@ -113,9 +158,11 @@ FROM "telegraf".."cpu"
WHERE time > :dashboardTime:
```
> In order to use the date picker to specify a particular time range in the past
> which does not include "now", the query should be constructed using `:dashboardTime:`
> as the lower limit and [`:upperDashboardTime:`](#upperdashboardtime) as the upper limit.
{{% note %}}
To use the date picker to specify a particular time range in the past
which does not include "now", construct the query using `:dashboardTime:`
as the start time and [`:upperDashboardTime:`](#upperdashboardtime) as the stop time.
{{% /note %}}
### upperDashboardTime
The `:upperDashboardTime:` template variable is defined by the upper time limit specified using the date picker.
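The InfluxQL example for this variable is cut from the hunk; a sketch following the pattern of the `dashboardTime` example above (assumed, not from the source):

```sql
SELECT "usage_system"
FROM "telegraf".."cpu"
WHERE time > :dashboardTime: AND time < :upperDashboardTime:
```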
@ -144,10 +191,61 @@ FROM "telegraf".."cpu"
WHERE time > :dashboardtime:
GROUP BY time(:interval:)
```
{{% /tab-content %}}
{{% tab-content %}}
- [`dashboardTime`](#dashboardtime-flux)
- [`upperDashboardTime`](#upperdashboardtime-flux)
- [`autoInterval`](#autointerval)
### dashboardTime {id="dashboardtime-flux"}
The `dashboardTime` template variable is controlled by the "time" dropdown in your Chronograf dashboard.
<img src="/img/chronograf/1-6-template-vars-time-dropdown.png" style="width:100%;max-width:549px;" alt="Dashboard time selector"/>
If using relative time, this variable represents the time offset specified in the dropdown (-5m, -15m, -30m, etc.) and assumes time is relative to "now".
If using absolute time defined by the date picker, `dashboardTime` is populated with the start time.
```js
from(bucket: "telegraf/autogen")
|> range(start: dashboardTime)
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
```
{{% note %}}
To use the date picker to specify a time range in the past without "now", use `dashboardTime` as the lower limit and
[`upperDashboardTime`](#upperdashboardtime-flux) as the upper limit.
{{% /note %}}
### upperDashboardTime {id="upperdashboardtime-flux"}
The `upperDashboardTime` template variable is defined by the stop time specified using the date picker.
<img src="/img/chronograf/1-6-template-vars-date-picker.png" style="width:100%;max-width:762px;" alt="Dashboard date picker"/>
For relative time frames, this variable inherits `now()`. For absolute time frames, this variable inherits the stop time.
```js
from(bucket: "telegraf/autogen")
|> range(start: dashboardTime, stop: upperDashboardTime)
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
```
### autoInterval
The `autoInterval` template variable is controlled by the display width of the
dashboard cell and is calculated by the duration of time that each pixel covers.
Use the `autoInterval` variable to downsample data and display a maximum of one point per pixel.
```js
from(bucket: "telegraf/autogen")
|> range(start: dashboardTime)
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
|> aggregateWindow(every: autoInterval, fn: mean)
```
{{% /tab-content %}}
{{< /tabs-wrapper >}}
## Create custom template variables
Template variables are essentially an array of potential values used to populate parts of your cells' queries.
Chronograf lets you create custom template variables powered by meta queries or CSV uploads that return an array of possible values.
To create a template variable:
@ -167,15 +265,15 @@ and a dropdown for the variable will be included at the top of your dashboard.
## Template Variable Types
Chronograf supports the following template variable types:
[Databases](#databases)
[Measurements](#measurements)
[Field Keys](#field-keys)
[Tag Keys](#tag-keys)
[Tag Values](#tag-values)
[CSV](#csv)
[Map](#map)
[Custom Meta Query](#custom-meta-query)
[Text](#text)
- [Databases](#databases)
- [Measurements](#measurements)
- [Field Keys](#field-keys)
- [Tag Keys](#tag-keys)
- [Tag Values](#tag-values)
- [CSV](#csv)
- [Map](#map)
- [Custom Meta Query](#custom-meta-query)
- [Text](#text)
### Databases
Database template variables allow you to select from multiple target [databases](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#database).
@ -193,7 +291,8 @@ SELECT "purchases" FROM :databaseVar:."autogen"."customers"
```
#### Database variable use cases
Database template variables are good when visualizing multiple databases with similar or identical data structures. They allow you to quickly switch between visualizations for each of your databases.
Use database template variables when visualizing multiple databases with similar or identical data structures.
Variables let you quickly switch between visualizations for each of your databases.
### Measurements
Vary the target [measurement](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#measurement).
@ -285,11 +384,14 @@ value3
value4
```
> Since string field values [require single quotes in InfluxQL](/{{< latest "influxdb" "v1" >}}/troubleshooting/frequently-asked-questions/#when-should-i-single-quote-and-when-should-i-double-quote-in-queries), string values should be wrapped in single quotes.
{{% note %}}
String field values [require single quotes in InfluxQL](/{{< latest "influxdb" "v1" >}}/troubleshooting/frequently-asked-questions/#when-should-i-single-quote-and-when-should-i-double-quote-in-queries),
so wrap string values in single quotes.
>```csv
```csv
'string1','string2','string3','string4'
```
{{% /note %}}
_**Example CSV variable in a cell query**_
```sql
@ -315,14 +417,16 @@ key4,value4
<img src="/img/chronograf/1-6-template-vars-map-dropdown.png" style="width:100%;max-width:140px;" alt="Map variable dropdown"/>
> If values are meant to be used as string field values, wrap them in single quote ([required by InfluxQL](/{{< latest "influxdb" "v1" >}}/troubleshooting/frequently-asked-questions/#when-should-i-single-quote-and-when-should-i-double-quote-in-queries)). This only pertains to values. String keys do not matter.
{{% note %}}
Wrap string field values in single quotes ([required by InfluxQL](/{{< latest "influxdb" "v1" >}}/troubleshooting/frequently-asked-questions/#when-should-i-single-quote-and-when-should-i-double-quote-in-queries)) (string field values only; string keys do not require quotes).
>```csv
```csv
key1,'value1'
key2,'value2'
key3,'value3'
key4,'value4'
```
{{% /note %}}
_**Example Map variable in a cell query**_
```sql
@ -345,7 +449,8 @@ The customer names would populate your template variable dropdown rather than th
### Custom Meta Query
Vary part of a query with a customized meta query that pulls a specific array of values from InfluxDB.
These variables let you pull a highly customized array of potential values and offer advanced functionality such as [filtering values based on other template variables](#filter-template-variables-with-other-template-variables).
Custom meta query variables let you pull a highly customized array of potential values and offer
advanced functionality such as [filtering values based on other template variables](#filter-template-variables-with-other-template-variables).
<img src="/img/chronograf/1-6-template-vars-custom-meta-query.png" style="width:100%;max-width:667px;" alt="Custom meta query"/>
@ -355,7 +460,7 @@ SELECT "purchases" FROM "animals"."autogen"."customers" WHERE "customer" = :cust
```
#### Custom meta query variable use cases
Custom meta query template variables should be used any time you are pulling values from InfluxDB, but the pre-canned template variable types aren't able to return the desired list of values.
Use custom InfluxQL meta query template variables when predefined template variable types aren't able to return the values you want.
### Text
Vary a part of a query with a single string of text.
@ -373,21 +478,21 @@ as well as many other parameters that control the display of graphs in your dash
These names are either [predefined variables](#predefined-template-variables) or would
conflict with existing URL query parameters.
`:database:`
`:measurement:`
`:dashboardTime:`
`:upperDashboardTime:`
`:interval:`
`:upper:`
`:lower:`
`:zoomedUpper:`
`:zoomedLower:`
`refreshRate:`
- `:database:`
- `:measurement:`
- `:dashboardTime:`
- `:upperDashboardTime:`
- `:interval:`
- `:upper:`
- `:lower:`
- `:zoomedUpper:`
- `:zoomedLower:`
- `:refreshRate:`
## Advanced template variable usage
### Filter template variables with other template variables
[Custom meta query template variables](#custom-meta-query) allow you to filter the array of potential variable values using other existing template variables.
[Custom meta query template variables](#influxql-meta-query) let you filter the array of potential variable values using other existing template variables.
For example, let's say you want to list all the field keys associated with a measurement, but want to be able to change the measurement:
@ -395,7 +500,7 @@ For example, let's say you want to list all the field keys associated with a mea
<img src="/img/chronograf/1-6-template-vars-measurement-var.png" style="width:100%;max-width:667px;" alt="measurementVar"/>
2. Create a template variable named `:fieldKey:` that uses the [custom meta query](#custom-meta-query) variable type.
2. Create a template variable named `:fieldKey:` that uses the [InfluxQL meta query](#influxql-meta-query) variable type.
The following meta query pulls a list of field keys based on the existing `:measurementVar:` template variable.
```sql
@ -404,7 +509,7 @@ The following meta query pulls a list of field keys based on the existing `:meas
<img src="/img/chronograf/1-6-template-vars-fieldkey.png" style="width:100%;max-width:667px;" alt="fieldKey"/>
3. Create a new dashboard cell that uses the `:fieldKey:` and `:measurementVar` template variables in its query.
3. Create a new dashboard cell that uses the `fieldKey` and `measurementVar` template variables in its query.
```sql
SELECT :fieldKey: FROM "telegraf"..:measurementVar: WHERE time > :dashboardTime:
@ -418,8 +523,8 @@ The resulting dashboard will work like this:
Chronograf uses URL query parameters (also known as query string parameters) to set both display options and template variables in the URL.
This makes it easy to share links to dashboards so they load in a specific state with specific template variable values selected.
URL query parameters are appeneded to the end of the URL with a question mark (`?`) indicating beginning of query parameters.
Multiple query paramemters can be chained together using an ampersand (`&`).
URL query parameters are appended to the end of the URL with a question mark (`?`) indicating the beginning of query parameters.
Chain multiple query parameters together using an ampersand (`&`).
To declare a template variable or a date range as a URL query parameter, use the following pattern:
@ -445,8 +550,10 @@ Name of the template variable.
`variableValue`
Value of the template variable.
> Whenever template variables are modified in the dashboard, the corresponding
> URL query parameters are automatically updated.
{{% note %}}
When template variables are modified in the dashboard, the corresponding
URL query parameters are automatically updated.
{{% /note %}}
#### Example template variable query parameter
```

View File

@ -0,0 +1,58 @@
---
title: Chronograf 1.9 documentation
description: >
Chronograf is InfluxData's open source web application.
Use Chronograf with the other components of the TICK stack to visualize your
monitoring data and easily create alerting and automation rules.
menu:
chronograf_1_9:
name: Chronograf v1.9
weight: 1
---
Chronograf is InfluxData's open source web application.
Use Chronograf with the other components of the [TICK stack](https://www.influxdata.com/products/) to visualize your monitoring data and easily create alerting and automation rules.
![Chronograf Collage](/img/chronograf/1-6-chronograf-collage.png)
## Key features
### Infrastructure monitoring
* View all hosts and their statuses in your infrastructure
* View the configured applications on each host
* Monitor your applications with Chronograf's [pre-created dashboards](/chronograf/v1.9/guides/using-precreated-dashboards/)
### Alert management
Chronograf offers a UI for [Kapacitor](https://github.com/influxdata/kapacitor), InfluxData's data processing framework for creating alerts, running ETL jobs, and detecting anomalies in your data.
* Generate threshold, relative, and deadman alerts on your data
* Easily enable and disable existing alert rules
* View all active alerts on an alert dashboard
* Send alerts to the supported event handlers, including Slack, PagerDuty, HipChat, and [more](/chronograf/v1.9/guides/configuring-alert-endpoints/)
### Data visualization
* Monitor your application data with Chronograf's [pre-created dashboards](/chronograf/v1.9/guides/using-precreated-dashboards/)
* Create your own customized dashboards complete with various graph types and [template variables](/chronograf/v1.9/guides/dashboard-template-variables/)
* Investigate your data with Chronograf's data explorer and query templates
### Database management
* Create and delete databases and retention policies
* View currently-running queries and stop inefficient queries from overloading your system
* Create, delete, and assign permissions to users (Chronograf supports [InfluxDB OSS](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#authorization) and InfluxDB Enterprise user management)
### Multi-organization and multi-user support
{{% note %}}
**Note:** To use this feature, OAuth 2.0 authentication must be configured.
Once configured, the Chronograf Admin tab on the Admin menu is visible.
For details, see [Managing Chronograf security](/chronograf/v1.9/administration/managing-security/).
{{% /note %}}
* Create organizations and assign users to those organizations
* Restrict access to administrative functions
* Allow users to set up and maintain unique dashboards for their organizations

View File

@ -0,0 +1,45 @@
---
title: About the Chronograf project
description: Learn about Chronograf, the user interface (UI) for InfluxDB.
menu:
chronograf_1_9:
name: About the project
weight: 10
---
Chronograf is the user interface component of the [InfluxData time series platform](https://www.influxdata.com/time-series-platform/). It makes monitoring and alerting for your infrastructure easy to set up and maintain. It is simple to use and includes templates and libraries that let you rapidly build dashboards with real-time visualizations of your data.
Follow the links below for more information.
{{< children >}}
Chronograf is released under the GNU Affero General Public License. This Free Software Foundation license is fairly new,
and differs from the more widely known and understood GPL.
Our goal with using AGPL is to preserve the concept of copyleft with Chronograf.
With traditional GPL, copyleft was associated with the concept of distribution of software.
The problem is that nowadays, distribution of software is rare: things tend to run in the cloud. AGPL fixes this “loophole”
in GPL by saying that if you use the software over a network, you are bound by the copyleft. Other than that,
the license is virtually the same as GPL v3.
To say this another way: if you modify the core source code of Chronograf, the goal is that you have to contribute
those modifications back to the community.
Note however that it is NOT required that your dashboards and alerts created by using Chronograf be published.
The copyleft applies only to the source code of Chronograf itself.
If this explanation isn't good enough for you and your use case, we dual license Chronograf under our
[standard commercial license](https://www.influxdata.com/legal/slsa/).
[Contact sales for more information](https://www.influxdata.com/contact-sales/).
## Third Party Software
InfluxData products contain third party software, which means the copyrighted, patented, or otherwise legally protected
software of third parties that is incorporated in InfluxData products.
Third party suppliers make no representation nor warranty with respect to such third party software or any portion thereof.
Third party suppliers assume no liability for any claim that might arise with respect to such third party software,
nor for a customer's use of or inability to use the third party software.
The [list of third party software components, including references to associated license and other materials](https://github.com/influxdata/chronograf/blob/master/LICENSE_OF_DEPENDENCIES.md),
is maintained on a version by version basis.

View File

@ -0,0 +1,12 @@
---
title: InfluxData Contributor License Agreement (CLA)
description: >
Before contributing to the Chronograf project, submit the InfluxData Contributor License Agreement.
menu:
chronograf_1_9:
weight: 30
parent: About the project
url: https://www.influxdata.com/legal/cla/
---
Before you can contribute to the Chronograf project, you need to submit the [InfluxData Contributor License Agreement (CLA)](https://www.influxdata.com/legal/cla/) available on the InfluxData main site.

View File

@ -0,0 +1,12 @@
---
title: Contribute to Chronograf
description: Contribute to the Chronograf project.
menu:
chronograf_1_9:
name: Contribute
weight: 20
parent: About the project
url: https://github.com/influxdata/chronograf/blob/master/CONTRIBUTING.md
---
See [Contributing to Chronograf](https://github.com/influxdata/chronograf/blob/master/CONTRIBUTING.md) in the Chronograf GitHub project to learn how you can contribute to the Chronograf project.

View File

@ -0,0 +1,12 @@
---
title: Open source license for Chronograf
description: Find the open source license for Chronograf.
menu:
chronograf_1_9:
name: Open source license
weight: 40
parent: About the project
url: https://github.com/influxdata/chronograf/blob/master/LICENSE
---
The [open source license for Chronograf](https://github.com/influxdata/chronograf/blob/master/LICENSE) is available in the Chronograf GitHub project.

File diff suppressed because it is too large

View File

@ -0,0 +1,14 @@
---
title: Administering Chronograf
description: >
Upgrade and configure Chronograf, plus manage connections, users, security, and organizations.
menu:
chronograf_1_9:
name: Administration
weight: 40
---
Follow the links below for more information.
{{< children >}}

View File

@ -0,0 +1,22 @@
---
title: Connecting Chronograf to InfluxDB Enterprise clusters
description: Work with InfluxDB Enterprise clusters through the Chronograf UI.
menu:
chronograf_1_9:
name: Connecting Chronograf to InfluxDB Enterprise
weight: 40
parent: Administration
---
The connection details form requires additional information when connecting Chronograf to an [InfluxDB Enterprise cluster](/{{< latest "enterprise_influxdb" >}}/).
When you enter the InfluxDB HTTP bind address in the `Connection String` input, Chronograf automatically checks if that InfluxDB instance is a data node.
If it is a data node, Chronograf automatically adds the `Meta Service Connection URL` input to the connection details form.
Enter the HTTP bind address of one of your cluster's meta nodes into that input and Chronograf takes care of the rest.
![Cluster connection details](/img/chronograf/1-6-faq-cluster-connection.png)
Note that the example above assumes that you do not have authentication enabled.
If you have authentication enabled, the form requires username and password information.
For details about monitoring InfluxDB Enterprise clusters, see [Monitoring InfluxDB Enterprise clusters](/chronograf/v1.9/guides/monitoring-influxenterprise-clusters).

View File

@ -0,0 +1,682 @@
---
title: Chronograf configuration options
description: >
Options available in the Chronograf configuration file and environment variables.
menu:
chronograf_1_9:
name: Configuration options
weight: 30
parent: Administration
---
Chronograf is configured using the configuration file (/etc/default/chronograf) and environment variables. If you do not uncomment a configuration option, the system uses its default setting. The configuration settings in this document are set to their default settings. For more information, see [Configure Chronograf](/chronograf/v1.9/administration/configuration/).
* [Usage](#usage)
* [Chronograf service options](#chronograf-service-options)
- [InfluxDB connection options](#influxdb-connection-options)
- [Kapacitor connection options](#kapacitor-connection-options)
- [TLS (Transport Layer Security) options](#tls-transport-layer-security-options)
- [etcd options](#etcd-options)
- [Other service options](#other-service-options)
* [Authentication options](#authentication-options)
* [General authentication options](#general-authentication-options)
* [GitHub-specific OAuth 2.0 authentication options](#github-specific-oauth-20-authentication-options)
* [Google-specific OAuth 2.0 authentication options](#google-specific-oauth-20-authentication-options)
* [Auth0-specific OAuth 2.0 authentication options](#auth0-specific-oauth-20-authentication-options)
* [Heroku-specific OAuth 2.0 authentication options](#heroku-specific-oauth-20-authentication-options)
* [Generic OAuth 2.0 authentication options](#generic-oauth-20-authentication-options)
## Usage
Start the Chronograf service, and include any options after `chronograf`, where `[OPTIONS]` are options separated by spaces:
```sh
chronograf [OPTIONS]
```
**Linux examples**
- To start `chronograf` without options:
```sh
sudo systemctl start chronograf
```
- To start `chronograf` and set options for develop mode and to disable reporting:
```sh
sudo systemctl start chronograf --develop --reporting-disabled
```
**MacOS X examples**
- To start `chronograf` without options:
```sh
chronograf
```
- To start `chronograf` and add shortcut options for develop mode and to disable reporting:
```sh
chronograf -d -r
```
{{% note %}}
***Note:*** Command line options take precedence over corresponding environment variables.
{{% /note %}}
## Chronograf service options
#### `--host=`
The IP that the `chronograf` service listens on.
Default value: `0.0.0.0`
Example: `--host=0.0.0.0`
Environment variable: `$HOST`
#### `--port=`
The port that the `chronograf` service listens on for insecure connections.
Default: `8888`
Environment variable: `$PORT`
#### `--bolt-path=` | `-b`
The file path to the BoltDB file.
Default value: `./chronograf-v1.db`
Environment variable: `$BOLT_PATH`
#### `--canned-path=` | `-c`
The path to the directory of [canned dashboards](/chronograf/v1.9/guides/using-precreated-dashboards) files. Canned dashboards (also known as pre-created dashboards or application layouts) cannot be edited. They're delivered with Chronograf and available depending on which Telegraf input plugins you have enabled.
Default value: `/usr/share/chronograf/canned`
Environment variable: `$CANNED_PATH`
#### `--resources-path=`
Path to directory of sources (.src files), Kapacitor connections (.kap files), organizations (.org files), and dashboards (.dashboard files).
{{% note %}}
**Note:** If you have a dashboard with the `.json` extension, rename it with the `.dashboard` extension in this directory to ensure the dashboard is loaded.
{{% /note %}}
Default value: `/usr/share/chronograf/resources`
Environment variable: `$RESOURCES_PATH`
#### `--basepath=` | `-p`
The URL path prefix under which all `chronograf` routes will be mounted.
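Example (hypothetical value): `--basepath=/chronograf`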
Environment variable: `$BASE_PATH`
#### `--status-feed-url=`
URL of JSON feed to display as a news feed on the client Status page.
Default value: `https://www.influxdata.com/feed/json`
Environment variable: `$STATUS_FEED_URL`
#### `--version` | `-v`
Displays the version of the Chronograf service.
Example:
```sh
$ chronograf -v
2018/01/03 14:11:19 Chronograf 1.4.0.0-rc1-26-gb74ae387 (git: b74ae387)
```
## InfluxDB connection options
{{% note %}}
InfluxDB connection details specified via command line when starting Chronograf do not persist when Chronograf is shut down.
To persist connection details, [include them in a `.src` file](/chronograf/v1.9/administration/creating-connections/#manage-influxdb-connections-using-src-files) located in your [`--resources-path`](#resources-path).
**Only InfluxDB 1.x connections are configurable in a `.src` file.**
Configure InfluxDB 2.x and Cloud connections with CLI flags or in the
[Chronograf UI](/chronograf/v1.9/administration/creating-connections/#manage-influxdb-connections-using-the-chronograf-ui).
{{% /note %}}
### `--influxdb-url`
The location of your InfluxDB instance, including the protocol, IP address, and port.
Example: `--influxdb-url http://localhost:8086`
Environment variable: `$INFLUXDB_URL`
### `--influxdb-username`
The [username] for your InfluxDB instance.
Environment variable: `$INFLUXDB_USERNAME`
### `--influxdb-password`
The [password] for your InfluxDB instance.
Environment variable: `$INFLUXDB_PASSWORD`
### `--influxdb-org`
InfluxDB 2.x or InfluxDB Cloud organization name.
Environment variable: `$INFLUXDB_ORG`
### `--influxdb-token`
InfluxDB 2.x or InfluxDB Cloud [authentication token](/influxdb/cloud/security/tokens/).
Environment variable: `$INFLUXDB_TOKEN`
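Taken together, a hedged startup sketch connecting Chronograf to an InfluxDB 2.x instance (URL, organization, and token are placeholders):

```sh
chronograf \
  --influxdb-url=http://localhost:8086 \
  --influxdb-org=example-org \
  --influxdb-token=$INFLUXDB_TOKEN
```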
## Kapacitor connection options
{{% note %}}
Kapacitor connection details specified via command line when starting Chronograf do not persist when Chronograf is shut down.
To persist connection details, [include them in a `.kap` file](/chronograf/v1.9/administration/creating-connections/#manage-kapacitor-connections-using-kap-files) located in your [`--resources-path`](#resources-path).
{{% /note %}}
### `--kapacitor-url=`
The location of your Kapacitor instance, including `http://`, IP address, and port.
Example: `--kapacitor-url=http://0.0.0.0:9092`.
Environment variable: `$KAPACITOR_URL`
### `--kapacitor-username=`
The username for your Kapacitor instance.
Environment variable: `$KAPACITOR_USERNAME`
### `--kapacitor-password=`
The password for your Kapacitor instance.
Environment variable: `$KAPACITOR_PASSWORD`
### TLS (Transport Layer Security) options
See [Configuring TLS (Transport Layer Security) and HTTPS](/chronograf/v1.9/administration/managing-security/#configure-tls-transport-layer-security-and-https) for more information.
#### `--cert=`
The file path to PEM-encoded public key certificate.
Environment variable: `$TLS_CERTIFICATE`
#### `--key=`
The file path to private key associated with given certificate.
Environment variable: `$TLS_PRIVATE_KEY`
### etcd options
#### `--etcd-endpoints=` | `-e`
List of etcd endpoints.
##### CLI example
```sh
## Single parameter
--etcd-endpoints=localhost:2379
## Multiple parameters
--etcd-endpoints=localhost:2379 \
--etcd-endpoints=192.168.1.61:2379 \
--etcd-endpoints=192.168.1.100:2379
```
Environment variable: `$ETCD_ENDPOINTS`
##### Environment variable example
```sh
## Single parameter
ETCD_ENDPOINTS=localhost:2379
## Multiple parameters
ETCD_ENDPOINTS=localhost:2379,192.168.1.61:2379,192.168.1.100:2379
```
#### `--etcd-username=`
Username to log into etcd.
Environment variable: `$ETCD_USERNAME`
#### `--etcd-password=`
Password to log into etcd.
Environment variable: `$ETCD_PASSWORD`
#### `--etcd-dial-timeout=`
Total time to wait before timing out while connecting to etcd endpoints.
0 means no timeout.
The default is 1s.
Environment variable: `$ETCD_DIAL_TIMEOUT`
#### `--etcd-request-timeout=`
Total time to wait before timing out an etcd view or update request.
0 means no timeout.
The default is 1s.
Environment variable: `$ETCD_REQUEST_TIMEOUT`
#### `--etcd-cert=`
Path to etcd PEM-encoded TLS public key certificate.
Environment variable: `$ETCD_CERTIFICATE`
#### `--etcd-key=`
Path to private key associated with specified etcd certificate.
Environment variable: `$ETCD_PRIVATE_KEY`
#### `--etcd-root-ca`
Path to root CA certificate for TLS verification.
Environment variable: `$ETCD_ROOT_CA`
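Putting these options together, the following sketch starts Chronograf against a secured etcd cluster with custom timeouts (the credentials and certificate paths are placeholder values):
```sh
chronograf \
  --etcd-endpoints=localhost:2379 \
  --etcd-username=chronograf \
  --etcd-password=examplePassword \
  --etcd-dial-timeout=3s \
  --etcd-request-timeout=5s \
  --etcd-cert=/path/to/etcd.pem \
  --etcd-key=/path/to/etcd.key
```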
### Other service options
#### `--custom-auto-refresh`
Add custom auto-refresh intervals to the list of available auto-refresh intervals in Chronograf dashboards.
Provide a semi-colon-delimited list of key-value pairs where the key is the interval
name that appears in the auto-refresh dropdown menu and the value is the auto-refresh interval in milliseconds.
Example: `--custom-auto-refresh "500ms=500;1s=1000"`
Environment variable: `$CUSTOM_AUTO_REFRESH`
#### `--custom-link <display_name>:<link_address>`
Custom link added to Chronograf User menu options. Useful for providing links to internal company resources for your Chronograf users. Can be used when any OAuth 2.0 authentication is enabled. To add another custom link, repeat the custom link option.
Example: `--custom-link InfluxData:http://www.influxdata.com/`
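For example, a sketch that adds two custom links by repeating the option (both display names and URLs are placeholder values):
```sh
chronograf \
  --custom-link InfluxData:https://www.influxdata.com/ \
  --custom-link Docs:https://docs.influxdata.com/
```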
#### `--develop` | `-d`
Run the `chronograf` service in developer mode.
#### `--help` | `-h`
Displays the command line help for `chronograf`.
#### `--host-page-disabled` | `-H`
Disables rendering and serving of the Hosts List page (`/sources/$sourceId/hosts`).
Environment variable: `$HOST_PAGE_DISABLED`
#### `--log-level=` | `-l`
Set the logging level.
Valid values: `debug` | `info` | `error`
Default value: `info`
Example: `--log-level=debug`
Environment variable: `$LOG_LEVEL`
#### `--reporting-disabled` | `-r`
Disables reporting of usage statistics.
Usage statistics reported once every 24 hours include: `OS`, `arch`, `version`, `cluster_id`, and `uptime`.
Environment variable: `$REPORTING_DISABLED`
## Authentication options
### General authentication options
#### `--auth-duration=`
The total duration (in hours) of cookie life for authentication.
Default value: `720h`
Authentication expires on browser close when `--auth-duration=0`.
Environment variable: `$AUTH_DURATION`
#### `--inactivity-duration=`
The duration that a token is valid without any new activity.
Default value: `5m`
Environment variable: `$INACTIVITY_DURATION`
#### `--public-url=`
The public URL required to access Chronograf using a web browser. For example, if you access Chronograf using the default URL, the public URL value would be `http://localhost:8888`.
Required for Google OAuth 2.0 authentication. Used for Auth0 and some generic OAuth 2.0 authentication providers.
Environment variable: `$PUBLIC_URL`
#### `--token-secret=` | `-t`
The secret for signing tokens.
Environment variable: `$TOKEN_SECRET`
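For example, a minimal sketch combining the general authentication options (the token secret is a placeholder value; an OAuth 2.0 provider's options, described below, are also required for authentication to take effect):
```sh
chronograf \
  --token-secret=MySup3rS3cretT0k3n \
  --auth-duration=24h \
  --inactivity-duration=30m
```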
### GitHub-specific OAuth 2.0 authentication options
See [Configuring GitHub authentication](/chronograf/v1.9/administration/managing-security/#configure-github-authentication) for more information.
#### `--github-url`
{{< req "Required if using GitHub Enterprise" >}}
GitHub base URL. Default is `https://github.com`.
Environment variable: `$GH_URL`
#### `--github-client-id` | `-i`
The GitHub client ID value for OAuth 2.0 support.
Environment variable: `$GH_CLIENT_ID`
#### `--github-client-secret` | `-s`
The GitHub Client Secret value for OAuth 2.0 support.
Environment variable: `$GH_CLIENT_SECRET`
#### `--github-organization` | `-o`
[Optional] Specify a GitHub organization membership required for a user.
##### CLI example
```sh
## Single parameter
--github-organization=org1
## Multiple parameters
--github-organization=org1 \
--github-organization=org2 \
--github-organization=org3
```
Environment variable: `$GH_ORGS`
##### Environment variable example
```sh
## Single parameter
GH_ORGS=org1
## Multiple parameters
GH_ORGS=org1,org2,org3
```
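For example, a sketch that enables GitHub authentication (the token secret, client ID, client secret, and organization are placeholder values):
```sh
chronograf \
  --token-secret=MySup3rS3cretT0k3n \
  --github-client-id=exampleClientID \
  --github-client-secret=exampleClientSecret \
  --github-organization=example-org
```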
### Google-specific OAuth 2.0 authentication options
See [Configuring Google authentication](/chronograf/v1.9/administration/managing-security/#configure-google-authentication) for more information.
#### `--google-client-id=`
The Google Client ID value required for OAuth 2.0 support.
Environment variable: `$GOOGLE_CLIENT_ID`
#### `--google-client-secret=`
The Google Client Secret value required for OAuth 2.0 support.
Environment variable: `$GOOGLE_CLIENT_SECRET`
#### `--google-domains=`
[Optional] Restricts authorization to users from specified Google email domains.
##### CLI example
```sh
## Single parameter
--google-domains=delorean.com
## Multiple parameters
--google-domains=delorean.com \
--google-domains=savetheclocktower.com
```
Environment variable: `$GOOGLE_DOMAINS`
##### Environment variable example
```sh
## Single parameter
GOOGLE_DOMAINS=delorean.com
## Multiple parameters
GOOGLE_DOMAINS=delorean.com,savetheclocktower.com
```
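For example, a sketch that enables Google authentication (the token secret, client ID, and client secret are placeholder values; `--public-url` is required for Google OAuth 2.0):
```sh
chronograf \
  --token-secret=MySup3rS3cretT0k3n \
  --google-client-id=example-id.apps.googleusercontent.com \
  --google-client-secret=exampleClientSecret \
  --public-url=http://localhost:8888
```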
### Auth0-specific OAuth 2.0 authentication options
See [Configuring Auth0 authentication](/chronograf/v1.9/administration/managing-security/#configure-auth0-authentication) for more information.
#### `--auth0-domain=`
The subdomain of your Auth0 client; available on the configuration page for your Auth0 client.
Example: `https://myauth0client.auth0.com`
Environment variable: `$AUTH0_DOMAIN`
#### `--auth0-client-id=`
The Auth0 Client ID value required for OAuth 2.0 support.
Environment variable: `$AUTH0_CLIENT_ID`
#### `--auth0-client-secret=`
The Auth0 Client Secret value required for OAuth 2.0 support.
Environment variable: `$AUTH0_CLIENT_SECRET`
#### `--auth0-organizations=`
[Optional] The Auth0 organization membership required to access Chronograf.
Organizations are set using an "organization" key in the user's `app_metadata`.
Lists are comma-separated and are only available when using environment variables.
##### CLI example
```sh
## Single parameter
--auth0-organizations=org1
## Multiple parameters
--auth0-organizations=org1 \
--auth0-organizations=org2 \
--auth0-organizations=org3
```
Environment variable: `$AUTH0_ORGS`
##### Environment variable example
```sh
## Single parameter
AUTH0_ORGS=org1
## Multiple parameters
AUTH0_ORGS=org1,org2,org3
```
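For example, a sketch that enables Auth0 authentication (the token secret, domain, client ID, and client secret are placeholder values):
```sh
chronograf \
  --token-secret=MySup3rS3cretT0k3n \
  --auth0-domain=https://myauth0client.auth0.com \
  --auth0-client-id=exampleClientID \
  --auth0-client-secret=exampleClientSecret \
  --public-url=http://localhost:8888
```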
### Heroku-specific OAuth 2.0 authentication options
See [Configuring Heroku authentication](/chronograf/v1.9/administration/managing-security/#configure-heroku-authentication) for more information.
#### `--heroku-client-id=`
The Heroku Client ID for OAuth 2.0 support.
Environment variable: `$HEROKU_CLIENT_ID`
#### `--heroku-secret=`
The Heroku Secret for OAuth 2.0 support.
Environment variable: `$HEROKU_SECRET`
#### `--heroku-organization=`
The Heroku organization memberships required for access to Chronograf.
##### CLI example
```sh
## Single parameter
--heroku-organization=org1
## Multiple parameters
--heroku-organization=org1 \
--heroku-organization=org2 \
--heroku-organization=org3
```
Environment variable: `$HEROKU_ORGS`
##### Environment variable example
```sh
## Single parameter
HEROKU_ORGS=org1
## Multiple parameters
HEROKU_ORGS=org1,org2,org3
```
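For example, a sketch that enables Heroku authentication (the token secret, client ID, and secret are placeholder values):
```sh
chronograf \
  --token-secret=MySup3rS3cretT0k3n \
  --heroku-client-id=exampleClientID \
  --heroku-secret=exampleSecret
```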
### Generic OAuth 2.0 authentication options
See [Configure OAuth 2.0](/chronograf/v1.9/administration/managing-security/#configure-oauth-2-0) for more information.
#### `--generic-name=`
The generic OAuth 2.0 name presented on the login page.
Environment variable: `$GENERIC_NAME`
#### `--generic-client-id=`
The generic OAuth 2.0 Client ID value.
Can be used for a custom OAuth 2.0 service.
Environment variable: `$GENERIC_CLIENT_ID`
#### `--generic-client-secret=`
The generic OAuth 2.0 Client Secret value.
Environment variable: `$GENERIC_CLIENT_SECRET`
#### `--generic-scopes=`
The scopes requested from the provider by the web client.
Default value: `user:email`
##### CLI example
```sh
## Single parameter
--generic-scopes=api
## Multiple parameters
--generic-scopes=api \
--generic-scopes=openid \
--generic-scopes=read_user
```
Environment variable: `$GENERIC_SCOPES`
##### Environment variable example
```sh
## Single parameter
GENERIC_SCOPES=api
## Multiple parameters
GENERIC_SCOPES=api,openid,read_user
```
#### `--generic-domains=`
The email domain required for user email addresses.
Example: `--generic-domains=example.com`
##### CLI example
```sh
## Single parameter
--generic-domains=delorean.com
## Multiple parameters
--generic-domains=delorean.com \
--generic-domains=savetheclocktower.com
```
Environment variable: `$GENERIC_DOMAINS`
##### Environment variable example
```sh
## Single parameter
GENERIC_DOMAINS=delorean.com
## Multiple parameters
GENERIC_DOMAINS=delorean.com,savetheclocktower.com
```
#### `--generic-auth-url`
The authorization endpoint URL for the OAuth 2.0 provider.
Environment variable: `$GENERIC_AUTH_URL`
#### `--generic-token-url`
The token endpoint URL for the OAuth 2.0 provider.
Environment variable: `$GENERIC_TOKEN_URL`
#### `--generic-api-url`
The URL that returns OpenID UserInfo-compatible information.
Environment variable: `$GENERIC_API_URL`
#### `--oauth-no-pkce`
Disable OAuth PKCE (Proof Key for Code Exchange).
Environment variable: `$OAUTH_NO_PKCE`
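For example, a sketch that enables a generic OAuth 2.0 provider (all provider-specific values below are placeholders; consult your provider's documentation for the actual endpoint URLs):
```sh
chronograf \
  --token-secret=MySup3rS3cretT0k3n \
  --generic-name=ExampleProvider \
  --generic-client-id=exampleClientID \
  --generic-client-secret=exampleClientSecret \
  --generic-scopes=openid \
  --generic-auth-url=https://oauth.example.com/authorize \
  --generic-token-url=https://oauth.example.com/token \
  --generic-api-url=https://oauth.example.com/userinfo \
  --public-url=http://localhost:8888
```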

View File

@ -0,0 +1,70 @@
---
title: Configure Chronograf
description: >
Configure Chronograf, including security, multiple users, and multiple organizations.
menu:
chronograf_1_9:
name: Configure
weight: 20
parent: Administration
---
Configure Chronograf by passing command line options when starting the Chronograf service, or set custom default configuration options in the filesystem so they don't have to be passed in each time Chronograf starts.
- [Start the Chronograf service](#start-the-chronograf-service)
- [Set custom default Chronograf configuration options](#set-custom-default-chronograf-configuration-options)
- [Set up security, organizations, and users](#set-up-security-organizations-and-users)
## Start the Chronograf service
Use one of the following commands to start Chronograf:
- **If you installed Chronograf using an official Debian or RPM package and are running a distro with `systemd` (for example, Ubuntu 15 or later):**
```sh
systemctl start chronograf
```
- **If you installed Chronograf using an official Debian or RPM package:**
```sh
service chronograf start
```
- **If you built Chronograf from source:**
```bash
$GOPATH/bin/chronograf
```
## Set custom default Chronograf configuration options
Custom default Chronograf configuration settings can be defined in `/etc/default/chronograf`.
This file consists of key-value pairs. See keys (environment variables) for [Chronograf configuration options](/chronograf/v1.9/administration/config-options), and set values for the keys you want to configure.
```conf
HOST=0.0.0.0
PORT=8888
TLS_CERTIFICATE=/path/to/cert.pem
TOKEN_SECRET=MySup3rS3cretT0k3n
LOG_LEVEL=info
```
{{% note %}}
**Note:** `/etc/default/chronograf` is only created when installing the `.deb` or `.rpm` package.
{{% /note %}}
## Set up security, organizations, and users
To set up security for Chronograf, configure:
* [OAuth 2.0 authentication](/chronograf/v1.9/administration/managing-security/#configure-oauth-2-0)
* [TLS (Transport Layer Security) for HTTPS](/chronograf/v1.9/administration/managing-security/#configure-tls-transport-layer-security-and-https)
After you configure OAuth 2.0 authentication, you can set up multiple organizations, roles, and users. For details, check out the following topics:
* [Managing organizations](/chronograf/v1.9/administration/managing-organizations/)
* [Managing Chronograf users](/chronograf/v1.9/administration/managing-chronograf-users/)
<!-- TODO ## Configuring Chronograf for InfluxDB Enterprise clusters) -->

View File

@ -0,0 +1,72 @@
---
title: Create a Chronograf HA configuration
description: Create a Chronograf high-availability (HA) cluster using etcd.
menu:
chronograf_1_9:
weight: 10
parent: Administration
---
To create a Chronograf high-availability (HA) configuration using an etcd cluster as a shared data store, do the following:
1. [Install and start etcd](#install-and-start-etcd)
2. Set up a load balancer for Chronograf
3. [Start Chronograf](#start-chronograf)
Have an existing Chronograf configuration store that you want to use with a Chronograf HA configuration? Learn how to [migrate your Chronograf configuration](/chronograf/v1.9/administration/migrate-to-high-availability/) to a shared data store.
## Architecture
{{< svg "/static/img/chronograf/1-8-ha-architecture.svg" >}}
## Install and start etcd
1. Download the latest etcd release [from GitHub](https://github.com/etcd-io/etcd/releases/).
(For detailed installation instructions specific to your operating system, see [Install and deploy etcd](http://play.etcd.io/install).)
2. Extract the `etcd` binary and place it in your system PATH.
3. Start etcd (a minimal single-node example is shown below).
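For example, the following starts a single-node etcd instance listening on the default client port (the URLs are placeholder values appropriate for local testing):
```sh
etcd \
  --listen-client-urls=http://localhost:2379 \
  --advertise-client-urls=http://localhost:2379
```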
## Start Chronograf
Run the following command to start Chronograf using `etcd` as the storage layer. The syntax depends on whether you're using command line flags or the `ETCD_ENDPOINTS` environment variable.
##### Define etcd endpoints with command line flags
```sh
# Syntax
chronograf --etcd-endpoints=<etcd-host>
# Examples
# Add a single etcd endpoint when starting Chronograf
chronograf --etcd-endpoints=localhost:2379
# Add multiple etcd endpoints when starting Chronograf
chronograf \
--etcd-endpoints=localhost:2379 \
--etcd-endpoints=192.168.1.61:2379 \
--etcd-endpoints=192.168.1.100:2379
```
##### Define etcd endpoints with the ETCD_ENDPOINTS environment variable
```sh
# Provide etcd endpoints in a comma-separated list
export ETCD_ENDPOINTS=localhost:2379,192.168.1.61:2379,192.168.1.100:2379
# Start Chronograf
chronograf
```
##### Define etcd endpoints with TLS enabled
Use the `--etcd-cert` flag to specify the path to the etcd PEM-encoded public
certificate file and the `--etcd-key` flag to specify the path to the private key
associated with the etcd certificate.
```sh
chronograf --etcd-endpoints=localhost:2379 \
--etcd-cert=path/to/etcd-certificate.pem \
--etcd-key=path/to/etcd-private-key.key
```
For more information, see [Chronograf etcd configuration options](/chronograf/v1.9/administration/config-options#etcd-options).

View File

@ -0,0 +1,289 @@
---
title: Create InfluxDB and Kapacitor connections
description: Create and manage InfluxDB and Kapacitor connections in the UI.
menu:
chronograf_1_9:
name: Create InfluxDB and Kapacitor connections
weight: 50
parent: Administration
related:
- /influxdb/v2.0/tools/chronograf/
---
Connections to InfluxDB and Kapacitor can be configured through the Chronograf user interface (UI) or with JSON configuration files:
- [Manage InfluxDB connections using the Chronograf UI](#manage-influxdb-connections-using-the-chronograf-ui)
- [Manage InfluxDB connections using .src files](#manage-influxdb-connections-using-src-files)
- [Manage Kapacitor connections using the Chronograf UI](#manage-kapacitor-connections-using-the-chronograf-ui)
- [Manage Kapacitor connections using .kap files](#manage-kapacitor-connections-using-kap-files)
{{% note %}}
**Note:** Connection details are stored in Chronograf's internal database `chronograf-v1.db`.
You may administer the internal database when [restoring a Chronograf database](/chronograf/v1.9/administration/restoring-chronograf-db/)
or when [migrating a Chronograf configuration from BoltDB to etcd](/chronograf/v1.9/administration/migrate-to-high-availability/).
{{% /note %}}
## Manage InfluxDB connections using the Chronograf UI
To create an InfluxDB connection in the Chronograf UI:
1. Open Chronograf and click **Configuration** (wrench icon) in the navigation menu.
2. Click **Add Connection**.
![Chronograf connections landing page](/img/chronograf/1-6-connection-landing-page.png)
3. Provide the necessary connection credentials.
{{< tabs-wrapper >}}
{{% tabs %}}
[InfluxDB 1.x](#)
[InfluxDB Cloud or OSS 2.x ](#)
{{% /tabs %}}
{{% tab-content %}}
<img src="/img/chronograf/1-8-influxdb-v1-connection-config.png" style="width:100%; max-width:798px;"/>
- **Connection URL**: hostname or IP address and port of the InfluxDB 1.x instance
- **Connection Name**: Unique name for this connection.
- **Username**: InfluxDB 1.x username
_(Required only if [authorization is enabled](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/) in InfluxDB)_
- **Password**: InfluxDB password
_(Required only if [authorization is enabled](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/) in InfluxDB)_
- **Telegraf Database Name**: the database Chronograf uses to populate parts of the application, including the Host List page (default is `telegraf`)
- **Default Retention Policy**: default [retention policy](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#retention-policy-rp)
(if left blank, defaults to `autogen`)
- **Default connection**: use this connection as the default connection for data exploration, dashboards, and administrative actions
{{% /tab-content %}}
{{% tab-content %}}
<img src="/img/chronograf/1-8-influxdb-v2-connection-config.png" style="width:100%; max-width:798px;"/>
- **Enable the {{< req "InfluxDB v2 Auth" >}} option**
- **Connection URL**: [InfluxDB Cloud region URL](/influxdb/cloud/reference/regions/)
or [InfluxDB OSS 2.x URL](/influxdb/v2.0/reference/urls/)
```
http://localhost:8086
```
- **Connection Name**: Unique name for this connection.
- **Organization**: InfluxDB [organization](/influxdb/v2.0/organizations/)
- **Token**: InfluxDB [authentication token](/influxdb/v2.0/security/tokens/)
- **Telegraf Database Name:** InfluxDB [bucket](/influxdb/v2.0/organizations/buckets/)
Chronograf uses to populate parts of the application, including the Host List page (default is `telegraf`)
- **Default Retention Policy:** default [retention policy](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#retention-policy-rp)
_**(leave blank)**_
- **Default connection**: use this connection as the default connection for data exploration and dashboards
{{% note %}}
For more information about connecting Chronograf to an InfluxDB Cloud or OSS 2.x instance, see:
- [Use Chronograf with InfluxDB Cloud](/influxdb/cloud/tools/chronograf/)
- [Use Chronograf with InfluxDB OSS 2.x](/{{< latest "influxdb" "v2" >}}/tools/chronograf/)
{{% /note %}}
{{% /tab-content %}}
{{< /tabs-wrapper >}}
4. Click **Add Connection**
* If the connection is valid, the Dashboards window appears, allowing you to import dashboard templates you can use to display and analyze your data. For details, see [Creating dashboards](/chronograf/v1.9/guides/create-a-dashboard).
* If the connection cannot be created, the following error message appears:
"Unable to create source: Error contacting source."
If this occurs, ensure all connection credentials are correct and that the InfluxDB instance is running and accessible.
The following dashboards are available:
- Docker
- Kubernetes Node
- Riak
- Consul
- Kubernetes Overview
- Mesos
- IIS
- RabbitMQ
- System
- VMware vSphere Overview
- Apache
- Elasticsearch
- InfluxDB
- Memcached
- NSQ
- PostgreSQL
- Consul Telemetry
- HAProxy
- Kubernetes Pod
- NGINX
- Redis
- VMware vSphere VMs
- VMware vSphere Hosts
- PHPfpm
- Win System
- MySQL
- Ping
## Manage InfluxDB connections using .src files
Manually create `.src` files to store InfluxDB connection details.
`.src` files are simple JSON files that contain key-value paired connection details.
The location of `.src` files is defined by the [`--resources-path`](/chronograf/v1.9/administration/config-options/#resources-path)
command line option, which is, by default, the same as the [`--canned-path`](/chronograf/v1.9/administration/config-options/#canned-path-c).
A `.src` file contains the details for a single InfluxDB connection.
{{% note %}}
**Only InfluxDB 1.x connections are configurable in a `.src` file.**
Configure InfluxDB 2.x and Cloud connections with [CLI flags](/chronograf/v1.9/administration/config-options/#influxdb-connection-options)
or in the [Chronograf UI](#manage-influxdb-connections-using-the-chronograf-ui).
{{% /note %}}
Create a new file named `example.src` (the filename is arbitrary) and place it in Chronograf's `resources-path`.
All `.src` files should contain the following:
{{< keep-url >}}
```json
{
"id": "10000",
"name": "My InfluxDB",
"username": "test",
"password": "test",
"url": "http://localhost:8086",
"type": "influx",
"insecureSkipVerify": false,
"default": true,
"telegraf": "telegraf",
"organization": "example_org"
}
```
#### `id`
A unique, stringified non-negative integer.
Using a 4 or 5 digit number is recommended to avoid interfering with existing datasource IDs.
#### `name`
Any string you want to use as the display name of the source.
#### `username`
Username used to access the InfluxDB server or cluster.
*Only required if [authorization is enabled](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/) on the InfluxDB instance to which you're connecting.*
#### `password`
Password used to access the InfluxDB server or cluster.
*Only required if [authorization is enabled](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/) on the InfluxDB instance to which you're connecting.*
#### `url`
URL of the InfluxDB server or cluster.
#### `type`
Defines the type or distribution of InfluxDB to which you are connecting.
The following options are available:
| InfluxDB Distribution | `type` Value |
| --------------------- | ------------ |
| InfluxDB OSS | `influx` |
| InfluxDB Enterprise | `influx-enterprise` |
#### `insecureSkipVerify`
Skips the SSL certificate verification process.
Set to `true` if you are using a self-signed SSL certificate on your InfluxDB server or cluster.
#### `default`
Set to `true` if you want the connection to be the default data connection used upon first login.
#### `telegraf`
The name of the Telegraf database on your InfluxDB server or cluster.
#### `organization`
The ID of the organization you want the data source to be associated with.
### Environment variables in .src files
`.src` files support the use of environment variables to populate InfluxDB connection details.
Environment variables can be loaded using the `"{{ .VARIABLE_KEY }}"` syntax:
```json
{
"id": "10000",
"name": "My InfluxDB",
"username": "{{ .INFLUXDB_USER }}",
"password": "{{ .INFLUXDB_PASS }}",
"url": "{{ .INFLUXDB_URL }}",
"type": "influx",
"insecureSkipVerify": false,
"default": true,
"telegraf": "telegraf",
"organization": "example_org"
}
```
## Manage Kapacitor connections using the Chronograf UI
Kapacitor is the data processing component of the TICK stack.
To use Kapacitor in Chronograf, create Kapacitor connections and configure alert endpoints.
To create a Kapacitor connection using the Chronograf UI:
1. Open Chronograf and click **Configuration** (wrench icon) in the navigation menu.
2. Next to an existing [InfluxDB connection](#manage-influxdb-connections-using-the-chronograf-ui), click **Add Kapacitor Connection** if there are no existing Kapacitor connections or select **Add Kapacitor Connection** in the **Kapacitor Connection** dropdown list.
![Add a new Kapacitor connection in Chronograf](/img/chronograf/1-6-connection-kapacitor.png)
3. In the **Connection Details** section, enter values for the following fields:
<img src="/img/chronograf/1-7-kapacitor-connection-config.png" style="width:100%; max-width:600px;">
* **Kapacitor URL**: Enter the hostname or IP address of the Kapacitor instance and the port. The field is prefilled with `http://localhost:9092`.
* **Name**: Enter the name for this connection.
* **Username**: Enter the username that will be shared for this connection.
*Only required if [authorization is enabled](/{{< latest "kapacitor" >}}/administration/security/#kapacitor-authentication-and-authorization) on the Kapacitor instance or cluster to which you're connecting.*
* **Password**: Enter the password.
*Only required if [authorization is enabled](/{{< latest "kapacitor" >}}/administration/security/#kapacitor-authentication-and-authorization) on the Kapacitor instance or cluster to which you're connecting.*
4. Click **Continue**. If the connection is valid, the message "Kapacitor Created! Configuring endpoints is optional." appears. To configure alert endpoints, see [Configuring alert endpoints](/chronograf/v1.9/guides/configuring-alert-endpoints/).
## Manage Kapacitor connections using .kap files
Manually create `.kap` files to store Kapacitor connection details.
`.kap` files are simple JSON files that contain key-value paired connection details.
The location of `.kap` files is defined by the `--resources-path` command line option, which is, by default, the same as the [`--canned-path`](/chronograf/v1.9/administration/config-options/#canned-path-c).
A `.kap` file contains the details for a single Kapacitor connection.
Create a new file named `example.kap` (the filename is arbitrary) and place it in Chronograf's `resources-path`.
All `.kap` files should contain the following:
```json
{
"id": "10000",
"srcID": "10000",
"name": "My Kapacitor",
"url": "http://localhost:9092",
"active": true,
"organization": "example_org"
}
```
#### `id`
A unique, stringified non-negative integer.
Using a 4 or 5 digit number is recommended to avoid interfering with existing datasource IDs.
#### `srcID`
The unique, stringified non-negative integer `id` of the InfluxDB server or cluster with which the Kapacitor service is associated.
#### `name`
Any string you want to use as the display name of the Kapacitor connection.
#### `url`
URL of the Kapacitor server.
#### `active`
If `true`, specifies that this is the Kapacitor connection that should be used when displaying Kapacitor-related information in Chronograf.
#### `organization`
The ID of the organization you want the Kapacitor connection to be associated with.
### Environment variables in .kap files
`.kap` files support the use of environment variables to populate Kapacitor connection details.
Environment variables can be loaded using the `"{{ .VARIABLE_KEY }}"` syntax:
```json
{
"id": "10000",
"srcID": "10000",
"name": "My Kapacitor",
"url": "{{ .KAPACITOR_URL }}",
"active": true,
"organization": "example_org"
}
```

View File

@ -0,0 +1,60 @@
---
title: Import and export Chronograf dashboards
description: Share dashboard JSON files between Chronograf instances, or add dashboards as resources to include in a deployment.
menu:
chronograf_1_9:
weight: 120
parent: Administration
---
Chronograf makes it easy to recreate robust dashboards without having to manually configure them from the ground up. Import and export dashboards between instances, or add dashboards as resources to include in a deployment.
- [Export a dashboard](#export-a-dashboard)
- [Load a dashboard as a resource](#load-a-dashboard-as-a-resource)
- [Import a dashboard](#import-a-dashboard)
- [Required user roles](#required-user-roles)
## Required user roles
All users can export a dashboard. To import a dashboard, a user must have an Admin or Editor role.
| Task vs Role | Admin | Editor | Viewer |
|------------------|:-----:|:------:|:------:|
| Export Dashboard | ✅ | ✅ | ✅ |
| Import Dashboard | ✅ | ✅ | ❌ |
## Export a dashboard
1. On the Dashboards page, hover over the dashboard you want to export, and then click the **Export**
button on the right.
<img src="/img/chronograf/1-6-dashboard-export.png" alt="Exporting a Chronograf dashboard" style="width:100%;max-width:912px"/>
This downloads a JSON file containing dashboard information including template variables, cells and cell information such as the query, cell-sizing, color scheme, visualization type, etc.
> No time series data is exported with a dashboard.
> Exports include only dashboard-related information as mentioned above.
## Load a dashboard as a resource
Automatically load the dashboard as a resource (useful for adding a dashboard to a deployment).
1. Rename the dashboard `.json` extension to `.dashboard`.
2. Use the [`resources-path` configuration option](/chronograf/v1.9/administration/config-options/#resources-path) to save the dashboard in the `/resources` directory (by default, `/usr/share/chronograf/resources`), as shown in the example below.
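For example, assuming a default installation (the dashboard filename is a placeholder):
```sh
mv my-dashboard.json my-dashboard.dashboard
cp my-dashboard.dashboard /usr/share/chronograf/resources/
```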
## Import a dashboard
1. On your Dashboards page, click the **Import Dashboard** button.
2. Either drag and drop or select the JSON export file to import.
3. Click the **Upload Dashboard** button.
The newly imported dashboard is included in your list of dashboards.
![Importing a Chronograf dashboard](/img/chronograf/1-6-dashboard-import.gif)
### Reconcile unmatched sources
If the data sources defined in the imported dashboard file do not match any of your local sources,
reconcile each of the unmatched sources during the import process, and then click **Done**.
![Reconcile unmatched sources](/img/chronograf/1-6-dashboard-import-reconcile.png)

View File

@ -0,0 +1,252 @@
---
title: Manage Chronograf users
description: >
Manage users and roles, including SuperAdmin permission and organization-bound users.
menu:
chronograf_1_9:
name: Manage Chronograf users
weight: 90
parent: Administration
---
**On this page**
* [Manage Chronograf users and roles](#manage-chronograf-users-and-roles)
* [Organization-bound users](#organization-bound-users)
* [InfluxDB and Kapacitor users within Chronograf](#influxdb-and-kapacitor-users-within-chronograf)
* [Chronograf-owned resources](#chronograf-owned-resources)
* [Chronograf-accessed resources](#chronograf-accessed-resources)
* [Members](#members-role-member)
* [Viewers](#viewers-role-viewer)
* [Editors](#editors-role-editor)
* [Admins](#admins-role-admin)
* [Cross-organization SuperAdmin permission](#cross-organization-superadmin-permission)
* [All New Users are SuperAdmins configuration option](#all-new-users-are-superadmins-configuration-option)
* [Create users](#create-users)
* [Update users](#update-users)
* [Remove users](#remove-users)
* [Navigate organizations](#navigate-organizations)
* [Log in and log out](#log-in-and-log-out)
* [Switch the current organization](#switch-the-current-organization)
* [Purgatory](#purgatory)
## Manage Chronograf users and roles
{{% note %}}
**Note:** Support for organizations and user roles is available in Chronograf 1.4 or later.
First, OAuth 2.0 authentication must be configured (if it is, you'll see the
Chronograf Admin tab on the Admin menu).
For more information, see [Managing security](/chronograf/v1.9/administration/managing-security/).
{{% /note %}}
Chronograf includes four organization-bound user roles and one cross-organization SuperAdmin permission. In an organization, admins (with the `admin` role) or users with SuperAdmin permission can create, update, and assign roles to a user or remove a role assignment.
### Organization-bound users
Chronograf users are assigned one of the following four organization-bound user roles, listed here in order of increasing capabilities:
- [`member`](#members-role-member)
- [`viewer`](#viewers-role-viewer)
- [`editor`](#editors-role-editor)
- [`admin`](#admins-role-admin)
Each of these four roles, described in detail below, have different capabilities for the following Chronograf-owned or Chronograf-accessed resources.
#### InfluxDB and Kapacitor users within Chronograf
Chronograf uses InfluxDB and Kapacitor connections to manage user access control to InfluxDB and Kapacitor resources within Chronograf. The permissions of the InfluxDB and Kapacitor user specified within such a connection determine the capabilities for any Chronograf user with access (i.e., viewers, editors, and administrators) to that connection. Administrators include either an admin (`admin` role) or a user of any role with SuperAdmin permission.
{{% note %}}
**Note:** Chronograf users are entirely separate from InfluxDB and Kapacitor users.
The Chronograf user and authentication system applies to the Chronograf user interface.
InfluxDB and Kapacitor users and their permissions are managed separately.
[Chronograf connections](/chronograf/v1.9/administration/creating-connections/)
determine which InfluxDB or Kapacitor users to use when connecting to each service.
{{% /note %}}
#### Chronograf-owned resources
Chronograf-owned resources include internal resources that are under the full control of Chronograf, including:
- Kapacitor connections
- InfluxDB connections
- Dashboards
- Canned layouts
- Chronograf organizations
- Chronograf users
- Chronograf Status Page content for News Feeds and Getting Started
#### Chronograf-accessed resources
Chronograf-accessed resources include external resources that can be accessed using Chronograf, but are under limited control by Chronograf. Chronograf users with the roles of `viewer`, `editor`, and `admin`, or users with SuperAdmin permission, have equal access to these resources:
- InfluxDB databases, users, queries, and time series data (if using InfluxDB Enterprise, InfluxDB roles can be accessed too)
- Kapacitor alerts and alert rules (called tasks in Kapacitor)
#### Members (role:`member`)
Members are Chronograf users who have been added to organizations but do not have any functional capabilities. Members cannot access any resources within an organization and thus effectively cannot use Chronograf. Instead, a member can only access Purgatory, where the user can [switch into organizations](#navigate-organizations) based on assigned roles.
By default, new organizations have a default role of `member`. If the Default organization is Public, anyone who can authenticate becomes a member but cannot use Chronograf until an administrator assigns a different role.
#### Viewers (role:`viewer`)
Viewers are Chronograf users with effectively read-only capabilities for Chronograf-owned resources within their current organization:
- View canned dashboards
- View canned layouts
- View InfluxDB connections
- Switch current InfluxDB connection to other available connections
- Access InfluxDB resources through the current connection
- View the name of the current Kapacitor connection associated with each InfluxDB connection
- Access Kapacitor resources through the current connection
- [Switch into organizations](#navigate-organizations) where the user has a role
For Chronograf-accessed resources, viewers can:
- InfluxDB
- Read and write time series data
- Create, view, edit, and delete databases and retention policies
- Create, view, edit, and delete InfluxDB users
- View and kill queries
- _InfluxDB Enterprise_: Create, view, edit, and delete InfluxDB Enterprise roles
- Kapacitor
- View alerts
- Create, edit, and delete alert rules
#### Editors (role:`editor`)
Editors are Chronograf users with limited capabilities for Chronograf-owned resources within their current organization:
- Create, view, edit, and delete dashboards
- View canned layouts
- Create, view, edit, and delete InfluxDB connections
- Switch current InfluxDB connection to other available connections
- Access InfluxDB resources through the current connection
- Create, view, edit, and delete Kapacitor connections associated with InfluxDB connections
- Switch current Kapacitor connection to other available connections
- Access Kapacitor resources through the current connection
- [Switch into organizations](#navigate-organizations) where the user has a role
For Chronograf-accessed resources, editors can:
- InfluxDB
- Read and write time series data
- Create, view, edit, and delete databases and retention policies
- Create, view, edit, and delete InfluxDB users
- View and kill queries
- _InfluxDB Enterprise_: Create, view, edit, and delete InfluxDB Enterprise roles
- Kapacitor
- View alerts
- Create, edit, and delete alert rules
#### Admins (role:`admin`)
Admins are Chronograf users with all capabilities for the following Chronograf-owned resources within their current organization:
- Create, view, update, and remove Chronograf users
- Create, view, edit, and delete dashboards
- View canned layouts
- Create, view, edit, and delete InfluxDB connections
- Switch current InfluxDB connection to other available connections
- Access InfluxDB resources through the current connection
- Create, view, edit, and delete Kapacitor connections associated with InfluxDB connections
- Switch current Kapacitor connection to other available connections
- Access Kapacitor resources through the current connection
- [Switch into organizations](#navigate-organizations) where the user has a role
For Chronograf-accessed resources, admins can:
- InfluxDB
- Read and write time series data
- Create, view, edit, and delete databases and retention policies
- Create, view, edit, and delete InfluxDB users
- View and kill queries
- _InfluxDB Enterprise_: Create, view, edit, and delete InfluxDB Enterprise roles
- Kapacitor
- View alerts
- Create, edit, and delete alert rules
### Cross-organization SuperAdmin permission
SuperAdmin permission is a Chronograf permission that allows any user, regardless of role, to perform all administrator functions both within organizations, as well as across organizations. A user with SuperAdmin permission has _unlimited_ capabilities, including for the following Chronograf-owned resources:
* Create, view, update, and remove organizations
* Create, view, update, and remove users within an organization
* Grant or revoke the SuperAdmin permission of another user
* [Switch into any organization](#navigate-organizations)
* Toggle the Public setting of the Default organization
* Toggle the global config setting for [All new users are SuperAdmin](#all-new-users-are-superadmins-configuration-option)
Important SuperAdmin behaviors:
* SuperAdmin permission grants any user (whether `member`, `viewer`, `editor`, or `admin`) the full capabilities of admins and the SuperAdmin capabilities listed above.
* When a Chronograf user with SuperAdmin permission creates a new organization or switches into an organization where that user has no role, that SuperAdmin user is automatically assigned the `admin` role by default.
* SuperAdmin users cannot revoke their own SuperAdmin permission.
* SuperAdmin users are the only ones who can change the SuperAdmin permission of other Chronograf users. Regular admins who do not have SuperAdmin permission can perform normal operations on SuperAdmin users (create that user within their organization, change roles, and remove them), but they will not see that these users have SuperAdmin permission, nor will any of their actions affect the SuperAdmin permission of these users.
* If a user has their SuperAdmin permission revoked, that user will retain their assigned roles within their organizations.
#### All New Users are SuperAdmins configuration option
By default, the **Config** setting for **"All new users are SuperAdmins"** is **On**. Any user with SuperAdmin permission can toggle this under the **Admin > Chronograf > Organizations** tab. If this setting is **On**, any new user (who is created or who authenticates) will _automatically_ have SuperAdmin permission. If this setting is **Off**, any new user (who is created or who authenticates) will _not_ have SuperAdmin permission unless it is explicitly granted later by another user with SuperAdmin permission.
### Create users
Role required: `admin`
**To create a user:**
1. Open Chronograf in your web browser and select **Admin (crown icon) > Chronograf**.
2. Click the **Users** tab and then click **Create User**.
3. Add the following user information:
* **Username**: Enter the username as provided by the OAuth provider.
* **Role**: Select the Chronograf role.
* **Provider**: Enter the OAuth 2.0 provider to be used for authentication. Valid values are: `github`, `google`, `auth0`, `heroku`, or other names defined in the [`GENERIC_NAME` environment variable](/chronograf/v1.9/administration/config-options#generic-name).
* **Scheme**: Displays `oauth2`, which is the only supported authentication scheme in this release.
4. Click **Save** to finish creating the user.
### Update users
Role required: `admin`
Only a user's role can be updated. A user's username, provider, and scheme cannot be updated. (Effectively, to "update" a user's username, provider, or scheme, the user must be removed and added again with the desired values.)
**To change a user's role:**
1. Open Chronograf in your web browser and select **Admin (crown icon) > Chronograf**.
2. Click the **Users** tab to display the list of users within the current organization.
3. Select a new role for the user. The update is automatically persisted.
### Remove users
Role required: `admin`
**To remove a user:**
1. Open Chronograf in your web browser and select **Admin (crown icon) > Chronograf**.
2. Click the **Users** tab to display the list of users.
3. Hover your cursor over the user you want to remove and then click **Remove** and **Confirm**.
### Navigate organizations
Chronograf is always used in the context of an organization. When a user logs in to Chronograf, that user will access only the resources owned by their current organization. The only exception to this is that users with SuperAdmin permission will also be able to [manage organizations](/chronograf/v1.9/administration/managing-organizations/) in the Chronograf Admin page.
#### Log in and log out
A user can log in from the Chronograf homepage using any configured OAuth 2.0 provider.
A user can log out by hovering over the **User (person icon)** in the left navigation bar and clicking **Log out**.
#### Switch the current organization
A user's current organization and role is highlighted in the **Switch Organizations** list, which can be found by hovering over the **User (person icon)** in the left navigation bar.
When a user has a role in more than one organization, that user can switch into any other organization where they have a role by selecting the desired organization in the **Switch Organizations** list.
#### Purgatory
If at any time, a user is a `member` within their current organization and does not have SuperAdmin permission, that user will be redirected to a page called Purgatory. There, the user will see their current organization and role, as well as a message to contact an administrator for access.
On the same page, that user will see a list of all of their organizations and roles. The user can switch into any listed organization where their role is `viewer`, `editor`, or `admin` by clicking **Log in** next to the desired organization.
**Note:** In the rare case that a user is granted SuperAdmin permission while in Purgatory, they will be able to switch into any listed organization, as expected.

View File

@ -0,0 +1,327 @@
---
title: Manage InfluxDB users in Chronograf
description: >
Enable authentication and manage InfluxDB OSS and InfluxDB Enterprise users in Chronograf.
aliases:
- /chronograf/v1.9/administration/user-management/
menu:
chronograf_1_9:
name: Manage InfluxDB users
weight: 60
parent: Administration
---
The **Chronograf Admin** provides InfluxDB user management for InfluxDB OSS and InfluxDB Enterprise users.
{{% note %}}
***Note:*** For details on Chronograf user authentication and management, see [Managing security](/chronograf/v1.9/administration/managing-security/).
{{% /note %}}
{{% note %}}
#### Disabled administrative features
If connected to **InfluxDB OSS v2.x** or **InfluxDB Cloud**, all InfluxDB administrative
features are disabled in Chronograf. Use the InfluxDB OSS v2.x or InfluxDB Cloud user
interfaces, CLIs, or APIs to complete administrative tasks.
{{% /note %}}
**On this page:**
* [Enable authentication](#enable-authentication)
* [InfluxDB OSS user management](#influxdb-oss-user-management)
* [InfluxDB Enterprise user management](#influxdb-enterprise-user-management)
## Enable authentication
Follow the steps below to enable authentication.
The steps are the same for InfluxDB OSS instances and InfluxDB Enterprise clusters.
{{% note %}}
_**InfluxDB Enterprise clusters:**_
Repeat the first three steps for each data node in a cluster.
{{% /note %}}
### Step 1: Enable authentication.
Enable authentication in the InfluxDB configuration file.
For most Linux installations, the configuration file is located in `/etc/influxdb/influxdb.conf`.
In the `[http]` section of the InfluxDB configuration file (`influxdb.conf`), uncomment the `auth-enabled` option and set it to `true`, as shown here:
```
[http]
# Determines whether HTTP endpoint is enabled.
# enabled = true
# The bind address used by the HTTP service.
# bind-address = ":8086"
# Determines whether HTTP authentication is enabled.
auth-enabled = true
```
### Step 2: Restart the InfluxDB service.
Restart the InfluxDB service for your configuration changes to take effect:
```
~# sudo systemctl restart influxdb
```
### Step 3: Create an admin user.
Because authentication is enabled, you need to create an [admin user](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#user-types-and-privileges) before you can do anything else in the database.
Run the `curl` command below to create an admin user, replacing:
* `localhost` with the IP or hostname of your InfluxDB OSS instance or one of your InfluxDB Enterprise data nodes
* `chronothan` with your own username
* `supersecret` with your own password (note that the password requires single quotes)
{{< keep-url >}}
```
~# curl -XPOST "http://localhost:8086/query" --data-urlencode "q=CREATE USER chronothan WITH PASSWORD 'supersecret' WITH ALL PRIVILEGES"
```
A successful `CREATE USER` query returns a blank result:
```
{"results":[{"statement_id":0}]} <--- Success!
```
### Step 4: Edit the InfluxDB source in Chronograf.
If you've already [connected your database to Chronograf](/chronograf/v1.9/introduction/installation/#connect-chronograf-to-your-influxdb-instance-or-influxdb-enterprise-cluster), update the connection configuration in Chronograf with your new username and password.
Edit existing InfluxDB database sources by navigating to the Chronograf configuration page and clicking on the name of the source.
## InfluxDB OSS User Management
On the **Chronograf Admin** page:
* View, create, and delete admin and non-admin users
* Change user passwords
* Assign admin and remove admin permissions to or from a user
![InfluxDB OSS user management](/img/chronograf/1-6-admin-usermanagement-oss.png)
InfluxDB users are either admin users or non-admin users.
See InfluxDB's [authentication and authorization](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#user-types-and-privileges) documentation for more information about those user types.
{{% note %}}
Chronograf currently does not support assigning InfluxDB database `READ` or `WRITE` access to non-admin users.
As a workaround, grant `READ`, `WRITE`, or `ALL` (`READ` and `WRITE`) permissions to non-admin users with the following curl commands, replacing anything inside `< >` with your own values:
#### Grant `READ` permission:
```sh
curl --request POST "http://<InfluxDB-IP>:8086/query?u=<username>&p=<password>" \
--data-urlencode "q=GRANT READ ON <database-name> TO <non-admin-username>"
```
#### Grant `WRITE` permission:
```sh
curl --request POST "http://<InfluxDB-IP>:8086/query?u=<username>&p=<password>" \
--data-urlencode "q=GRANT WRITE ON <database-name> TO <non-admin-username>"
```
#### Grant `ALL` permission:
```sh
curl --request POST "http://<InfluxDB-IP>:8086/query?u=<username>&p=<password>" \
--data-urlencode "q=GRANT ALL ON <database-name> TO <non-admin-username>"
```
In all cases, a successful `GRANT` query returns a blank result:
```sh
{"results":[{"statement_id":0}]} # <--- Success!
```
Remove `READ`, `WRITE`, or `ALL` permissions from non-admin users by replacing `GRANT` with `REVOKE` in the curl commands above.
{{% /note %}}
## InfluxDB Enterprise user management
On the `Admin` page:
* View, create, and delete users
* Change user passwords
* Assign and remove permissions to or from a user
* Create, edit, and delete roles
* Assign and remove roles to or from a user
![InfluxDB Enterprise user management](/img/chronograf/1-6-admin-usermanagement-cluster.png)
### User types
Admin users have the following permissions by default:
* [CreateDatabase](#createdatabase)
* [CreateUserAndRole](#createuserandrole)
* [DropData](#dropdata)
* [DropDatabase](#dropdatabase)
* [ManageContinuousQuery](#managecontinuousquery)
* [ManageQuery](#managequery)
* [ManageShard](#manageshard)
* [ManageSubscription](#managesubscription)
* [Monitor](#monitor)
* [ReadData](#readdata)
* [WriteData](#writedata)
Non-admin users have no permissions by default.
Assign permissions and roles to both admin and non-admin users.
### Permissions
#### AddRemoveNode
Permission to add or remove nodes from a cluster.
**Relevant `influxd-ctl` arguments**:
[`add-data`](/{{< latest "enterprise_influxdb" >}}/administration/cluster-commands/#add-data),
[`add-meta`](/{{< latest "enterprise_influxdb" >}}/administration/cluster-commands/#add-meta),
[`join`](/{{< latest "enterprise_influxdb" >}}/administration/cluster-commands/#join),
[`remove-data`](/{{< latest "enterprise_influxdb" >}}/administration/cluster-commands/#remove-data),
[`remove-meta`](/{{< latest "enterprise_influxdb" >}}/administration/cluster-commands/#remove-meta), and
[`leave`](/{{< latest "enterprise_influxdb" >}}/administration/cluster-commands/#leave)
**Pages in Chronograf that require this permission**: NA
#### CopyShard
Permission to copy shards.
**Relevant `influxd-ctl` arguments**:
[`copy-shard`](/{{< latest "enterprise_influxdb" >}}/administration/cluster-commands/#copy-shard)
**Pages in Chronograf that require this permission**: NA
#### CreateDatabase
Permission to create databases, create [retention policies](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#retention-policy-rp), alter retention policies, and view retention policies.
**Relevant InfluxQL queries**:
[`CREATE DATABASE`](/{{< latest "influxdb" "v1" >}}/query_language/manage-database/#create-database),
[`CREATE RETENTION POLICY`](/{{< latest "influxdb" "v1" >}}/query_language/manage-database/#create-retention-policies-with-create-retention-policy),
[`ALTER RETENTION POLICY`](/{{< latest "influxdb" "v1" >}}/query_language/manage-database/#modify-retention-policies-with-alter-retention-policy), and
[`SHOW RETENTION POLICIES`](/{{< latest "influxdb" "v1" >}}/query_language/explore-schema/#show-retention-policies)
**Pages in Chronograf that require this permission**: Dashboards, Data Explorer, and Databases on the Admin page
#### CreateUserAndRole
Permission to manage users and roles; create users, drop users, grant admin status to users, grant permissions to users, revoke admin status from users, revoke permissions from users, change user passwords, view user permissions, and view users and their admin status.
**Relevant InfluxQL queries**:
[`CREATE USER`](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#user-management-commands),
[`DROP USER`](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#general-admin-and-non-admin-user-management),
[`GRANT ALL PRIVILEGES`](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#user-management-commands),
[`GRANT [READ,WRITE,ALL]`](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#non-admin-user-management),
[`REVOKE ALL PRIVILEGES`](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#user-management-commands),
[`REVOKE [READ,WRITE,ALL]`](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#non-admin-user-management),
[`SET PASSWORD`](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#general-admin-and-non-admin-user-management),
[`SHOW GRANTS`](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#non-admin-user-management), and
[`SHOW USERS`](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#user-management-commands)
**Pages in Chronograf that require this permission**: Data Explorer, Dashboards, Users and Roles on the Admin page
#### DropData
Permission to drop data, in particular [series](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#series) and [measurements](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#measurement).
**Relevant InfluxQL queries**:
[`DROP SERIES`](/{{< latest "influxdb" "v1" >}}/query_language/manage-database/#drop-series-from-the-index-with-drop-series),
[`DELETE`](/{{< latest "influxdb" "v1" >}}/query_language/manage-database/#delete-series-with-delete), and
[`DROP MEASUREMENT`](/{{< latest "influxdb" "v1" >}}/query_language/manage-database/#delete-measurements-with-drop-measurement)
**Pages in Chronograf that require this permission**: NA
#### DropDatabase
Permission to drop databases and retention policies.
**Relevant InfluxQL queries**:
[`DROP DATABASE`](/{{< latest "influxdb" "v1" >}}/query_language/manage-database/#delete-a-database-with-drop-database) and
[`DROP RETENTION POLICY`](/{{< latest "influxdb" "v1" >}}/query_language/manage-database/#delete-retention-policies-with-drop-retention-policy)
**Pages in Chronograf that require this permission**: Data Explorer, Dashboards, Databases on the Admin page
#### KapacitorAPI
Permission to access the API for InfluxKapacitor Enterprise.
This does not include configuration-related API calls.
**Pages in Chronograf that require this permission**: NA
#### KapacitorConfigAPI
Permission to access the configuration-related API calls for InfluxKapacitor Enterprise.
**Pages in Chronograf that require this permission**: NA
#### ManageContinuousQuery
Permission to create, drop, and view [continuous queries](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#continuous-query-cq).
**Relevant InfluxQL queries**:
[`CreateContinuousQueryStatement`](/{{< latest "influxdb" "v1" >}}/query_language/continuous_queries/),
[`DropContinuousQueryStatement`](/{{< latest "influxdb" "v1" >}}/query_language/continuous_queries/#deleting-continuous-queries), and
[`ShowContinuousQueriesStatement`](/{{< latest "influxdb" "v1" >}}/query_language/continuous_queries/#listing-continuous-queries)
**Pages in Chronograf that require this permission**: Data Explorer, Dashboards
#### ManageQuery
Permission to view and kill queries.
**Relevant InfluxQL queries**:
[`SHOW QUERIES`](/{{< latest "influxdb" "v1" >}}/troubleshooting/query_management/#list-currently-running-queries-with-show-queries) and
[`KILL QUERY`](/{{< latest "influxdb" "v1" >}}/troubleshooting/query_management/#stop-currently-running-queries-with-kill-query)
**Pages in Chronograf that require this permission**: Queries on the Admin page
#### ManageShard
Permission to copy, delete, and view [shards](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#shard).
**Relevant InfluxQL queries**:
[`DropShardStatement`](/{{< latest "influxdb" "v1" >}}/query_language/manage-database/#delete-a-shard-with-drop-shard),
[`ShowShardGroupsStatement`](/{{< latest "influxdb" "v1" >}}/query_language/spec/#show-shard-groups), and
[`ShowShardsStatement`](/{{< latest "influxdb" "v1" >}}/query_language/spec/#show-shards)
**Pages in Chronograf that require this permission**: NA
#### ManageSubscription
Permission to create, drop, and view [subscriptions](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#subscription).
**Relevant InfluxQL queries**:
[`CREATE SUBSCRIPTION`](/{{< latest "influxdb" "v1" >}}/query_language/spec/#create-subscription),
[`DROP SUBSCRIPTION`](/{{< latest "influxdb" "v1" >}}/query_language/spec/#drop-subscription), and
[`SHOW SUBSCRIPTIONS`](/{{< latest "influxdb" "v1" >}}/query_language/spec/#show-subscriptions)
**Pages in Chronograf that require this permission**: Alerting
#### Monitor
Permission to view cluster statistics and diagnostics.
**Relevant InfluxQL queries**:
[`SHOW DIAGNOSTICS`](/{{< latest "influxdb" "v1" >}}/administration/server_monitoring/#show-diagnostics) and
[`SHOW STATS`](/{{< latest "influxdb" "v1" >}}/administration/server_monitoring/#show-stats)
**Pages in Chronograf that require this permission**: Data Explorer, Dashboards
#### ReadData
Permission to read data.
**Relevant InfluxQL queries**:
[`SHOW FIELD KEYS`](/{{< latest "influxdb" "v1" >}}/query_language/explore-schema/#show-field-keys),
[`SHOW MEASUREMENTS`](/{{< latest "influxdb" "v1" >}}/query_language/explore-schema/#show-measurements),
[`SHOW SERIES`](/{{< latest "influxdb" "v1" >}}/query_language/explore-schema/#show-series),
[`SHOW TAG KEYS`](/{{< latest "influxdb" "v1" >}}/query_language/explore-schema/#show-tag-keys),
[`SHOW TAG VALUES`](/{{< latest "influxdb" "v1" >}}/query_language/explore-schema/#show-tag-values), and
[`SHOW RETENTION POLICIES`](/{{< latest "influxdb" "v1" >}}/query_language/explore-schema/#show-retention-policies)
**Pages in Chronograf that require this permission**: Admin, Alerting, Dashboards, Data Explorer, Host List
#### WriteData
Permission to write data.
**Relevant InfluxQL queries**: NA
**Pages in Chronograf that require this permission**: NA
### Roles
Roles are groups of permissions.
Assign roles to one or more users.
For example, the image below contains three roles: `CREATOR`, `DESTROYER`, and `POWERLESS`.
`CREATOR` includes two permissions (`CreateDatabase` and `CreateUserAndRole`) and is assigned to one user (`chrononut`).
`DESTROYER` also includes two permissions (`DropDatabase` and `DropData`) and is assigned to two users (`chrononut` and `chronelda`).
![InfluxDB OSS user management](/img/chronograf/1-6-admin-usermanagement-roles.png)
@@ -0,0 +1,120 @@
---
title: Manage Chronograf organizations
description: Create, configure, map, and remove organizations in Chronograf.
menu:
chronograf_1_9:
name: Manage Chronograf organizations
weight: 80
parent: Administration
---
**On this page:**
* [About Chronograf organizations](#about-chronograf-organizations)
* [Use the default organization](#use-the-default-organization)
* [Create organizations](#create-organizations)
* [Configure organizations](#configure-organizations)
* [Map organizations](#map-organizations)
* [Remove organizations](#remove-organizations)
## About Chronograf organizations
{{% note %}}
**Note:** Support for organizations and user roles is available in Chronograf 1.4 or later.
First, OAuth 2.0 authentication must be configured (if it is, you'll see the Chronograf Admin tab on the Admin menu).
For more information, see [managing security](/chronograf/v1.9/administration/managing-security/).
{{% /note %}}
For information about the new user roles and SuperAdmin permission, see [Managing Chronograf users](/chronograf/v1.9/administration/managing-chronograf-users/).
A Chronograf organization is a collection of Chronograf users who share common Chronograf-owned resources, including dashboards, InfluxDB connections, and Kapacitor connections. Organizations can be used to represent companies, functional units, projects, or teams. Chronograf users can be members of multiple organizations.
{{% note %}}
**Note:** Only users with SuperAdmin permission can manage organizations. Admins, editors, viewers, and members cannot manage organizations unless they have SuperAdmin permission.
{{% /note %}}
## Use the default organization
{{% note %}}
**Note:** The default organization can be used to support Chronograf as configured in versions earlier than 1.4.
Upon upgrading, any Chronograf resources that existed prior to 1.4 automatically become owned by the Default organization.
{{% /note %}}
Upon installation, the default organization is ready for use and allows Chronograf to be used as-is.
## Create organizations
Your company, organizational units, teams, and projects may require the creation of additional organizations, beyond the Default organization. Additional organizations can be created as described below.
**To create an organization:**
**Required permission:** SuperAdmin
1) In the Chronograf navigation bar, click **Admin** (crown icon) > **Chronograf** to open the **Chronograf Admin** page.
2) In the **All Orgs** tab, click **Create Organization**.
3) Under **Name**, click on **"Untitled Organization"** and enter the new organization name.
4) Under **Default Role**, select the default role for new users within that organization. Valid options include `member` (default), `viewer`, `editor`, and `admin`.
5) Click **Save**.
## Configure organizations
**Required permission:** SuperAdmin
You can configure existing and new organizations in the **Organizations** tab of the **Chronograf Admin** page as follows:
* **Name**: The name of the organization. Click on the organization name to change it.
> ***Note:*** You can change the Default organization's name, but that organization will always be the default organization.
* **Public**: [Default organization only] Indicates whether a user can authenticate without being explicitly added to the organization. When **Public** is toggled to **Off**, new users cannot authenticate into your Chronograf instance unless they have been explicitly added to the organization by an administrator.
> ***Note:*** All organizations other than the Default organization require users to be explicitly added by an administrator.
* **Default Role**: The role granted to new users by default when added to an organization. Valid options are `member` (default), `viewer`, `editor`, and `admin`.
See the following pages for more information about managing Chronograf users and security:
* [Manage Chronograf users](/chronograf/v1.9/administration/managing-chronograf-users/)
* [Manage security](/chronograf/v1.9/administration/managing-security/)
## Map organizations
**To create an organization mapping:**
**Required permission:** SuperAdmin
1) In the Chronograf navigation bar, select **Admin** (crown icon) > **Chronograf** to open the **Chronograf Admin** page.
2) Click the **Org Mappings** tab to view a list of organization mappings.
3) To add an organization mapping, click the **Create Mapping** button. A new row is added to the listing.
4) In the new row, enter the following:
   - **Scheme**: Select `oauth2`.
- **Provider**: Enter the provider. Valid values include `Google` and `GitHub`.
- **Provider Org**: [Optional] Enter the email domain(s) you want to accept.
- **Organization**: Select the organization that can use this authentication provider.
**To remove an organization mapping:**
**Required permission:** SuperAdmin
1) In the Chronograf navigation bar, select **Admin** (crown icon) > **Chronograf** to open the **Chronograf Admin** page.
2) Click the **Org Mappings** tab to view a list of organization mappings.
3) To remove an organization mapping, click the **Delete** button at the end of the mapping row you want to remove, and then confirm the action.
## Remove organizations
When an organization is removed:
* Users within that organization are removed from that organization and will be logged out of the application.
* All users with roles in that organization are updated to no longer have a role in that organization.
* All resources owned by that organization are deleted.
**To remove an organization:**
**Required permission:** SuperAdmin
1) In the navigation bar of the Chronograf application, select **Admin** (crown icon) > **Chronograf** to open the **Chronograf Admin** page.
2) Click the **All Orgs** tab to view a list of organizations.
3) To the right of the organization that you want to remove, click the **Remove** button (trashcan icon) and then confirm by clicking the **Save** button.
@@ -0,0 +1,599 @@
---
title: Manage Chronograf security
description: Manage Chronograf security with OAuth 2.0 providers.
aliases: /chronograf/v1.9/administration/security-best-practices/
menu:
chronograf_1_9:
name: Manage Chronograf security
weight: 70
parent: Administration
---
To enhance security, configure Chronograf to authenticate and authorize with [OAuth 2.0](https://oauth.net/) and use TLS/HTTPS.
(Basic authentication with username and password is also available.)
* [Configure Chronograf to authenticate with OAuth 2.0](#configure-chronograf-to-authenticate-with-oauth-20)
1. [Generate a Token Secret](#generate-a-token-secret)
2. [Set configurations for your OAuth provider](#set-configurations-for-your-oauth-provider)
3. [Configure authentication duration](#configure-authentication-duration)
* [Configure Chronograf to authenticate with a username and password](#configure-chronograf-to-authenticate-with-a-username-and-password)
* [Configure TLS (Transport Layer Security) and HTTPS](#configure-tls-transport-layer-security-and-https)
## Configure Chronograf to authenticate with OAuth 2.0
{{% note %}}
After configuring OAuth 2.0, the Chronograf Admin tab becomes visible.
You can then set up [multiple organizations](/chronograf/v1.9/administration/managing-organizations/)
and [users](/chronograf/v1.9/administration/managing-influxdb-users/).
{{% /note %}}
Configure Chronograf to use an OAuth 2.0 provider and JWT (JSON Web Token) to authenticate users and enable role-based access controls.
(For more details on OAuth and JWT, see [RFC 6749](https://tools.ietf.org/html/rfc6749) and [RFC 7519](https://tools.ietf.org/html/rfc7519).)
{{% note %}}
#### OAuth PKCE
OAuth configurations in **Chronograf 1.9+** use [OAuth PKCE](https://oauth.net/2/pkce/) to
mitigate the threat of having the authorization code intercepted during the OAuth token exchange.
OAuth integrations that do not currently support PKCE are not affected.
**To disable OAuth PKCE** and revert to the previous token exchange, use the
[`--oauth-no-pkce` Chronograf configuration option](/chronograf/v1.9/administration/config-options/#--oauth-no-pkce)
or set the `OAUTH_NO_PKCE` environment variable to `true`.
{{% /note %}}
### Generate a Token Secret
To configure any of the supported OAuth 2.0 providers to work with Chronograf,
you must configure the `TOKEN_SECRET` environment variable (or command line option).
Chronograf will use this secret to generate the JWT Signature for all access tokens.
1. Generate a high-entropy pseudo-random string.
For example, to do this with OpenSSL, run this command:
```sh
openssl rand -base64 256 | tr -d '\n'
```
2. Set the environment variable:
```
TOKEN_SECRET=<mysecret>
```
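For example, a minimal sketch that combines both steps, assuming a Bourne-compatible shell:
```sh
# Generate the secret and export it in one step
export TOKEN_SECRET="$(openssl rand -base64 256 | tr -d '\n')"
```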
{{% note %}}
***InfluxDB Enterprise clusters:*** If you are running multiple Chronograf servers in a high availability configuration,
set the `TOKEN_SECRET` environment variable on each server to ensure that users can stay logged in.
{{% /note %}}
### JWKS Signature Verification (optional)
If the OAuth provider implements OpenID Connect with RS256 signatures, you need to enable this feature with the `USE_ID_TOKEN` variable
and provide a JSON Web Key Set (JWKS) document (holding the certificate chain) to validate the RSA signatures against.
This certificate chain is regularly rolled over (when the certificates expire), so it is fetched from the `JWKS_URL` on demand.
**Example:**
```sh
export USE_ID_TOKEN=true
export JWKS_URL=https://example.com/adfs/discovery/keys
```
### Set configurations for your OAuth provider
To enable OAuth 2.0 authorization and authentication in Chronograf,
you must set configuration options that are specific for the OAuth 2.0 authentication provider you want to use.
Configuration steps for the following supported authentication providers are provided below:
* [GitHub](#configure-github-authentication)
* [Google](#configure-google-authentication)
* [Auth0](#configure-auth0-authentication)
* [Heroku](#configure-heroku-authentication)
* [Okta](#configure-okta-authentication)
* [GitLab](#configure-gitlab-authentication)
* [Azure Active Directory](#configure-azure-active-directory-authentication)
* [Bitbucket](#configure-bitbucket-authentication)
* [Configure Chronograf to use any OAuth 2.0 provider](#configure-chronograf-to-use-any-oauth-20-provider)
{{% note %}}
If you haven't already, you must first [generate a token secret](#generate-a-token-secret) before proceeding.
{{% /note %}}
---
#### Configure GitHub authentication
1. Follow the steps to [Register a new OAuth application](https://github.com/settings/applications/new)
on GitHub to obtain your Client ID and Client Secret.
On the GitHub application registration page, enter the following values:
- **Homepage URL**: the full Chronograf server name and port.
For example, to run the application locally with default settings, set this URL to `http://localhost:8888`.
- **Authorization callback URL**: the **Homepage URL** plus the callback URL path `/oauth/github/callback`
(for example, `http://localhost:8888/oauth/github/callback`).
2. Set the Chronograf environment variables with the credentials provided by GitHub:
```sh
export GH_CLIENT_ID=<github-client-id>
export GH_CLIENT_SECRET=<github-client-secret>
# If using GitHub Enterprise
export GH_URL=https://github.custom-domain.com
```
3. If you haven't already, set the Chronograf environment with your token secret:
```sh
export TOKEN_SECRET=Super5uperUdn3verGu355!
```
Alternatively, set environment variables using the equivalent command line options:
- [`--github-url`](/chronograf/v1.9/administration/config-options/#--github-url)
- [`--github-client-id`](/chronograf/v1.9/administration/config-options/#--github-client-id-i)
- [`--github-client-secret`](/chronograf/v1.9/administration/config-options/#--github-client-secret-s)
- [`--token-secret`](/chronograf/v1.9/administration/config-options/#--token-secret-t)
For details on the command line options and environment variables, see [GitHub OAuth 2.0 authentication options](/chronograf/v1.9/administration/config-options#github-specific-oauth-20-authentication-options).
##### GitHub organizations (optional)
To require GitHub organization membership for authenticating users, set the `GH_ORGS` environment variable with the name of your organization.
```sh
export GH_ORGS=biffs-gang
```
If the user is not a member of the specified GitHub organization, then the user will not be granted access.
To support multiple organizations, use a comma-delimited list.
```sh
export GH_ORGS=hill-valley-preservation-society,the-pinheads
```
{{% note %}}
When logging in for the first time, make sure to grant access to the organization you configured.
The OAuth application can only see membership in organizations it has been granted access to.
{{% /note %}}
##### Example GitHub OAuth configuration
```bash
# GitHub Enterprise base URL
export GH_URL=https://github.mydomain.com
# GitHub Client ID
export GH_CLIENT_ID=b339dd4fddd95abec9aa
# GitHub Client Secret
export GH_CLIENT_SECRET=260041897d3252c146ece6b46ba39bc1e54416dc
# Secret used to generate JWT tokens
export TOKEN_SECRET=Super5uperUdn3verGu355!
# Restrict to specific GitHub organizations
export GH_ORGS=biffs-gang
```
#### Configure Google authentication
1. Follow the steps in [Obtain OAuth 2.0 credentials](https://developers.google.com/identity/protocols/OpenIDConnect#getcredentials)
to obtain the required Google OAuth 2.0 credentials, including a Google Client ID and Client Secret.
2. Verify that Chronograf is publicly accessible using a fully-qualified domain name so that Google can properly redirect users back to the application.
3. Set the Chronograf environment variables for the Google OAuth 2.0 credentials and **Public URL** used to access Chronograf:
```sh
export GOOGLE_CLIENT_ID=812760930421-kj6rnscmlbv49pmkgr1jq5autblc49kr.apps.googleusercontent.com
export GOOGLE_CLIENT_SECRET=wwo0m29iLirM6LzHJWE84GRD
export PUBLIC_URL=http://localhost:8888
```
4. If you haven't already, set the Chronograf environment with your token secret:
```sh
export TOKEN_SECRET=Super5uperUdn3verGu355!
```
Alternatively, the environment variables discussed above can be set using their corresponding command line options:
* [`--google-client-id=`](/chronograf/v1.9/administration/config-options/#google-client-id)
* [`--google-client-secret=`](/chronograf/v1.9/administration/config-options/#google-client-secret)
* [`--public-url=`](/chronograf/v1.9/administration/config-options/#public-url)
* [`--token-secret=`](/chronograf/v1.9/administration/config-options/#token-secret-t)
For details on Chronograf command line options and environment variables, see [Google OAuth 2.0 authentication options](/chronograf/v1.9/administration/config-options#google-specific-oauth-20-authentication-options).
##### Optional Google domains
Configure Google authentication to restrict access to Chronograf to specific domains.
Set the `GOOGLE_DOMAINS` environment variable or the [`--google-domains`](/chronograf/v1.9/administration/config-options/#google-domains) command line option.
Separate multiple domains using commas.
For example, to permit access only from `biffspleasurepalace.com` and `savetheclocktower.com`, set the environment variable as follows:
```sh
export GOOGLE_DOMAINS=biffspleasurepalace.com,savetheclocktower.com
```
#### Configure Auth0 authentication
See [OAuth 2.0](https://auth0.com/docs/protocols/oauth2) for details about the Auth0 implementation.
1. Set up your Auth0 account to obtain the necessary credentials.
1. From the Auth0 user dashboard, click **Create Application**.
2. Choose **Regular Web Applications** as the type of application and click **Create**.
3. In the **Settings** tab, set **Token Endpoint Authentication** to **None**.
4. Set **Allowed Callback URLs** to `https://www.example.com/oauth/auth0/callback` (substituting `example.com` with the [`PUBLIC_URL`](/chronograf/v1.9/administration/config-options/#general-authentication-options) of your Chronograf instance).
5. Set **Allowed Logout URLs** to `https://www.example.com` (substituting `example.com` with the [`PUBLIC_URL`](/chronograf/v1.9/administration/config-options/#general-authentication-options) of your Chronograf instance).
<!-- ["OIDC Conformant"](https://auth0.com/docs/api-auth/intro#how-to-use-the-new-flows). -->
2. Set the Chronograf environment variables based on your Auth0 client credentials:
* `AUTH0_DOMAIN` (Auth0 domain)
* `AUTH0_CLIENT_ID` (Auth0 Client ID)
* `AUTH0_CLIENT_SECRET` (Auth0 client Secret)
* `PUBLIC_URL` (Public URL, used in callback URL and logout URL above)
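For example, a sketch with placeholder values (replace each with your own Auth0 client credentials):
```sh
export AUTH0_DOMAIN=<auth0-domain>
export AUTH0_CLIENT_ID=<auth0-client-id>
export AUTH0_CLIENT_SECRET=<auth0-client-secret>
export PUBLIC_URL=http://localhost:8888
```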
3. If you haven't already, set the Chronograf environment with your token secret:
```sh
export TOKEN_SECRET=Super5uperUdn3verGu355!
```
Alternatively, the environment variables discussed above can be set using their corresponding command line options:
* [`--auth0-domain`](/chronograf/v1.9/administration/config-options/#auth0-specific-oauth-20-authentication-options)
* [`--auth0-client-id`](/chronograf/v1.9/administration/config-options/#auth0-specific-oauth-20-authentication-options)
* [`--auth0-client-secret`](/chronograf/v1.9/administration/config-options/#auth0-specific-oauth-20-authentication-options)
* [`--public-url`](/chronograf/v1.9/administration/config-options/#general-authentication-options)
##### Auth0 organizations (optional)
Auth0 can be customized to the operator's requirements, so it has no official concept of an "organization."
Organizations are supported in Chronograf using a lightweight `app_metadata` key that can be inserted into Auth0 user profiles automatically or manually.
To assign a user to an organization, add an `organization` key to the user `app_metadata` field with the value corresponding to the user's organization.
For example, you can assign the user Marty McFly to the "time-travelers" organization by setting `app_metadata` to `{"organization": "time-travelers"}`.
This can be done either manually by an operator or automatically through the use of an [Auth0 Rule](https://auth0.com/docs/rules) or a [pre-user registration Auth0 Hook](https://auth0.com/docs/hooks/concepts/pre-user-registration-extensibility-point).
Next, you will need to set the Chronograf [`AUTH0_ORGS`](/chronograf/v1.9/administration/config-options/#auth0-organizations) environment variable to a comma-separated list of the allowed organizations.
For example, if you have one group of users with an `organization` key set to `biffs-gang` and another group with an `organization` key set to `time-travelers`, you can permit access to both with this environment variable: `AUTH0_ORGS=biffs-gang,time-travelers`.
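For example:
```sh
export AUTH0_ORGS=biffs-gang,time-travelers
```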
An `--auth0-organizations` command line option is also available, but it is limited to a single organization and does not accept a comma-separated list like its environment variable equivalent.
#### Configure Heroku authentication
1. Obtain a client ID and application secret for Heroku by following the guide posted [here](https://devcenter.heroku.com/articles/oauth#register-client).
2. Set the Chronograf environment variables based on your Heroku client credentials:
```sh
export HEROKU_CLIENT_ID=<client-id-from-heroku>
export HEROKU_SECRET=<client-secret-from-heroku>
```
3. If you haven't already, set the Chronograf environment with your token secret:
```sh
export TOKEN_SECRET=Super5uperUdn3verGu355!
```
##### Heroku organizations (optional)
To restrict access to members of specific Heroku organizations,
use the `HEROKU_ORGS` environment variable (or associated command line option).
Multiple values must be comma-separated.
For example, to permit access from the `hill-valley-preservation-society` organization and `the-pinheads` organization,
use the following environment variable:
```sh
export HEROKU_ORGS=hill-valley-preservation-society,the-pinheads
```
#### Configure Okta authentication
1. Create an Okta web application by following the steps in the Okta documentation: [Implement the Authorization Code Flow](https://developer.okta.com/docs/guides/implement-auth-code/overview/).
1. In the **General Settings** section, find the **Allowed grant types** listing and select
only the **Client acting on behalf of a user:** **Authorization Code** option.
2. In the **LOGIN** section, set the **Login redirect URIs** and **Initiate login URI** to `http://localhost:8888/oauth/okta/callback` (the default callback URL for Chronograf).
2. Set the following Chronograf environment variables:
```bash
GENERIC_NAME=okta
# The client ID is provided in the "Client Credentials" section of the Okta dashboard.
GENERIC_CLIENT_ID=<okta_client_ID>
# The client secret is in the "Client Credentials" section of the Okta dashboard.
GENERIC_CLIENT_SECRET=<okta_client_secret>
GENERIC_AUTH_URL=https://dev-553212.oktapreview.com/oauth2/default/v1/authorize
GENERIC_TOKEN_URL=https://dev-553212.oktapreview.com/oauth2/default/v1/token
GENERIC_API_URL=https://dev-553212.oktapreview.com/oauth2/default/v1/userinfo
PUBLIC_URL=http://localhost:8888
TOKEN_SECRET=secretsecretsecret
GENERIC_SCOPES=openid,profile,email
```
3. If you haven't already, set the Chronograf environment with your token secret:
```sh
export TOKEN_SECRET=Super5uperUdn3verGu355!
```
#### Configure GitLab authentication
1. In your GitLab profile, [create a new OAuth2 authentication service](https://docs.gitlab.com/ee/integration/oauth_provider.html#adding-an-application-through-the-profile).
1. Provide a name for your application, then enter your publicly accessible Chronograf URL with the `/oauth/gitlab/callback` path as your GitLab **callback URL**.
(For example, `http://<your_chronograf_server>:8888/oauth/gitlab/callback`.)
2. Click **Submit** to save the service details.
3. Make sure your application has **openid** and **read_user** scopes.
2. Copy the provided **Application Id** and **Secret** and set the following environment variables:
> In the examples below, replace `gitlab.com` and `<chronograf-host>` with the URLs
> actually used to access your GitLab and Chronograf services.
```bash
GENERIC_NAME="gitlab"
GENERIC_CLIENT_ID=<gitlab_application_id>
GENERIC_CLIENT_SECRET=<gitlab_secret>
GENERIC_AUTH_URL="https://gitlab.com/oauth/authorize"
GENERIC_TOKEN_URL="https://gitlab.com/oauth/token"
TOKEN_SECRET=<mytokensecret>
GENERIC_SCOPES="api,openid,read_user"
PUBLIC_URL="http://<chronograf-host>:8888"
GENERIC_API_URL="https://gitlab.com/api/v3/user"
```
The equivalent command line options are:
```bash
--generic-name=gitlab
--generic-client-id=<gitlab_application_id>
--generic-client-secret=<gitlab_secret>
--generic-auth-url=https://gitlab.com/oauth/authorize
--generic-token-url=https://gitlab.com/oauth/token
--token-secret=<mytokensecret>
--generic-scopes=openid,read_user
--generic-api-url=https://gitlab.com/api/v3/user
--public-url=http://<chronograf-host>:8888/
```
#### Configure Azure Active Directory authentication
1. [Create an Azure Active Directory application](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal#create-an-azure-active-directory-application).
Note the following information: `<APPLICATION-ID>`, `<TENANT-ID>`, and `<APPLICATION-KEY>`.
You'll need these to define your Chronograf environment.
2. Be sure to register a reply URL in your Azure application settings.
This should match the calling URL from Chronograf.
Otherwise, you will get an error stating no reply address is registered for the application.
For example, if Chronograf is configured with a `GENERIC_NAME` value of AzureAD, the reply URL would be `http://localhost:8888/oauth/AzureAD/callback`.
3. After completing the application provisioning within Azure AD, complete the Chronograf configuration.
Using the metadata from your Azure AD instance, set the following environment variables in `/etc/default/chronograf`:
```
GENERIC_TOKEN_URL=https://login.microsoftonline.com/<<TENANT-ID>>/oauth2/token
TENANT=<<TENANT-ID>>
GENERIC_NAME=AzureAD
GENERIC_API_KEY=userPrincipalName
GENERIC_SCOPES=openid
GENERIC_CLIENT_ID=<<APPLICATION-ID>>
GENERIC_AUTH_URL=https://login.microsoftonline.com/<<TENANT-ID>>/oauth2/authorize?resource=https://graph.windows.net
GENERIC_CLIENT_SECRET=<<APPLICATION-KEY>>
TOKEN_SECRET=secret
GENERIC_API_URL=https://graph.windows.net/<<TENANT-ID>>/me?api-version=1.6
PUBLIC_URL=http://localhost:8888
```
Note: If you've configured TLS/SSL, modify the `PUBLIC_URL` to ensure you're using HTTPS.
#### Configure Bitbucket authentication
1. Complete the instructions to [Use OAuth on Bitbucket Cloud](https://support.atlassian.com/bitbucket-cloud/docs/use-oauth-on-bitbucket-cloud/), and include the following information:
- **Callback URL**: <http://localhost:8888/oauth/bitbucket/callback>
- **Permissions**: Account read, email
2. Set the following Chronograf environment variables for Bitbucket (for example, in `/etc/default/chronograf`):
```sh
export TOKEN_SECRET=...
export GENERIC_CLIENT_ID=...
export GENERIC_CLIENT_SECRET=...
export GENERIC_AUTH_URL=https://bitbucket.org/site/oauth2/authorize
export GENERIC_TOKEN_URL=https://bitbucket.org/site/oauth2/access_token
export GENERIC_API_URL=https://api.bitbucket.org/2.0/user
export GENERIC_SCOPES=account
export PUBLIC_URL=http://localhost:8888
export GENERIC_NAME=bitbucket
```
#### Configure Chronograf to use any OAuth 2.0 provider
Chronograf can be configured to work with any OAuth 2.0 provider, including those defined above, by using the generic configuration options below.
Additionally, the generic provider supports OpenID Connect (OIDC) as implemented by Active Directory Federation Services (AD FS).
When using the generic configuration, some or all of the following environment variables (or corresponding command line options) are required (depending on your OAuth 2.0 provider):
* `GENERIC_CLIENT_ID`: Application client [identifier](https://tools.ietf.org/html/rfc6749#section-2.2) issued by the provider
* `GENERIC_CLIENT_SECRET`: Application client [secret](https://tools.ietf.org/html/rfc6749#section-2.3.1) issued by the provider
* `GENERIC_AUTH_URL`: Provider's authorization [endpoint](https://tools.ietf.org/html/rfc6749#section-3.1) URL
* `GENERIC_TOKEN_URL`: Provider's token [endpoint](https://tools.ietf.org/html/rfc6749#section-3.2) URL used by the Chronograf client to obtain an access token
* `USE_ID_TOKEN`: Enable OpenID [id_token](https://openid.net/specs/openid-connect-core-1_0.html#rfc.section.3.1.3.3) processing
* `JWKS_URL`: Provider's JWKS [endpoint](https://tools.ietf.org/html/rfc7517#section-4.7) used by the client to validate RSA signatures
* `GENERIC_API_URL`: Provider's [OpenID UserInfo endpoint](https://connect2id.com/products/server/docs/api/userinfo) URL used by Chronograf to request user data
* `GENERIC_API_KEY`: JSON lookup key for [OpenID UserInfo](https://connect2id.com/products/server/docs/api/userinfo) (known to be required for Microsoft Azure, with the value `userPrincipalName`)
* `GENERIC_SCOPES`: [Scopes](https://tools.ietf.org/html/rfc6749#section-3.3) of user data required for your instance of Chronograf, such as user email and OAuth provider organization
- Multiple values must be space-delimited, e.g. `user:email read:org`
- These may vary by OAuth 2.0 provider
- Default value: `user:email`
* `PUBLIC_URL`: Full public URL used to access Chronograf from a web browser, i.e. where Chronograf is hosted
- Used by Chronograf, for example, to construct the callback URL
* `TOKEN_SECRET`: Used to validate the OAuth [state](https://tools.ietf.org/html/rfc6749#section-4.1.1) response (see above)
##### Optional environment variables
The following environment variables (and corresponding command line options) are also available for optional use:
* `GENERIC_DOMAINS`: Email domain(s) that users' email addresses must belong to.
* `GENERIC_NAME`: Value used in the callback URL in conjunction with `PUBLIC_URL`, e.g. `<PUBLIC_URL>/oauth/<GENERIC_NAME>/callback`
- This value is also used in the text for the Chronograf Login button
- Default value is `generic`
- So, for example, if `PUBLIC_URL` is `https://localhost:8888` and `GENERIC_NAME` is its default value, then the callback URL would be `https://localhost:8888/oauth/generic/callback`, and the Chronograf Login button would read `Log in with Generic`
- While using Chronograf, this value should be supplied in the `Provider` field when adding a user or creating an organization mapping.
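For example, a sketch with placeholder values that restricts logins to `example.com` email addresses and names the provider `acme`:
```sh
export GENERIC_DOMAINS=example.com
export GENERIC_NAME=acme
```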
##### Example: OIDC with AD FS
See [Enabling OpenID Connect with AD FS 2016](https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/development/enabling-openid-connect-with-ad-fs) for a walk through of the server configuration.
Exports for Chronograf (e.g. in `/etc/default/chronograf`):
```sh
PUBLIC_URL="https://example.com:8888"
GENERIC_CLIENT_ID="chronograf"
GENERIC_CLIENT_SECRET="KW-TkvH7vzYeJMAKj-3T1PdHx5bxrZnoNck2KlX8"
GENERIC_AUTH_URL="https://example.com/adfs/oauth2/authorize"
GENERIC_TOKEN_URL="https://example.com/adfs/oauth2/token"
GENERIC_SCOPES="openid"
GENERIC_API_KEY="upn"
USE_ID_TOKEN="true"
JWKS_URL="https://example.com/adfs/discovery/keys"
TOKEN_SECRET="ZNh2N9toMwUVQxTVEe2ZnnMtgkh3xqKZ"
```
{{% note %}}
Do not use special characters for the `GENERIC_CLIENT_ID` as AD FS may split strings at the special character, resulting in an identifier mismatch.
{{% /note %}}
{{% note %}}
#### Troubleshoot OAuth errors
##### ERRO[0053]
An **ERRO[0053]** error indicates that a primary email is not found for the specified user.
A user must have a primary email.
```
ERRO[0053] Unable to get OAuth Group malformed email address, expected "..." to contain @ symbol
```
{{% /note %}}
### Configure authentication duration
By default, user authentication remains valid for 30 days using a cookie stored in the web browser.
To configure a different authorization duration, set a duration using the `AUTH_DURATION` environment variable.
**Example:**
To set the authentication duration to 1 hour, use the following shell command:
```sh
export AUTH_DURATION=1h
```
The duration uses the Go (golang) [time duration format](https://golang.org/pkg/time/#ParseDuration), so the largest time unit is `h` (hours).
For example, to set the duration to 45 days, use:
```sh
export AUTH_DURATION=1080h
```
To require re-authentication every time the browser is closed, set `AUTH_DURATION` to `0`.
This makes the cookie transient (aka "in-memory").
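For example:
```sh
export AUTH_DURATION=0
```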
## Configure Chronograf to authenticate with a username and password
Chronograf can be configured to authenticate users by username and password ("basic authentication").
Enable basic authentication to restrict HTTP access to Chronograf to selected users.
{{% warn %}}
[OAuth 2.0](#configure-chronograf-to-authenticate-with-oauth-20) is the preferred method for authentication.
Only use basic authentication in cases where an OAuth 2.0 integration is not possible.
{{% /warn %}}
When using basic authentication, *all users have SuperAdmin status*; Chronograf authorization rules are not enforced.
For more information, see [Cross-organization SuperAdmin status](/chronograf/v1.9/administration/managing-chronograf-users/#cross-organization-superadmin-status).
To enable basic authentication, run `chronograf` with the `--htpasswd` flag or use the `HTPASSWD` environment variable.
```sh
chronograf --htpasswd <path to .htpasswd file>
```
The `.htpasswd` file contains users and their passwords, and should be created with a password file utility such as the `htpasswd` tool included in `apache2-utils`.
For more information about how to restrict access with basic authentication, see NGINX documentation on [Restricting Access with HTTP Basic Authentication](https://docs.nginx.com/nginx/admin-guide/security-controls/configuring-http-basic-authentication/).
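A minimal sketch, assuming the `htpasswd` utility from `apache2-utils` is installed (the file path and username are placeholders):
```sh
# Create an .htpasswd file with one user (prompts for a password)
htpasswd -c /etc/chronograf/.htpasswd admin

# Start Chronograf with basic authentication enabled
chronograf --htpasswd /etc/chronograf/.htpasswd
```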
## Configure TLS (Transport Layer Security) and HTTPS
Chronograf supports the TLS (Transport Layer Security) cryptographic protocol, which provides server authentication, data confidentiality, and data integrity.
Using TLS secures traffic between the Chronograf server and web browsers and enables the use of HTTPS.
InfluxData recommends using HTTPS to communicate securely with Chronograf applications.
If you are not using a TLS termination proxy, you can run your Chronograf server with TLS connections.
Chronograf includes command line and environment variable options for configuring TLS certificates and key files.
When configured, users can use HTTPS to communicate securely with your Chronograf applications.
{{% note %}}
Using HTTPS helps guard against nefarious agents sniffing the JWT and using it to spoof a valid user against the Chronograf server.
{{% /note %}}
### Configure TLS for Chronograf
The Chronograf server has command line and environment variable options to specify the certificate and key files.
The server reads and parses a public/private key pair from these files.
The files must contain PEM-encoded data.
All Chronograf command line options have corresponding environment variables.
**To configure Chronograf to support TLS:**
1. Specify the certificate file using the `TLS_CERTIFICATE` environment variable (or the `--cert` CLI option).
2. Specify the key file using the `TLS_PRIVATE_KEY` environment variable (or `--key` CLI option).
{{% note %}}
If both the TLS certificate and key are in the same file, specify them using the `TLS_CERTIFICATE` environment variable (or the `--cert` CLI option).
{{% /note %}}
#### Example with CLI options
```sh
chronograf --cert=my.crt --key=my.key
```
#### Example with environment variables
```sh
TLS_CERTIFICATE=my.crt TLS_PRIVATE_KEY=my.key chronograf
```
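#### Example with a combined certificate and key file
If your certificate and key are combined in a single PEM file (see the note above), specify only the certificate:
```sh
TLS_CERTIFICATE=my.pem chronograf
```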
#### Docker example with environment variables
```sh
docker run -v /host/path/to/certs:/certs -e TLS_CERTIFICATE=/certs/my.crt -e TLS_PRIVATE_KEY=/certs/my.key quay.io/influxdb/chronograf:latest
```
### Testing with self-signed certificates
Do not use self-signed certificates in a production environment, but they are a quick way to enable TLS for testing.
To create a certificate and key in one file with OpenSSL:
```sh
openssl req -x509 -newkey rsa:4096 -sha256 -nodes -keyout testing.pem -out testing.pem -subj "/CN=localhost" -days 365
```
Next, set the environment variable `TLS_CERTIFICATE`:
```sh
export TLS_CERTIFICATE=$PWD/testing.pem
```
Run Chronograf:
```sh
./chronograf
INFO[0000] Serving chronograf at https://[::]:8888 component=server
```
In the first log message you should see `https` rather than `http`.
@@ -0,0 +1,58 @@
---
title: Migrate to a Chronograf HA configuration
description: >
Migrate a Chronograf single instance configuration using BoltDB to a Chronograf high-availability (HA) cluster configuration using etcd.
menu:
chronograf_1_9:
weight: 10
parent: Administration
---
Use [`chronoctl`](/chronograf/v1.9/tools/chronoctl/) to migrate your Chronograf configuration store from BoltDB to a shared `etcd` data store used for Chronograf high-availability (HA) clusters.
{{% note %}}
#### Update resource IDs
Migrating Chronograf to a shared data source creates new source IDs for each resource.
Update external links to Chronograf dashboards to reflect new source IDs.
{{% /note %}}
1. Stop the Chronograf server by killing the `chronograf` process.
2. To prevent data loss, we **strongly recommend** that you back up your Chronograf data store before migrating to a Chronograf cluster.
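   For example, a minimal backup sketch, assuming the default BoltDB file in the current working directory:
   ```sh
   cp chronograf-v1.db chronograf-v1.db.bak
   ```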
3. [Install and start etcd](/chronograf/v1.9/administration/create-high-availability/#install-and-start-etcd).
4. Run the following command, specifying the local BoltDB file and the `etcd` endpoint beginning with `etcd://`.
(We recommend adding the prefix `bolt://` to an absolute path.
Do not use the prefix to specify a relative path to the BoltDB file.)
```sh
chronoctl migrate \
--from bolt:///path/to/chronograf-v1.db \
--to etcd://localhost:2379
```
##### Provide etcd authentication credentials
If authentication is enabled on `etcd`, use the standard URI basic
authentication format to define a username and password. For example:
```sh
etcd://username:password@localhost:2379
```
##### Provide etcd TLS credentials
If TLS is enabled on `etcd`, provide your TLS certificate credentials using
the following query parameters in your etcd URL:
- **cert**: Path to client certificate file or PEM file
- **key**: Path to client key file
- **ca**: Path to trusted CA certificates
```sh
etcd://127.0.0.1:2379?cert=/tmp/client.crt&key=/tmp/client.key&ca=/tmp/ca.crt
```
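Combining the options above, a sketch of a migration command that uses both authentication and TLS (credentials and paths are placeholders):
```sh
chronoctl migrate \
  --from bolt:///path/to/chronograf-v1.db \
  --to "etcd://username:password@localhost:2379?cert=/tmp/client.crt&key=/tmp/client.key&ca=/tmp/ca.crt"
```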
5. Update links to Chronograf (for example, from external sources) to reflect your new URLs:
- **from BoltDB:**
http://localhost:8888/sources/1/status
- **to etcd:**
http://localhost:8888/sources/373921399246786560/status
6. Set up a load balancer for Chronograf.
7. [Start Chronograf](/chronograf/v1.9/administration/create-high-availability/#start-chronograf).
@@ -0,0 +1,486 @@
---
title: Prebuilt dashboards in Chronograf
description: Import prebuilt dashboards into Chronograf based on Telegraf plugins.
menu:
chronograf_1_9:
name: Prebuilt dashboards in Chronograf
weight: 50
parent: Administration
---
Chronograf lets you import a variety of prebuilt dashboards that visualize metrics collected by specific [Telegraf input plugins](/{{< latest "telegraf" >}}/plugins). The following Telegraf-related dashboard templates are available.
For details on how to import dashboards while adding a connection in Chronograf, see [Creating connections](/chronograf/v1.9/administration/creating-connections/#manage-influxdb-connections-using-the-chronograf-ui).
## Docker
The Docker dashboard displays the following information:
- nCPU
- Total Memory
- Containers
- System Memory Usage
- System Load
- Disk I/O
- Filesystem Usage
- Block I/O per Container
- CPU Usage per Container
- Memory Usage % per Container
- Memory Usage per Container
- Net I/O per Container
### Plugins
- [`docker` plugin](/{{< latest "telegraf" >}}/plugins/#docker)
- [`disk` plugin](/{{< latest "telegraf" >}}/plugins/#disk)
- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#mem)
- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#diskio)
- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#system)
- [`cpu` plugin](/{{< latest "telegraf" >}}/plugins/#cpu)
## Kubernetes Node
The Kubernetes Node dashboard displays the following information:
- Total Nodes
- Total Pod Count
- Total Containers
- K8s - Node Millicores
- K8s - Node Memory Bytes
- K8s - Pod Millicores
- K8s - Pod Memory Bytes
- K8s - Pod TX Bytes/Second
- K8s - Pod RX Bytes/Second
- K8s - Kubelet Millicores
- K8s - Kubelet Memory Bytes
### Plugins
- [kubernetes](/{{< latest "telegraf" >}}/plugins/#kubernetes)
## Kubernetes Overview
The Kubernetes Overview dashboard displays the following information:
- Total Nodes
- Total Pod Count
- Total Containers
- K8s - Node Millicores
- K8s - Node Memory Bytes
- K8s - Pod Millicores
- K8s - Pod Memory Bytes
- K8s - Pod TX Bytes/Second
- K8s - Pod RX Bytes/Second
- K8s - Kubelet Millicores
- K8s - Kubelet Memory Bytes
### Plugins
- [kubernetes](/{{< latest "telegraf" >}}/plugins/#kubernetes)
## Kubernetes Pod
The Kubernetes Pod dashboard displays the following information:
- Total Nodes
- Total Pod Count
- Total Containers
- K8s - Pod Millicores
- K8s - Pod Memory Bytes
- K8s - Pod TX Bytes/Second
### Plugins
- [kubernetes](/{{< latest "telegraf" >}}/plugins/#kubernetes)
## Riak
The Riak dashboard displays the following information:
- Riak - Total Memory Bytes
- Riak - Object Byte Size
- Riak - Number of Siblings/Minute
- Riak - Latency (ms)
- Riak - Reads and Writes/Minute
- Riak - Active Connections
- Riak - Read Repairs/Minute
### Plugins
- [`riak` plugin](/{{< latest "telegraf" >}}/plugins/#riak)
## Consul
The Consul dashboard displays the following information:
- Consul - Number of Critical Health Checks
- Consul - Number of Warning Health Checks
### Plugins
- [`consul` plugin](/{{< latest "telegraf" >}}/plugins/#consul)
## Consul Telemetry
The Consul Telemetry dashboard displays the following information:
- Consul Agent - Number of Go Routines
- Consul Agent - Runtime Alloc Bytes
- Consul Agent - Heap Objects
- Consul - Number of Agents
- Consul - Leadership Election
- Consul - HTTP Request Time (ms)
- Consul - Leadership Change
- Consul - Number of Serf Events
### Plugins
[`consul` plugin](/{{< latest "telegraf" >}}/plugins/#consul)
## Mesos
The Mesos dashboard displays the following information:
- Mesos Active Slaves
- Mesos Tasks Active
- Mesos Tasks
- Mesos Outstanding Offers
- Mesos Available/Used CPUs
- Mesos Available/Used Memory
- Mesos Master Uptime
### Plugins
- [`mesos` plugin](/{{< latest "telegraf" >}}/plugins/#mesos)
## RabbitMQ
The RabbitMQ dashboard displays the following information:
- RabbitMQ - Overview
- RabbitMQ - Published/Delivered per Second
- RabbitMQ - Acked/Unacked per Second
### Plugins
- [`rabbitmq` plugin](/{{< latest "telegraf" >}}/plugins/#rabbitmq)
## System
The System dashboard displays the following information:
- System Uptime
- CPUs
- RAM
- Memory Used %
- Load
- I/O
- Network
- Processes
- Swap
### Plugins
- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#system)
- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#mem)
- [`cpu` plugin](/{{< latest "telegraf" >}}/plugins/#cpu)
- [`disk` plugin](/{{< latest "telegraf" >}}/plugins/#disk)
- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#diskio)
- [`net` plugin](/{{< latest "telegraf" >}}/plugins/#net)
- [`processes` plugin](/{{< latest "telegraf" >}}/plugins/#processes)
- [`swap` plugin](/{{< latest "telegraf" >}}/plugins/#swap)
## VMware vSphere Overview
The VMware vSphere Overview dashboard gives an overview of your VMware vSphere Clusters and uses metrics from the `vsphere_cluster_*` and `vsphere_vm_*` set of measurements. It displays the following information:
- Cluster Status
- Uptime for :clustername:
- CPU Usage for :clustername:
- RAM Usage for :clustername:
- Datastores - Usage Capacity
- Network Usage for :clustername:
- Disk Throughput for :clustername:
- VM Status
- VM CPU Usage MHz for :clustername:
- VM Mem Usage for :clustername:
- VM Network Usage for :clustername:
- VM CPU % Ready for :clustername:
### Plugins
- [`vsphere` plugin](/{{< latest "telegraf" >}}/plugins/#vmware-vsphere)
## Apache
The Apache dashboard displays the following information:
- System Uptime
- CPUs
- RAM
- Memory Used %
- Load
- I/O
- Network
- Workers
- Scoreboard
- Apache Uptime
- CPU Load
- Requests per Sec
- Throughput
- Response Codes
- Apache Log
### Plugins
- [`apache` plugin](/{{< latest "telegraf" >}}/plugins/#apache)
- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#system)
- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#mem)
- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#diskio)
- [`net` plugin](/{{< latest "telegraf" >}}/plugins/#net)
- [`logparser` plugin](/{{< latest "telegraf" >}}/plugins/#logparser)
## ElasticSearch
The ElasticSearch dashboard displays the following information:
- ElasticSearch - Query Throughput
- ElasticSearch - Open Connections
- ElasticSearch - Query Latency
- ElasticSearch - Fetch Latency
- ElasticSearch - Suggest Latency
- ElasticSearch - Scroll Latency
- ElasticSearch - Indexing Latency
- ElasticSearch - JVM GC Collection Counts
- ElasticSearch - JVM GC Latency
- ElasticSearch - JVM Heap Usage
### Plugins
- [`elasticsearch` plugin](/{{< latest "telegraf" >}}/plugins/#elasticsearch)
## InfluxDB
The InfluxDB dashboard displays the following information:
- System Uptime
- System Load
- Network
- Memory Usage
- CPU Utilization %
- Filesystems Usage
- # Measurements
- nCPU
- # Series
- # Measurements per DB
- # Series per DB
- InfluxDB Memory Heap
- InfluxDB Active Requests
- InfluxDB - HTTP Requests/Min
- InfluxDB GC Activity
- InfluxDB - Written Points/Min
- InfluxDB - Query Executor Duration
- InfluxDB - Write Errors
- InfluxDB - Client Errors
- # CQ/Minute
### Plugins
- [`influxdb` plugin](/{{< latest "telegraf" >}}/plugins/#influxdb)
- [`cpu` plugin](/{{< latest "telegraf" >}}/plugins/#cpu)
- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#system)
- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#mem)
- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#diskio)
- [`net` plugin](/{{< latest "telegraf" >}}/plugins/#net)
## Memcached
The Memcached dashboard displays the following information:
- Memcached - Current Connections
- Memcached - Get Hits/Second
- Memcached - Get Misses/Second
- Memcached - Delete Hits/Second
- Memcached - Delete Misses/Second
- Memcached - Incr Hits/Second
- Memcached - Incr Misses/Second
- Memcached - Current Items
- Memcached - Total Items
- Memcached - Bytes Stored
- Memcached - Bytes Read/Sec
- Memcached - Bytes Written/Sec
- Memcached - Evictions/10 Seconds
### Plugins
- [`memcached` plugin](/{{< latest "telegraf" >}}/plugins/#memcached)
## NSQ
The NSQ dashboard displays the following information:
- NSQ - Channel Client Count
- NSQ - Channel Messages Count
- NSQ - Topic Count
- NSQ - Server Count
- NSQ - Topic Messages
- NSQ - Topic Messages on Disk
- NSQ - Topic Ingress
- NSQ - Topic Egress
### Plugins
- [`nsq` plugin](/{{< latest "telegraf" >}}/plugins/#nsq)
## PostgreSQL
The PostgreSQL dashboard displays the following information:
- System Uptime
- nCPU
- System Load
- Total Memory
- Memory Usage
- Filesystems Usage
- CPU Usage
- System Load
- I/O
- Network
- Processes
- Swap
- PostgreSQL rows out/sec
- PostgreSQL rows in/sec
- PostgreSQL - Buffers
- PostgreSQL commit/rollback per sec
- Postgres deadlocks/conflicts
### Plugins
- [`postgresql` plugin](/{{< latest "telegraf" >}}/plugins/#postgresql)
- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#system)
- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#mem)
- [`cpu` plugin](/{{< latest "telegraf" >}}/plugins/#cpu)
- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#diskio)
## HAProxy
The HAProxy dashboard displays the following information:
- HAProxy - Number of Servers
- HAProxy - Sum HTTP 2xx
- HAProxy - Sum HTTP 4xx
- HAProxy - Sum HTTP 5xx
- HAProxy - Frontend HTTP Requests/Second
- HAProxy - Frontend Sessions/Second
- HAProxy - Frontend Session Usage %
- HAProxy - Frontend Security Denials/Second
- HAProxy - Frontend Request Errors/Second
- HAProxy - Frontend Bytes/Second
- HAProxy - Backend Average Response Time
- HAProxy - Backend Connection Errors/Second
- HAProxy - Backend Queued Requests/Second
- HAProxy - Backend Average Requests Queue Time (ms)
- HAProxy - Backend Error Responses/Second
### Plugins
- [`haproxy` plugin](/{{< latest "telegraf" >}}/plugins/#haproxy)
## NGINX
The NGINX dashboard displays the following information:
- NGINX - Client Connection
- NGINX - Client Errors
- NGINX - Client Requests
- NGINX - Active Client State
### Plugins
- [`nginx` plugin](/{{< latest "telegraf" >}}/plugins/#nginx)
## Redis
The Redis dashboard displays the following information:
- Redis - Connected Clients
- Redis - Blocked Clients
- Redis - CPU
- Redis - Memory
### Plugins
- [`redis` plugin](/{{< latest "telegraf" >}}/plugins/#redis)
## VMware vSphere VMs
The VMware vSphere VMs dashboard gives an overview of your VMware vSphere virtual machines and includes metrics from the `vsphere_vm_*` set of measurements. It displays the following information:
- Uptime for :vmname:
- CPU Usage for :vmname:
- RAM Usage for :vmname:
- CPU Usage Average for :vmname:
- RAM Usage Average for :vmname:
- CPU Ready Average % for :vmname:
- Network Usage for :vmname:
- Total Disk Latency for :vmname:
### Plugins
- [`vsphere` plugin](/{{< latest "telegraf" >}}/plugins/#vsphere)
## VMware vSphere Hosts
The VMware vSphere Hosts dashboard displays the following information:
- Uptime for :esxhostname:
- CPU Usage for :esxhostname:
- RAM Usage for :esxhostname:
- CPU Usage Average for :esxhostname:
- RAM Usage Average for :esxhostname:
- CPU Ready Average % for :esxhostname:
- Network Usage for :esxhostname:
- Total Disk Latency for :esxhostname:
### Plugins
- [`vsphere` plugin](/{{< latest "telegraf" >}}/plugins/#vsphere)
## PHPfpm
The PHPfpm dashboard displays the following information:
- PHPfpm - Accepted Connections
- PHPfpm - Processes
- PHPfpm - Slow Requests
- PHPfpm - Max Children Reached
### Plugins
- [`phpfpm` plugin](/{{< latest "telegraf" >}}/plugins/#php-fpm)
## Win System
The Win System dashboard displays the following information:
- System - CPU Usage
- System - Available Bytes
- System - TX Bytes/Second
- System - RX Bytes/Second
- System - Load
### Plugins
- [`win_services` plugin](/{{< latest "telegraf" >}}/plugins/#windows-services)
## MySQL
The MySQL dashboard displays the following information:
- System Uptime
- nCPU
- MySQL uptime
- Total Memory
- System Load
- Memory Usage
- InnoDB Buffer Pool Size
- InnoDB Buffer Usage
- Max Connections
- Open Connections
- I/O
- Network
- MySQL Connections/User
- MySQL Received Bytes/Sec
- MySQL Sent Bytes/Sec
- MySQL Connections
- MySQL Queries/Sec
- MySQL Slow Queries
- InnoDB Data
### Plugins
- [`mysql` plugin](/{{< latest "telegraf" >}}/plugins/#mysql)
- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#system)
- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#mem)
## Ping
The Ping dashboard displays the following information:
- Ping - Packet Loss Percent
- Ping - Response Times (ms)
### Plugins
- [`ping` plugin](/{{< latest "telegraf" >}}/plugins/#ping)
@@ -0,0 +1,82 @@
---
title: Restore a Chronograf database
description: >
If you're rolling back to a previous version of Chronograf, restore your internal database.
menu:
chronograf_1_9:
weight: 110
parent: Administration
---
Chronograf uses [Bolt](https://github.com/boltdb/bolt) to store Chronograf-specific key-value data.
Generally speaking, you should never have to manually administer your internal Chronograf database.
However, rolling back to a previous version of Chronograf does require restoring
the data and data-structure specific to that version.
Chronograf's internal database, `chronograf-v1.db`, is stored at your specified
[`--bolt-path`](/chronograf/v1.9/administration/config-options/#bolt-path-b) which,
by default, is the current working directory where the `chronograf` binary is executed.
In the upgrade process, an unmodified backup of your Chronograf data is stored inside the
`backup` directory before any necessary migrations are run.
This is done as a convenience in case issues arise with the data migrations
or the upgrade process in general.
The `backup` directory contains a copy of your previous `chronograf-v1.db` file.
Each backup file is appended with the corresponding Chronograf version.
For example, if you moved from Chronograf 1.4.4.2 to {{< latest-patch >}}, there will be a
file called `backup/chronograf-v1.db.1.4.4.2`.
_**Chronograf backup directory structure**_
{{% filesystem-diagram %}}
- chronograf-working-dir/
- chronograf-v1.db
- backup/
- chronograf-v1.db.1.4.4.0
- chronograf-v1.db.1.4.4.1
- chronograf-v1.db.1.4.4.2
- ...
{{% /filesystem-diagram %}}
## Roll back to a previous version
If there is an issue during the upgrade process or you simply want/need to roll
back to an earlier version of Chronograf, you must restore the data file
associated with that specific version, then downgrade and restart Chronograf.
The process is as follows:
### 1. Locate your desired backup file
Inside your `backup` directory, locate the database file with the appended Chronograf
version that corresponds to the version to which you are rolling back.
For example, if rolling back to 1.4.4.2, find `backup/chronograf-v1.db.1.4.4.2`.
### 2. Stop your Chronograf server
Stop the Chronograf server by killing the `chronograf` process.
### 3. Replace your current database with the backup
Remove the current database file and replace it with the desired backup file:
```bash
# Remove the current database
rm chronograf-v1.db
# Replace it with the desired backup file
cp backup/chronograf-v1.db.1.4.4.2 chronograf-v1.db
```
### 4. Install the desired Chronograf version
Install the desired Chronograf version.
Chronograf releases can be viewed and downloaded either from the
[InfluxData downloads](https://portal.influxdata.com/downloads)
page or from the [Chronograf releases](https://github.com/influxdata/chronograf/releases)
page on GitHub.
### 5. Start the Chronograf server
Restart the Chronograf server.
Chronograf will use the `chronograf-v1.db` in the current working directory.
## Rerun update migrations
This process can also be used to rerun Chronograf update migrations.
Go through steps 1-5, but on [step 3](#3-replace-your-current-database-with-the-backup)
select the backup you want to use as a base for the migrations.
When Chronograf starts again, it will automatically run the data migrations
required for the installed version.
@@ -0,0 +1,19 @@
---
title: Upgrade Chronograf
description: Upgrade to the latest version of Chronograf.
menu:
chronograf_1_9:
name: Upgrade
weight: 10
parent: Administration
---
If you're upgrading from Chronograf 1.3.x, first install 1.7.x, and then install {{< latest-patch >}}.
If you're upgrading from Chronograf 1.4 or later, [download and install](https://portal.influxdata.com/downloads) the most recent version of Chronograf, and then restart Chronograf.
{{% note %}}
Installing a new version of Chronograf automatically clears the localStorage settings.
{{% /note %}}
After upgrading, see [Getting Started](/chronograf/v1.9/introduction/getting-started/) to get up and running.
@@ -0,0 +1,12 @@
---
title: Guides for Chronograf
description: Step-by-step instructions for using Chronograf's features.
menu:
chronograf_1_9:
name: Guides
weight: 30
---
Follow the links below to explore Chronograf's features.
{{< children >}}
@@ -0,0 +1,77 @@
---
title: Advanced Kapacitor usage
description: >
Use Kapacitor with Chronograf to manage alert history, TICKscripts, and Flux tasks.
menu:
chronograf_1_9:
weight: 100
parent: Guides
---
Chronograf provides a user interface for [Kapacitor](/{{< latest "kapacitor" >}}/),
InfluxData's processing framework for creating alerts, running ETL jobs, and detecting anomalies in your data.
Learn how Kapacitor interacts with Chronograf.
- [Manage Kapacitor alerts](#manage-kapacitor-alerts)
- [Manage Kapacitor tasks](#manage-kapacitor-tasks)
## Manage Kapacitor alerts
Chronograf provides information about Kapacitor alerts on the Alert History page.
Chronograf writes Kapacitor alert data to InfluxDB as time series data.
It stores the data in the `alerts` [measurement](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#measurement)
in the `chronograf` database.
By default, this data is subject to an infinite [retention policy](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#retention-policy-rp) (RP).
If you expect to have a large number of alerts or do not want to store your alert
history forever, consider shortening the [duration](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#duration)
of the default retention policy.
### Modify the retention policy of the chronograf database
Use the Chronograf **Admin page** to modify the retention policy in the `chronograf` database:
1. Click **{{< icon "crown" >}} InfluxDB Admin** in the left navigation bar.
2. In the **Databases** tab, hover over the retention policy list of the `chronograf` database
   and click **Edit** next to the retention policy to update.
3. Update the **Duration** of the retention policy.
The minimum supported duration is one hour (`1h`) and the maximum is infinite (`INF` or `∞`).
_See [supported duration units](/{{< latest "influxdb" "v1" >}}/query_language/spec/#duration-units)._
4. Click **Save**.
If you set the retention policy's duration to one hour (`1h`), InfluxDB
automatically deletes any alerts that occurred before the past hour.
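As an alternative to the UI, you can shorten the duration with the InfluxDB 1.x `influx` CLI (a sketch, assuming the default `autogen` retention policy on a local instance):
```sh
# Shorten the alert history retention to 30 days (720 hours)
influx -execute 'ALTER RETENTION POLICY "autogen" ON "chronograf" DURATION 720h'
```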
## Manage Kapacitor tasks
- [Manage Kapacitor TICKscripts](#manage-kapacitor-tickscripts)
- [Manage Kapacitor Flux tasks](#manage-kapacitor-flux-tasks)
### Manage Kapacitor TICKscripts
Chronograf lets you manage Kapacitor TICKscript tasks created in Kapacitor or in
Chronograf when [creating a Chronograf alert rule](/chronograf/v1.9/guides/create-alert-rules/).
To manage Kapacitor TICKscript tasks in Chronograf, click
**{{< icon "alert">}} Alerts** in the left navigation bar.
On this page, you can:
- View Kapacitor TICKscript tasks.
- View TICKscript task activity.
- Create new TICKscript tasks.
- Update TICKscript tasks.
- Enable and disable TICKscript tasks.
- Delete TICKscript tasks.
### Manage Kapacitor Flux tasks
**Kapacitor 1.6+** supports Flux tasks.
Chronograf lets you view and manage Kapacitor Flux tasks.
To manage Kapacitor Flux tasks in Chronograf, click
**{{< icon "alert">}} Alerts** in the left navigation bar.
On this page, you can:
- View Kapacitor Flux tasks.
- View Kapacitor Flux task activity.
- Enable and disable Kapacitor Flux tasks.
- Delete Kapacitor Flux tasks.
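Flux tasks themselves are defined in Flux with a `task` option. The following is a minimal downsampling sketch; the bucket names and schedule are assumptions for illustration only:
```js
// Run this task every hour
option task = {name: "downsample-cpu", every: 1h}
from(bucket: "telegraf/autogen")    // source bucket (db/rp) -- assumed name
    |> range(start: -task.every)
    |> filter(fn: (r) => r._measurement == "cpu")
    |> aggregateWindow(every: 10m, fn: mean)
    |> to(bucket: "telegraf/downsampled")    // destination bucket -- assumed name
```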

View File

@ -0,0 +1,109 @@
---
title: Analyze logs with Chronograf
description: Analyze log information using Chronograf.
menu:
chronograf_1_9:
weight: 120
parent: Guides
---
Chronograf gives you the ability to view, search, filter, visualize, and analyze log information from a variety of sources.
This helps you recognize and diagnose patterns, and then quickly dive into the logged events that led up to them.
- [Set up logging](#set-up-logging)
- [View logs in Chronograf](#view-logs-in-chronograf)
- [Configure the log viewer](#configure-the-log-viewer)
- [Show or hide the log status histogram](#show-or-hide-the-log-status-histogram)
- [Logs in dashboards](#logs-in-dashboards)
## Set up logging
Log data is a first-class citizen in InfluxDB and is populated using the available log-related [Telegraf input plugins](/{{< latest "telegraf" >}}/plugins/#input-plugins):
- [Docker Log](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/docker_log/README.md)
- [Graylog](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/graylog/README.md)
- [Logparser](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/logparser/README.md)
- [Logstash](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/logstash/README.md)
- [Syslog](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/syslog/README.md)
- [Tail](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/tail/README.md)
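For example, to collect syslog messages, you might enable the Syslog plugin in your Telegraf configuration (a sketch; the listener address is an assumption, adjust it for your environment):
```toml
[[inputs.syslog]]
  ## Listen for syslog messages over TCP on port 6514
  server = "tcp://:6514"
```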
## View logs in Chronograf
Chronograf has a dedicated log viewer accessed by clicking the **Log Viewer** button in the left navigation.
{{< img-hd src="/img/chronograf/1-6-logs-nav-log-viewer.png" alt="Log viewer in the left nav" />}}
The log viewer provides a detailed histogram showing the time-based distribution of log entries color-coded by log severity.
It also includes a live stream of logs that can be searched, filtered, and paused to analyze specific time ranges.
Logs are pulled from the `syslog` measurement.
_Other log inputs and alternate log measurement options will be available in future updates._
{{< img-hd src="/img/chronograf/1-7-log-viewer-overview.png" alt="Chronograf log viewer" />}}
### Search and filter logs
Search for logs using keywords or regular expressions.
They can also be filtered by clicking values in the log table such as `severity` or `facility`.
Any tag values included with the log entry can be used as a filter.
You can also use search operators to filter your results. For example, if you want to find results with a severity of critical that don't mention RSS, you can enter: `severity == crit` and `-RSS`.
![Searching and filtering logs](/img/chronograf/1-7-log-viewer-search-filter.gif)
{{% note %}}
**Note:** The log search field is case-sensitive.
{{% /note %}}
To remove filters, click the `×` next to the tag key by which you no longer want to filter.
### Select specific times
In the log viewer, you can select time ranges from which to view logs.
By default, logs are streamed and displayed relative to "now," but it is possible to view logs from a past window of time.
Timeframe selection lets you jump to a specific event and see logs for a time window both preceding and following that event. The default window is one minute, meaning the graph shows logs from thirty seconds before and after the target time. Click the dropdown menu to change the window.
![Selecting time ranges](/img/chronograf/1-7-log-viewer-specific-time.gif)
## Configure the log viewer
The log viewer can be customized to fit your specific needs.
Open the log viewer configuration options by clicking the gear button in the top right corner of the log viewer. Once done, click **Save** to apply the changes.
{{< img-hd src="/img/chronograf/1-6-logs-log-viewer-config-options.png" alt="Log viewer configuration options" />}}
### Severity colors
Every log severity is assigned a color which is used in the display of log entries.
To customize colors, select a color from the available color dropdown.
### Table columns
Columns in the log viewer are auto-populated with all fields and tags associated with your log data.
Each column can be reordered, renamed, and hidden or shown.
### Severity format
"Severity Format" specifies how the severity of log entries is displayed in your log table.
Below are the options and how they appear in the log table:
| Severity Format | Display |
| --------------- |:------- |
| Dot | <img src="/img/chronograf/1-6-logs-serverity-fmt-dot.png" alt="Log severity format 'Dot'" style="display:inline;max-height:24px;"/> |
| Dot + Text | <img src="/img/chronograf/1-6-logs-serverity-fmt-dot-text.png" alt="Log severity format 'Dot + Text'" style="display:inline;max-height:24px;"/> |
| Text | <img src="/img/chronograf/1-6-logs-serverity-fmt-text.png" alt="Log severity format 'Text'" style="display:inline;max-height:24px;"/> |
### Truncate or wrap log messages
By default, text in Log Viewer columns is truncated if it exceeds the column width. You can choose to wrap the text instead to display the full content of each cell.
Select the **Truncate** or **Wrap** option to determine how text appears when it exceeds the width of the cell.
To copy the complete, un-truncated log message, select the message cell and click **Copy**.
## Show or hide the log status histogram
The Chronograf Log Viewer displays a histogram of log status.
**To hide the log status histogram**, click the **{{< icon "hide" >}} icon** in
the top right corner of the histogram.
**To show the log status histogram**, click the **{{< icon "bar-chart" >}} icon**
in the top right corner of the log output.
## Logs in dashboards
An incredibly powerful way to analyze log data is by creating dashboards that include log data.
This is possible by using the [Table visualization type](/chronograf/v1.9/guides/visualization-types/#table) to display log data in your dashboard.
![Correlating logs with other metrics](/img/chronograf/1-7-log-viewer-dashboard.gif)
This type of visualization allows you to quickly identify anomalies in other metrics and see logs associated with those anomalies.

View File

@ -0,0 +1,43 @@
---
title: Use annotations in Chronograf views
description: >
Add contextual information to Chronograf dashboards with annotations.
menu:
chronograf_1_9:
name: Use annotations
weight: 50
parent: Guides
---
## Use annotations in the Chronograf interface
Annotations in Chronograf are notes of explanation or comments added to graph views by editors or administrators. Annotations can provide Chronograf users with useful contextual information about single points in time or time intervals. Users can use annotations to correlate the effects of important events, such as system changes or outages across multiple metrics, with Chronograf data.
When an annotation is added, a solid white line appears on all graph views for that point in time or an interval of time.
### Annotations example
The following screenshot of five graph views displays annotations for a single point in time and a time interval.
The text and timestamp for the single point in time can be seen above the annotation line in the graph view on the lower right.
The annotation displays "`Deploy v3.8.1-2`" and the time "`2018/28/02 15:59:30:00`".
![Annotations on multiple graph views](/img/chronograf/1-6-annotations-example.png)
**To add an annotation using the Chronograf user interface:**
1. Click the **Edit** button ("pencil" icon) on the graph view.
2. Click **Add Annotation** to add an annotation.
3. Move the cursor to the desired point in time and click, or click and drag across a time interval, to set an annotation.
4. Click **Edit** again and then click **Edit Annotation**.
5. Click the cursor on the annotation point or interval. The annotation text box appears above the annotation point or interval.
6. Click on `Name Me` in the annotation and type a note or comment.
7. Click **Done Editing**.
8. Your annotation is now available in all graph views.
{{% note %}}
Annotations are not associated with specific dashboards and appear in all dashboards.
Annotations are managed per InfluxDB data source.
When a dashboard is deleted, annotations persist until the InfluxDB data source
the annotations are associated with is removed.
{{% /note %}}

View File

@ -0,0 +1,43 @@
---
title: Clone dashboards and cells
description: >
Clone a dashboard or a cell and use the copy as a starting point to create new dashboard or cells.
menu:
chronograf_1_9:
weight: 70
parent: Guides
---
This guide explains how to clone (duplicate) a dashboard or cell and use the copy as a template for new dashboards or cells.
## Clone dashboards
Dashboards in Chronograf can be cloned (copied) to create a new dashboard based on the original. Rather than building a new dashboard from scratch, you can clone a dashboard and make changes to the copy.
### To clone a dashboard
1. On the **Dashboards** page, hover your cursor over the listing of the dashboard that you want to clone and click the **Clone** button that appears.
![Click the Clone button](/img/chronograf/1-6-clone-dashboard.png)
The cloned dashboard opens and displays the name of the original dashboard with `(clone)` after it.
![Cloned dashboard](/img/chronograf/1-6-clone-dashboard-clone.png)
You can now change the dashboard name and customize the dashboard.
## Clone cells
Cells in Chronograf dashboards can be cloned, or copied, to quickly create a cell copy that can be edited for another use.
### To clone a cell
1. On the dashboard cell that you want to make a copy of, click the **Clone** icon and then confirm by clicking **Clone Cell**.
![Click the Clone icon](/img/chronograf/1-6-clone-cell-click-button.png)
2. The cloned cell appears in the dashboard displaying the name of the original cell with `(clone)` after it.
![Cloned cell](/img/chronograf/1-6-clone-cell-cell-copy.png)
You can now change the cell name and customize the cell.

View File

@ -0,0 +1,370 @@
---
title: Configure Chronograf alert endpoints
aliases:
- /chronograf/v1.9/guides/configure-kapacitor-event-handlers/
description: Send alert messages with Chronograf alert endpoints.
menu:
chronograf_1_9:
name: Configure alert endpoints
weight: 70
parent: Guides
---
Chronograf alert endpoints can be configured using the Chronograf user interface to create Kapacitor-based event handlers that send alert messages.
You can use Chronograf to send alert messages to specific URLs as well as to applications.
This guide offers step-by-step instructions for configuring Chronograf alert endpoints.
## Kapacitor event handlers supported in Chronograf
Chronograf integrates with [Kapacitor](/{{< latest "kapacitor" >}}/), InfluxData's data processing platform, to send alert messages to event handlers.
Chronograf supports the following event handlers:
- [Alerta](#alerta)
- [BigPanda](#bigpanda)
- [Kafka](#kafka)
- [OpsGenie](#opsgenie)
- [OpsGenie2](#opsgenie2)
- [PagerDuty](#pagerduty)
- [PagerDuty2](#pagerduty2)
- [Pushover](#pushover)
- [Sensu](#sensu)
- [ServiceNow](#servicenow)
- [Slack](#slack)
- [SMTP](#smtp)
- [Talk](#talk)
- [Teams](#teams)
- [Telegram](#telegram)
- [VictorOps](#victorops)
- [Zenoss](#zenoss)
To configure a Kapacitor event handler in Chronograf, [install Kapacitor](/{{< latest "kapacitor" >}}/introduction/installation/) and [connect it to Chronograf](/{{< latest "kapacitor" >}}/working/kapa-and-chrono/#add-a-kapacitor-instance).
The **Configure Kapacitor** page includes the event handler configuration options.
## Alert endpoint configurations
Alert endpoint configurations appear on the Chronograf Configure Kapacitor page.
You must have a connected Kapacitor instance to access the configurations.
For more information, see [Kapacitor installation instructions](/{{< latest "kapacitor" >}}/introduction/installation/) and how to [connect a Kapacitor instance](/{{< latest "kapacitor" >}}/working/kapa-and-chrono/#add-a-kapacitor-instance) to Chronograf.
Note that the configuration options in the **Configure alert endpoints** section are not all-inclusive.
Some event handlers allow users to customize event handler configurations per [alert rule](/chronograf/v1.9/guides/create-a-kapacitor-alert/).
For example, Chronograf's Slack integration allows users to specify a default channel in the **Configure alert endpoints** section and a different channel for individual alert rules.
### Alerta
**To configure an Alerta alert endpoint:**
1. In the **Configure Alert Endpoints** section of the **Configure Kapacitor Connection** page, click the **Alerta** tab.
2. Enter the following:
- **Environment**: Alerta environment. Can be a template and has access to the same data as the AlertNode.Details property. Default is set from the configuration.
- **Origin**: Alerta origin. If empty, uses the origin from the configuration.
- **Token**: Default Alerta authentication token.
- **Token Prefix**: Default token prefix. If you receive invalid token errors, you may need to change this to `Key`.
- **User**: Alerta user.
- **Configuration Enabled**: Check to enable configuration.
3. Click **Save Changes** to save the configuration settings.
4. Click **Send Test Alert** to verify the configuration.
### BigPanda
**To configure a BigPanda alert endpoint:**
1. In the **Configure Alert Endpoints** section of the **Configure Kapacitor Connection** page,
click the **BigPanda** tab.
2. Enter the following:
- **URL**: BigPanda [alerts API URL](https://docs.bigpanda.io/reference#alerts-how-it-works).
Default is `https://api.bigpanda.io/data/v2/alerts`.
- **Token**: BigPanda [API Authorization token (API key)](https://docs.bigpanda.io/docs/api-key-management).
- **Application Key**: BigPanda [App Key](https://docs.bigpanda.io/reference#integrating-monitoring-systems).
- **Insecure Skip Verify**: Required if using a self-signed TLS certificate. Select to skip TLS certificate chain and host
verification when connecting over HTTPS.
- **Configuration Enabled**: Select to enable configuration.
3. Click **Save Changes** to save the configuration settings.
4. Click **Send Test Alert** to verify the configuration.
### Kafka
**To configure a Kafka alert endpoint:**
1. In the **Configure Alert Endpoints** section of the **Configure Kapacitor Connection** page,
click the **Kafka** tab.
2. Enter the following:
- **ID**: Unique identifier for a Kafka cluster. Default is `localhost`.
- **Brokers**: List of Kafka broker addresses, using the `host:port` format.
- **Timeout**: Maximum amount of time to wait before flushing an incomplete batch. Default is `10s`.
- **Batch Size**: Number of messages batched before sending to Kafka. Default is `100`.
- **Batch Timeout**: Timeout period for the batch. Default is `1s`.
- **Use SSL**: Select to enable SSL communication.
- **SSL CA**: Path to the SSL CA (certificate authority) file.
- **SSL Cert**: Path to the SSL host certificate.
- **SSL Key**: Path to the SSL certificate private key file.
- **Insecure Skip Verify**: Required if using a self-signed TLS certificate. Select to skip TLS certificate chain and host
verification when connecting over HTTPS.
- **Configuration Enabled**: Check to enable configuration.
3. Click **Save Changes** to save the configuration settings.
4. Click **Send Test Alert** to verify the configuration.
To enable Kafka services using TICKscript, see [Kafka event handler (Kapacitor)](/{{< latest "kapacitor" >}}/event_handlers/kafka/).
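For example, an alert node in a TICKscript might publish to Kafka like this (a sketch, assuming the default cluster ID `localhost` described above and a hypothetical `alerts` topic):
```js
stream
    |from()
        .measurement('cpu')
    |alert()
        .crit(lambda: "usage_idle" < 80)
        .kafka()
            .cluster('localhost')
            .kafkaTopic('alerts')
```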
### OpsGenie
{{% warn %}}
**Note:** Support for OpsGenie Events API 1.0 is deprecated (as [noted by OpsGenie](https://docs.opsgenie.com/docs/migration-guide-for-alert-rest-api)).
As of June 30, 2018, the OpsGenie Events API 1.0 is disabled.
Use the [OpsGenie2](#opsgenie2) alert endpoint.
{{% /warn %}}
### OpsGenie2
Send an incident alert to OpsGenie teams and recipients using the Chronograf alert endpoint.
**To configure an OpsGenie alert endpoint:**
1. In the **Configure Alert Endpoints** section of the **Configure Kapacitor Connection** page,
click the **OpsGenie** tab.
2. Enter the following information:
- **API Key**: API key (or GenieKey).
To find the API key, sign into your [OpsGenie account](https://app.opsgenie.com/auth/login)
and select the **Settings** menu option in the **Admin** menu.
- **Teams**: List of [OpsGenie teams](https://docs.opsgenie.com/docs/teams) to be alerted.
- **Recipients**: List of [OpsGenie team members](https://docs.opsgenie.com/docs/teams#section-team-members) to receive alerts.
- **Select recovery action**: Actions to take when an alert recovers:
- Add a note to the alert
- Close the alert
- **Configuration Enabled**: Select to enable configuration.
3. Click **Save Changes** to save the configuration settings.
4. Click **Send Test Alert** to verify the configuration.
See [Alert API](https://docs.opsgenie.com/docs/alert-api) in the OpsGenie documentation for details on the OpsGenie Alert API.
See [OpsGenie V2 event handler](/{{< latest "kapacitor" >}}/event_handlers/opsgenie/v2/) in the Kapacitor documentation for details about the OpsGenie V2 event handler.
See the [AlertNode (Kapacitor TICKscript node) - OpsGenie v2](/{{< latest "kapacitor" >}}/nodes/alert_node/#opsgenie-v2) in the Kapacitor documentation for details about enabling OpsGenie services using TICKscripts.
### PagerDuty
{{% warn %}}
The original PagerDuty alert endpoint is deprecated.
Use the [PagerDuty2](#pagerduty2) alert endpoint.
{{% /warn %}}
### PagerDuty2
**To configure a PagerDuty alert endpoint:**
1. In the **Configure Alert Endpoints** section of the **Configure Kapacitor Connection** page,
click the **PagerDuty** tab.
2. Enter the following:
- **Routing Key**: GUID of your PagerDuty Events API V2 integration, listed as "Integration Key" on the Events API V2 integration's detail page. See [Create a new service](https://support.pagerduty.com/docs/services-and-integrations#section-create-a-new-service) in the PagerDuty documentation details on getting an "Integration Key" (`routing_key`).
- **PagerDuty URL**: URL used to POST a JSON body representing the event. This value should not be changed. Valid value is `https://events.pagerduty.com/v2/enqueue`.
- **Configuration Enabled**: Select to enable this configuration.
3. Click **Save Changes** to save the configuration settings.
4. Click **Send Test Alert** to verify the configuration.
See the [PagerDuty Events API V2 Overview](https://v2.developer.pagerduty.com/docs/events-api-v2)
for details on the PagerDuty Events API and recognized event types (`trigger`, `acknowledge`, and `resolve`).
To enable a new "Generic API" service using TICKscript, see [AlertNode (Kapacitor TICKscript node) - PagerDuty v2](/{{< latest "kapacitor" >}}/nodes/alert_node/#pagerduty-v2).
### Pushover
**To configure a Pushover alert endpoint:**
1. In the **Configure Alert Endpoints** section of the **Configure Kapacitor Connection** page,
click the **Pushover** tab.
2. Enter the following:
- **User Key**: Pushover USER_TOKEN.
- **Token**: Pushover API token.
- **Pushover URL**: Pushover API URL.
Default is `https://api.pushover.net/1/messages.json`.
- **Configuration Enabled**: Check to enable configuration.
3. Click **Save Changes** to save the configuration settings.
4. Click **Send Test Alert** to verify the configuration.
### Sensu
**To configure a Sensu alert endpoint:**
1. In the **Configure Alert Endpoints** section of the **Configure Kapacitor Connection** page,
click the **Sensu** tab.
2. Enter the following:
- **Source**: Event source. Default is `Kapacitor`.
- **Address**: URL of [Sensu HTTP API](https://docs.sensu.io/sensu-go/latest/migrate/#architecture).
- **Configuration Enabled**: Select to enable configuration.
3. Click **Save Changes** to save the configuration settings.
4. Click **Send Test Alert** to verify the configuration.
### ServiceNow
**To configure a ServiceNow alert endpoint:**
1. In the **Configure Alert Endpoints** section of the **Configure Kapacitor Connection** page,
click the **ServiceNow** tab.
2. Enter the following:
- **URL**: ServiceNow API URL. Default is `https://instance.service-now.com/api/global/em/jsonv2`.
- **Source**: Event source.
- **Username**: ServiceNow username.
- **Password**: ServiceNow password.
- **Configuration Enabled**: Select to enable configuration.
3. Click **Save Changes** to save the configuration settings.
4. Click **Send Test Alert** to verify the configuration.
### Slack
**To configure a Slack alert endpoint:**
1. In the **Configure Alert Endpoints** section of the **Configure Kapacitor Connection** page,
click the **Slack** tab.
2. Enter the following:
- **Nickname this Configuration**: Unique name for a Slack endpoint if you
have more than one Slack alert endpoint.
- **Slack WebHook URL**: _(Optional)_ Slack webhook URL _(see [Slack webhooks](https://api.slack.com/messaging/webhooks))_
- **Slack Channel**: _(Optional)_ Slack channel or user to send messages to.
Prefix with `#` to send to a channel.
Prefix with `@` to send directly to a user.
If not specified, Kapacitor sends alert messages to the channel or user
specified in the [alert rule](/chronograf/v1.9/guides/create-a-kapacitor-alert/)
or configured in the **Slack Webhook**.
- **Configuration Enabled**: Check to enable configuration.
3. Click **Save Changes** to save the configuration settings.
4. Click **Send Test Alert** to verify the configuration.
**To add another Slack configuration:**
1. In the **Configure Alert Endpoints** section of the **Configure Kapacitor Connection** page,
click the **Slack** tab.
2. Click **{{< icon "plus" >}} Add Another Config**.
3. Complete steps 2-4 [above](#slack).
### SMTP
**To configure an SMTP alert endpoint:**
1. In the **Configure Alert Endpoints** section of the **Configure Kapacitor Connection** page,
click the **SMTP** tab.
2. Enter the following:
- **SMTP Host**: SMTP host. Default is `localhost`.
- **SMTP Port**: SMTP port. Default is `25`.
- **From Email**: Email address to send messages from.
- **To Email**: Email address to send messages to.
- **User**: SMTP username.
- **Password**: SMTP password.
- **Configuration Enabled**: Select to enable configuration.
3. Click **Save Changes** to save the configuration settings.
4. Click **Send Test Alert** to verify the configuration.
### Talk
**To configure a Talk alert endpoint:**
1. In the **Configure Alert Endpoints** section of the **Configure Kapacitor Connection** page,
click the **Talk** tab.
2. Enter the following:
- **URL**: Talk API URL.
- **Author Name**: Message author name.
- **Configuration Enabled**: Select to enable configuration.
3. Click **Save Changes** to save the configuration settings.
4. Click **Send Test Alert** to verify the configuration.
### Teams
**To configure a Microsoft Teams alert endpoint:**
1. In the **Configure Alert Endpoints** section of the **Configure Kapacitor Connection** page,
click the **Teams** tab.
2. Enter the following:
- **Channel URL**: Microsoft Teams channel URL.
- **Configuration Enabled**: Select to enable configuration.
3. Click **Save Changes** to save the configuration settings.
4. Click **Send Test Alert** to verify the configuration.
### Telegram
**To configure a Telegram alert endpoint:**
1. [Set up a Telegram bot and credentials](/{{< latest "kapacitor" >}}/guides/event-handler-setup/#telegram-setup).
2. In the **Configure Alert Endpoints** section of the **Configure Kapacitor Connection** page,
click the **Telegram** tab.
3. Enter the following:
- **Token**: Telegram bot token.
- **Chat ID**: Telegram chat ID.
- **Select the alert message format**: Telegram message format
- Markdown _(default)_
- HTML
- **Disable link previews**: Disable [link previews](https://telegram.org/blog/link-preview) in Telegram messages.
- **Disable notifications**: Disable notifications on iOS devices and sounds on Android devices.
Android users will continue to receive notifications.
- **Configuration Enabled**: Select to enable configuration.
4. Click **Save Changes** to save the configuration settings.
5. Click **Send Test Alert** to verify the configuration.
### VictorOps
**To configure a VictorOps alert endpoint:**
1. In the **Configure Alert Endpoints** section of the **Configure Kapacitor Connection** page,
click the **VictorOps** tab.
2. Enter the following:
- **API Key**: VictorOps API key.
- **Routing Key**: VictorOps [routing key](https://help.victorops.com/knowledge-base/routing-keys/).
- **VictorOps URL**: VictorOps alert API URL.
Default is `https://alert.victorops.com/integrations/generic/20131114/alert`.
- **Configuration Enabled**: Select to enable configuration.
3. Click **Save Changes** to save the configuration settings.
4. Click **Send Test Alert** to verify the configuration.
### Zenoss
**To configure a Zenoss alert endpoint:**
1. In the **Configure Alert Endpoints** section of the **Configure Kapacitor Connection** page,
click the **Zenoss** tab.
2. Enter the following:
- **URL**: Zenoss [router endpoint URL](https://help.zenoss.com/zsd/RM/configuring-resource-manager/enabling-access-to-browser-interfaces/creating-and-changing-public-endpoints).
Default is `https://tenant.zenoss.io:8080/zport/dmd/evconsole_router`.
- **Username**: Zenoss username. Leave blank for no authentication.
- **Password**: Zenoss password. Leave blank for no authentication.
- **Action (Router Name)**: Zenoss [router name](https://help.zenoss.com/dev/collection-zone-and-resource-manager-apis/anatomy-of-an-api-request#AnatomyofanAPIrequest-RouterURL).
Default is `EventsRouter`.
- **Router Method**: [EventsRouter method](https://help.zenoss.com/dev/collection-zone-and-resource-manager-apis/codebase/routers/router-reference/eventsrouter).
Default is `add_event`.
- **Event Type**: Event type. Default is `rpc`.
- **Event TID**: Temporary request transaction ID. Default is `1`.
- **Collector Name**: Zenoss collector name. Default is `Kapacitor`.
- **Kapacitor to Zenoss Severity Mapping**: Map Kapacitor severities to [Zenoss severities](https://help.zenoss.com/docs/using-collection-zones/event-management/event-severity-levels).
- **OK**: Clear _(default)_
- **Info**: Info _(default)_
- **Warning**: Warning _(default)_
- **Critical**: Critical _(default)_
- **Configuration Enabled**: Select to enable configuration.
3. Click **Save Changes** to save the configuration settings.
4. Click **Send Test Alert** to verify the configuration.

View File

@ -0,0 +1,164 @@
---
title: Create Chronograf dashboards
description: Visualize your data with custom Chronograf dashboards.
menu:
chronograf_1_9:
name: Create dashboards
weight: 30
parent: Guides
---
Chronograf offers a complete dashboard solution for visualizing your data and monitoring your infrastructure:
- View [pre-created dashboards](/chronograf/v1.9/guides/using-precreated-dashboards) from the Host List page.
Dashboards are available depending on which Telegraf input plugins you have enabled.
These pre-created dashboards cannot be cloned or edited.
- Create custom dashboards from scratch by building queries in the Data Explorer, as described [below](#build-a-dashboard).
- [Export a dashboard](/chronograf/latest/administration/import-export-dashboards/#export-a-dashboard) you create.
- Import a dashboard:
- When you want to [import an exported dashboard](/chronograf/latest/administration/import-export-dashboards/#import-a-dashboard).
- When you want to add or update a connection in Chronograf. See [Dashboard templates](#dashboard-templates) for details.
By the end of this guide, you'll be aware of the tools available to you for creating dashboards similar to this example:
![Chronograf dashboard](/img/chronograf/1-6-g-dashboard-possibilities.png)
## Requirements
To perform the tasks in this guide, you must have a working Chronograf instance that is connected to an InfluxDB source.
Data is accessed using the Telegraf [system](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/system) input plugins.
For more information, see [Configuring Chronograf](/chronograf/v1.9/administration/configuration).
## Build a dashboard
1. #### Create a new dashboard
Click **Dashboards** in the navigation bar and then click the **{{< icon "plus" >}} Create Dashboard** button.
A new dashboard is created and ready to begin adding cells.
2. #### Name your dashboard
Click **Name This Dashboard** and type a new name. For example, "ChronoDash".
3. #### Enter cell editor mode
In the first cell, titled "Untitled Cell", click **{{< icon "plus" >}} Add Data**
to open the cell editor mode.
{{< img-hd src="/img/chronograf/1-9-dashboard-cell-add-data.png" alt="Add data to a Chronograf cell" />}}
4. #### Create your query
Click the **Add a Query** button to create an [InfluxQL](/{{< latest "influxdb" "v1" >}}/query_language/) query.
In query editor mode, use the builder to select from your existing data and
allow Chronograf to format the query for you.
Alternatively, manually enter and edit a query.
Chronograf allows you to move seamlessly between using the builder and
manually editing the query; when possible, the interface automatically
populates the builder with the information from your raw query.
For our example, the query builder is used to generate a query that shows
the average idle CPU usage grouped by host (in this case, there are three hosts).
By default, Chronograf applies the [`MEAN()` function](/{{< latest "influxdb" "v1" >}}/query_language/functions/#mean)
to the data, groups averages into auto-generated time intervals (`:interval:`),
and shows data for the past hour (`:dashboardTime:`).
Those defaults are configurable using the query builder or by manually editing the query.
In addition, the time range (`:dashboardTime:` and `:upperDashboardTime:`) is
[configurable on the dashboard](#configure-your-dashboard).
![Build your query](/img/chronograf/1-6-g-dashboard-builder.png)
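The builder generates a query roughly like the following (a sketch based on the defaults described above, not the builder's exact output):
```sql
SELECT mean("usage_idle") AS "mean_usage_idle" FROM "telegraf"."autogen"."cpu"
WHERE time > :dashboardTime: GROUP BY time(:interval:), "host"
```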
5. #### Choose your visualization type
Chronograf supports many different [visualization types](/chronograf/v1.9/guides/visualization-types/). To choose a visualization type, click **Visualization** and select **Step-Plot Graph**.
![Visualization type](/img/chronograf/1-6-g-dashboard-visualization.png)
6. #### Save your cell
Click **Save** (the green checkmark icon) to save your cell.
{{% note %}}
_**Note:**_ If you navigate away from this page without clicking Save, your work will not be saved.
{{% /note %}}
## Configure your dashboard
### Customize cells
- You can change the name of the cell from "Untitled Cell" by returning to the cell editor mode, clicking on the name, and renaming it. Remember to save your changes.
- **Move** your cell around by clicking its top bar and dragging it around the page
- **Resize** your cell by clicking and dragging its bottom right corner
### Explore cell data
- **Zoom** in on your cell by clicking and dragging your mouse over the area of interest
- **Pan** over your cell data by pressing the shift key and clicking and dragging your mouse over the graph
- **Reset** your cell by double-clicking your mouse in the cell window
{{% note %}}
**Note:** These tips only apply to the line, stacked, step-plot, and line+stat
[visualization types](/chronograf/v1.9/guides/visualization-types/).
{{% /note %}}
### Configure dashboard-wide settings
- Change the dashboard's *selected time* at the top of the page - the default
time is **Local**, which uses your browser's local time. Select **UTC** to use
Coordinated Universal Time.
{{% note %}}
**Note:** If your organization spans multiple time zones, we recommend using UTC
(Coordinated Universal Time) to ensure that everyone sees metrics and events for the same time.
{{% /note %}}
- Change the dashboard's *auto-refresh interval* at the top of the page - the default interval selected is **Every 10 seconds**.
{{% note %}}
**Note:** A dashboard's refresh rate persists in local storage, so the default
refresh rate is only used when a refresh rate isn't found in local storage.
{{% /note %}}
{{% note %}}
**To add custom auto-refresh intervals**, use the [`--custom-auto-refresh` configuration
option](/chronograf/v1.9/administration/config-options/#--custom-auto-refresh)
or `$CUSTOM_AUTO_REFRESH` environment variable when starting Chronograf, as shown in the example after this list.
{{% /note %}}
- Modify the dashboard's *time range* at the top of the page - the default range
is **Past 15 minutes**.
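For example, to add a 30-second option to the auto-refresh dropdown at startup (a sketch; the `label=milliseconds` pair format shown here is an assumption, check the configuration option reference for the exact syntax):
```sh
chronograf --custom-auto-refresh="30s=30000"
```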
## Dashboard templates
Select from a variety of dashboard templates to import and customize based on which Telegraf plugins you have enabled, such as the following examples:
###### Kubernetes dashboard template
{{< img-hd src="/img/chronograf/1-7-protoboard-kubernetes.png" alt="Kubernetes Chronograf dashboard template" />}}
###### MySQL dashboard template
{{< img-hd src="/img/chronograf/1-7-protoboard-mysql.png" alt="MySQL Chronograf dashboard template" />}}
###### System metrics dashboard template
{{< img-hd src="/img/chronograf/1-7-protoboard-system.png" alt="System metrics Chronograf dashboard template" />}}
###### vSphere dashboard template
{{< img-hd src="/img/chronograf/1-7-protoboard-vsphere.png" alt="vSphere Chronograf dashboard template" />}}
### Import dashboard templates
1. From the Configuration page, click **Add Connection** or select an existing connection to edit it.
2. In the **InfluxDB Connection** window, enter or verify your connection details and click **Add** or **Update Connection**.
3. In the **Dashboards** window, select from the available dashboard templates to import based on which Telegraf plugins you have enabled.
{{< img-hd src="/img/chronograf/1-7-protoboard-select.png" alt="Select dashboard template" />}}
4. Click **Create (x) Dashboards**.
5. Edit, clone, or configure the dashboards as needed.
## Extra Tips
### Full screen mode
View your dashboard in full screen mode by clicking on the full screen icon (**{{< icon "fullscreen" >}}**) in the top right corner of your dashboard.
To exit full screen mode, press the Esc key.
### Template variables
Dashboards support template variables.
See the [Dashboard Template Variables](/chronograf/v1.9/guides/dashboard-template-variables/) guide for more information.

View File

@ -0,0 +1,162 @@
---
title: Create Chronograf alert rules
description: >
Trigger alerts by building Kapacitor alert rules in the Chronograf user interface (UI).
aliases:
- /chronograf/v1.9/guides/create-a-kapacitor-alert/
menu:
chronograf_1_9:
name: Create alert rules
weight: 60
parent: Guides
---
Chronograf provides a user interface for [Kapacitor](/{{< latest "kapacitor" >}}/), InfluxData's processing framework for creating alerts, running ETL (extract, transform, load) jobs, and detecting anomalies in your data.
Chronograf alert rules correspond to Kapacitor tasks that trigger alerts whenever certain conditions are met.
Behind the scenes, these tasks are stored as [TICKscripts](/{{< latest "kapacitor" >}}/tick/) that can be edited manually or through Chronograf.
Common alerting use cases that can be managed using Chronograf include:
* Thresholds with static ceilings, floors, and ranges.
* Relative thresholds based on unit or percentage changes.
* Deadman switches.
Complex alerts and other tasks can be defined directly in Kapacitor as TICKscripts, but can be viewed and managed within Chronograf.
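For reference, a simple threshold task of the kind Chronograf generates looks roughly like the following TICKscript (a simplified sketch, not the exact script Chronograf produces):
```js
stream
    |from()
        .database('telegraf')
        .retentionPolicy('autogen')
        .measurement('cpu')
        .groupBy('host')
    |alert()
        .crit(lambda: "usage_idle" < 80)
        .message('Idle CPU usage is {{ .Level }}.')
        .slack()
```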
This guide walks through creating a Chronograf alert rule that sends an alert message to an existing [Slack](https://slack.com/) channel whenever your idle CPU usage crosses the 80% threshold.
## Requirements
[Getting started with Chronograf](/chronograf/v1.9/introduction/getting-started/) offers step-by-step instructions for each of the following requirements:
* Download and install the entire TICK stack (Telegraf, InfluxDB, Chronograf, and Kapacitor).
* Configure Telegraf to collect data using the InfluxDB [system statistics](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/system) input plugin and write data to your InfluxDB instance.
* [Create a Kapacitor connection in Chronograf](/chronograf/v1.9/introduction/installation/#connect-chronograf-to-kapacitor).
* Slack is available and configured as an event handler in Chronograf. See [Configuring Chronograf alert endpoints](/chronograf/v1.9/guides/configuring-alert-endpoints/) for detailed configuration instructions.
## Configure Chronograf alert rules
Navigate to the **Manage Tasks** page under **Alerting** in the left navigation, then click **+ Build Alert Rule** in the top right corner.
![Navigate to Manage Tasks](/img/chronograf/1-6-alerts-manage-tasks-nav.png)
The **Manage Tasks** page is used to create and edit your Chronograf alert rules.
The steps below guide you through the process of creating a Chronograf alert rule.
![Empty Rule Configuration](/img/chronograf/1-6-alerts-rule-builder.png)
### Step 1: Name the alert rule
Under **Name this Alert Rule** provide a name for the alert.
For this example, use "Idle CPU Usage" as your alert name.
### Step 2: Select the alert type
Choose from three alert types under the **Alert Types** section of the Rule Configuration page:
_**Threshold**_
Alert if data crosses a boundary.
_**Relative**_
Alert if data changes relative to data in a different time range.
_**Deadman**_
Alert if InfluxDB receives no relevant data for a specified time duration.
For this example, select the **Threshold** alert type.
### Step 3: Select the time series data
Choose the time series data you want the Chronograf alert rule to use.
Navigate through databases, measurements, fields, and tags to select the relevant data.
In this example, select the `telegraf` [database](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#database), the `autogen` [retention policy](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#retention-policy-rp), the `cpu` [measurement](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#measurement), and the `usage_idle` [field](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#field).
![Select your data](/img/chronograf/1-6-alerts-time-series.png)
### Step 4: Define the rule condition
Define the threshold condition.
Condition options are determined by the [alert type](#step-2-select-the-alert-type).
For this example, set the condition to trigger an alert if `usage_idle` is less than `80`.
![Create a condition](/img/chronograf/1-6-alerts-conditions.png)
The graph shows a preview of the relevant data and the threshold number.
By default, the graph shows data from the past 15 minutes.
Adjusting the graph's time range is helpful when determining a reasonable threshold number based on your data.
{{% note %}}
We set the threshold number to `80` for demonstration purposes.
Setting the threshold for idle CPU usage to a high number ensures that we'll be able to see the alert in action.
In practice, you'd set the threshold number to better match the patterns in your data and your alerting needs.
{{% /note %}}
### Step 5: Select and configure the alert handler
The **Alert Handler** section determines where the system sends the alert (the event handler).
Chronograf supports several event handlers.
Each handler has unique configurable options.
For this example, choose the **slack** alert handler and enter the desired options.
![Select the alert handler](/img/chronograf/1-6-alerts-configure-handlers.png)
{{% note %}}
Multiple alert handlers can be added to send alerts to multiple endpoints.
{{% /note %}}
### Step 6: Configure the alert message
The alert message is the text that accompanies an alert.
Alert messages are templates that have access to alert data.
Available data templates appear below the message text field.
As you type your alert message, clicking the data templates inserts them at the end of whatever text has been entered.
In this example, use the alert message, `Your idle CPU usage is {{.Level}} at {{ index .Fields "value" }}.`.
![Specify event handler and alert message](/img/chronograf/1-6-alerts-message.png)
*View the Kapacitor documentation for more information about [message template data](/{{< latest "kapacitor" >}}/nodes/alert_node/#message).*
### Step 7: Save the alert rule
Click **Save Rule** in the top right corner and navigate to the **Manage Tasks** page to see your rule.
Notice that you can easily enable and disable the rule by toggling the checkbox in the **Enabled** column.
![See the alert rule](/img/chronograf/1-6-alerts-view-rules.png)
Next, move on to the section below to experience your alert rule in action.
## View alerts in practice
### Step 1: Create some load on your system
The purpose of this step is to generate enough load on your system to trigger an alert.
More specifically, your idle CPU usage must dip below `80%`.
On the machine that's running Telegraf, enter the following command in the terminal to start some `while` loops:
```
while true; do i=0; done
```
Let it run for a few seconds or minutes before terminating it.
On most systems, kill the script by using `Ctrl+C`.
### Step 2: View the alerts
Go to the Slack channel that you specified in the previous section.
In this example, it's the `#chronocats` channel.
Assuming the first step was successful, `#chronocats` should reveal at least two alert messages:
* The first alert message indicates that your idle CPU usage was `CRITICAL`, meaning it dipped below `80%`.
* The second alert message indicates that your idle CPU usage returned to an `OK` level of `80%` or above.
![See the alerts](/img/chronograf/1-6-alerts-slack-notifications.png)
You can also see alerts on the **Alert History** page available under **Alerting** in the left navigation.
![Chronograf alert history](/img/chronograf/1-6-alerts-history.png)
That's it! You've successfully used Chronograf to configure an alert rule to monitor your idle CPU usage and send notifications to Slack.

View File

@ -0,0 +1,608 @@
---
title: Use dashboard template variables
description: >
Chronograf dashboard template variables let you update cell queries without editing queries,
making it easy to interact with your dashboard cells and explore your data.
aliases:
- /chronograf/v1.9/introduction/templating/
- /chronograf/v1.9/templating/
menu:
chronograf_1_9:
weight: 90
parent: Guides
---
Chronograf dashboard template variables let you update cell queries without editing queries,
making it easy to interact with your dashboard cells and explore your data.
- [Use template variables](#use-template-variables)
- [Predefined template variables](#predefined-template-variables)
- [Create custom template variables](#create-custom-template-variables)
- [Template variable types](#template-variable-types)
- [Reserved variable names](#reserved-variable-names)
- [Advanced template variable usage](#advanced-template-variable-usage)
## Use template variables
When creating Chronograf dashboards, use either [predefined template variables](#predefined-template-variables)
or [custom template variables](#create-custom-template-variables) in your cell queries and titles.
After you set up variables, they are available to select in your dashboard user interface (UI).
- [Use template variables in cell queries](#use-template-variables-in-cell-queries)
- [InfluxQL](#influxql)
- [Flux](#flux)
- [Use template variables in cell titles](#use-template-variables-in-cell-titles)
![Use template variables](/img/chronograf/1-6-template-vars-use.gif)
### Use template variables in cell queries
Both InfluxQL and Flux support template variables.
#### InfluxQL
In an InfluxQL query, surround template variables names with colons (`:`) as follows:
```sql
SELECT :variable_name: FROM "telegraf"."autogen".:measurement: WHERE time < :dashboardTime:
```
##### Quoting template variables in InfluxQL
For **predefined meta queries** such as "Field Keys" and "Tag Values", **do not add quotes** (single or double) to your queries. Chronograf will add quotes as follows:
```sql
SELECT :variable_name: FROM "telegraf"."autogen".:measurement: WHERE time < :dashboardTime:
```
For **custom queries**, **CSV**, or **map queries**, quote the values in the query following standard [InfluxQL](/{{< latest "influxdb" "v1" >}}/query_language/) syntax:
- For numerical values, **do not quote**.
- For string values, choose to quote the values in the variable definition (or not). See [String examples](#string-examples) below.
{{% note %}}
**Tips for quoting strings:**
- When using custom meta queries that return strings, typically, you quote the variable values when using them in a dashboard query, given InfluxQL results are returned without quotes.
- If you are using template variable strings in regular expression syntax (when using quotes may cause query syntax errors), the flexibility in query quoting methods is particularly useful.
{{% /note %}}
##### String examples
Add single quotes when you define template variables, or in your queries, but not both.
###### Add single quotes in variable definition
If you define a custom CSV variable named `host` using single quotes:
```sh
'host1','host2','host3'
```
Do not include quotes in your query:
```sql
SELECT mean("usage_user") AS "mean_usage_user" FROM "telegraf"."autogen"."cpu"
WHERE "host" = :host: and time > :dashboardTime
```
###### Add single quotes in query
If you define a custom CSV variable named `host` without quotes:
```sh
host1,host2,host3
```
Add single quotes in your query:
```sql
SELECT mean("usage_user") AS "mean_usage_user" FROM "telegraf"."autogen"."cpu"
WHERE "host" = ':host:' and time > :dashboardTime
```
#### Flux
In Flux, template variables are stored in a `v` record.
Use dot or bracket notation to reference the variable key inside of the `v` record:
```js
from(bucket: v.bucket)
    |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r._field == v["Field key"])
|> aggregateWindow(every: v.windowPeriod, fn: v.aggregateFunction)
```
### Use template variables in cell titles
To dynamically change the title of a dashboard cell,
use the `:variable-name:` syntax.
For example, given a variable named `field` with a value of `temp` and a variable
named `location` with a value of `San Antonio`, use the following syntax:
```
:field: data for :location:
```
```
Displays as:
{{< img-hd src= "/img/chronograf/1-9-template-var-title.png" alt="Use template variables in cell titles" />}}
## Predefined template variables
Chronograf includes predefined template variables controlled by elements in the Chronograf UI.
Use predefined template variables in your cell queries.
InfluxQL and Flux include their own sets of predefined template variables:
{{< tabs-wrapper >}}
{{% tabs %}}
[InfluxQL](#)
[Flux](#)
{{% /tabs %}}
{{% tab-content %}}
- [`:dashboardTime:`](#dashboardtime)
- [`:upperDashboardTime:`](#upperdashboardtime)
- [`:interval:`](#interval)
### dashboardTime
The `:dashboardTime:` template variable is controlled by the "time" dropdown in your Chronograf dashboard.
<img src="/img/chronograf/1-6-template-vars-time-dropdown.png" style="width:100%;max-width:549px;" alt="Dashboard time selector"/>
If using relative times, it represents the time offset specified in the dropdown (-5m, -15m, -30m, etc.) and assumes time is relative to "now".
If using absolute times defined by the date picker, `:dashboardTime:` is populated with the lower threshold.
```sql
SELECT "usage_system" AS "System CPU Usage"
FROM "telegraf".."cpu"
WHERE time > :dashboardTime:
```
{{% note %}}
To use the date picker to specify a past time range, construct the query using `:dashboardTime:`
as the start time and [`:upperDashboardTime:`](#upperdashboardtime) as the stop time.
{{% /note %}}
### upperDashboardTime
The `:upperDashboardTime:` template variable is defined by the upper time limit specified using the date picker.
<img src="/img/chronograf/1-6-template-vars-date-picker.png" style="width:100%;max-width:762px;" alt="Dashboard date picker"/>
For relative time frames, this variable inherits `now()`; for absolute time frames, it inherits the upper time limit.
```sql
SELECT "usage_system" AS "System CPU Usage"
FROM "telegraf".."cpu"
WHERE time > :dashboardTime: AND time < :upperDashboardTime:
```
### interval
The `:interval:` template variable is defined by the interval dropdown in the Chronograf dashboard.
<img src="/img/chronograf/1-6-template-vars-interval-dropdown.png" style="width:100%;max-width:549px;" alt="Dashboard interval selector"/>
In cell queries, it should be used in the `GROUP BY time()` clause that accompanies aggregate functions:
```sql
SELECT mean("usage_system") AS "Average System CPU Usage"
FROM "telegraf".."cpu"
WHERE time > :dashboardTime:
GROUP BY time(:interval:)
```
{{% /tab-content %}}
{{% tab-content %}}
- [`v.timeRangeStart`](#vtimerangestart)
- [`v.timeRangeStop`](#vtimerangestop)
- [`v.windowPeriod`](#vwindowperiod)
{{% note %}}
#### Backward compatible Flux template variables
**Chronograf 1.9+** supports the InfluxDB 2.0 variable pattern of storing
[predefined template variables](#predefined-template-variables) and [custom template variables](#create-custom-template-variables)
in a `v` record and using dot or bracket notation to reference variables.
For backward compatibility, Chronograf 1.9+ still supports the following predefined
variables that do not use the `v.` syntax:
- [`dashboardTime`](/chronograf/v1.8/guides/dashboard-template-variables/?t=Flux#dashboardtime-flux)
- [`upperDashboardTime`](/chronograf/v1.8/guides/dashboard-template-variables/?t=Flux#upperdashboardtime-flux)
- [`autoInterval`](/chronograf/v1.8/guides/dashboard-template-variables/?t=Flux#autointerval)
{{% /note %}}
### v.timeRangeStart
The `v.timeRangeStart` template variable is controlled by the "time" dropdown in your Chronograf dashboard.
<img src="/img/chronograf/1-6-template-vars-time-dropdown.png" style="width:100%;max-width:549px;" alt="Dashboard time selector"/>
If using relative time, this variable represents the time offset specified in the dropdown (-5m, -15m, -30m, etc.) and assumes time is relative to "now".
If using absolute time defined by the date picker, `v.timeRangeStart` is populated with the start time.
```js
from(bucket: "telegraf/autogen")
|> range(start: v.timeRangeStart)
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
```
{{% note %}}
To use the date picker to specify a time range in the past without "now", use
`v.timeRangeStart` as the start time and [`v.timeRangeStop`](#vtimerangestop)
as the stop time.
{{% /note %}}
### v.timeRangeStop
The `v.timeRangeStop` template variable is defined by the upper time limit specified using the date picker.
<img src="/img/chronograf/1-6-template-vars-date-picker.png" style="width:100%;max-width:762px;" alt="Dashboard date picker"/>
For relative time frames, this variable inherits `now()`. For absolute time frames, this variable inherits the upper time limit.
```js
from(bucket: "telegraf/autogen")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
```
### v.windowPeriod
The `v.windowPeriod` template variable is controlled by the display width of the
dashboard cell and is calculated by the duration of time that each pixel covers.
Use the `v.windowPeriod` variable to downsample data and display a maximum of one point per pixel.
```js
from(bucket: "telegraf/autogen")
|> range(start: v.timeRangeStart)
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
|> aggregateWindow(every: v.windowPeriod, fn: mean)
```
{{% /tab-content %}}
{{< /tabs-wrapper >}}
## Create custom template variables
Chronograf lets you create custom template variables powered by meta queries or CSV uploads that return an array of possible values.
To create a template variable:
1. Click on **Template Variables** at the top of your dashboard, then **+ Add Variable**.
2. Select a data source from the **Data Source** dropdown menu.
3. Provide a name for the variable.
4. Select the [variable type](#template-variable-types).
The type defines the method for retrieving the array of possible values.
5. View the list of potential values and select a default.
If using the CSV or Map types, upload or input the CSV with the desired values in the appropriate format then select a default value.
6. Click **Create**.
Once created, the template variable can be used in any of your cell's queries or titles
and a dropdown for the variable will be included at the top of your dashboard.
## Template Variable Types
Chronograf supports the following template variable types:
- [Databases](#databases)
- [Measurements](#measurements)
- [Field Keys](#field-keys)
- [Tag Keys](#tag-keys)
- [Tag Values](#tag-values)
- [CSV](#csv)
- [Map](#map)
- [InfluxQL Meta Query](#influxql-meta-query)
- [Flux Query](#flux-query)
- [Text](#text)
### Databases
Database template variables allow you to select from multiple target [databases](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#database).
_**Database meta query**_
Database template variables use the following meta query to return an array of all databases in your InfluxDB instance.
```sql
SHOW DATABASES
```
_**Example database variable in a cell query**_
```sql
SELECT "purchases" FROM :databaseVar:."autogen"."customers"
```
#### Database variable use cases
Use database template variables when visualizing multiple databases with similar or identical data structures.
Variables let you quickly switch between visualizations for each of your databases.
### Measurements
Vary the target [measurement](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#measurement).
_**Measurement meta query**_
Measurement template variables use the following meta query to return an array of all measurements in a given database.
```sql
SHOW MEASUREMENTS ON database_name
```
_**Example measurement variable in a cell query**_
```sql
SELECT * FROM "animals"."autogen".:measurementVar:
```
#### Measurement variable use cases
Measurement template variables allow you to quickly switch between measurements in a single cell or multiple cells in your dashboard.
### Field Keys
Vary the target [field key](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#field-key).
_**Field key meta query**_
Field key template variables use the following meta query to return an array of all field keys in a given measurement from a given database.
```sql
SHOW FIELD KEYS ON database_name FROM measurement_name
```
_**Example field key var in a cell query**_
```sql
SELECT :fieldKeyVar: FROM "animals"."autogen"."customers"
```
#### Field key variable use cases
Field key template variables are great if you want to quickly switch between field key visualizations in a given measurement.
### Tag Keys
Vary the target [tag key](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#tag-key).
_**Tag key meta query**_
Tag key template variables use the following meta query to return an array of all tag keys in a given measurement from a given database.
```sql
SHOW TAG KEYS ON database_name FROM measurement_name
```
_**Example tag key variable in a cell query**_
```sql
SELECT "purchases" FROM "animals"."autogen"."customers" GROUP BY :tagKeyVar:
```
#### Tag key variable use cases
Tag key template variables are great if you want to quickly switch between tag key visualizations in a given measurement.
### Tag Values
Vary the target [tag value](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#tag-value).
_**Tag value meta query**_
Tag value template variables use the following meta query to return an array of all values associated with a given tag key in a specified measurement and database.
```sql
SHOW TAG VALUES ON database_name FROM measurement_name WITH KEY tag_key
```
_**Example tag value variable in a cell query**_
```sql
SELECT "purchases" FROM "animals"."autogen"."customers" WHERE "species" = :tagValueVar:
```
#### Tag value variable use cases
Tag value template variables are great if you want to quickly switch between tag value visualizations in a given measurement.
### CSV
Vary part of a query with a customized list of comma-separated values (CSV).
_**Example CSVs:**_
```csv
value1, value2, value3, value4
```
```csv
value1
value2
value3
value4
```
{{% note %}}
String field values [require single quotes in InfluxQL](/{{< latest "influxdb" "v1" >}}/troubleshooting/frequently-asked-questions/#when-should-i-single-quote-and-when-should-i-double-quote-in-queries).
```csv
'string1','string2','string3','string4'
```
{{% /note %}}
_**Example CSV variable in a cell query**_
```sql
SELECT "purchases" FROM "animals"."autogen"."customers" WHERE "petname" = :csvVar:
```
#### CSV variable use cases
CSV template variables are great when the array of values necessary for your variable can't be pulled from InfluxDB using a meta query.
They allow you to use custom variable values.
### Map
Vary part of a query with a customized list of key-value pairs in CSV format.
The key of each key-value pair is used to populate the template variable dropdown in your dashboard.
The value is used when processing cells' queries.
_**Example CSV:**_
```csv
key1,value1
key2,value2
key3,value3
key4,value4
```
<img src="/img/chronograf/1-6-template-vars-map-dropdown.png" style="width:100%;max-width:140px;" alt="Map variable dropdown"/>
{{% note %}}
Wrap string field values in single quotes ([required by InfluxQL](/{{< latest "influxdb" "v1" >}}/troubleshooting/frequently-asked-questions/#when-should-i-single-quote-and-when-should-i-double-quote-in-queries)).
Variable keys do not require quotes.
```csv
key1,'value1'
key2,'value2'
key3,'value3'
key4,'value4'
```
{{% /note %}}
_**Example Map variable in a cell query**_
```sql
SELECT "purchases" FROM "animals"."autogen"."customers" WHERE "customer" = :mapVar:
```
#### Map variable use cases
Map template variables are good when you need to map or alias simple names or keys to longer or more complex values.
For example, you may want to create a `:customer:` variable that populates your cell queries with a long, numeric customer ID (`11394850823894034209`).
With a map variable, you can alias simple names to complex values, so your list of customers would look something like:
```
Apple,11394850823894034209
Amazon,11394850823894034210
Google,11394850823894034211
Microsoft,11394850823894034212
```
The customer names would populate your template variable dropdown rather than the customer IDs.
### InfluxQL Meta Query
Vary part of a query with a customized meta query that pulls a specific array of values from InfluxDB.
InfluxQL meta query variables let you pull a highly customized array of potential
values and offer advanced functionality such as [filtering values based on other template variables](#filter-template-variables-with-other-template-variables).
<img src="/img/chronograf/1-6-template-vars-custom-meta-query.png" style="width:100%;max-width:667px;" alt="Custom meta query"/>
_**Example custom meta query variable in a cell query**_
```sql
SELECT "purchases" FROM "animals"."autogen"."customers" WHERE "customer" = :customMetaVar:
```
#### InfluxQL meta query variable use cases
Use custom InfluxQL meta query template variables when predefined template variable types aren't able to return the values you want.
### Flux Query
Flux query template variables let you define variable values using Flux queries.
**Variable values are extracted from the `_value` column returned by your Flux query.**
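For example, the following Flux query is a minimal sketch (assuming a `telegraf/autogen` bucket) that returns every value of the `host` tag; each distinct value lands in the `_value` column and populates the variable dropdown:
```js
// Import the v1 package, which provides schema exploration helpers.
import "influxdata/influxdb/v1"

// Return each distinct value of the "host" tag.
// Values are returned in the _value column.
v1.tagValues(bucket: "telegraf/autogen", tag: "host")
```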
#### Flux query variable use cases
Flux query template variables are great when the values necessary for your
variable can't be queried with InfluxQL or if you need the flexibility of Flux
to return your desired list of variable values.
### Text
Vary a part of a query with a single string of text.
There is only one value per text variable, but this value is easily altered.
#### Text variable use cases
Text template variables allow you to dynamically alter queries, such as adding or altering `WHERE` clauses, for multiple cells at once.
You could also use a text template variable to alter a regular expression used in multiple queries.
They are great when troubleshooting incidents that affect multiple visualized metrics.
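For example, a hypothetical text variable named `:hostFilter:` could hold part of a regular expression used to filter hosts in multiple cell queries:
```sql
SELECT "usage_system" FROM "telegraf"."autogen"."cpu" WHERE "host" =~ /:hostFilter:/
```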
## Reserved variable names
The following variable names are reserved and cannot be used when creating template variables.
Chronograf accepts [template variables as URL query parameters](#define-template-variables-in-the-url)
as well as many other parameters that control the display of graphs in your dashboard.
These names are either [predefined variables](#predefined-template-variables) or would
conflict with existing URL query parameters.
- `:database:`
- `:measurement:`
- `:dashboardTime:`
- `:upperDashboardTime:`
- `:interval:`
- `:upper:`
- `:lower:`
- `:zoomedUpper:`
- `:zoomedLower:`
- `:refreshRate:`
## Advanced template variable usage
### Filter template variables with other template variables
[Custom InfluxQL meta query template variables](#influxql-meta-query) let you filter the array of potential variable values using other existing template variables.
For example, let's say you want to list all the field keys associated with a measurement, but want to be able to change the measurement:
1. Create a template variable named `:measurementVar:` _(the name "measurement" is [reserved](#reserved-variable-names))_ that uses the [Measurements](#measurements) variable type to pull in all measurements from the `telegraf` database.
<img src="/img/chronograf/1-6-template-vars-measurement-var.png" style="width:100%;max-width:667px;" alt="measurementVar"/>
2. Create a template variable named `:fieldKey:` that uses the [InfluxQL meta query](#influxql-meta-query) variable type.
The following meta query pulls a list of field keys based on the existing `:measurementVar:` template variable.
```sql
SHOW FIELD KEYS ON telegraf FROM :measurementVar:
```
<img src="/img/chronograf/1-6-template-vars-fieldkey.png" style="width:100%;max-width:667px;" alt="fieldKey"/>
3. Create a new dashboard cell that uses the `fieldKey` and `measurementVar` template variables in its query.
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[InfluxQL](#)
[Flux](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```sql
SELECT :fieldKey: FROM "telegraf"..:measurementVar: WHERE time > :dashboardTime:
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```js
from(bucket: "telegraf/")
|> range(start: v.timeRangeStart)
|> filter(fn: (r) =>
r._measurement == v.measurementVar and
r._field == v.fieldKey
)
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
The resulting dashboard will work like this:
![Custom meta query filtering](/img/chronograf/1-6-custom-meta-query-filtering.gif)
### Define template variables in the URL
Chronograf uses URL query parameters (also known as query string parameters) to set both display options and template variables in the URL.
This makes it easy to share links to dashboards so they load in a specific state with specific template variable values selected.
URL query parameters are appended to the end of the URL with a question mark (`?`)
indicating the beginning of query parameters.
Chain multiple query parameters together using an ampersand (`&`).
To declare a template variable or a date range as a URL query parameter, it must follow the following pattern:
#### Pattern for template variable query parameters
```bash
# Spaces for clarity only
& tempVars %5B variableName %5D = variableValue
```
`&`
Indicates the beginning of a new query parameter in a series of multiple query parameters.
`tempVars`
Informs Chronograf that the query parameter being passed is a template variable.
_**Required for all template variable query parameters.**_
`%5B`, `%5D`
URL-encoded `[` and `]` respectively that enclose the template variable name.
`variableName`
Name of the template variable.
`variableValue`
Value of the template variable.
{{% note %}}
When template variables are modified in the dashboard, the corresponding
URL query parameters are automatically updated.
{{% /note %}}
#### Example template variable query parameter
```
.../?&tempVars%5BmeasurementVar%5D=cpu
```
#### Including multiple template variables in the URL
To chain multiple template variables as URL query parameters, include the full [pattern](#pattern-for-template-variable-query-parameters) for _**each**_ template variable.
```bash
# Spaces for clarity only
.../? &tempVars%5BmeasurementVar%5D=cpu &tempVars%5BfieldKey%5D=usage_system
```
@@ -0,0 +1,293 @@
---
title: Create a live leaderboard for game scores
description: This example uses Chronograf to build a leaderboard for gamers to be able to see player scores in realtime.
menu:
chronograf_1_9:
name: Live leaderboard of game scores
weight: 20
parent: Guides
draft: true
---
**If you do not have a running Kapacitor instance, check out [Getting started with Kapacitor](/kapacitor/v1.4/introduction/getting-started/) to get Kapacitor up and running on localhost.**
Today we are game developers.
We host several game servers, each running an instance of the game code, with about a hundred players per game.
We need to build a leaderboard so that spectators can see player scores in realtime.
We would also like to have historical data on leaders in order to do postgame
analysis on who was leading for how long, etc.
We will use Kapacitor stream processing to do the heavy lifting for us.
The game servers can send a [UDP](https://en.wikipedia.org/wiki/User_Datagram_Protocol) packet whenever a player's score changes,
or every 10 seconds if the score hasn't changed.
### Setup
{{% note %}}
**Note:** Copies of the code snippets used here can be found in the [scores](https://github.com/influxdata/kapacitor/tree/master/examples/scores) example in the Kapacitor project on GitHub.
{{% /note %}}
First, we need to configure Kapacitor to receive the stream of scores.
In this example, the scores update too frequently to store all of the score data in an InfluxDB database, so the score data is sent directly to Kapacitor.
As with InfluxDB, you can configure a UDP listener in Kapacitor.
Add the following settings to the `[[udp]]` section in your Kapacitor configuration file (`kapacitor.conf`).
```
[[udp]]
enabled = true
bind-address = ":9100"
database = "game"
retention-policy = "autogen"
```
Using this configuration, Kapacitor will listen on port `9100` for UDP packets in [Line Protocol](/{{< latest "influxdb" "v1" >}}/write_protocols/line_protocol_tutorial/) format.
Incoming data will be scoped to be in the `game.autogen` database and retention policy.
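For example, a single score update in line protocol would look something like this (the player and game names are hypothetical):
```
scores,player=p42,game=g7 value=815
```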
Restart Kapacitor so that the UDP listener service starts.
Here is a simple bash script to generate random score data so we can test it without
messing with the real game servers.
```bash
#!/bin/bash
# default options: can be overridden with corresponding arguments.
host=${1-localhost}
port=${2-9100}
games=${3-10}
players=${4-100}
games=$(seq $games)
players=$(seq $players)
# Spam score updates over UDP
while true
do
for game in $games
do
game="g$game"
for player in $players
do
player="p$player"
score=$(($RANDOM % 1000))
echo "scores,player=$player,game=$game value=$score" > /dev/udp/$host/$port
done
done
sleep 0.1
done
```
Place the above script into a file `scores.sh` and run it:
```bash
chmod +x ./scores.sh
./scores.sh
```
Now we are spamming Kapacitor with our fake score data.
We can just leave that running since Kapacitor will drop
the incoming data until it has a task that wants it.
### Defining the Kapacitor task
What does a leaderboard need to do?
1. Get the most recent score per player per game.
1. Calculate the top X player scores per game.
1. Publish the results.
1. Store the results.
To complete step one we need to buffer the incoming stream and return the most recent score update per player per game.
Our [TICKscript](/kapacitor/v1.4/tick/) will look like this:
```javascript
var topPlayerScores = stream
|from()
.measurement('scores')
// Get the most recent score for each player per game.
// Not likely that a player is playing two games but just in case.
.groupBy('game', 'player')
|window()
// keep a buffer of the last 11s of scores
// just in case a player score hasn't updated in a while
.period(11s)
// Emit the current score per player every second.
.every(1s)
// Align the window boundaries to be on the second.
.align()
|last('value')
```
Place this script in a file called `top_scores.tick`.
Now our `topPlayerScores` variable contains each player's most recent score.
Next, to calculate the top scores per game, we just need to group by game and use the `top` function.
Let's keep the top 15 scores per game.
Add these lines to the `top_scores.tick` file.
```javascript
// Calculate the top 15 scores per game
var topScores = topPlayerScores
|groupBy('game')
|top(15, 'last', 'player')
```
The `topScores` variable now contains the top 15 player scores per game.
That's all we need to build our leaderboard.
Kapacitor can expose the scores over HTTP via the [HTTPOutNode](/kapacitor/v1.4/nodes/http_out_node/).
We will call our task `top_scores`; with the following addition, the most recent scores will be available at
`http://localhost:9092/kapacitor/v1/tasks/top_scores/top_scores`.
```javascript
// Expose top scores over the HTTP API at the 'top_scores' endpoint.
// Now your app can just request the top scores from Kapacitor
// and always get the most recent result.
//
// http://localhost:9092/kapacitor/v1/tasks/top_scores/top_scores
topScores
|httpOut('top_scores')
```
Finally, we want to store the top scores over time so we can do in-depth analysis to ensure the best gameplay.
But we do not want to store the scores every second, as that is still too much data.
First, we will sample the data and store scores only every 10 seconds.
Also, let's do some basic analysis ahead of time, since we already have a stream of all the data.
For now, we will just do basic gap analysis, storing the gap between the top player and the 15th player.
Add these lines to `top_scores.tick` to complete our task.
```javascript
// Sample the top scores and keep a score once every 10s
var topScoresSampled = topScores
|sample(10s)
// Store top fifteen player scores in InfluxDB.
topScoresSampled
|influxDBOut()
.database('game')
.measurement('top_scores')
// Calculate the max and min of the top scores.
var max = topScoresSampled
|max('top')
var min = topScoresSampled
|min('top')
// Join the max and min streams back together and calculate the gap.
max
|join(min)
.as('max', 'min')
// Calculate the difference between the max and min scores.
// Rename the max and min fields to more friendly names 'topFirst', 'topLast'.
|eval(lambda: "max.max" - "min.min", lambda: "max.max", lambda: "min.min")
.as('gap', 'topFirst', 'topLast')
// Store the fields: gap, topFirst and topLast in InfluxDB.
|influxDBOut()
.database('game')
.measurement('top_scores_gap')
```
Since we are writing data back to InfluxDB, create a `game` database for our results.
{{< keep-url >}}
```
curl -G 'http://localhost:8086/query?' --data-urlencode 'q=CREATE DATABASE game'
```
Here is the complete task TICKscript if you don't want to copy and paste as much :)
```javascript
dbrp "game"."autogen"
// Define a result that contains the most recent score per player.
var topPlayerScores = stream
|from()
.measurement('scores')
// Get the most recent score for each player per game.
// Not likely that a player is playing two games but just in case.
.groupBy('game', 'player')
|window()
// keep a buffer of the last 11s of scores
// just in case a player score hasn't updated in a while
.period(11s)
// Emit the current score per player every second.
.every(1s)
// Align the window boundaries to be on the second.
.align()
|last('value')
// Calculate the top 15 scores per game
var topScores = topPlayerScores
|groupBy('game')
|top(15, 'last', 'player')
// Expose top scores over the HTTP API at the 'top_scores' endpoint.
// Now your app can just request the top scores from Kapacitor
// and always get the most recent result.
//
// http://localhost:9092/kapacitor/v1/tasks/top_scores/top_scores
topScores
|httpOut('top_scores')
// Sample the top scores and keep a score once every 10s
var topScoresSampled = topScores
|sample(10s)
// Store top fifteen player scores in InfluxDB.
topScoresSampled
|influxDBOut()
.database('game')
.measurement('top_scores')
// Calculate the max and min of the top scores.
var max = topScoresSampled
|max('top')
var min = topScoresSampled
|min('top')
// Join the max and min streams back together and calculate the gap.
max
|join(min)
.as('max', 'min')
// calculate the difference between the max and min scores.
|eval(lambda: "max.max" - "min.min", lambda: "max.max", lambda: "min.min")
.as('gap', 'topFirst', 'topLast')
// store the fields: gap, topFirst, and topLast in InfluxDB.
|influxDBOut()
.database('game')
.measurement('top_scores_gap')
```
Define and enable our task to see it in action:
```bash
kapacitor define top_scores -tick top_scores.tick
kapacitor enable top_scores
```
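To confirm the task is defined and enabled, you can optionally inspect it with the standard `kapacitor show` command:
```bash
kapacitor show top_scores
```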
Next, let's check that the HTTP output is working.
```bash
curl 'http://localhost:9092/kapacitor/v1/tasks/top_scores/top_scores'
```
You should have a JSON result of the top 15 players and their scores per game.
Hit the endpoint several times to see that the scores are updating once a second.
Now, let's check InfluxDB to see our historical data.
{{< keep-url >}}
```bash
curl \
-G 'http://localhost:8086/query?db=game' \
--data-urlencode 'q=SELECT * FROM top_scores WHERE time > now() - 5m GROUP BY game'
curl \
-G 'http://localhost:8086/query?db=game' \
--data-urlencode 'q=SELECT * FROM top_scores_gap WHERE time > now() - 5m GROUP BY game'
```
Great!
The hard work is done.
All that remains is configuring the game server to send score updates to Kapacitor and update the spectator dashboard to pull scores from Kapacitor.
@@ -0,0 +1,291 @@
---
title: Monitor InfluxDB Enterprise clusters
description: Use Chronograf dashboards with an InfluxDB OSS server to measure and monitor InfluxDB Enterprise clusters.
aliases:
- /chronograf/v1.9/guides/monitor-an-influxenterprise-cluster/
menu:
chronograf_1_9:
weight: 80
parent: Guides
---
[InfluxDB Enterprise](/{{< latest "enterprise_influxdb" >}}/) offers high availability and a highly scalable clustering solution for your time series data needs.
Use Chronograf to assess your cluster's health and to monitor the infrastructure behind your project.
This guide offers step-by-step instructions for using Chronograf, [InfluxDB](/{{< latest "influxdb" "v1" >}}/), and [Telegraf](/{{< latest "telegraf" >}}/) to monitor data nodes in your InfluxDB Enterprise cluster.
## Requirements
You have a fully-functioning InfluxDB Enterprise cluster with authentication enabled.
See the InfluxDB Enterprise documentation for
[detailed setup instructions](/{{< latest "enterprise_influxdb" >}}/production_installation/).
This guide uses an InfluxDB Enterprise cluster with three meta nodes and three data nodes; the steps are also applicable to other cluster configurations.
InfluxData recommends using a separate server to store your monitoring data.
It is possible to store the monitoring data in your cluster and [connect the cluster to Chronograf](/chronograf/v1.9/troubleshooting/frequently-asked-questions/#how-do-i-connect-chronograf-to-an-influxenterprise-cluster), but, in general, your monitoring data should live on a separate server.
You're working on an Ubuntu installation.
Chronograf and the other components of the TICK stack are supported on several operating systems and hardware architectures. Check out the [downloads page](https://portal.influxdata.com/downloads) for links to the binaries of your choice.
## Architecture overview
Before we begin, here's an overview of the final monitoring setup:
![Architecture diagram](/img/chronograf/1-6-cluster-diagram.png)
The diagram above shows an InfluxDB Enterprise cluster that consists of three meta nodes (M) and three data nodes (D).
Each data node has its own [Telegraf](/{{< latest "telegraf" >}}/) instance (T).
Each Telegraf instance is configured to collect node CPU, disk, and memory data using the Telegraf [system stats](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/system) input plugin.
The Telegraf instances are also configured to send those data to a single [InfluxDB OSS](/{{< latest "influxdb" "v1" >}}/) instance that lives on a separate server.
When Telegraf sends data to InfluxDB, it automatically [tags](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#tag) the data with the hostname of the relevant data node.
The InfluxDB OSS instance that stores the Telegraf data is connected to Chronograf.
Chronograf uses the hostnames in the Telegraf data to populate the Host List page and provide other hostname-specific information in the user interface.
## Setup description
### InfluxDB OSS setup
#### Step 1: Download and install InfluxDB
InfluxDB can be downloaded from the [InfluxData downloads page](https://portal.influxdata.com/downloads).
#### Step 2: Enable authentication
For security purposes, enable authentication in the InfluxDB [configuration file (influxdb.conf)](/{{< latest "influxdb" "v1" >}}/administration/config/), which is located in `/etc/influxdb/influxdb.conf`.
In the `[http]` section of the configuration file, uncomment the `auth-enabled` option and set it to `true`:
```
[http]
# Determines whether HTTP endpoint is enabled.
# enabled = true
# The bind address used by the HTTP service.
# bind-address = ":8086"
# Determines whether HTTP authentication is enabled.
auth-enabled = true #💥
```
#### Step 3: Start InfluxDB
Next, start the InfluxDB process:
```
~# sudo systemctl start influxdb
```
#### Step 4: Create an admin user
Create an [admin user](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#user-types-and-privileges) on your InfluxDB instance.
Because you enabled authentication, you must perform this step before moving on to the next section.
Run the command below to create an admin user, replacing `chronothan` and `supersecret` with your own username and password.
Note that the password requires single quotes.
{{< keep-url >}}
```
~# curl -XPOST "http://localhost:8086/query" --data-urlencode "q=CREATE USER chronothan WITH PASSWORD 'supersecret' WITH ALL PRIVILEGES"
```
A successful `CREATE USER` query returns a blank result:
```
{"results":[{"statement_id":0}]} <--- Success!
```
### Telegraf setup
Perform the following steps on each data node in your cluster.
You'll return to your InfluxDB instance at the end of this section.
#### Step 1: Download and install Telegraf
Telegraf can be downloaded from the [InfluxData downloads page](https://portal.influxdata.com/downloads).
#### Step 2: Configure Telegraf
Configure Telegraf to write monitoring data to your InfluxDB OSS instance.
The Telegraf configuration file is located in `/etc/telegraf/telegraf.conf`.
First, in the `[[outputs.influxdb]]` section, set the `urls` option to the IP address and port of your InfluxDB OSS instance.
InfluxDB runs on port `8086` by default.
This step ensures that Telegraf writes data to your InfluxDB OSS instance.
```
[[outputs.influxdb]]
## The full HTTP or UDP endpoint URL for your InfluxDB instance.
## Multiple urls can be specified as part of the same cluster,
## this means that only ONE of the urls will be written to each interval.
# urls = ["udp://localhost:8089"] # UDP endpoint example
urls = ["http://xxx.xx.xxx.xxx:8086"] #💥
```
Next, in the same `[[outputs.influxdb]]` section, uncomment and set the `username` and `password` options to the username and password that you created in the [previous section](#step-4-create-an-admin-user).
Telegraf must be aware of your username and password to successfully write data to your InfluxDB OSS instance.
```
[[outputs.influxdb]]
## The full HTTP or UDP endpoint URL for your InfluxDB instance.
## Multiple urls can be specified as part of the same cluster,
## this means that only ONE of the urls will be written to each interval.
# urls = ["udp://localhost:8089"] # UDP endpoint example
urls = ["http://xxx.xx.xxx.xxx:8086"] # required
[...]
## Write timeout (for the InfluxDB client), formatted as a string.
## If not provided, will default to 5s. 0s means no timeout (not recommended).
timeout = "5s"
username = "chronothan" #💥
password = "supersecret" #💥
```
The [Telegraf System input plugin](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/system) is enabled by default and requires no additional configuration.
The input plugin automatically collects general statistics on system load, uptime, and the number of users logged in.
Enabled input plugins are configured in the `INPUT PLUGINS` section of the configuration file; for example, here's the section that controls the CPU data collection:
```
###############################################################################
# INPUT PLUGINS #
###############################################################################
# Read metrics about cpu usage
[[inputs.cpu]]
## Whether to report per-cpu stats or not
percpu = true
## Whether to report total system cpu stats or not
totalcpu = true
## If true, collect raw CPU time metrics.
collect_cpu_time = false
```
#### Step 3: Restart the Telegraf service
Restart the Telegraf service so that your configuration changes take effect:
**macOS**
```sh
telegraf --config telegraf.conf
```
**Linux (sysvinit and upstart installations)**
```sh
sudo service telegraf restart
```
**Linux (systemd installations)**
```sh
systemctl restart telegraf
```
Repeat steps one through three for each data node in your cluster.
#### Step 4: Confirm the Telegraf setup
To verify Telegraf is successfully collecting and writing data, use one of the following methods to query your InfluxDB OSS instance:
**InfluxDB CLI (`influx`)**
```sh
$ influx
> SHOW TAG VALUES FROM cpu WITH KEY=host
```
**`curl`**
Replace the `chronothan` and `supersecret` values with your actual username and password.
{{< keep-url >}}
```
~# curl -G "http://localhost:8086/query?db=telegraf&u=chronothan&p=supersecret&pretty=true" --data-urlencode "q=SHOW TAG VALUES FROM cpu WITH KEY=host"
```
The expected output is similar to the JSON code block below.
In this case, the `telegraf` database has three different [tag values](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#tag-value) for the `host` [tag key](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#tag-key): `data-node-01`, `data-node-02`, and `data-node-03`.
Those values match the hostnames of the three data nodes in the cluster; this means Telegraf is successfully writing monitoring data from those hosts to the InfluxDB OSS instance!
```
{
"results": [
{
"statement_id": 0,
"series": [
{
"name": "cpu",
"columns": [
"key",
"value"
],
"values": [
[
"host",
"data-node-01"
],
[
"host",
"data-node-02"
],
[
"host",
"data-node-03"
]
]
}
]
}
]
}
```
### Chronograf Setup
#### Step 1: Download and install Chronograf
Download and install Chronograf on the same server as the InfluxDB instance.
This is not a requirement; you may host Chronograf on a separate server.
Chronograf can be downloaded from the [InfluxData downloads page](https://portal.influxdata.com/downloads).
#### Step 2: Start Chronograf
```
~# sudo systemctl start chronograf
```
#### Step 3: Connect Chronograf to the InfluxDB OSS instance
To access Chronograf, go to http://localhost:8888.
The welcome page includes instructions for connecting Chronograf to that instance.
![Connect Chronograf to InfluxDB](/img/chronograf/1-6-cluster-welcome.png)
For the `Connection String`, enter the hostname or IP of your InfluxDB OSS instance, and be sure to include the default port: `8086`.
Next, name your data source; this can be anything you want.
Finally, enter your username and password and click `Add Source`.
#### Step 4: Explore the monitoring data in Chronograf
Chronograf works with the Telegraf data in your InfluxDB OSS instance.
The `Host List` page shows your data nodes' hostnames, their statuses, CPU usage, load, and configured applications.
In this case, you've only enabled the system stats input plugin, so `system` is the single application that appears in the `Apps` column.
![Host List page](/img/chronograf/1-6-cluster-hostlist.png)
Click `system` to see the Chronograf canned dashboard for that application.
Keep an eye on your data nodes by viewing that dashboard for each hostname:
![Pre-created dashboard](/img/chronograf/1-6-cluster-predash.gif)
Next, check out the Data Explorer to create a customized graph with the monitoring data.
In the image below, the Chronograf query editor is used to visualize the idle CPU usage data for each data node:
![Data Explorer](/img/chronograf/1-6-cluster-de.png)
Create more customized graphs and save them to a dashboard on the Dashboard page in Chronograf.
See the [Creating Chronograf dashboards](/chronograf/v1.9/guides/create-a-dashboard/) guide for more information.
That's it! You've successfully configured Telegraf to collect and write data, InfluxDB to store that data, and Chronograf to use that data for monitoring and visualization purposes.
@@ -0,0 +1,25 @@
---
title: View Chronograf dashboards in presentation mode
description: View dashboards in full screen using presentation mode.
menu:
chronograf_1_9:
name: View dashboards in presentation mode
weight: 130
parent: Guides
---
Presentation mode allows you to view Chronograf in full screen, hiding the left and top navigation menus so only the cells appear. This mode might be helpful, for example, for stationary screens dedicated to monitoring visualizations.
## Enter presentation mode manually
To enter presentation mode manually, click the icon in the upper right:
<img src="/img/chronograf/1-6-presentation-mode.png" style="width:100%; max-width:500px"/>
To exit presentation mode, press `ESC`.
## Use the URL query parameter
To load the dashboard in presentation mode, add the URL query parameter `present=true` to your dashboard URL. For example, your URL might look like this:
`http://example.com:8888/sources/1/dashboards/2?present=true`
Note that if you use this option, you won't be able to exit presentation mode using `ESC`.
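Because `present=true` is a regular query parameter, you can chain it with other parameters, such as template variable values, using an ampersand. For example, assuming a dashboard with a `measurementVar` template variable:

`http://example.com:8888/sources/1/dashboards/2?present=true&tempVars%5BmeasurementVar%5D=cpu`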
@@ -0,0 +1,148 @@
---
title: Explore data in Chronograf
description: Query and visualize data in the Data Explorer.
menu:
chronograf_1_9:
name: Explore data in Chronograf
weight: 130
parent: Guides
---
Explore and visualize your data in the **Data Explorer**. For both InfluxQL and Flux, Chronograf allows you to move seamlessly between using the builder or templates and manually editing the query; when possible, the interface automatically populates the builder with the information from your raw query. Choose between [visualization types](/chronograf/v1.9/guides/visualization-types/) for your query.
To open the **Data Explorer**, click the **Explore** icon in the navigation bar:
<img src="/img/chronograf/1-7-data-explorer-icon.png" style="width:100%; max-width:400px; margin:2em 0; display: block;">
## Select local time or UTC (Coordinated Universal Time)
- In the upper-right corner of the page, select the time to view metrics and events by clicking one of the following:
- **UTC** for Coordinated Universal Time
- **Local** for the local time reported by your browser
{{% note %}}
**Note:** If your organization spans multiple time zones, we recommend using UTC (Coordinated Universal Time) to ensure that everyone sees metrics and events for the same time.
{{% /note %}}
## Explore data with InfluxQL
InfluxQL is a SQL-like query language you can use to interact with data in InfluxDB. For detailed tutorials and reference material, see our [InfluxQL documentation](/{{< latest "influxdb" "v1" >}}/query_language/).
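For example, the following InfluxQL query (assuming the default `telegraf` database created by Telegraf) returns the average idle CPU in 10-minute windows over the past hour:
```sql
SELECT mean("usage_idle") FROM "telegraf"."autogen"."cpu" WHERE time > now() - 1h GROUP BY time(10m)
```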
{{% note %}}
#### Limited InfluxQL support in InfluxDB Cloud and OSS 2.x
Chronograf interacts with **InfluxDB Cloud** and **InfluxDB OSS 2.x** through the
[v1 compatibility API](/influxdb/cloud/reference/api/influxdb-1x/).
The v1 compatibility API provides limited InfluxQL support.
For more information, see [InfluxQL support](/influxdb/cloud/query-data/influxql/#influxql-support).
{{% /note %}}
1. Open the Data Explorer and click **Add a Query**.
2. To the right of the source dropdown above the graph placeholder, select **InfluxQL** as the source type.
3. Use the builder to select from your existing data and allow Chronograf to format the query for you. Alternatively, manually enter and edit a query.
4. You can also select from the dropdown list of **Metaquery Templates** that manage databases, retention policies, users, and more.
_See [Metaquery templates](#metaquery-templates)._
5. To display the templated values in the query, click **Show template values**.
6. Click **Submit Query**.
## Explore data with Flux
Flux is InfluxData's new functional data scripting language designed for querying, analyzing, and acting on time series data. To learn more about Flux, see [Getting started with Flux](/{{< latest "influxdb" "v2" >}}/query-data/get-started).
1. Open the Data Explorer and click **Add a Query**.
2. To the right of the source dropdown above the graph placeholder, select **Flux** as the source type.
The **Schema**, **Functions**, and **Script** panes appear.
3. Use the **Schema** pane to explore your available data. Click the **+** sign next to a bucket name to expand its content.
4. Use the **Functions** pane to view details about the available Flux functions.
5. Use the **Script** pane to enter your Flux query.
* To get started with your query, click the **Script Wizard**. In the wizard, you can select a bucket, measurement, fields, and an aggregate.
<img src="/img/chronograf/1-7-flux-script-wizard.png" style="width:100%; max-width:400px; margin:2em 0; display:block;">
For example, if you make the above selections, the wizard inserts the following script:
```js
from(bucket: "telegraf/autogen")
|> range(start: dashboardTime)
|> filter(fn: (r) => r._measurement == "cpu" and (r._field == "usage_system"))
|> window(every: autoInterval)
|> toFloat()
|> percentile(percentile: 0.95)
|> group(except: ["_time", "_start", "_stop", "_value"])
```
* Alternatively, you can enter your entire script manually.
6. Click **Run Script** in the top bar of the **Script** pane. You can then preview your graph in the above pane.
## Visualize your query
Select the **Visualization** tab at the top of the **Data Explorer**. For details about all of the available visualization options, see [Visualization types in Chronograf](/chronograf/v1.9/guides/visualization-types/).
## Add queries to dashboards
To add your query and graph to a dashboard:
1. Click **Send to Dashboard** in the upper right.
2. In the **Target Dashboard(s)** dropdown, select at least one existing dashboard to send the cell to, or select **Send to a New Dashboard**.
3. Enter a name for the new cell and, if you created a new dashboard, a name for the new dashboard.
4. If using an **InfluxQL** data source and you have multiple queries in the Data Explorer,
select which queries to send:
- **Active Query**: Send the currently viewed query.
- **All Queries**: Send all queries.
5. Click **Send to Dashboard(s)**.
## Metaquery templates
Metaquery templates provide templated InfluxQL queries to manage databases, retention policies, users, and more.
Choose from the following options in the **Metaquery Templates** dropdown list:
###### Manage databases
- [Show Databases](/{{< latest "influxdb" "v1" >}}/query_language/explore-schema/#show-databases)
- [Create Database](/{{< latest "influxdb" "v1" >}}/query_language/manage-database/#create-database)
- [Drop Database](/{{< latest "influxdb" "v1" >}}/query_language/manage-database/#delete-a-database-with-drop-database)
###### Measurements, Tags, and Fields
- [Show Measurements](/{{< latest "influxdb" "v1" >}}/query_language/explore-schema/#show-measurements)
- [Show Tag Keys](/{{< latest "influxdb" "v1" >}}/query_language/explore-schema/#show-tag-keys)
- [Show Tag Values](/{{< latest "influxdb" "v1" >}}/query_language/explore-schema/#show-tag-values)
- [Show Field Keys](/{{< latest "influxdb" "v1" >}}/query_language/explore-schema/#show-field-keys)
###### Cardinality
- [Show Field Key Cardinality](/{{< latest "influxdb" "v1" >}}/query_language/spec/#show-field-key-cardinality)
- [Show Measurement Cardinality](/{{< latest "influxdb" "v1" >}}/query_language/spec/#show-measurement-cardinality)
- [Show Series Cardinality](/{{< latest "influxdb" "v1" >}}/query_language/spec/#show-series-cardinality)
- [Show Tag Key Cardinality](/{{< latest "influxdb" "v1" >}}/query_language/spec/#show-tag-key-cardinality)
- [Show Tag Values Cardinality](/{{< latest "influxdb" "v1" >}}/query_language/spec/#show-tag-values-cardinality)
###### Manage retention policies
- [Show Retention Policies](/{{< latest "influxdb" "v1" >}}/query_language/explore-schema/#show-retention-policies)
- [Create Retention Policy](/{{< latest "influxdb" "v1" >}}/query_language/manage-database/#create-retention-policies-with-create-retention-policy)
- [Drop Retention Policy](/{{< latest "influxdb" "v1" >}}/query_language/manage-database/#delete-retention-policies-with-drop-retention-policy)
###### Manage continuous queries
- [Show Continuous Queries](/{{< latest "influxdb" "v1" >}}/query_language/continuous_queries/#listing-continuous-queries)
- [Create Continuous Query](/{{< latest "influxdb" "v1" >}}/query_language/continuous_queries/#syntax)
- [Drop Continuous Query](/{{< latest "influxdb" "v1" >}}/query_language/continuous_queries/#deleting-continuous-queries)
###### Manage users and permissions
- [Show Users](/{{< latest "influxdb" "v1" >}}/query_language/spec/#show-users)
- [Show Grants](/{{< latest "influxdb" "v1" >}}/query_language/spec/#show-grants)
- [Create User](/{{< latest "influxdb" "v1" >}}/query_language/spec/#create-user)
- [Create Admin User](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#admin-user-management)
- [Drop User](/{{< latest "influxdb" "v1" >}}/query_language/spec/#drop-user)
###### Delete data
- [Drop Measurement](/{{< latest "influxdb" "v1" >}}/query_language/manage-database/#delete-measurements-with-drop-measurement)
- [Drop Series](/{{< latest "influxdb" "v1" >}}/query_language/manage-database/#drop-series-from-the-index-with-drop-series)
- [Delete](/{{< latest "influxdb" "v1" >}}/query_language/manage-database/#delete-series-with-delete)
###### Analyze queries
- [Explain](/{{< latest "influxdb" "v1" >}}/query_language/spec/#explain)
- [Explain Analyze](/{{< latest "influxdb" "v1" >}}/query_language/spec/#explain-analyze)
###### Inspect InfluxDB internal metrics
- [Show Stats](/{{< latest "influxdb" "v1" >}}/query_language/spec/#show-stats)
- [Show Diagnostics](/{{< latest "influxdb" "v1" >}}/query_language/spec/#show-diagnostics)
- [Show Subscriptions](/{{< latest "influxdb" "v1" >}}/query_language/spec/#show-subscriptions)
- [Show Queries](/{{< latest "influxdb" "v1" >}}/troubleshooting/query_management/#list-currently-running-queries-with-show-queries)
- [Show Shards](/{{< latest "influxdb" "v1" >}}/query_language/spec/#show-shards)
- [Show Shard Groups](/{{< latest "influxdb" "v1" >}}/query_language/spec/#show-shard-groups)
@@ -0,0 +1,15 @@
---
title: Edit TICKscripts in Chronograf
description: View and edit TICKscript logs in Chronograf.
menu:
chronograf_1_9:
weight: 20
parent: Guides
draft: true
---
TICKscript logs data to a log file for debugging purposes.
Notes:
* We have a bunch of hosts that post data to an external endpoint. The payload is logged before being sent.
* A feature to show the list of hosts, and an ability to see the logs for each of them.
@@ -0,0 +1,482 @@
---
title: Use pre-created dashboards in Chronograf
description: >
Display metrics for popular third-party applications with preconfigured dashboards in Chronograf.
menu:
chronograf_1_9:
name: Use pre-created dashboards
weight: 10
parent: Guides
---
## Overview
Pre-created dashboards are delivered with Chronograf and are available from the Host List page, depending on which Telegraf input plugins you have enabled. These dashboards, which are built in and not editable, include cells with data visualizations for metrics that are relevant to data sources you are likely to be using.
{{% note %}}
Note that these pre-created dashboards cannot be cloned or customized. They appear only as part of the Host List view and are associated with metrics gathered from a single host. Dashboard templates are also available; they deliver a solid starting point for customizing your own unique dashboards based on the enabled Telegraf plugins, and they operate across one or more hosts. For details, see [Dashboard templates](/chronograf/v1.9/guides/create-a-dashboard/#dashboard-templates).
{{% /note %}}
## Requirements
The pre-created dashboards automatically appear in the Host List page to the right of hosts based on which Telegraf input plugins you have enabled. Check the list below for applications that you are interested in using and make sure that you have the required Telegraf input plugins enabled.
## Use pre-created dashboards
Pre-created dashboards are delivered in Chronograf installations and are ready to be used when you have the required Telegraf input plugins enabled.
**To view a pre-created dashboard:**
1. Open Chronograf in your web browser and click **Host List** in the navigation bar.
2. Select an application listed under **Apps**. By default, the system `app` should be listed next to a host listing. Other apps appear depending on the Telegraf input plugins that you have enabled.
The selected application appears showing pre-created cells, based on available measurements.
## Create or edit dashboards
Find a list of apps (pre-created dashboards) available to use with Chronograf below. For each app, you'll find:
- Required Telegraf input plugins for the app
- JSON files included in the app
- Cell titles included in each JSON file
The JSON files for apps are included in the `/usr/share/chronograf/canned` directory. Find information about the configuration option `--canned-path` on the [Chronograf configuration options](/chronograf/v1.9/administration/config-options/) page.
Enable and disable apps in your Telegraf configuration file (by default, `/etc/telegraf/telegraf.conf`). See [Configuring Telegraf](/telegraf/v1.13/administration/configuration/) for details.
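For example, to enable the Apache app, enable the corresponding input plugin in `telegraf.conf`. A minimal sketch (the status URL assumes Apache's `mod_status` is enabled on localhost):
```
[[inputs.apache]]
  ## URLs of the machine-readable Apache mod_status page,
  ## including the "auto" query string.
  urls = ["http://localhost/server-status?auto"]
```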
## Apps (pre-created dashboards)
* [apache](#apache)
* [consul](#consul)
* [docker](#docker)
* [elasticsearch](#elasticsearch)
* [haproxy](#haproxy)
* [iis](#iis)
* [influxdb](#influxdb)
* [kubernetes](#kubernetes)
* [memcached](#memcached-memcached)
* [mesos](#mesos)
* [mongodb](#mongodb)
* [mysql](#mysql)
* [nginx](#nginx)
* [nsq](#nsq)
* [phpfpm](#phpfpm)
* [ping](#ping)
* [postgresql](#postgresql)
* [rabbitmq](#rabbitmq)
* [redis](#redis)
* [riak](#riak)
* [system](#system)
* [varnish](#varnish)
* [win_system](#win-system)
## apache
**Required Telegraf plugin:** [Apache input plugin](/{{< latest "telegraf" >}}/plugins/#apache-http-server)
`apache.json`
* "Apache Bytes/Second"
* "Apache - Requests/Second"
* "Apache - Total Accesses"
## consul
**Required Telegraf plugin:** [Consul input plugin](/{{< latest "telegraf" >}}/plugins/#consul)
`consul_http.json`
* "Consul - HTTP Request Time (ms)"
`consul_election.json`
* "Consul - Leadership Election"
`consul_cluster.json`
* "Consul - Number of Agents"
`consul_serf_events.json`
* "Consul - Number of serf events"
## docker
**Required Telegraf plugin:** [Docker input plugin](/{{< latest "telegraf" >}}/plugins/#docker)
`docker.json`
* "Docker - Container CPU %"
* "Docker - Container Memory (MB)"
* "Docker - Containers"
* "Docker - Images"
* "Docker - Container State"
`docker_blkio.json`
* "Docker - Container Block IO"
`docker_net.json`
* "Docker - Container Network"
## elasticsearch
**Required Telegraf plugin:** [Elasticsearch input plugin](/{{< latest "telegraf" >}}/plugins/#elasticsearch)
`elasticsearch.json`
* "ElasticSearch - Query Throughput"
* "ElasticSearch - Open Connections"
* "ElasticSearch - Query Latency"
* "ElasticSearch - Fetch Latency"
* "ElasticSearch - Suggest Latency"
* "ElasticSearch - Scroll Latency"
* "ElasticSearch - Indexing Latency"
* "ElasticSearch - JVM GC Collection Counts"
* "ElasticSearch - JVM GC Latency"
* "ElasticSearch - JVM Heap Usage"
## haproxy
**Required Telegraf plugin:** [HAProxy input plugin](/{{< latest "telegraf" >}}/plugins/#haproxy)
`haproxy.json`
* "HAProxy - Number of Servers"
* "HAProxy - Sum HTTP 2xx"
* "HAProxy - Sum HTTP 4xx"
* "HAProxy - Sum HTTP 5xx"
* "HAProxy - Frontend HTTP Requests/Second"
* "HAProxy - Frontend Sessions/Second"
* "HAProxy - Frontend Session Usage %"
* "HAProxy - Frontend Security Denials/Second"
* "HAProxy - Frontend Request Errors/Second"
* "HAProxy - Frontend Bytes/Second"
* "HAProxy - Backend Average Response Time (ms)"
* "HAProxy - Backend Connection Errors/Second"
* "HAProxy - Backend Queued Requests/Second"
* "HAProxy - Backend Average Request Queue Time (ms)"
* "HAProxy - Backend Error Responses/Second"
## iis
**Required Telegraf plugin:** [Windows Performance Counters input plugin](/{{< latest "telegraf" >}}/plugins/#windows-performance-counters)
`win_websvc.json`
* "IIS - Service"
## influxdb
**Required Telegraf plugin:** [InfluxDB input plugin](/{{< latest "telegraf" >}}/plugins/#influxdb)
`influxdb_database.json`
* "InfluxDB - Cardinality"
`influxdb_httpd.json`
* "InfluxDB - Write HTTP Requests"
* "InfluxDB - Query Requests"
* "InfluxDB - Client Failures"
`influxdb_queryExecutor.json`
* "InfluxDB - Query Performance"
`influxdb_write.json`
* "InfluxDB - Write Points"
* "InfluxDB - Write Errors"
## kubernetes
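**Required Telegraf plugin:** [Kubernetes input plugin](/{{< latest "telegraf" >}}/plugins/#kubernetes)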
`kubernetes_node.json`
* "K8s - Node Millicores"
* "K8s - Node Memory Bytes"
`kubernetes_pod_container.json`
* "K8s - Pod Millicores"
* "K8s - Pod Memory Bytes"
`kubernetes_pod_network.json`
* "K8s - Pod TX Bytes/Second"
* "K8s - Pod RX Bytes/Second "
`kubernetes_system_container.json`
* "K8s - Kubelet Millicores"
* "K8s - Kubelet Memory Bytes"
## Memcached (`memcached`)
**Required Telegraf plugin:** [Memcached input plugin](/{{< latest "telegraf" >}}/plugins/#memcached)
`memcached.json`
* "Memcached - Current Connections"
* "Memcached - Get Hits/Second"
* "Memcached - Get Misses/Second"
* "Memcached - Delete Hits/Second"
* "Memcached - Delete Misses/Second"
* "Memcached - Incr Hits/Second"
* "Memcached - Incr Misses/Second"
* "Memcached - Current Items"
* "Memcached - Total Items"
* "Memcached - Bytes Stored"
* "Memcached - Bytes Written/Sec"
* "Memcached - Evictions/10 Seconds"
## mesos
**Required Telegraf plugin:** [Mesos input plugin](/{{< latest "telegraf" >}}/plugins/#mesos)
`mesos.json`
* "Mesos Active Slaves"
* "Mesos Tasks Active"
* "Mesos Tasks"
* "Mesos Outstanding offers"
* "Mesos Available/Used CPUs"
* "Mesos Available/Used Memory"
* "Mesos Master Uptime"
## mongodb
**Required Telegraf plugin:** [MongoDB input plugin](/{{< latest "telegraf" >}}/plugins/#mongodb)
`mongodb.json`
* "MongoDB - Read/Second"
* "MongoDB - Writes/Second"
* "MongoDB - Active Connections"
* "MongoDB - Reds/Writes Waiting in Queue"
* "MongoDB - Network Bytes/Second"
## mysql
**Required Telegraf plugin:** [MySQL input plugin](/{{< latest "telegraf" >}}/plugins/#mysql)
`mysql.json`
* "MySQL - Reads/Second"
* "MySQL - Writes/Second"
* "MySQL - Connections/Second"
* "MySQL - Connection Errors/Second"
## nginx
**Required Telegraf plugin:** [NGINX input plugin](/{{< latest "telegraf" >}}/plugins/#nginx)
`nginx.json`
* "NGINX - Client Connections"
* "NGINX - Client Errors"
* "NGINX - Client Requests"
* "NGINX - Active Client State"
## nsq
**Required Telegraf plugin:** [NSQ input plugin](/{{< latest "telegraf" >}}/plugins/#nsq)
`nsq_channel.json`
* "NSQ - Channel Client Count"
* "NSQ - Channel Messages Count"
`nsq_server.json`
* "NSQ - Topic Count"
* "NSQ - Server Count"
`nsq_topic.json`
* "NSQ - Topic Messages"
* "NSQ - Topic Messages on Disk"
* "NSQ - Topic Ingress"
* "NSQ topic egress"
## phpfpm
**Required Telegraf plugin:** [PHPfpm input plugin](/{{< latest "telegraf" >}}/plugins/#php-fpm)
`phpfpm.json`
* "phpfpm - Accepted Connections"
* "phpfpm - Processes"
* "phpfpm - Slow Requests"
* "phpfpm - Max Children Reached"
## ping
**Required Telegraf plugin:** [Ping input plugin](/{{< latest "telegraf" >}}/plugins/#ping)
`ping.json`
* "Ping - Packet Loss Percent"
* "Ping - Response Times (ms)"
## postgresql
**Required Telegraf plugin:** [PostgreSQL input plugin](/{{< latest "telegraf" >}}/plugins/#postgresql)
`postgresql.json`
* "PostgreSQL - Rows"
* "PostgreSQL - QPS"
* "PostgreSQL - Buffers"
* "PostgreSQL - Conflicts/Deadlocks"
## rabbitmq
**Required Telegraf plugin:** [RabbitMQ input plugin](/{{< latest "telegraf" >}}/plugins/#rabbitmq)
`rabbitmq.json`
* "RabbitMQ - Overview"
* "RabbitMQ - Published/Delivered per second"
* "RabbitMQ - Acked/Unacked per second"
## redis
**Required Telegraf plugin:** [Redis input plugin](/{{< latest "telegraf" >}}/plugins/#redis)
`redis.json`
* "Redis - Connected Clients"
* "Redis - Blocked Clients"
* "Redis - CPU"
* "Redis - Memory"
## riak
**Required Telegraf plugin:** [Riak input plugin](/{{< latest "telegraf" >}}/plugins/#riak)
`riak.json`
* "Riak - Toal Memory Bytes"
* "Riak - Object Byte Size"
* "Riak - Number of Siblings/Minute"
* "Riak - Latency (ms)"
* "Riak - Reads and Writes/Minute"
* "Riak - Active Connections"
* "Riak - Read Repairs/Minute"
## system
The `system` application includes metrics that require all of the listed plugins. If any of the following plugins aren't enabled, the metrics associated with the plugins will not display data.
### cpu
**Required Telegraf plugin:** [CPU input plugin](/{{< latest "telegraf" >}}/plugins/#cpu)
`cpu.json`
* "CPU Usage"
### disk
**Required Telegraf plugin:** [Disk input plugin](/{{< latest "telegraf" >}}/plugins/#disk)
`disk.json`
* "System - Disk used %"
### diskio
**Required Telegraf plugin:** [DiskIO input plugin](/{{< latest "telegraf" >}}/plugins/#diskio)
`diskio.json`
* "System - Disk MB/s"
### mem
**Required Telegraf plugin:** [Mem input plugin](/{{< latest "telegraf" >}}/plugins/#mem)
`mem.json`
* "System - Memory Gigabytes Used"
### net
**Required Telegraf plugin:** [Net input plugin](/{{< latest "telegraf" >}}/plugins/#net)
`net.json`
* "System - Network Mb/s"
* "System - Network Error Rate"
### netstat
**Required Telegraf plugin:** [Netstat input plugin](/{{< latest "telegraf" >}}/plugins/#netstat)
`netstat.json`
* "System - Open Sockets"
* "System - Sockets Created/Second"
### processes
**Required Telegraf plugin:** [Processes input plugin](/{{< latest "telegraf" >}}/plugins/#processes)
`processes.json`
* "System - Total Processes"
### procstat
**Required Telegraf plugin:** [Procstat input plugin](/{{< latest "telegraf" >}}/plugins/#procstat)
`procstat.json`
* "Processes - Resident Memory (MB)"
* "Processes CPU Usage %"
### system
**Required Telegraf plugin:** [System input plugin](/{{< latest "telegraf" >}}/plugins/#system)
`load.json`
* "System Load"
## varnish
**Required Telegraf plugin:** [Varnish](/{{< latest "telegraf" >}}/plugins/#varnish)
`varnish.json`
* "Varnish - Cache Hits/Misses"
## win_system
**Required Telegraf plugin:** [Windows Performance Counters input plugin](/{{< latest "telegraf" >}}/plugins/#windows-performance-counters)
`win_cpu.json`
* "System - CPU Usage"
`win_mem.json`
* "System - Available Bytes"
`win_net.json`
* "System - TX Bytes/Second"
* "RX Bytes/Second"
`win_system.json`
* "System - Load"
@@ -0,0 +1,278 @@
---
title: Visualization types in Chronograf
description: >
  Chronograf provides multiple visualization types to visualize your data in a format that makes the most sense for your use case.
menu:
chronograf_1_9:
name: Visualization types
weight: 40
parent: Guides
---
Chronograf's dashboard views support the following visualization types, which can be selected in the **Visualization Type** selection view of the [Data Explorer](/chronograf/v1.9/guides/querying-data).
![Visualization Type selector](/img/chronograf/1-6-viz-types-selector.png)
Each of the available visualization types and available user controls are described below.
* [Line Graph](#line-graph)
* [Stacked Graph](#stacked-graph)
* [Step-Plot Graph](#step-plot-graph)
* [Single Stat](#single-stat)
* [Line Graph + Single Stat](#line-graph-single-stat)
* [Bar Graph](#bar-graph)
* [Gauge](#gauge)
* [Table](#table)
* [Note](#note)
For information on adding and displaying annotations in graph views, see [Adding annotations to Chronograf views](/chronograf/v1.9/guides/annotations/).
### Line Graph
The **Line Graph** view displays a time series in a line graph.
![Line Graph selector](/img/chronograf/1-6-viz-line-graph-selector.png)
#### Line Graph Controls
![Line Graph Controls](/img/chronograf/1-6-viz-line-graph-controls.png)
Use the **Line Graph Controls** to specify the following:
* **Title**: y-axis title. Enter title, if using a custom title.
- **auto**: Enable or disable auto-setting.
* **Min**: Minimum y-axis value.
- **auto**: Enable or disable auto-setting.
* **Max**: Maximum y-axis value.
- **auto**: Enable or disable auto-setting.
* **Y-Value's Prefix**: Prefix to be added to y-value.
* **Y-Value's Suffix**: Suffix to be added to y-value.
* **Y-Value's Format**: Toggle between **K/M/B** (Thousand/Million/Billion) and **K/M/G** (Kilo/Mega/Giga).
* **Scale**: Toggle between **Linear** and **Logarithmic**.
* **Static Legend**: Toggle between **Show** and **Hide**.
#### Line Graph example
![Line Graph example](/img/chronograf/1-6-viz-line-graph-example.png)
### Stacked Graph
The **Stacked Graph** view displays multiple time series bars as segments stacked on top of each other.
![Stacked Graph selector](/img/chronograf/1-6-viz-stacked-graph-selector.png)
#### Stacked Graph Controls
![Stacked Graph Controls](/img/chronograf/1-6-viz-stacked-graph-controls.png)
Use the **Stacked Graph Controls** to specify the following:
* **Title**: y-axis title. Enter title, if using a custom title.
- **auto**: Enable or disable auto-setting.
* **Min**: Minimum y-axis value.
- **auto**: Enable or disable auto-setting.
* **Max**: Maximum y-axis value.
- **auto**: Enable or disable auto-setting.
* **Y-Value's Prefix**: Prefix to be added to y-value.
* **Y-Value's Suffix**: Suffix to be added to y-value.
* **Y-Value's Format**: Toggle between **K/M/B** (Thousand/Million/Billion) and **K/M/G** (Kilo/Mega/Giga).
* **Scale**: Toggle between **Linear** and **Logarithmic**.
* **Static Legend**: Toggle between **Show** and **Hide**.
#### Stacked Graph example
![Stacked Graph example](/img/chronograf/1-6-viz-stacked-graph-example.png)
### Step-Plot Graph
The **Step-Plot Graph** view displays a time series in a staircase graph.
![Step-Plot Graph selector](/img/chronograf/1-6-viz-step-plot-graph-selector.png)
#### Step-Plot Graph Controls
![Step-Plot Graph Controls](/img/chronograf/1-6-viz-step-plot-graph-controls.png)
Use the **Step-Plot Graph Controls** to specify the following:
* **Title**: y-axis title. Enter title, if using a custom title.
- **auto**: Enable or disable auto-setting.
* **Min**: Minimum y-axis value.
- **auto**: Enable or disable auto-setting.
* **Max**: Maximum y-axis value.
- **auto**: Enable or disable auto-setting.
* **Y-Value's Prefix**: Prefix to be added to y-value.
* **Y-Value's Suffix**: Suffix to be added to y-value.
* **Y-Value's Format**: Toggle between **K/M/B** (Thousand/Million/Billion) and **K/M/G** (Kilo/Mega/Giga).
* **Scale**: Toggle between **Linear** and **Logarithmic**.
#### Step-Plot Graph example
![Step-Plot Graph example](/img/chronograf/1-6-viz-step-plot-graph-example.png)
### Bar Graph
The **Bar Graph** view displays the specified time series using a bar chart.
To select this view, click the Bar Graph selector icon.
![Bar Graph selector](/img/chronograf/1-6-viz-bar-graph-selector.png)
#### Bar Graph Controls
![Bar Graph Controls](/img/chronograf/1-6-viz-bar-graph-controls.png)
Use the **Bar Graph Controls** to specify the following:
* **Title**: y-axis title. Enter title, if using a custom title.
- **auto**: Enable or disable auto-setting.
* **Min**: Minimum y-axis value.
- **auto**: Enable or disable auto-setting.
* **Max**: Maximum y-axis value.
- **auto**: Enable or disable auto-setting.
* **Y-Value's Prefix**: Prefix to be added to y-value.
* **Y-Value's Suffix**: Suffix to be added to y-value.
* **Y-Value's Format**: Toggle between **K/M/B** (Thousand/Million/Billion) and **K/M/G** (Kilo/Mega/Giga).
* **Scale**: Toggle between **Linear** and **Logarithmic**.
#### Bar Graph example
![Bar Graph example](/img/chronograf/1-6-viz-bar-graph-example.png)
### Line Graph + Single Stat
The **Line Graph + Single Stat** view displays the specified time series in a line graph and overlays the single most recent value as a large numeric value.
To select this view, click the **Line Graph + Single Stat** view option.
![Line Graph + Single Stat selector](/img/chronograf/1-6-viz-line-graph-single-stat-selector.png)
#### Line Graph + Single Stat Controls
![Line Graph + Single Stat Controls](/img/chronograf/1-6-viz-line-graph-single-stat-controls.png)
Use the **Line Graph + Single Stat Controls** to specify the following:
* **Title**: y-axis title. Enter title, if using a custom title.
- **auto**: Enable or disable auto-setting.
* **Min**: Minimum y-axis value.
- **auto**: Enable or disable auto-setting.
* **Max**: Maximum y-axis value.
- **auto**: Enable or disable auto-setting.
* **Y-Value's Prefix**: Prefix to be added to y-value.
* **Y-Value's Suffix**: Suffix to be added to y-value.
* **Y-Value's Format**: Toggle between **K/M/B** (Thousand/Million/Billion) and **K/M/G** (Kilo/Mega/Giga).
* **Scale**: Toggle between **Linear** and **Logarithmic**.
#### Line Graph + Single Stat example
![Line Graph + Single Stat example](/img/chronograf/1-6-viz-line-graph-single-stat-example.png)
### Single Stat
The **Single Stat** view displays the most recent value of the specified time series as a numerical value.
![Single Stat view](/img/chronograf/1-6-viz-single-stat-selector.png)
If a cell's query includes a [`GROUP BY` tag](/{{< latest "influxdb" "v1" >}}/query_language/explore-data/#group-by-tags) clause, Chronograf sorts the different [series](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#series) lexicographically and shows the most recent [field value](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#field-value) associated with the first series.
For example, if a query groups by the `name` [tag key](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#tag-key) and `name` has two [tag values](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#tag-value) (`chronelda` and `chronz`), Chronograf shows the most recent field value associated with the `chronelda` series.
If a cell's query includes more than one [field key](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#field-key) in the [`SELECT` clause](/{{< latest "influxdb" "v1" >}}/query_language/explore-data/#select-clause), Chronograf returns the most recent field value associated with the first field key in the `SELECT` clause.
For example, if a query's `SELECT` clause is `SELECT "chronogiraffe","chronelda"`, Chronograf shows the most recent field value associated with the `chronogiraffe` field key.
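For instance, a hypothetical cell query that exhibits both behaviors described above (the database and measurement names are illustrative):

```sql
SELECT "chronogiraffe","chronelda"
FROM "mydb"."autogen"."example-measurement"
GROUP BY "name"
```

Here the cell displays the most recent `chronogiraffe` field value from the lexicographically first `name` series.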
#### Single Stat Controls
Use the **Single Stat Controls** panel to specify one or more thresholds:
* **Add Threshold**: Button to add a new threshold.
* **Base Color**: Select a base, or background, color from the selection list.
* Color options: Ruby, Fire, Curacao, Tiger, Pineapple, Thunder, Honeydew, Rainforest, Viridian, Ocean, Pool, Laser (default), Planet, Star, Comet, Pepper, Graphite, White, and Castle.
* **Prefix**: Prefix added to the displayed value. For example, `%`, `MPH`, etc.
* **Suffix**: Suffix added to the displayed value. For example, `%`, `MPH`, etc.
* **Threshold Coloring**: Toggle between **Background** and **Text**.
### Gauge
The **Gauge** view displays the most recent value of a time series in a gauge.
To select this view, click the Gauge selector icon.
![Gauge selector](/img/chronograf/1-6-viz-gauge-selector.png)
#### Gauge Controls
![Gauge Controls](/img/chronograf/1-6-viz-gauge-controls.png)
Use the **Gauge Controls** to specify the following:
* **Add Threshold**: Click the button to add a threshold.
* **Min**: Minimum value for the threshold.
- Select color to display. Selection list options include: Laser (default), Ruby, Fire, Curacao, Tiger, Pineapple, Thunder, and Honeydew.
* **Max**: Maximum value for the threshold.
- Select color to display. Selection list options include: Laser (default), Ruby, Fire, Curacao, Tiger, Pineapple, Thunder, and Honeydew.
* **Prefix**: Prefix added to the displayed value. For example, `%`, `MPH`, etc.
* **Suffix**: Suffix added to the displayed value. For example, `%`, `MPH`, etc.
#### Gauge example
![Gauge example](/img/chronograf/1-6-viz-gauge-example.png)
### Table
The **Table** panel displays the results of queries in a tabular view, which is sometimes easier to analyze than graph views of data.
![Table selector](/img/chronograf/1-6-viz-table-selector.png)
#### Table Controls
![Table Controls](/img/chronograf/1-6-viz-table-controls.png)
Use the **Table Controls** to specify the following:
- **Default Sort Field**: Select the default sort field. Default is **time**.
- **Decimal Places**: Enter the number of decimal places. Default (empty field) is **unlimited**.
- **Time Axis**: Select **Vertical** or **Horizontal**.
- **Time Format**: Select the time format.
- Options include: `MM/DD/YYYY HH:mm:ss` (default), `MM/DD/YYYY HH:mm:ss.SSS`, `YYYY-MM-DD HH:mm:ss`, `HH:mm:ss`, `HH:mm:ss.SSS`, `MMMM D, YYYY HH:mm:ss`, `dddd, MMMM D, YYYY HH:mm:ss`, or `Custom`.
- **Lock First Column**: Lock the first column so that the listings are always visible. Threshold settings do not apply in the first column when locked.
- **Customize Field**:
- **time**: Enter a new name to rename the column.
- [additional]: Enter a name for each additional column.
- Change the order of the columns by dragging them to the desired position.
- **Thresholds**
{{% note %}}
**Note:** Threshold settings apply to any cells with values, except when they appear in the first column and **Lock First Column** is enabled.
{{% /note %}}
- **Add Threshold** (button): Click to add a threshold.
- **Base Color**: Select a base, or background, color from the selection list.
- Color options: Ruby, Fire, Curacao, Tiger, Pineapple, Thunder, Honeydew, Rainforest, Viridian, Ocean, Pool, Laser (default), Planet, Star, Comet, Pepper, Graphite, White, and Castle.
#### Table view example
![Table example](/img/chronograf/1-6-viz-table-example.png)
### Note
The **Note** panel displays Markdown-formatted text with your graph.
![Note selector](/img/chronograf/1-7-viz-note-selector.png)
#### Note Controls
![Note Controls](/img/chronograf/1-7-viz-note-controls.png)
Enter your text in the **Add a Note** panel, using Markdown to format the text.
Enable the **Display note in cell when query returns no results** option to display the note text in the cell instead of `No Results`.
#### Note view example
![Note example](/img/chronograf/1-7-viz-note-example.png)

View File

@ -0,0 +1,102 @@
---
title: Write data to InfluxDB
description:
Use Chronograf to write data to InfluxDB. Upload line protocol into the UI, use the
InfluxQL `INTO` clause, or use the Flux `to()` function to write data back to InfluxDB.
menu:
chronograf_1_9:
name: Write data to InfluxDB
parent: Guides
weight: 140
---
Use Chronograf to write data to InfluxDB.
Choose from the following methods:
- [Upload line protocol through the Chronograf UI](#upload-line-protocol-through-the-chronograf-ui)
- [Use the InfluxQL `INTO` clause in a query](#use-the-influxql-into-clause-in-a-query)
- [Use the Flux `to()` function in a query](#use-the-flux-to-function-in-a-query)
## Upload line protocol through the Chronograf UI
1. Select **{{< icon "data-explorer" >}} Explore** in the left navigation bar.
2. Click **Write Data** in the top right corner of the Data Explorer.
{{< img-hd src="/img/chronograf/1-9-write-data.png" alt="Write data to InfluxDB with Chronograf" />}}
3. Select the **database** _(if an InfluxQL data source is selected)_ or
**database and retention policy** _(if a Flux data source is selected)_ to write to.
{{< img-hd src="/img/chronograf/1-9-write-db-rp.png" alt="Select database and retention policy to write to" />}}
4. Select one of the following methods for uploading [line protocol](/{{< latest "influxdb" "v1" >}}/write_protocols/line_protocol_tutorial/):
- **Upload File**: Upload a file containing line protocol to write to InfluxDB.
Either drag and drop a file into the file uploader or click to use your
operating system's file selector and choose a file to upload.
- **Manual Entry**: Manually enter line protocol to write to InfluxDB.
5. Select the timestamp precision of your line protocol.
Chronograf supports the following units:
- `s` (seconds)
- `ms` (milliseconds)
- `u` (microseconds)
- `ns` (nanoseconds)
{{< img-hd src="/img/chronograf/1-9-write-precision.png" alt="Select write precision in Chronograf" />}}
6. Click **Write**.
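For reference, the following is a minimal line protocol sample (the measurement, tag, and field names are hypothetical) that could be pasted into the **Manual Entry** field with `s` (seconds) precision selected:

```
home,room=kitchen temp=22.3,hum=41.2 1626354600
home,room=bedroom temp=21.8,hum=39.5 1626354660
```

Each line follows the `measurement,tag_set field_set timestamp` structure described in the line protocol tutorial linked above.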
## Use the InfluxQL `INTO` clause in a query
To write data back to InfluxDB with an InfluxQL query, include the
[`INTO` clause](/{{< latest "influxdb" "v1" >}}/query_language/explore-data/#the-into-clause)
in your query:
1. Select **{{< icon "data-explorer" >}} Explore** in the left navigation bar.
2. Select **InfluxQL** as your data source type.
3. Write an InfluxQL query that includes the `INTO` clause. Specify the database,
retention policy, and measurement to write to. For example:
```sql
SELECT *
INTO "mydb"."autogen"."example-measurement"
FROM "example-db"."example-rp"."example-measurement"
GROUP BY *
```
4. Click **Submit Query**.
{{% note %}}
#### Use InfluxQL to write to InfluxDB 2.x or InfluxDB Cloud
To use InfluxQL to write to an **InfluxDB 2.x** or **InfluxDB Cloud** instance,
[configure database and retention policy mappings](/{{< latest "influxdb" >}}/upgrade/v1-to-v2/manual-upgrade/#create-dbrp-mappings)
and ensure the current [InfluxDB connection](/chronograf/v1.9/administration/creating-connections/#manage-influxdb-connections-using-the-chronograf-ui)
includes the appropriate connection credentials.
{{% /note %}}
## Use the Flux `to()` function in a query
To write data back to InfluxDB with a Flux query, include the `to()` function
in your query:
1. Select **{{< icon "data-explorer" >}} Explore** in the left navigation bar.
2. Select **Flux** as your data source type.
{{% note %}}
To query InfluxDB with Flux, [enable Flux](/{{< latest "influxdb" "v1" >}}/flux/installation/)
in your InfluxDB configuration.
{{% /note %}}
3. Write a Flux query that includes the `to()` function.
Provide the database and retention policy to write to.
Use the `db-name/rp-name` syntax:
```js
from(bucket: "example-db/example-rp")
|> range(start: -30d)
|> filter(fn: (r) => r._measurement == "example-measurement")
|> to(bucket: "mydb/autogen")
```
4. Click **Run Script**.

View File

@ -0,0 +1,14 @@
---
title: Introduction to Chronograf
description: >
An introduction to Chronograf, the user interface and data visualization component for the InfluxData Platform. Includes documentation on getting started, installation, and downloading.
menu:
chronograf_1_9:
name: Introduction
weight: 20
---
Follow the links below to get acquainted with Chronograf:
{{< children >}}

View File

@ -0,0 +1,10 @@
---
title: Download Chronograf
menu:
chronograf_1_9:
name: Download
weight: 10
parent: Introduction
---
Download the latest Chronograf release at the [InfluxData download page](https://portal.influxdata.com/downloads).

View File

@ -0,0 +1,29 @@
---
title: Get started with Chronograf
description: >
Overview of data visualization, alerting, and infrastructure monitoring features available in Chronograf.
aliases:
- /chronograf/v1.9/introduction/getting_started/
menu:
chronograf_1_9:
name: Get started
weight: 30
parent: Introduction
---
## Overview
Chronograf allows you to quickly see data you have stored in InfluxDB so you can build robust queries and alerts. After your administrator has set up Chronograf as described in [Installing Chronograf](/chronograf/v1.9/introduction/installation), get started with key features using the guides below.
### Data visualization
* Investigate your data by building queries using the [Data Explorer](/chronograf/v1.9/guides/querying-data/).
* Use [pre-created dashboards](/chronograf/v1.9/guides/using-precreated-dashboards/) to monitor your application data or [create your own dashboards](/chronograf/v1.9/guides/create-a-dashboard/).
* Customize dashboards using [template variables](/chronograf/v1.9/guides/dashboard-template-variables/).
### Alerting
* [Create alert rules](/chronograf/v1.9/guides/create-alert-rules/) to generate threshold, relative, and deadman alerts on your data.
* [View all active alerts](/chronograf/v1.9/guides/create-alert-rules/#step-2-view-the-alerts) on an alert dashboard.
* Use [alert endpoints](/chronograf/v1.9/guides/configuring-alert-endpoints/) in Chronograf to send alert messages to specific URLs and applications.
### Infrastructure monitoring
* [View all hosts](/chronograf/v1.9/guides/monitoring-influxenterprise-clusters/#step-4-explore-the-monitoring-data-in-chronograf) and their statuses in your infrastructure.
* [Use pre-created dashboards](/chronograf/v1.9/guides/using-precreated-dashboards/) to monitor your applications.

View File

@ -0,0 +1,96 @@
---
title: Install Chronograf
description: Download and install Chronograf.
menu:
chronograf_1_9:
name: Install
weight: 20
parent: Introduction
---
This page describes how to download and install Chronograf.
### Content
* [TICK overview](#tick-overview)
* [Download and install](#download-and-install)
* [Connect to your InfluxDB instance or InfluxDB Enterprise cluster](#connect-chronograf-to-your-influxdb-instance-or-influxdb-enterprise-cluster)
* [Connect to Kapacitor](#connect-chronograf-to-kapacitor)
## TICK overview
Chronograf is the user interface for InfluxData's [TICK stack](https://www.influxdata.com/time-series-platform/).
## Download and install
The latest Chronograf builds are available on InfluxData's [Downloads page](https://portal.influxdata.com/downloads).
1. Choose the download link for your operating system.
{{% note %}}
If your download includes a TAR package, save the underlying datastore `chronograf-v1.db` in a directory outside of where you start Chronograf. This preserves and references your existing datastore, including configurations and dashboards, when you download future versions.
{{% /note %}}
2. Install Chronograf, replacing `<version#>` with the appropriate version:
{{% tabs-wrapper %}}
{{% tabs %}}
[macOS](#)
[Ubuntu & Debian](#)
[RedHat & CentOS](#)
{{% /tabs %}}
{{% tab-content %}}
```sh
tar zxvf chronograf-<version#>_darwin_amd64.tar.gz
```
{{% /tab-content %}}
{{% tab-content %}}
```sh
sudo dpkg -i chronograf_<version#>_amd64.deb
```
{{% /tab-content %}}
{{% tab-content %}}
```sh
sudo yum localinstall chronograf-<version#>.x86_64.rpm
```
{{% /tab-content %}}
{{% /tabs-wrapper %}}
3. Start Chronograf:
```sh
chronograf
```
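If you installed from a TAR package and keep your `chronograf-v1.db` datastore outside the startup directory (see the note above), you can point Chronograf at it explicitly. A hypothetical example using the `--bolt-path` flag documented under **Tools > chronograf CLI** (the path is illustrative):

```sh
# Start Chronograf with an explicit path to an existing BoltDB datastore.
chronograf --bolt-path=/path/to/chronograf-v1.db
```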
## Connect Chronograf to your InfluxDB instance or InfluxDB Enterprise cluster
1. In a browser, navigate to [localhost:8888](http://localhost:8888).
2. Provide the following details:
- **Connection String**: InfluxDB hostname or IP, and port (default port is `8086`).
- **Connection Name**: Name to identify this connection.
- **Username** and **Password**: If you've enabled
[InfluxDB authentication](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization),
provide your InfluxDB username and password. Otherwise, leave blank.
{{% note %}}
To ensure distinct permissions can be applied, Chronograf user accounts and
credentials should be different than InfluxDB credentials.
For example, you may want to set up Chronograf to run as a service account
with read-only permissions to InfluxDB. For more information, see how to
[manage InfluxDB users in Chronograf](/chronograf/v1.9/administration/managing-influxdb-users/)
and [manage Chronograf users](/chronograf/v1.9/administration/managing-chronograf-users/).
{{% /note %}}
- **Telegraf Database Name**: _(Optional)_ Telegraf database name.
Default name is `telegraf`.
3. Click **Add Source**.
## Connect Chronograf to Kapacitor
1. In Chronograf, click the configuration (wrench) icon in the sidebar menu, then select **Add Config** in the **Active Kapacitor** column.
2. In the **Kapacitor URL** field, enter the hostname or IP of the machine that Kapacitor is running on. Be sure to include Kapacitor's default port: `9092`.
3. Enter a name for your connection.
4. Leave the **Username** and **Password** fields blank unless you've specifically enabled authorization in Kapacitor.
5. Click **Connect**.

View File

@ -0,0 +1,13 @@
---
title: Chronograf Tools
description: >
Chronograf provides command line tools designed to aid in managing and working with Chronograf from the command line.
menu:
chronograf_1_9:
name: Tools
weight: 40
---
Chronograf provides command line tools designed to aid in managing and working with Chronograf from the command line. The following command line interfaces (CLIs) are available:
{{< children hlevel="h2" >}}

View File

@ -0,0 +1,26 @@
---
title: chronoctl
description: >
The `chronoctl` command line interface (CLI) includes commands to interact with an instance of Chronograf's data store.
menu:
chronograf_1_9:
name: chronoctl
parent: Tools
weight: 10
---
The `chronoctl` command line interface (CLI) includes commands to interact with an instance of Chronograf's data store.
## Usage
```
chronoctl [command]
chronoctl [flags]
```
## Commands
| Command | Description |
|:------- |:----------- |
| [add-superadmin](/chronograf/v1.9/tools/chronoctl/add-superadmin/) | Create a new user with superadmin status |
| [list-users](/chronograf/v1.9/tools/chronoctl/list-users) | List all users in the Chronograf data store |
| [migrate](/chronograf/v1.9/tools/chronoctl/migrate) | Migrate your Chronograf configuration store |

View File

@ -0,0 +1,28 @@
---
title: chronoctl add-superadmin
description: >
The `add-superadmin` command creates a new user with superadmin status.
menu:
chronograf_1_9:
name: chronoctl add-superadmin
parent: chronoctl
weight: 20
---
The `add-superadmin` command creates a new user with superadmin status.
## Usage
```
chronoctl add-superadmin [flags]
```
## Flags
| Flag | | Description | Input type |
|:---- |:----------------- | :---------------------------------------------------------------------------------------------------- | :--------: |
| `-b` | `--bolt-path` | Full path to the BoltDB file (e.g. `./chronograf-v1.db`). Default: `chronograf-v1.db`. Env. variable: `$BOLT_PATH`. | string |
| `-i` | `--id` | User ID for an existing user | uint64 |
| `-n` | `--name` | User's name. Must be an OAuth-able email address or username. | string |
| `-p` | `--provider` | Name of the Auth provider (e.g. Google, GitHub, auth0, or generic) | string |
| `-s` | `--scheme` | Authentication scheme that matches the auth provider (default: `oauth2`) | string |
| `-o` | `--orgs` | Comma-separated list of organizations to add the user to (default: `default`) | string |
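For example, a hypothetical invocation that creates a superadmin from an OAuth-able email address and adds the user to the default organization (all flag values are illustrative):

```sh
chronoctl add-superadmin \
  --bolt-path=/var/lib/chronograf/chronograf-v1.db \
  --name=user@example.com \
  --provider=google \
  --scheme=oauth2 \
  --orgs=default
```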

View File

@ -0,0 +1,23 @@
---
title: chronoctl list-users
description: >
The `list-users` command lists all users in the Chronograf data store.
menu:
chronograf_1_9:
name: chronoctl list-users
parent: chronoctl
weight: 30
---
The `list-users` command lists all users in the Chronograf data store.
## Usage
```
chronoctl list-users [flags]
```
## Flags
| Flag | | Description | Input type |
| :---- |:----------- | :------------------------------------------------------------ | :--------: |
| `-b` | `--bolt-path` | Full path to the BoltDB file (e.g. `./chronograf-v1.db`). Default: `chronograf-v1.db`. Env. variable: `$BOLT_PATH`. | string |
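For example, assuming a hypothetical datastore location:

```sh
chronoctl list-users --bolt-path=/var/lib/chronograf/chronograf-v1.db
```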

View File

@ -0,0 +1,46 @@
---
title: chronoctl migrate
description: >
The `migrate` command allows you to migrate your Chronograf configuration store.
menu:
chronograf_1_9:
name: chronoctl migrate
parent: chronoctl
weight: 40
---
The `migrate` command lets you migrate your Chronograf configuration store.
By default, Chronograf is delivered with BoltDB as a data store. For information on migrating from BoltDB to an etcd cluster as a data store,
see [Migrating to a Chronograf HA configuration](/chronograf/v1.9/administration/migrate-to-high-availability).
## Usage
```
chronoctl migrate [flags]
```
## Flags
| Flag | | Description | Input type |
|:---- |:--- |:----------- |:----------: |
| `-f` | `--from` | Full path to the BoltDB file or etcd URL (e.g. `bolt:///path/to/chronograf-v1.db` or `etcd://user:pass@localhost:2379`). Default: `chronograf-v1.db`. | string |
| `-t` | `--to` | Full path to the BoltDB file or etcd URL (e.g. `bolt:///path/to/chronograf-v1.db` or `etcd://user:pass@localhost:2379`). Default: `etcd://localhost:2379`. | string |
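For example, a hypothetical migration from a local BoltDB file to a local etcd endpoint (the BoltDB path is illustrative):

```sh
chronoctl migrate \
  -f bolt:///var/lib/chronograf/chronograf-v1.db \
  -t etcd://localhost:2379
```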
#### Provide etcd authentication credentials
If authentication is enabled on `etcd`, use the standard URI basic
authentication format to define a username and password. For example:
```sh
etcd://username:password@localhost:2379
```
#### Provide etcd TLS credentials
If TLS is enabled on `etcd`, provide your TLS certificate credentials using
the following query parameters in your etcd URL:
- **cert**: Path to client certificate file or PEM file
- **key**: Path to client key file
- **ca**: Path to trusted CA certificates
```sh
etcd://127.0.0.1:2379?cert=/tmp/client.crt&key=/tmp/client.key&ca=/tmp/ca.crt
```

View File

@ -0,0 +1,145 @@
---
title: chronograf CLI
description: >
The `chronograf` command line interface (CLI) includes options to manage many aspects of Chronograf security.
menu:
chronograf_1_9:
name: chronograf CLI
parent: Tools
weight: 10
---
The `chronograf` command line interface (CLI) includes options to manage Chronograf security.
## Usage
```
chronograf [flags]
```
## Chronograf service flags
| Flag | Description | Env. Variable |
|:-----------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------|:---------------------|
| `--host` | IP the Chronograf service listens on. By default, `0.0.0.0` | `$HOST` |
| `--port` | Port the Chronograf service listens on for insecure connections. By default, `8888` | `$PORT` |
| `-b`,`--bolt-path` | File path to the BoltDB file. By default, `./chronograf-v1.db` | `$BOLT_PATH` |
| `-c`,`--canned-path` | File path to the directory of canned dashboard files. By default, `/usr/share/chronograf/canned` | `$CANNED_PATH` |
| `--resources-path` | Path to directory of canned dashboards, sources, Kapacitor connections, and organizations. By default, `/usr/share/chronograf/resources` | `$RESOURCES_PATH` |
| `-p`, `--basepath` | URL path prefix under which all Chronograf routes will be mounted. | `$BASE_PATH` |
| `--status-feed-url` | URL of JSON feed to display as a news feed on the client status page. By default, `https://www.influxdata.com/feed/json` | `$STATUS_FEED_URL` |
| `-v`, `--version` | Displays the version of the Chronograf service | |
| `-h`, `--host-page-disabled` | Disables the hosts page | `$HOST_PAGE_DISABLED`|
## InfluxDB connection flags
| Flag | Description | Env. Variable |
| :-------------------- | :-------------------------------------------------------------------------------------- | :------------------- |
| `--influxdb-url` | InfluxDB URL, including the protocol, IP address, and port | `$INFLUXDB_URL` |
| `--influxdb-username` | InfluxDB username | `$INFLUXDB_USERNAME` |
| `--influxdb-password` | InfluxDB password | `$INFLUXDB_PASSWORD` |
| `--influxdb-org` | InfluxDB 2.x or InfluxDB Cloud organization name | `$INFLUXDB_ORG` |
| `--influxdb-token` | InfluxDB 2.x or InfluxDB Cloud [authentication token](/influxdb/cloud/security/tokens/) | `$INFLUXDB_TOKEN` |
## Kapacitor connection flags
| Flag | Description | Env. Variable |
|:-----------------------|:-------------------------------------------------------------------------------|:----------------------|
| `--kapacitor-url` | Location of your Kapacitor instance, including `http://`, IP address, and port | `$KAPACITOR_URL` |
| `--kapacitor-username` | Username for your Kapacitor instance | `$KAPACITOR_USERNAME` |
| `--kapacitor-password` | Password for your Kapacitor instance | `$KAPACITOR_PASSWORD` |
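For example, a hypothetical startup command that combines the service, InfluxDB, and Kapacitor connection flags above (hostnames and credentials are illustrative):

```sh
chronograf \
  --host=0.0.0.0 \
  --port=8888 \
  --influxdb-url=http://localhost:8086 \
  --influxdb-username=chronograf \
  --influxdb-password=<password> \
  --kapacitor-url=http://localhost:9092
```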
## TLS (Transport Layer Security) flags
| Flag | Description | Env. Variable |
|:--------- |:------------------------------------------------------------ |:--------------------|
| `--cert` | File path to PEM-encoded public key certificate | `$TLS_CERTIFICATE` |
| `--key` | File path to private key associated with given certificate | `$TLS_PRIVATE_KEY` |
| `--tls-ciphers` | Comma-separated list of supported cipher suites. Use `help` to print available ciphers. | `$TLS_CIPHERS` |
| `--tls-min-version` | Minimum version of the TLS protocol that will be negotiated. (default: 1.2) | `$TLS_MIN_VERSION` |
| `--tls-max-version` | Maximum version of the TLS protocol that will be negotiated. | `$TLS_MAX_VERSION` |
## Other service option flags
| Flag | Description | Env. Variable |
| :--------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------- | :-------------------- |
| `--custom-auto-refresh` | Add custom auto-refresh options using a semicolon-separated list of `label=milliseconds` pairs | `$CUSTOM_AUTO_REFRESH` |
| `--custom-link` | Add a custom link to Chronograf user menu options using `<display_name>:<link_address>` syntax. For multiple custom links, include multiple flags. | |
| `-d`, `--develop` | Run the Chronograf service in developer mode | |
| `-h`, `--help` | Display command line help for Chronograf | |
| `-l`, `--log-level` | Set the logging level. Valid values include `info` (default), `debug`, and `error` | `$LOG_LEVEL` |
| `-r`, `--reporting-disabled` | Disable reporting of usage statistics. Usage statistics reported once every 24 hours include: `OS`, `arch`, `version`, `cluster_id`, and `uptime`. | `$REPORTING_DISABLED` |
## Authentication option flags
### General authentication flags
| Flag | Description | Env. Variable |
| :--------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :--------------- |
| `-t`, `--token-secret` | Secret for signing tokens | `$TOKEN_SECRET` |
| `--auth-duration` | Total duration, in hours, of cookie life for authentication. Default value is `720h`. | `$AUTH_DURATION` |
| `--public-url` | Public URL required to access Chronograf using a web browser. For example, if you access Chronograf using the default URL, the public URL value would be `http://localhost:8888`. Required for Google OAuth 2.0 authentication. Used for Auth0 and some generic OAuth 2.0 authentication providers. | `$PUBLIC_URL` |
| `--htpasswd` | Path to password file for use with HTTP basic authentication. See [NGINX documentation](https://docs.nginx.com/nginx/admin-guide/security-controls/configuring-http-basic-authentication/) for more on password files. | `$HTPASSWD` |
### GitHub-specific OAuth 2.0 authentication flags
| Flag | Description | Env. Variable |
| :----------------------------- | :------------------------------------------------------------------------------------------------------------------------------------- | :------------------ |
| `--github-url` | GitHub base URL. Default is `https://github.com`. {{< req "Required if using GitHub Enterprise" >}} | `$GH_URL` |
| `-i`, `--github-client-id` | GitHub client ID value for OAuth 2.0 support | `$GH_CLIENT_ID` |
| `-s`, `--github-client-secret` | GitHub client secret value for OAuth 2.0 support | `$GH_CLIENT_SECRET` |
| `-o`, `--github-organization` | Restricts authorization to users from specified GitHub organizations. To add more than one organization, add multiple flags. Optional. | `$GH_ORGS` |
### Google-specific OAuth 2.0 authentication flags
| Flag | Description | Env. Variable |
|:-------------------------|:--------------------------------------------------------------------------------|:------------------------|
| `--google-client-id` | Google client ID value for OAuth 2.0 support | `$GOOGLE_CLIENT_ID` |
| `--google-client-secret` | Google client secret value for OAuth 2.0 support | `$GOOGLE_CLIENT_SECRET` |
| `--google-domains` | Restricts authorization to users from specified Google email domain. To add more than one domain, add multiple flags. Optional. | `$GOOGLE_DOMAINS` |
### Auth0-specific OAuth 2.0 authentication flags
| Flag | Description | Env. Variable |
|:------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------|
| `--auth0-domain` | Subdomain of your Auth0 client. Available on the configuration page for your Auth0 client. | `$AUTH0_DOMAIN` |
| `--auth0-client-id` | Auth0 client ID value for OAuth 2.0 support | `$AUTH0_CLIENT_ID` |
| `--auth0-client-secret` | Auth0 client secret value for OAuth 2.0 support | `$AUTH0_CLIENT_SECRET` |
| `--auth0-organizations` | Restricts authorization to users in the specified Auth0 organization. To add more than one organization, add multiple flags. Optional. Organizations are set using an organization key in the user's `app_metadata`. | `$AUTH0_ORGS` |
### Heroku-specific OAuth 2.0 authentication flags
| Flag | Description | Env. Variable |
|:------------------------|:-----------------------------------------------------------------------------------------|:--------------------|
| `--heroku-client-id` | Heroku client ID value for OAuth 2.0 support | `$HEROKU_CLIENT_ID` |
| `--heroku-secret` | Heroku secret for OAuth 2.0 support | `$HEROKU_SECRET` |
| `--heroku-organization` | Restricts authorization to users from specified Heroku organization. To add more than one organization, add multiple flags. Optional. | `$HEROKU_ORGS` |
### Generic OAuth 2.0 authentication flags
| Flag | Description | Env. Variable |
| :------------------------ | :----------------------------------------------------------------------------- | :----------------------- |
| `--generic-name` | Generic OAuth 2.0 name presented on the login page | `$GENERIC_NAME` |
| `--generic-client-id` | Generic OAuth 2.0 client ID value. Can be used for a custom OAuth 2.0 service. | `$GENERIC_CLIENT_ID` |
| `--generic-client-secret` | Generic OAuth 2.0 client secret value | `$GENERIC_CLIENT_SECRET` |
| `--generic-scopes` | Scopes requested by provider of web client | `$GENERIC_SCOPES` |
| `--generic-domains` | Email domain required for user email addresses | `$GENERIC_DOMAINS` |
| `--generic-auth-url` | Authorization endpoint URL for the OAuth 2.0 provider | `$GENERIC_AUTH_URL` |
| `--generic-token-url` | Token endpoint URL for the OAuth 2.0 provider | `$GENERIC_TOKEN_URL` |
| `--generic-api-url` | URL that returns OpenID UserInfo-compatible information | `$GENERIC_API_URL` |
| `--oauth-no-pkce` | Disable OAuth PKCE | `$OAUTH_NO_PKCE` |
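For example, a hypothetical generic OAuth 2.0 configuration (the provider URLs, scopes, and credentials are placeholders, not a real provider):

```sh
chronograf \
  --generic-name="Example Provider" \
  --generic-client-id=<client-id> \
  --generic-client-secret=<client-secret> \
  --generic-scopes=user:email \
  --generic-auth-url=https://oauth.example.com/authorize \
  --generic-token-url=https://oauth.example.com/token \
  --generic-api-url=https://oauth.example.com/userinfo \
  --public-url=http://localhost:8888 \
  --token-secret=<random-secret>
```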
### etcd flags
| Flag | Description | Env. Variable |
| :----------------------- | :--------------------------------------------------------------------------------------------------------- | :---------------------- |
| `-e`, `--etcd-endpoints` | etcd endpoint URL (include multiple flags for multiple endpoints) | `$ETCD_ENDPOINTS` |
| `--etcd-username` | etcd username | `$ETCD_USERNAME` |
| `--etcd-password` | etcd password | `$ETCD_PASSWORD` |
| `--etcd-dial-timeout` | Total time to wait before timing out while connecting to etcd endpoints (0 means no timeout, default: -1s) | `$ETCD_DIAL_TIMEOUT` |
| `--etcd-request-timeout` | Total time to wait before timing out the etcd view or update (0 means no timeout, default: -1s) | `$ETCD_REQUEST_TIMEOUT` |
| `--etcd-cert` | Path to PEM encoded TLS public key certificate for use with TLS | `$ETCD_CERTIFICATE` |
| `--etcd-key` | Path to private key associated with given certificate for use with TLS | `$ETCD_PRIVATE_KEY` |
| `--etcd-root-ca` | Path to root CA certificate for TLS verification | `$ETCD_ROOT_CA` |

View File

@ -0,0 +1,12 @@
---
title: Troubleshoot Chronograf
description: Troubleshoot Chronograf.
menu:
chronograf_1_9:
name: Troubleshoot
weight: 50
---
Follow the link below to access Chronograf's FAQ.
{{< children hlevel="h2" >}}

View File

@ -0,0 +1,23 @@
---
title: Chronograf frequently asked questions (FAQs)
description: Common issues with Chronograf
menu:
chronograf_1_9:
name: Frequently asked questions (FAQs)
weight: 10
parent: Troubleshoot
---
## How do I connect Chronograf to an InfluxDB Enterprise cluster?
The connection details form requires additional information when connecting Chronograf to an [InfluxDB Enterprise cluster](/{{< latest "enterprise_influxdb" >}}/).
When you enter the InfluxDB HTTP bind address in the `Connection String` input, Chronograf automatically checks if that InfluxDB instance is a data node.
If it is a data node, Chronograf automatically adds the `Meta Service Connection URL` input to the connection details form.
Enter the HTTP bind address of one of your cluster's meta nodes into that input and Chronograf takes care of the rest.
![Cluster connection details](/img/chronograf/1-6-faq-cluster-connection.png)
Note that the example above assumes that you do not have authentication enabled.
If you have authentication enabled, the form requires username and password information.
For more details about monitoring an InfluxDB Enterprise cluster, see the [Monitor an InfluxDB Enterprise Cluster](/chronograf/v1.9/guides/monitoring-influxenterprise-clusters/) guide.

View File

@ -18,8 +18,8 @@ The primary use cases for backup and restore are:
* Debugging
* Restoring clusters to a consistent state
InfluxDB Enterprise supports backing up and restoring data in a cluster, a single database, a single database and retention policy, and
single [shard](/influxdb/v1.5/concepts/glossary/#shard).
InfluxDB Enterprise supports backing up and restoring data in a cluster,
a single database and retention policy, and single [shard](/influxdb/v1.5/concepts/glossary/#shard).
> **Note:** You can use the [new `backup` and `restore` utilities in InfluxDB OSS 1.5](/influxdb/v1.5/administration/backup_and_restore/) to:
> * Restore InfluxDB Enterprise 1.5 backup files to InfluxDB OSS 1.5.

View File

@ -17,8 +17,8 @@ The primary use cases for backup and restore are:
* Debugging
* Restoring clusters to a consistent state
InfluxDB Enterprise supports backing up and restoring data in a cluster, a single database, a single database and retention policy, and
single [shard](/influxdb/v1.6/concepts/glossary/#shard).
InfluxDB Enterprise supports backing up and restoring data in a cluster,
a single database and retention policy, and single [shard](/influxdb/v1.6/concepts/glossary/#shard).
> **Note:** You can use the [new `backup` and `restore` utilities in InfluxDB OSS 1.5](/influxdb/v1.5/administration/backup_and_restore/) to:
> * Restore InfluxDB Enterprise 1.5 backup files to InfluxDB OSS 1.5.

View File

@ -12,7 +12,11 @@ menu:
## v1.8.6 [2021-05-21]
{{% warn %}}
**Fine-grained authorization security update.** If you're on InfluxDB Enterprise {{< latest-patch >}}, we recommend immediately upgrading to this release. An issue was reported in {{< latest-patch >}} where grants with specified permissions for users were not enforced. Versions prior to InfluxDB Enterprise {{< latest-patch >}} are not affected. This security update ensures that only users with sufficient permissions can read and write to a measurement.
**Fine-grained authorization security update.**
If using **InfluxDB Enterprise 1.8.5**, we strongly recommend upgrading to **InfluxDB Enterprise 1.8.6** immediately.
1.8.5 does not correctly enforce grants with specified permissions for users.
Versions prior to InfluxDB Enterprise 1.8.5 are not affected.
1.8.6 ensures that only users with sufficient permissions can read and write to a measurement.
{{% /warn %}}
### Features
@ -32,9 +36,9 @@ menu:
- Previously, the Anti-Entropy service would loop trying to copy an empty shard to a data node missing that shard. Now, an empty shard is successfully created on a new node.
- Check for previously ignored errors in `DiffIterator.Next()`. Update to check before possible function exit and ensure handles are closed on error in digest diffs.
## v{{< latest-patch >}} [2020-04-20]
## v1.8.5 [2021-04-20]
The InfluxDB Enterprise {{< latest-patch >}} release builds on the InfluxDB OSS {{< latest-patch >}} release.
The InfluxDB Enterprise v1.8.5 release builds on the InfluxDB OSS v1.8.5 release.
For details on changes incorporated from the InfluxDB OSS release, see
[InfluxDB OSS release notes](/influxdb/v1.8/about_the_project/releasenotes-changelog/#v185-2021-04-20).

View File

@ -34,9 +34,13 @@ Depending on the volume of data to be protected and your application requirement
## Backup and restore utilities
InfluxDB Enterprise supports backing up and restoring data in a cluster, a single database, a single database and retention policy, and single shards. Most InfluxDB Enterprise applications can use the backup and restore utilities.
InfluxDB Enterprise supports backing up and restoring data in a cluster,
a single database and retention policy, and single shards.
Most InfluxDB Enterprise applications can use the backup and restore utilities.
Use the `backup` and `restore` utilities to back up and restore between `influxd` instances with the same versions or with only minor version differences. For example, you can backup from 1.7.3 and restore on 1.8.6.
Use the `backup` and `restore` utilities to back up and restore between `influxd`
instances with the same versions or with only minor version differences.
For example, you can back up from 1.7.3 and restore on 1.8.6.
### Backup utility

View File

@ -0,0 +1,41 @@
---
title: InfluxDB Enterprise 1.9 documentation
description: >
Documentation for InfluxDB Enterprise, which adds clustering, high availability, fine-grained authorization, and more to InfluxDB OSS.
aliases:
- /enterprise/v1.9/
menu:
enterprise_influxdb_1_9:
name: InfluxDB Enterprise v1.9
weight: 1
---
InfluxDB Enterprise provides a time series database designed to handle high write and query loads, and offers highly scalable clusters on your infrastructure with a management UI. Use it for DevOps monitoring, IoT sensor data, and real-time analytics. Check out the key features that make InfluxDB Enterprise a great choice for working with time series data.
If you're interested in working with InfluxDB Enterprise, visit
[InfluxPortal](https://portal.influxdata.com/) to sign up, get a license key,
and get started!
## Key features
- High-performance datastore written specifically for time series data, with high ingest speed and data compression.
- Provides high availability across your cluster and eliminates a single point of failure.
- Written entirely in Go. Compiles into a single binary with no external dependencies.
- Simple, high-performing write and query HTTP APIs.
- Plugin support for other data ingestion protocols such as Graphite, collectd, and OpenTSDB.
- Expressive SQL-like query language tailored to easily query aggregated data.
- Continuous queries automatically compute aggregate data to make frequent queries more efficient.
- Tags let you index series for fast and efficient queries.
- Retention policies efficiently auto-expire stale data.
## Next steps
- [Install and deploy](/enterprise_influxdb/v1.9/install-and-deploy/)
- Review key [concepts](/enterprise_influxdb/v1.9/concepts/)
- [Get started](/enterprise_influxdb/v1.9/introduction/getting-started/)
<!-- Monitor your cluster
- Manage queries
- Manage users
- Explore and visualize your data
-->

View File

@ -0,0 +1,14 @@
---
title: About the project
description: >
Release notes, licenses, and third-party software details for InfluxDB Enterprise.
menu:
enterprise_influxdb_1_9_ref:
weight: 10
---
{{< children hlevel="h2" >}}
## Commercial license
InfluxDB Enterprise is available with a commercial license. [Contact sales for more information](https://www.influxdata.com/contact-sales/).

View File

@ -0,0 +1,971 @@
---
title: InfluxDB Enterprise 1.9 release notes
description: >
Important changes and what's new in each version of InfluxDB Enterprise.
menu:
enterprise_influxdb_1_9_ref:
name: Release notes
weight: 10
parent: About the project
---
## v1.9.2 [2021-06-17]
The release of InfluxDB Enterprise 1.9 is different from previous InfluxDB Enterprise releases
in that there is no corresponding InfluxDB OSS release.
(InfluxDB 1.8.x will continue to receive maintenance updates.)
### Features
- Upgrade to Go 1.15.10.
- Support user-defined *node labels*.
Node labels let you assign arbitrary key-value pairs to meta and data nodes in a cluster.
For instance, an operator might want to label nodes with the availability zone in which they're located.
- Improve performance of `SHOW SERIES CARDINALITY` and `SHOW SERIES CARDINALITY from <measurement>` InfluxQL queries.
These queries now return a `cardinality estimation` column header where before they returned `count`.
- Improve diagnostics for license problems.
Add [license expiration date](/enterprise_influxdb/v1.9/features/clustering-features/#entitlements) to `debug/vars` metrics.
- Add improved [ingress metrics](/enterprise_influxdb/v1.9/administration/config-data-nodes/#ingress-metric-by-measurement-enabled--false) to track points written by measurement and by login.
Allow for collection of statistics regarding points, values, and new series written per measurement and by login.
This data is collected and exposed at the data node level.
With these metrics you can, for example:
aggregate the write requests across the entire cluster,
monitor the growth of series within a measurement,
and track what user credentials are being used to write data.
- Support authentication for Kapacitor via LDAP.
- Support for [configuring Flux query resource usage](/enterprise_influxdb/v1.9/administration/config-data-nodes/#flux-controller) (concurrency, memory, etc.).
- Upgrade to [Flux v0.113.0](/influxdb/v2.0/reference/release-notes/flux/#v01130-2021-04-21).
- Update Prometheus remote protocol to allow streamed reading.
- Improve performance of sorted merge iterator.
- Add arguments to Flux `to` function.
- Add meancount aggregation for WindowAggregate pushdown.
- Optimize series iteration in TSI.
- Add `WITH KEY` to `SHOW TAG KEYS`.
### Bug fixes
- `show databases` now checks read and write permissions.
- Anti-entropy: Update `tsm1.BlockCount()` call to match signature.
- Remove extraneous nil check from points writer.
- Ensure a newline is printed after a successful copy during [restoration](/enterprise_influxdb/v1.9/administration/backup-and-restore/).
- Make `entropy show` expiration times consistent with `show-shards`.
- Properly shutdown multiple HTTP servers.
- Allow CORS in v2 compatibility endpoints.
- Address staticcheck warnings SA4006, ST1006, S1039, and S1020.
- Fix Anti-Entropy looping endlessly with empty shard.
- Disable MergeFiltersRule until it is more stable.
- Fix data race and validation in cache ring.
- Return error on nonexistent shard ID.
- Add `User-Agent` to allowed CORS headers.
- Fix variables masked by a declaration.
- Fix key collisions when serializing `/debug/vars`.
- Fix temporary directory search bug.
- Grow tag index buffer if needed.
- Use native type for summation in new meancount iterator.
- Fix consistent error for missing shard.
- Properly read payload in `snapshotter`.
- Fix help text for `influx_inspect`.
- Allow `PATCH` in CORS.
- Fix `GROUP BY` returning multiple results per group in some circumstances.
- Add option to authenticate Prometheus remote read.
- Fix FGA enablement.
- Fix "snapshot in progress" error during backup.
- Fix cursor requests (`[start, stop]` instead of `[start, stop)`).
- Exclude stop time from array cursors.
- Fix Flux regression in buckets query.
- Fix redundant registration for Prometheus collector metrics.
- Re-add Flux CLI.
- Use non-nil `context.Context` value in client.
### Other changes
- Remove `influx_stress` tool (deprecated since version 1.2).
Instead, use [`inch`](https://github.com/influxdata/inch)
or [`influx-stress`](https://github.com/influxdata/influx-stress) (not to be confused with `influx_stress`).
{{% note %}}
**Note:** InfluxDB Enterprise 1.9.0 and 1.9.1 were not released.
Bug fixes intended for 1.9.0 and 1.9.1 were rolled into InfluxDB Enterprise 1.9.2.
{{% /note %}}
## v1.8.6 [2021-05-21]
{{% warn %}}
**Fine-grained authorization security update.**
If using **InfluxDB Enterprise 1.8.5**, we strongly recommend upgrading to **InfluxDB Enterprise 1.8.6** immediately.
1.8.5 does not correctly enforce grants with specified permissions for users.
Versions prior to InfluxDB Enterprise 1.8.5 are not affected.
1.8.6 ensures that only users with sufficient permissions can read and write to a measurement.
{{% /warn %}}
### Features
- **Enhanced Anti-Entropy (AE) logging**: When the [debug logging level](/enterprise_influxdb/v1.8/administration/config-data-nodes/#logging-settings) is set (`level="debug"`) in the data node configuration, the Anti-Entropy service reports reasons a shard is not idle, including:
- active Cache compactions
- active Level (Zero, One, Two) compactions
- active Full compactions
- active TSM Optimization compactions
- cache size is nonzero
- shard is not fully compacted
- **Enhanced `copy-shard` logging**. Add information to log messages in `copy-shard` functions and additional error tests.
### Bug fixes
- Use the proper TLS configuration when a meta node makes a remote procedure call (RPC) to a data node. Addresses RPC call issues with the following `influxd-ctl` commands: `copy-shard`, `copy-shard-status`, `kill-copy-shard`, and `remove-shard`.
- Previously, the Anti-Entropy service would loop trying to copy an empty shard to a data node missing that shard. Now, an empty shard is successfully created on a new node.
- Check for previously ignored errors in `DiffIterator.Next()`. Update to check before possible function exit and ensure handles are closed on error in digest diffs.
## v1.8.5 [2021-04-20]
The InfluxDB Enterprise v1.8.5 release builds on the InfluxDB OSS v1.8.5 release.
For details on changes incorporated from the InfluxDB OSS release, see
[InfluxDB OSS release notes](/influxdb/v1.8/about_the_project/releasenotes-changelog/#v185-2021-04-20).
### Bug fixes
- Resolve TSM backup "snapshot in progress" error.
- `SHOW DATABASES` now only shows databases that the user has either read or write access to.
- `influxd-ctl entropy show` now shows shard expiry times consistent with `influxd-ctl show-shards`.
- Add labels to the values returned in SHOW SHARDS output to clarify the node ID and TCP address.
- Always forward repairs to the next data node (even if the current data node does not have to take action for the repair).
## v1.8.4 [2021-02-08]
The InfluxDB Enterprise 1.8.4 release builds on the InfluxDB OSS 1.8.4 release.
For details on changes incorporated from the InfluxDB OSS release, see
[InfluxDB OSS release notes](/influxdb/v1.8/about_the_project/releasenotes-changelog/#v1-8-4-unreleased).
> **Note:** InfluxDB Enterprise 1.8.3 was not released. Bug fixes intended for 1.8.3 were rolled into InfluxDB Enterprise 1.8.4.
### Features
#### Update your InfluxDB Enterprise license without restarting data nodes
Add the ability to [renew or update your license key or file](/enterprise_influxdb/v1.8/administration/renew-license/) without restarting data nodes.
### Bug fixes
- Wrap the TCP mux-based HTTP server with a function that adds custom headers.
- Correct output for `influxd-ctl show shards`.
- Properly encode/decode `control.Shard.Err`.
## v1.8.2 [2020-08-24]
The InfluxDB Enterprise 1.8.2 release builds on the InfluxDB OSS 1.8.2 and 1.8.1 releases.
Due to a defect in InfluxDB OSS 1.8.1, InfluxDB Enterprise 1.8.1 was not released.
This release resolves the defect and includes the features and bug fixes listed below.
For details on changes incorporated from the InfluxDB OSS release, see
[InfluxDB OSS release notes](/influxdb/v1.8/about_the_project/releasenotes-changelog/).
### Features
#### Hinted handoff improvements
- Allow out-of-order writes. This change adds a configuration option `allow-out-of-order-writes` to the `[cluster]` section of the data node configuration file. This setting defaults to `false` to match the existing behavior. There are some important operational considerations to review before turning this on, but enabling this option reduces the time required to drain the hinted handoff queue and increases throughput during recovery (see the configuration sketch after this list). See [allow-out-of-order-writes](/enterprise_influxdb/v1.8/administration/config-data-nodes#allow-out-of-order-false) for more detail.
- Make the number of pending writes configurable. This change adds a configuration option in the `[hinted-handoff]` section called `max-pending-writes`, which defaults to `1024`. See [max-pending-writes](/enterprise_influxdb/v1.8/administration/config-data-nodes#max-pending-writes-1024) for more detail.
- Update the hinted handoff queue to ensure entries are written to segment files atomically. Prior to this change, entries were written to disk in three separate writes (len, data, offset). If the process stopped in the middle of any of those writes, the hinted handoff segment file was left in an invalid state.
- In certain scenarios, the hinted-handoff queue would fail to drain. Upon node startup, the queue segment files are now verified and truncated if any are corrupted. Some additional logging has been added when a node starts writing to the hinted handoff queue as well.
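A minimal sketch of the relevant data node configuration sections, using the defaults documented above:

```toml
# Data node influxdb.conf (illustrative snippet)
[cluster]
  # Defaults to false; review the operational considerations before enabling.
  allow-out-of-order-writes = false

[hinted-handoff]
  # Maximum number of pending writes; defaults to 1024.
  max-pending-writes = 1024
```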
#### `influxd-ctl` CLI improvements
- Add a verbose flag to [`influxd-ctl show-shards`](/enterprise_influxdb/v1.8/administration/cluster-commands/#show-shards). This option provides more information about each shard owner, including the state (hot/cold), last modified date and time, and size on disk.
### Bug fixes
- Resolve a cluster read service issue that caused a panic. Previously, if no tags keys or values were read, the cluster read service returned a nil cursor. Now, an empty cursor is returned.
- LDAP configuration: `GroupSearchBaseDNs`, `SearchFilter`, `GroupMembershipSearchFilter`, and `GroupSearchFilter` values in the LDAP section of the configuration file are now all escaped.
- Eliminate orphaned, temporary directories when an error occurs during `processCreateShardSnapshotRequest()` and provide useful log information regarding the reason a temporary directory is created.
## v1.8 [2020-04-27]
The InfluxDB Enterprise 1.8 release builds on the InfluxDB OSS 1.8 release.
For details on changes incorporated from the InfluxDB OSS release, see
[InfluxDB OSS release notes](/influxdb/v1.8/about_the_project/releasenotes-changelog/).
### Features
#### **Back up meta data only**
- Add option to back up **meta data only** (users, roles, databases, continuous queries, and retention policies) using the new `-strategy` flag and `only-meta` option: `influxd-ctl backup -strategy only-meta </your-backup-directory>`.
> **Note:** To restore a meta data backup, use the `restore -full` command and specify your backup manifest: `influxd-ctl restore -full </backup-directory/backup.manifest>`.
For more information, see [Perform a metastore only backup](/enterprise_influxdb/v1.8/administration/backup-and-restore/#perform-a-metastore-only-backup).
#### **Incremental and full backups**
- Add `incremental` and `full` backup options to the new `-strategy` flag in `influxd-ctl backup`:
  - `influxd-ctl backup -strategy incremental`
  - `influxd-ctl backup -strategy full`
For more information, see the [`influxd-ctl backup` syntax](/enterprise_influxdb/v1.8/administration/backup-and-restore/#syntax).
### Bug fixes
- Update the Anti-Entropy (AE) service to ignore expired shards.
## v1.7.10 [2020-02-07]
The InfluxDB Enterprise 1.7.10 release builds on the InfluxDB OSS 1.7.10 release.
For details on changes incorporated from the InfluxDB OSS release, see
[InfluxDB OSS release notes](/influxdb/v1.7/about_the_project/releasenotes-changelog/).
### Features
- Log when meta state file cannot be opened.
### Bug fixes
- Update `MaxShardGroupID` on meta update.
- Don't reassign shard ownership when removing a data node.
## v1.7.9 [2019-10-27]
The InfluxDB Enterprise 1.7.9 release builds on the InfluxDB OSS 1.7.9 release.
For details on changes incorporated from the InfluxDB OSS release, see
[InfluxDB OSS release notes](/influxdb/v1.7/about_the_project/releasenotes-changelog/).
### Release notes
- This release is built using Go 1.12.10 which eliminates the
[HTTP desync vulnerability](https://portswigger.net/research/http-desync-attacks-request-smuggling-reborn).
### Bug fixes
- Move `tsdb store open` to beginning of server initialization.
- Enable Meta client and Raft to use verified TLS.
- Fix RPC pool TLS configuration.
- Update example configuration file with new authorization options.
## 1.7.8 [2019-09-03]
{{% warn %}}
InfluxDB now rejects all non-UTF-8 characters.
To successfully write data to InfluxDB, use only UTF-8 characters in
database names, measurement names, tag sets, and field sets.
InfluxDB Enterprise customers can contact InfluxData support for more information.
{{% /warn %}}
The InfluxDB Enterprise 1.7.8 release builds on the InfluxDB OSS 1.7.8 release.
For details on changes incorporated from the InfluxDB OSS release, see [InfluxDB OSS release notes](/influxdb/v1.7/about_the_project/releasenotes-changelog/).
### Bug fixes
- Clarified `influxd-ctl` error message when the Anti-Entropy (AE) service is disabled.
- Ensure invalid, non-UTF-8 data is removed from hinted handoff.
- Added error messages for `INFLUXDB_LOGGING_LEVEL` if misconfigured.
- Added logging when data nodes connect to meta service.
### Features
- The Flux Technical Preview has advanced to version [0.36.2](/flux/v0.36/).
## 1.7.7 [2019-07-12]
The InfluxDB Enterprise 1.7.7 release builds on the InfluxDB OSS 1.7.7 release. For details on changes incorporated from the InfluxDB OSS release, see [InfluxDB OSS release notes](/influxdb/v1.7/about_the_project/releasenotes-changelog/).
### Known issues
- The Flux Technical Preview was not advanced and remains at version 0.24.0. Next month's maintenance release will update the preview.
- After upgrading, customers have experienced excessively large output of additional lines due to a `Println` statement introduced in this release. For a possible workaround, see https://github.com/influxdata/influxdb/issues/14265#issuecomment-508875853. Next month's maintenance release will address this issue.
### Features
- Adds TLS to RPC calls. If verifying certificates, uses the TLS setting in the configuration passed in with `-config`.
### Bug fixes
- Ensure retry-rate-limit configuration value is used for hinted handoff.
- Always forward AE repair to next node.
- Improve hinted handoff metrics.
## 1.7.6 [2019-05-07]
This InfluxDB Enterprise release builds on the InfluxDB OSS 1.7.6 release. For details on changes incorporated from the InfluxDB OSS release, see [InfluxDB OSS release notes](/influxdb/v1.7/about_the_project/releasenotes-changelog/).
### Bug fixes
- Reverts v1.7.5 InfluxQL regressions that removed parentheses, with the resulting operator precedence changing the results of complex queries and regular expressions.
## 1.7.5 [2019-03-26]
{{% warn %}}
**If you are currently on this release, roll back to v1.7.4 until a fix is available.**
After upgrading to this release, some customers have experienced regressions,
including parentheses being removed resulting in operator precedence causing changing results
in complex queries and regular expressions.
Examples:
- Complex `WHERE` clauses with parentheses. For example, `WHERE d > 100 AND (c = 'foo' OR v = 'bar')`.
- Conditions not including parentheses, causing operator precedence to return `(a AND b) OR c` instead of `a AND (b OR c)`.
{{% /warn %}}
This InfluxDB Enterprise release builds on the InfluxDB OSS 1.7.5 release. For details on changes incorporated from the InfluxDB OSS release, see [InfluxDB OSS release notes](/influxdb/v1.7/about_the_project/releasenotes-changelog/).
### Features
- Add `influx_tools` utility (for internal support use) to be part of the packaging.
### Bug fixes
- Anti-Entropy: fix `contains no .tsm files` error.
- `fix(cluster)`: account for nil result set when writing read response.
## 1.7.4 [2019-02-13]
This InfluxDB Enterprise release builds on the InfluxDB OSS 1.7.4 release. For details on changes incorporated from the InfluxDB OSS release, see [InfluxDB OSS release notes](/influxdb/v1.7/about_the_project/releasenotes-changelog/).
### Bug fixes
- Use `systemd` for Amazon Linux 2.
## 1.7.3 [2019-01-11]
This InfluxDB Enterprise release builds on the InfluxDB OSS 1.7.3 release. For details on changes incorporated from the InfluxDB OSS release, see the [InfluxDB OSS release notes](/influxdb/v1.7/about_the_project/releasenotes-changelog/).
### Important update [2019-02-13]
If you have not installed this release, then install the 1.7.4 release.
**If you are currently running this release, then upgrade to the 1.7.4 release as soon as possible.**
- A critical defect in the InfluxDB 1.7.3 release was discovered and our engineering team fixed the issue in the 1.7.4 release. Out of high concern for your data and projects, upgrade to the 1.7.4 release as soon as possible.
- **Critical defect:** Shards larger than 16GB are at high risk for data loss during full compaction. The full compaction process runs when a shard goes "cold": no new data is being written into the database during the time range specified by the shard.
- **Post-mortem analysis:** InfluxData engineering is performing a post-mortem analysis to determine how this defect was introduced. Their discoveries will be shared in a blog post.
- A small percentage of customers experienced data node crashes with segmentation violation errors. We fixed this issue in 1.7.4.
### Breaking changes
- Fix invalid UTF-8 bytes preventing shard opening. Treat fields and measurements as raw bytes.
### Features
#### Anti-entropy service disabled by default
Prior to v1.7.3, the anti-entropy (AE) service was enabled by default. When shards create large digests with lots of time ranges (tens of thousands), some customers experienced significant performance issues, including CPU usage spikes. If your shards include a small number of time ranges (most have 1 to 10, some have up to several hundred) and you can benefit from the AE service, then you can enable AE and watch to see if performance is significantly impacted.
- Add user authentication and authorization support for Flux HTTP requests.
- Add support for optionally logging Flux queries.
- Add support for LDAP StartTLS.
- Flux 0.7 support.
- Implement TLS between data nodes.
- Update to Flux 0.7.1.
- Add optional TLS support to meta node Raft port.
- Anti-Entropy: memoize `DistinctCount`, `min`, & `max` time.
- Update influxdb dep for subquery auth update.
### Bug fixes
- Update sample configuration.
## 1.6.6 [2019-02-28]
This release only includes the InfluxDB OSS 1.6.6 changes (no Enterprise-specific changes).
## 1.6.5 [2019-01-10]
This release builds off of the InfluxDB OSS 1.6.0 through 1.6.5 releases. For details about changes incorporated from InfluxDB OSS releases, see [InfluxDB OSS release notes](/influxdb/v1.7/about_the_project/releasenotes-changelog/).
## 1.6.4 [2018-10-23]
This release builds off of the InfluxDB OSS 1.6.0 through 1.6.4 releases. For details about changes incorporated from InfluxDB OSS releases, see the [InfluxDB OSS release notes](/influxdb/v1.7/about_the_project/releasenotes-changelog/).
### Breaking changes
#### Require `internal-shared-secret` if meta auth enabled
If `[meta] auth-enabled` is set to `true`, the `[meta] internal-shared-secret` value must be set in the configuration.
If it is not set, an error will be logged and `influxd-meta` will not start.
* Previously, authentication could be enabled without setting an `internal-shared-secret`. The security risk was that an unset (empty) value could be used for the `internal-shared-secret`, seriously weakening the JWT authentication used for internode communication.
#### Review production installation configurations
The [Production Installation](/enterprise_influxdb/v1.7/production_installation/)
documentation has been updated to fix errors in configuration settings, including changing `shared-secret` to `internal-shared-secret` and adding missing steps for configuration settings of data nodes and meta nodes. All Enterprise users should review their current configurations to ensure that the configuration settings properly enable JWT authentication for internode communication.
The following summarizes the expected settings for proper configuration of JWT authentication for internode communication:
##### Data node configuration files (`influxdb.conf`)
**[http] section**
* `auth-enabled = true`
- Enables authentication. Default value is false.
**[meta] section**
- `meta-auth-enabled = true`
- Must match the meta nodes' `[meta] auth-enabled` settings.
- `meta-internal-shared-secret = "<long-pass-phrase>"`
- Must be the same pass phrase on all meta nodes' `[meta] internal-shared-secret` settings.
- Used by the internal API for JWT authentication. Default value is `""`.
- A long pass phrase is recommended for stronger security.
##### Meta node configuration files (`meta-influxdb.conf`)
**[meta]** section
- `auth-enabled = true`
- Enables authentication. Default value is `false`.
- `internal-shared-secret = "<long-pass-phrase>"`
- Must be the same pass phrase as all data nodes' `[meta] meta-internal-shared-secret` settings.
- Used by the internal API for JWT authentication. Default value is `""`.
- A long pass phrase is recommended for stronger security.
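For reference, here is a minimal sketch of the relevant settings (the pass phrase is a placeholder; use your own long pass phrase and keep it identical across the settings noted above):

```toml
# Data node configuration (influxdb.conf) -- sketch only
[http]
  auth-enabled = true

[meta]
  meta-auth-enabled = true
  meta-internal-shared-secret = "<long-pass-phrase>"
```

```toml
# Meta node configuration (meta-influxdb.conf) -- sketch only
[meta]
  auth-enabled = true
  internal-shared-secret = "<long-pass-phrase>"
```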
>**Note:** To provide encrypted internode communication, you must enable HTTPS. Although the JWT signature is encrypted, the payload of a JWT token is encoded, but is not encrypted.
### Bug fixes
- Only map shards that are reported ready.
- Fix data race when shards are deleted and created concurrently.
- Reject `influxd-ctl update-data` from one existing host to another.
- Require `internal-shared-secret` if meta auth enabled.
## 1.6.2 [2018-08-27]
This release builds off of the InfluxDB OSS 1.6.0 through 1.6.2 releases. For details about changes incorporated from InfluxDB OSS releases, see the [InfluxDB OSS release notes](/influxdb/v1.7/about_the_project/releasenotes-changelog/).
### Features
- Update Go runtime to `1.10`.
- Provide configurable TLS security options.
- Add LDAP functionality for authorization and authentication.
- Anti-Entropy (AE): add ability to repair shards.
- Anti-Entropy (AE): improve swagger doc for `/status` endpoint.
- Include the query task status in the show queries output.
### Bug fixes
- TSM files not closed when shard is deleted.
- Ensure shards are not queued to copy if a remote node is unavailable.
- Ensure the hinted handoff (hh) queue makes forward progress when segment errors occur.
- Add hinted handoff (hh) queue back pressure.
## 1.5.4 [2018-06-21]
This release builds off of the InfluxDB OSS 1.5.4 release. Please see the [InfluxDB OSS release notes](/influxdb/v1.5/about_the_project/releasenotes-changelog/) for more information about the InfluxDB OSS release.
## 1.5.3 [2018-05-25]
This release builds off of the InfluxDB OSS 1.5.3 release. Please see the [InfluxDB OSS release notes](/influxdb/v1.5/about_the_project/releasenotes-changelog/) for more information about the InfluxDB OSS release.
### Features
* Include the query task status in the show queries output.
* Add hh writeBlocked counter.
### Bug fixes
* Hinted-handoff: enforce max queue size per peer node.
* TSM files not closed when shard deleted.
## v1.5.2 [2018-04-12]
This release builds off of the InfluxDB OSS 1.5.2 release. Please see the [InfluxDB OSS release notes](/influxdb/v1.5/about_the_project/releasenotes-changelog/) for more information about the InfluxDB OSS release.
### Bug fixes
* Running backup snapshot with client's retryWithBackoff function.
* Ensure that conditions are encoded correctly even if the AST is not properly formed.
## v1.5.1 [2018-03-20]
This release builds off of the InfluxDB OSS 1.5.1 release. There are no Enterprise-specific changes.
Please see the [InfluxDB OSS release notes](/influxdb/v1.7/about_the_project/releasenotes-changelog/) for more information about the InfluxDB OSS release.
## v1.5.0 [2018-03-06]
> ***Note:*** This release builds off of the 1.5 release of InfluxDB OSS. Please see the [InfluxDB OSS release
> notes](/influxdb/v1.5/about_the_project/releasenotes-changelog/) for more information about the InfluxDB OSS release.
For highlights of the InfluxDB 1.5 release, see [What's new in InfluxDB 1.5](/influxdb/v1.5/about_the_project/whats_new/).
### Breaking changes
The default logging format has been changed. See [Logging and tracing in InfluxDB](/influxdb/v1.6/administration/logs/) for details.
### Features
* Add `LastModified` fields to shard RPC calls.
* As of OSS 1.5, backup/restore interoperability is confirmed.
* Make InfluxDB Enterprise use OSS digests.
* Move digest to its own package.
* Implement distributed cardinality estimation.
* Add logging configuration to the configuration files.
* Add AE `/repair` endpoint and update Swagger doc.
* Update logging calls to take advantage of structured logging.
* Use actual URL when logging anonymous stats start.
* Fix auth failures on backup/restore.
* Add support for passive nodes.
* Implement explain plan for remote nodes.
* Add message pack format for query responses.
* Teach `SHOW TAG VALUES` to respect FGA.
* Address deadlock in meta server on 1.3.6.
* Add time support to `SHOW TAG VALUES`.
* Add distributed `SHOW TAG KEYS` with time support.
### Bug fixes
* Fix errors occurring when policy or shard keys are missing from the manifest when limited is set to true.
* Fix spurious `rpc error: i/o deadline exceeded` errors.
* Elide `stream closed` error from logs and handle `io.EOF` as remote iterator interrupt.
* Discard remote iterators that label their type as unknown.
* Do not queue partial write errors to hinted handoff.
* Segfault in `digest.merge`.
* Meta Node CPU pegged on idle cluster.
* Data race on `(meta.UserInfo).acl`.
* Fix wildcard when one shard has no data for a measurement with partial replication.
* Add `X-Influxdb-Build` to http response headers so users can identify if a response is from an InfluxDB OSS or InfluxDB Enterprise service.
* Ensure that permissions cannot be set on non-existent databases.
* Switch back to using `cluster-tracing` config option to enable meta HTTP request logging.
* `influxd-ctl restore -newdb` can't restore data.
* Close connection for remote iterators after EOF to avoid writer hanging indefinitely.
* Data race reading `Len()` in connection pool.
* Use InfluxData fork of `yamux`. This update reduces overall memory usage when streaming large amounts of data.
* Fix group by marshaling in the IteratorOptions.
* Meta service data race.
* Read for the interrupt signal from the stream before creating the iterators.
* `SHOW RETENTION POLICIES` requires the `createdatabase` permission.
* Handle UTF files with a byte order mark when reading the configuration files.
* Remove the pidfile after the server has exited.
* Resend authentication credentials on redirect.
* Updated yamux resolves race condition when SYN is successfully sent and a write timeout occurs.
* Fix no license message.
## v1.3.9 [2018-01-19]
### Upgrading -- for users of the TSI preview
If you have been using the TSI preview with 1.3.6 or earlier 1.3.x releases, you will need to follow the upgrade steps to continue using the TSI preview. Unfortunately, these steps cannot be executed while the cluster is operating -- so it will require downtime.
### Bugfixes
* Elide `stream closed` error from logs and handle `io.EOF` as remote iterator interrupt.
* Fix spurious `rpc error: i/o deadline exceeded` errors
* Discard remote iterators that label their type as unknown.
* Do not queue `partial write` errors to hinted handoff.
## v1.3.8 [2017-12-04]
### Upgrading -- for users of the TSI preview
If you have been using the TSI preview with 1.3.6 or earlier 1.3.x releases, you will need to follow the upgrade steps to continue using the TSI preview. Unfortunately, these steps cannot be executed while the cluster is operating -- so it will require downtime.
### Bugfixes
- Updated `yamux` resolves race condition when SYN is successfully sent and a write timeout occurs.
- Resend authentication credentials on redirect.
- Fix wildcard when one shard has no data for a measurement with partial replication.
- Fix spurious `rpc error: i/o deadline exceeded` errors.
## v1.3.7 [2017-10-26]
### Upgrading -- for users of the TSI preview
The 1.3.7 release resolves a defect that created duplicate tag values in TSI indexes. See issues
[#8995](https://github.com/influxdata/influxdb/pull/8995) and [#8998](https://github.com/influxdata/influxdb/pull/8998).
However, upgrading to 1.3.7 causes compactions to fail; see [Issue #9025](https://github.com/influxdata/influxdb/issues/9025).
We will provide a utility that will allow TSI indexes to be rebuilt,
resolving the corruption possible in releases prior to 1.3.7. If you are using the TSI preview,
**you should not upgrade to 1.3.7 until this utility is available**.
We will update this release note with operational steps once the utility is available.
#### Bugfixes
- Read for the interrupt signal from the stream before creating the iterators.
- Address deadlock issue in meta server on 1.3.6.
- Fix logger panic associated with anti-entropy service and manually removed shards.
## v1.3.6 [2017-09-28]
### Bugfixes
- Fix "group by" marshaling in the IteratorOptions.
- Address meta service data race condition.
- Fix race condition when writing points to remote nodes.
- Use InfluxData fork of yamux. This update reduces overall memory usage when streaming large amounts of data.
Contributed back to the yamux project via: https://github.com/hashicorp/yamux/pull/50
- Address data race reading Len() in connection pool.
## v1.3.5 [2017-08-29]
This release builds off of the 1.3.5 release of OSS InfluxDB.
Please see the OSS [release notes](/influxdb/v1.3/about_the_project/releasenotes-changelog/#v1-3-5-2017-08-29) for more information about the OSS releases.
## v1.3.4 [2017-08-23]
This release builds off of the 1.3.4 release of OSS InfluxDB. Please see the [OSS release notes](/influxdb/v1.3/about_the_project/releasenotes-changelog/) for more information about the OSS releases.
### Bugfixes
- Close connection for remote iterators after EOF to avoid writer hanging indefinitely
## v1.3.3 [2017-08-10]
This release builds off of the 1.3.3 release of OSS InfluxDB. Please see the [OSS release notes](/influxdb/v1.3/about_the_project/releasenotes-changelog/) for more information about the OSS releases.
### Bugfixes
- Connections are not closed when `CreateRemoteIterator` RPC returns no iterators, resolved memory leak
## v1.3.2 [2017-08-04]
### Bug fixes
- `influxd-ctl restore -newdb` unable to restore data.
- Improve performance of `SHOW TAG VALUES`.
- Show a subset of config settings in `SHOW DIAGNOSTICS`.
- Switch back to using cluster-tracing config option to enable meta HTTP request logging.
- Fix remove-data error.
## v1.3.1 [2017-07-20]
#### Bug fixes
- Show a subset of config settings in SHOW DIAGNOSTICS.
- Switch back to using cluster-tracing config option to enable meta HTTP request logging.
- Fix remove-data error.
## v1.3.0 [2017-06-21]
### Configuration Changes
#### `[cluster]` Section
* `max-remote-write-connections` is deprecated and can be removed.
* NEW: `pool-max-idle-streams` and `pool-max-idle-time` configure the RPC connection pool.
See `config.sample.toml` for descriptions of these new options.
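A sketch of the new options in the data node configuration file (the values shown are illustrative assumptions; consult `config.sample.toml` for the actual defaults):

```toml
[cluster]
  # Maximum number of idle RPC streams to retain per pooled connection
  pool-max-idle-streams = 100
  # How long an idle stream is retained before being closed
  pool-max-idle-time = "60s"
```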
### Removals
The admin UI has been removed and is unusable in this release. The `[admin]` configuration section will be ignored.
#### Features
- Allow non-admin users to execute `SHOW DATABASES`.
- Add default config path search for influxd-meta.
- Reduce cost of admin user check for clusters with large numbers of users.
- Store HH segments by node and shard.
- Remove references to the admin console.
- Refactor RPC connection pool to multiplex multiple streams over single connection.
- Report RPC connection pool statistics.
#### Bug fixes
- Fix security escalation bug in subscription management.
- Certain permissions should not be allowed at the database context.
- Make the time in `influxd-ctl`'s `copy-shard-status` argument human readable.
- Fix `influxd-ctl remove-data -force`.
- Ensure replaced data node correctly joins meta cluster.
- Delay metadata restriction on restore.
- Writing points outside of a retention policy does not return an error.
- Decrement internal database's replication factor when a node is removed.
## v1.2.5 [2017-05-16]
This release builds off of the 1.2.4 release of OSS InfluxDB.
Please see the OSS [release notes](/influxdb/v1.3/about_the_project/releasenotes-changelog/#v1-2-4-2017-05-08) for more information about the OSS releases.
#### Bug fixes
- Fix issue where the [`ALTER RETENTION POLICY` query](/influxdb/v1.3/query_language/database_management/#modify-retention-policies-with-alter-retention-policy) does not update the default retention policy.
- Hinted-handoff: remote write errors containing `partial write` are considered droppable.
- Fix the broken `influxd-ctl remove-data -force` command.
- Fix security escalation bug in subscription management.
- Prevent certain user permissions from having a database-specific scope.
- Reduce the cost of the admin user check for clusters with large numbers of users.
- Fix hinted-handoff remote write batching.
## v1.2.2 [2017-03-15]
This release builds off of the 1.2.1 release of OSS InfluxDB.
Please see the OSS [release notes](https://github.com/influxdata/influxdb/blob/1.2/CHANGELOG.md#v121-2017-03-08) for more information about the OSS release.
### Configuration Changes
The following configuration changes may need to be made before [upgrading](/enterprise_influxdb/v1.3/administration/upgrading/) to 1.2.2 from prior versions.
#### shard-writer-timeout
We've removed the data node's `shard-writer-timeout` configuration option from the `[cluster]` section.
As of version 1.2.2, the system sets `shard-writer-timeout` internally.
The configuration option can be removed from the [data node configuration file](/enterprise_influxdb/v1.3/administration/configuration/#data-node-configuration).
#### retention-autocreate
In versions 1.2.0 and 1.2.1, the `retention-autocreate` setting appears in both the meta node and data node configuration files.
To disable retention policy auto-creation, users on version 1.2.0 and 1.2.1 must set `retention-autocreate` to `false` in both the meta node and data node configuration files.
In version 1.2.2, we've removed the `retention-autocreate` setting from the data node configuration file.
As of version 1.2.2, users may remove `retention-autocreate` from the data node configuration file.
To disable retention policy auto-creation, set `retention-autocreate` to `false` in the meta node configuration file only.
This change only affects users who have disabled the `retention-autocreate` option and have installed version 1.2.0 or 1.2.1.
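For example, a meta node configuration that disables retention policy auto-creation might look like this (a sketch; the exact section placement can vary by version):

```toml
# Meta node configuration (meta-influxdb.conf) -- sketch only
[meta]
  # Disable automatic creation of the default retention policy
  # when a database is created
  retention-autocreate = false
```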
#### Bug fixes
##### Backup and Restore
<br>
- Prevent the `shard not found` error by making [backups](/enterprise_influxdb/v1.3/guides/backup-and-restore/#backup) skip empty shards
- Prevent the `shard not found` error by making [restore](/enterprise_influxdb/v1.3/guides/backup-and-restore/#restore) handle empty shards
- Ensure that restores from an incremental backup correctly handle file paths
- Allow incremental backups with restrictions (for example, those using the `-db` or `-rp` flags) to be stored in the same directory
- Support restores on meta nodes that are not the raft leader
##### Hinted handoff
<br>
- Fix issue where dropped writes were not recorded when the [hinted handoff](/enterprise_influxdb/v1.3/concepts/clustering/#hinted-handoff) queue reached the maximum size
- Prevent the hinted handoff from becoming blocked if it encounters field type errors
##### Other
<br>
- Return partial results for the [`SHOW TAG VALUES` query](/influxdb/v1.3/query_language/schema_exploration/#show-tag-values) even if the cluster includes an unreachable data node
- Return partial results for the [`SHOW MEASUREMENTS` query](/influxdb/v1.3/query_language/schema_exploration/#show-measurements) even if the cluster includes an unreachable data node
- Prevent a panic when the system fails to process points
- Ensure that cluster hostnames can be case insensitive
- Update the `retryCAS` code to wait for a newer snapshot before retrying
- Serialize access to the meta client and meta store to prevent raft log buildup
- Remove sysvinit package dependency for RPM packages
- Make the default retention policy creation an atomic process instead of a two-step process
- Prevent `influxd-ctl`'s [`join` argument](/enterprise_influxdb/v1.3/features/cluster-commands/#join) from completing a join when the command also specifies the help flag (`-h`)
- Fix the `influxd-ctl`'s [force removal](/enterprise_influxdb/v1.3/features/cluster-commands/#remove-meta) of meta nodes
- Update the meta node and data node sample configuration files
## v1.2.1 [2017-01-25]
#### Cluster-specific Bugfixes
- Fix panic: Slice bounds out of range
&emsp;Fix how the system removes expired shards.
- Remove misplaced newlines from cluster logs
## v1.2.0 [2017-01-24]
This release builds off of the 1.2.0 release of OSS InfluxDB.
Please see the OSS [release notes](https://github.com/influxdata/influxdb/blob/1.2/CHANGELOG.md#v120-2017-01-24) for more information about the OSS release.
### Upgrading
* The `retention-autocreate` configuration option has moved from the meta node configuration file to the [data node configuration file](/enterprise_influxdb/v1.3/administration/configuration/#retention-autocreate-true).
To disable the auto-creation of retention policies, set `retention-autocreate` to `false` in your data node configuration files.
* The previously deprecated `influxd-ctl force-leave` command has been removed. The replacement command to remove a meta node which is never coming back online is [`influxd-ctl remove-meta -force`](/enterprise_influxdb/v1.3/features/cluster-commands/).
#### Cluster-specific Features
- Improve the meta store: any meta store changes are done via a compare-and-swap operation
- Add support for [incremental backups](/enterprise_influxdb/v1.3/guides/backup-and-restore/)
- Automatically remove any deleted shard groups from the data store
- Uncomment the section headers in the default [configuration file](/enterprise_influxdb/v1.3/administration/configuration/)
- Add InfluxQL support for [subqueries](/influxdb/v1.3/query_language/data_exploration/#subqueries)
#### Cluster-specific Bugfixes
- Update dependencies with Godeps
- Fix a data race in meta client
- Ensure that the system removes the relevant [user permissions and roles](/enterprise_influxdb/v1.3/features/users/) when a database is dropped
- Fix a couple typos in demo [configuration file](/enterprise_influxdb/v1.3/administration/configuration/)
- Make the version protobuf field optional for the meta store
- Remove the override of GOMAXPROCS
- Remove an unused configuration option (`dir`) from the backend
- Fix a panic around processing remote writes
- Return an error if a remote write has a field conflict
- Drop points in the hinted handoff that (1) have field conflict errors or (2) have [`max-values-per-tag`](/influxdb/v1.3/administration/config/#max-values-per-tag-100000) errors
- Remove the deprecated `influxd-ctl force-leave` command
- Fix issue where CQs would stop running if the first meta node in the cluster stops
- Fix logging in the meta httpd handler service
- Fix issue where subscriptions send duplicate data for [Continuous Query](/influxdb/v1.3/query_language/continuous_queries/) results
- Fix the output for `influxd-ctl show-shards`
- Send the correct RPC response for `ExecuteStatementRequestMessage`
## v1.1.5 [2017-04-28]
### Bug fixes
- Prevent certain user permissions from having a database-specific scope.
- Fix security escalation bug in subscription management.
## v1.1.3 [2017-02-27]
This release incorporates the changes in the 1.1.4 release of OSS InfluxDB.
Please see the OSS [changelog](https://github.com/influxdata/influxdb/blob/v1.1.4/CHANGELOG.md) for more information about the OSS release.
### Bug fixes
- Delay when a node listens for network connections until after all requisite services are running. This prevents queries to the cluster from failing unnecessarily.
- Allow users to set the `GOMAXPROCS` environment variable.
## v1.1.2 [internal]
This release was an internal release only.
It incorporates the changes in the 1.1.3 release of OSS InfluxDB.
Please see the OSS [changelog](https://github.com/influxdata/influxdb/blob/v1.1.3/CHANGELOG.md) for more information about the OSS release.
## v1.1.1 [2016-12-06]
This release builds off of the 1.1.1 release of OSS InfluxDB.
Please see the OSS [release notes](https://github.com/influxdata/influxdb/blob/1.1/CHANGELOG.md#v111-2016-12-06) for more information about the OSS release.
This release is built with Go (golang) 1.7.4.
It resolves a security vulnerability reported in Go (golang) version 1.7.3 which impacts all
users currently running on the macOS platform, powered by the Darwin operating system.
#### Cluster-specific bug fixes
- Fix hinted-handoff issue: Fix record size larger than max size
&emsp;If a Hinted Handoff write appended a block that was larger than the maximum file size, the queue would get stuck because the maximum size was not updated. When reading the block back out during processing, the system would return an error because the block size was larger than the file size -- which indicates a corrupted block.
## v1.1.0 [2016-11-14]
This release builds off of the 1.1.0 release of InfluxDB OSS.
Please see the OSS [release notes](https://github.com/influxdata/influxdb/blob/1.1/CHANGELOG.md#v110-2016-11-14) for more information about the OSS release.
### Upgrading
* The 1.1.0 release of OSS InfluxDB has some important [configuration changes](https://github.com/influxdata/influxdb/blob/1.1/CHANGELOG.md#configuration-changes) that may affect existing clusters.
* The `influxd-ctl join` command has been renamed to `influxd-ctl add-meta`. If you have existing scripts that use `influxd-ctl join`, they will need to use `influxd-ctl add-meta` or be updated to use the new cluster setup command.
#### Cluster setup
The `influxd-ctl join` command has been changed to simplify cluster setups. To join a node to a cluster, you can run `influxd-ctl join <meta:8091>`, and we will attempt to detect and add any meta or data node process running on the hosts automatically. The previous `join` command exists as `add-meta` now. If it's the first node of a cluster, the meta address argument is optional.
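For example (the host name is a placeholder), joining a new node to an existing cluster might look like:

```bash
# Detect and add any meta or data node processes running on this host,
# using an existing meta node to locate the cluster
influxd-ctl join cluster-meta-01:8091
```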
#### Logging
InfluxDB Enterprise now uses journald logging on systemd systems. Logs are no longer sent to `/var/log/influxdb` on systemd systems.
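On systemd systems, you can view the logs with `journalctl` (the unit name is an assumption and may vary by package):

```bash
# Follow data node logs via journald
sudo journalctl -u influxdb -f
```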
#### Cluster-specific features
- Add a configuration option for setting gossiping frequency on data nodes
- Allow for detailed insight into the Hinted Handoff queue size by adding `queueBytes` to the hh\_processor statistics
- Add authentication to the meta service API
- Update Go (golang) dependencies: Fix Go Vet and update circle Go Vet command
- Simplify the process for joining nodes to a cluster
- Include the node's version number in the `influxd-ctl show` output
- Return an error if there are additional arguments after `influxd-ctl show`
&emsp;Fixes any confusion between the correct command for showing detailed shard information (`influxd-ctl show-shards`) and the incorrect command (`influxd-ctl show shards`)
#### Cluster-specific bug fixes
- Return an error if getting latest snapshot takes longer than 30 seconds
- Remove any expired shards from the `/show-shards` output
- Respect the [`pprof-enabled` configuration setting](/enterprise_influxdb/v1.3/administration/configuration/#pprof-enabled-true) and enable it by default on meta nodes
- Respect the [`pprof-enabled` configuration setting](/enterprise_influxdb/v1.3/administration/configuration/#pprof-enabled-true-1) on data nodes
- Use the data reference instead of `Clone()` during read-only operations for performance purposes
- Prevent the system from double-collecting cluster statistics
- Ensure that the Meta API redirects to the cluster leader when it gets the `ErrNotLeader` error
- Don't overwrite cluster users with existing OSS InfluxDB users when migrating an OSS instance into a cluster
- Fix a data race in the raft store
- Allow large segment files (> 10MB) in the Hinted Handoff
- Prevent `copy-shard` from retrying if the `copy-shard` command was killed
- Prevent a hanging `influxd-ctl add-data` command by making data nodes check for meta nodes before they join a cluster
## v1.0.4 [2016-10-19]
#### Cluster-specific bug fixes
- Respect the [Hinted Handoff settings](/enterprise_influxdb/v1.3/administration/configuration/#hinted-handoff) in the configuration file
- Fix expanding regular expressions when all shards do not exist on the node that's handling the request
## v1.0.3 [2016-10-07]
#### Cluster-specific bug fixes
- Fix a panic in the Hinted Handoff: `lastModified`
## v1.0.2 [2016-10-06]
This release builds off of the 1.0.2 release of OSS InfluxDB. Please see the OSS [release notes](https://github.com/influxdata/influxdb/blob/1.0/CHANGELOG.md#v102-2016-10-05) for more information about the OSS release.
#### Cluster-specific bug fixes
- Prevent double read-lock in the meta client
- Fix a panic around a corrupt block in Hinted Handoff
- Fix issue where `systemctl enable` would throw an error if the symlink already exists
## v1.0.1 [2016-09-28]
This release builds off of the 1.0.1 release of OSS InfluxDB.
Please see the OSS [release notes](https://github.com/influxdata/influxdb/blob/1.0/CHANGELOG.md#v101-2016-09-26)
for more information about the OSS release.
#### Cluster-specific bug fixes
* Balance shards correctly with a restore
* Fix a panic in the Hinted Handoff: `runtime error: invalid memory address or nil pointer dereference`
* Ensure meta node redirects to leader when removing data node
* Fix a panic in the Hinted Handoff: `runtime error: makeslice: len out of range`
* Update the data node configuration file so that only the minimum configuration options are uncommented
## v1.0.0 [2016-09-07]
This release builds off of the 1.0.0 release of OSS InfluxDB.
Please see the OSS [release notes](https://github.com/influxdata/influxdb/blob/1.0/CHANGELOG.md#v100-2016-09-07) for more information about the OSS release.
### Breaking changes
* The keywords `IF`, `EXISTS`, and `NOT` were removed for this release. This means you no longer need to specify `IF NOT EXISTS` for `DROP DATABASE` or `IF EXISTS` for `CREATE DATABASE`. Using these keywords will return a query error.
* `max-series-per-database` was added with a default of 1M but can be disabled by setting it to `0`. Existing databases with series that exceed this limit will continue to load, but writes that would create new series will fail.
### Hinted handoff
A number of changes to hinted handoff are included in this release:
* Truncating only the corrupt block in a corrupted segment to minimize data loss
* Immediately queue writes in hinted handoff if there are still writes pending to prevent inconsistencies in shards
* Remove hinted handoff queues when data nodes are removed to eliminate manual cleanup tasks
### Performance
* `SHOW MEASUREMENTS` and `SHOW TAG VALUES` have been optimized to work better for multiple nodes and shards
* `DROP` and `DELETE` statements run in parallel and more efficiently and should not leave the system in an inconsistent state
### Security
The Cluster API used by `influxd-ctl` can now be protected with SSL certs.
### Cluster management
Data nodes that can no longer be restarted can now be forcefully removed from the cluster using `influxd-ctl remove-data -force <addr>`. This should only be run if a graceful removal is not possible.
Backup and restore has been updated to fix issues and refine existing capabilities.
#### Cluster-specific features
- Add the Users method to control client
- Add a `-force` option to the `influxd-ctl remove-data` command
- Disable the logging of `stats` service queries
- Optimize the `SHOW MEASUREMENTS` and `SHOW TAG VALUES` queries
- Update the Go (golang) package library dependencies
- Minimize the amount of data-loss in a corrupted Hinted Handoff file by truncating only the last corrupted segment instead of the entire file
- Log a write error when the Hinted Handoff queue is full for a node
- Remove Hinted Handoff queues on data nodes when the target data nodes are removed from the cluster
- Add unit testing around restore in the meta store
- Add full TLS support to the cluster API, including the use of self-signed certificates
- Improve backup/restore to allow for partial restores to a different cluster or to a database with a different database name
- Update the shard group creation logic to be balanced
- Keep raft log to a minimum to prevent replaying large raft logs on startup
#### Cluster-specific bug fixes
- Remove bad connections from the meta executor connection pool
- Fix a panic in the meta store
- Fix a panic caused when a shard group is not found
- Fix a corrupted Hinted Handoff
- Ensure that any imported OSS admin users have all privileges in the cluster
- Ensure that `max-select-series` is respected
- Handle the `peer already known` error
- Fix Hinted handoff panic around segment size check
- Drop Hinted Handoff writes if they contain field type inconsistencies
<br>
# Web Console
## DEPRECATED: Enterprise Web Console
The Enterprise Web Console has officially been deprecated and will be eliminated entirely by the end of 2017.
No additional features will be added and no additional bug fix releases are planned.
For browser-based access to InfluxDB Enterprise, [Chronograf](/{{< latest "chronograf" >}}/introduction) is now the recommended tool to use.
View File
@ -0,0 +1,51 @@
---
title: Third party software
description: >
InfluxData products contain third-party software that is copyrighted,
patented, or otherwise legally protected software of third parties
incorporated in InfluxData products.
menu:
enterprise_influxdb_1_9_ref:
name: Third party software
weight: 20
parent: About the project
---
InfluxData products contain third party software, which means the copyrighted,
patented, or otherwise legally protected software of third parties that is
incorporated in InfluxData products.
Third party suppliers make no representation nor warranty with respect to
such third party software or any portion thereof.
Third party suppliers assume no liability for any claim that might arise with
respect to such third party software, nor for a
customer's use of or inability to use the third party software.
InfluxDB Enterprise 1.9 includes the following third party software components, which are maintained on a version-by-version basis.
| Component | License | Integration |
| :-------- | :-------- | :-------- |
| [ASN1 BER Encoding / Decoding Library for the GO programming language (go-asn1-ber/asn1-ber)](https://github.com/go-asn1-ber/asn1-ber) | [MIT](https://opensource.org/licenses/MIT) | Statically linked |
| [Cobra is a commander for modern Go CLI interactions (spf13/cobra)](https://github.com/spf13/cobra) | [BSD 2-Clause](https://opensource.org/licenses/BSD-2-Clause) | Statically linked |
| [A golang registry for global request variables (gorilla/context)](https://github.com/gorilla/context) | [BSD 3-Clause](https://opensource.org/licenses/BSD-3-Clause) | Statically linked |
| [FlatBuffers: Memory Efficient Serialization Library (google/flatbuffers)](https://github.com/google/flatbuffers) | [Apache License 2.0](https://opensource.org/licenses/Apache-2.0) | Statically linked |
| [Flux is a lightweight scripting language for querying databases (like InfluxDB) and working with data (influxdata/flux)](https://github.com/influxdata/flux) | [Apache License 2.0](https://opensource.org/licenses/Apache-2.0) | Statically linked |
| [GoConvey is a yummy Go testing tool for gophers (glycerine/goconvey)](https://github.com/glycerine/goconvey) | [MIT](https://opensource.org/licenses/MIT) | Statically linked |
| [An immutable radix tree implementation in Golang (hashicorp/go-immutable-radix)](https://github.com/hashicorp/go-immutable-radix)| [Mozilla Public License 2.0](https://opensource.org/licenses/MPL-2.0) | Statically linked |
| [Some helpful packages for writing Go apps (markbates/going)](https://github.com/markbates/going)| [MIT](https://opensource.org/licenses/MIT) | Statically linked |
| [Golang LRU cache implements a fixed-size thread safe LRU cache (hashicorp/golang-lru)](https://github.com/hashicorp/golang-lru) |[Mozilla Public License 2.0](https://opensource.org/licenses/MPL-2.0) | Statically linked |
| [Codec - a high performance and feature-rich Idiomatic encode/decode and rpc library for msgpack and Binc (hashicorp/go-msgpack)](https://github.com/hashicorp/go-msgpack)| [BSD 3-Clause](https://opensource.org/licenses/BSD-3-Clause) | Statically linked |
| [A Golang library for exporting performance and runtime metrics to external metrics systems, i.e. statsite, statsd (armon/go-metrics)](https://github.com/armon/go-metrics) | [MIT](https://opensource.org/licenses/MIT) | Statically linked |
| [Generates UUID-format strings using purely high quality random bytes (hashicorp/go-uuid)](https://github.com/hashicorp/go-uuid) | [Mozilla Public License 2.0](https://opensource.org/licenses/MPL-2.0) | Statically linked |
| [Collection of useful handlers for Go net/http package (gorilla/handlers)](https://github.com/gorilla/handlers) | [BSD 2-Clause](https://opensource.org/licenses/BSD-2-Clause) | Statically linked |
| [Golang implementation of Javascript Object Signing and Encryption (dvsekhvalnov/jose2go)](https://github.com/dvsekhvalnov/jose2go) | [MIT](https://opensource.org/licenses/MIT) | Statically linked |
| [Basic LDAP v3 functionality for the Go programming language (go-ldap/ldap)](https://github.com/go-ldap/ldap) | [MIT](https://opensource.org/licenses/MIT) | Statically linked |
| [Basic LDAP v3 functionality for the Go programming language (mark-rushakoff/ldapserver)](https://github.com/mark-rushakoff/ldapserver) | [BSD 3-Clause](https://opensource.org/licenses/BSD-3-Clause) | Statically linked |
| [A powerful URL router and dispatcher for golang (gorilla/mux)](https://github.com/gorilla/mux) | [BSD 2-Clause](https://opensource.org/licenses/BSD-2-Clause) | Statically linked |
| [pkcs7 implements parsing and creating signed and enveloped messages (fullsailor/pkcs7)](https://github.com/fullsailor/pkcs7) | [MIT](https://opensource.org/licenses/MIT) | Statically linked |
| [Pretty printing for Go values (kr/pretty)](https://github.com/kr/pretty) | [MIT](https://opensource.org/licenses/MIT) | Statically linked |
|[Go language implementation of the Raft consensus protocol (hashicorp/raft)](https://github.com/hashicorp/raft) | [Mozilla Public License 2.0](https://opensource.org/licenses/MPL-2.0) | Statically linked |
| [Raft backend implementation using BoltDB (hashicorp/raft-boltdb)](https://github.com/hashicorp/raft-boltdb) | [Mozilla Public License 2.0](https://opensource.org/licenses/MPL-2.0) | Statically linked |
| [General purpose extensions to golang's database/sql (jmoiron/sqlx)](https://github.com/jmoiron/sqlx) | [MIT](https://opensource.org/licenses/MIT) | Statically linked |
| [Miscellaneous functions for formatting text (kr/text)](https://github.com/kr/text) | [MIT](https://opensource.org/licenses/MIT) | Statically linked |
| [Golang connection multiplexing library (hashicorp/yamux)](https://github.com/hashicorp/yamux/) | [Mozilla Public License 2.0](https://opensource.org/licenses/MPL-2.0) | Statically linked |
View File
@ -0,0 +1,10 @@
---
title: Administer InfluxDB Enterprise
description: Configuration, security, and logging in InfluxDB enterprise.
menu:
enterprise_influxdb_1_9:
name: Administration
weight: 70
---
{{< children >}}
View File
@ -0,0 +1,348 @@
---
title: Use Anti-Entropy service in InfluxDB Enterprise
description: The Anti-Entropy service monitors and repairs shards in InfluxDB.
aliases:
- /enterprise_influxdb/v1.9/guides/Anti-Entropy/
menu:
enterprise_influxdb_1_9:
name: Use Anti-entropy service
weight: 60
parent: Administration
---
{{% warn %}}
Prior to InfluxDB Enterprise 1.7.2, the Anti-Entropy (AE) service was enabled by default. When shards create digests with lots of time ranges (tens of thousands), some customers have experienced significant performance issues, including CPU usage spikes. If your shards include a small number of time ranges (most have 1 to 10, some have up to several hundred) and you can benefit from the AE service, enable AE and monitor it closely to see if your performance is adversely impacted.
{{% /warn %}}
## Introduction
Shard entropy refers to inconsistency among shards in a shard group.
This can be due to the "eventually consistent" nature of data stored in InfluxDB
Enterprise clusters or due to missing or unreachable shards.
The Anti-Entropy (AE) service ensures that each data node has all the shards it
owns according to the metastore and that all shards in a shard group are consistent.
Missing shards are automatically repaired without operator intervention, while
out-of-sync shards can be manually queued for repair.
This topic covers how the Anti-Entropy service works and some of the basic situations where it takes effect.
## Concepts
The Anti-Entropy service is a component of the `influxd` service available on each of your data nodes. Use this service to ensure that each data node has all of the shards that the metastore says it owns and ensure all shards in a shard group are in sync.
If any shards are missing, the Anti-Entropy service will copy existing shards from other shard owners.
If data inconsistencies are detected among shards in a shard group, [invoke the Anti-Entropy service](#command-line-tools-for-managing-entropy) and queue the out-of-sync shards for repair.
In the repair process, the Anti-Entropy service will sync the necessary updates from other shards
within a shard group.
By default, the service performs consistency checks every 5 minutes. This interval can be modified in the [`anti-entropy.check-interval`](/enterprise_influxdb/v1.9/administration/config-data-nodes/#check-interval-5m) configuration setting.
The Anti-Entropy service can only address missing or inconsistent shards when
there is at least one copy of the shard available.
In other words, as long as new and healthy nodes are introduced, a replication
factor of 2 can recover from one missing or inconsistent node;
a replication factor of 3 can recover from two missing or inconsistent nodes, and so on.
A replication factor of 1, which is not recommended, cannot be recovered by the Anti-Entropy service.
## Symptoms of entropy
The Anti-Entropy service automatically detects and fixes missing shards, but shard inconsistencies
must be [manually detected and queued for repair](#detecting-and-repairing-entropy).
There are symptoms of entropy that, if seen, would indicate an entropy repair is necessary.
### Different results for the same query
When running queries against an InfluxDB Enterprise cluster, each query may be routed to a different data node.
If entropy affects data within the queried range, the same query will return different
results depending on which node the query runs against.
_**Query attempt 1**_
```sql
SELECT mean("usage_idle") WHERE time > '2018-06-06T18:00:00Z' AND time < '2018-06-06T18:15:00Z' GROUP BY time(3m) FILL(0)
name: cpu
time mean
---- ----
1528308000000000000 99.11867392974537
1528308180000000000 99.15410822137049
1528308360000000000 99.14927494363032
1528308540000000000 99.1980535465783
1528308720000000000 99.18584290492262
```
_**Query attempt 2**_
```sql
SELECT mean("usage_idle") WHERE time > '2018-06-06T18:00:00Z' AND time < '2018-06-06T18:15:00Z' GROUP BY time(3m) FILL(0)
name: cpu
time mean
---- ----
1528308000000000000 99.11867392974537
1528308180000000000 0
1528308360000000000 0
1528308540000000000 0
1528308720000000000 99.18584290492262
```
The results indicate that data is missing in the queried time range and entropy is present.
### Flapping dashboards
A "flapping" dashboard means data visualizations change when data is refreshed
and pulled from a node with entropy (inconsistent data).
It is the visual manifestation of getting [different results from the same query](#different-results-for-the-same-query).
<img src="/img/enterprise/1-6-flapping-dashboard.gif" alt="Flapping dashboard" style="width:100%; max-width:800px">
## Technical details
### Detecting entropy
The Anti-Entropy service runs on each data node and periodically checks its shards' statuses
relative to the next data node in the ownership list.
The service creates a "digest" or summary of data in the shards on the node.
For example, assume there are two data nodes in your cluster: `node1` and `node2`.
Both `node1` and `node2` own `shard1` so `shard1` is replicated across each.
When a status check runs, `node1` asks `node2` when `shard1` was last modified.
If the reported modification time differs from the previous check, then
`node1` asks `node2` for a new digest of `shard1` and checks for differences (performs a "diff") between `node2`'s `shard1` digest and the local `shard1` digest.
If a difference exists, `shard1` is flagged as having entropy.
### Repairing entropy
If during a status check a node determines the next node is completely missing a shard,
it immediately adds the missing shard to the repair queue.
A background routine monitors the queue and begins the repair process as new shards are added to it.
Repair requests are pulled from the queue by the background process and repaired using a `copy shard` operation.
> Currently, shards that are present on both nodes but contain different data are not automatically queued for repair.
> A user must make the request via `influxd-ctl entropy repair <shard ID>`.
> For more information, see [Detecting and repairing entropy](#detecting-and-repairing-entropy) below.
Using `node1` and `node2` from the [earlier example](#detecting-entropy), `node1` asks `node2` for a digest of `shard1`.
`node1` diffs its own local `shard1` digest and `node2`'s `shard1` digest,
then creates a new digest containing only the differences (the diff digest).
The diff digest is used to create a patch containing only the data `node2` is missing.
`node1` sends the patch to `node2` and instructs it to apply it.
Once `node2` finishes applying the patch, it queues a repair for `shard1` locally.
The "node-to-node" shard repair continues until it runs on every data node that owns the shard in need of repair.
### Repair order
Repairs between shard owners happen in a deterministic order.
This doesn't mean repairs always start on node 1 and then follow a specific node order.
Repairs are viewed at the shard level.
Each shard has a list of owners and the repairs for a particular shard will happen
in a deterministic order among its owners.
When the Anti-Entropy service on any data node receives a repair request for a shard, it determines which
owner node is the first in the deterministic order and forwards the request to that node.
The request is now queued on the first owner.
The first owner's repair processor pulls it from the queue, detects the differences
between the local copy of the shard with the copy of the same shard on the next
owner in the deterministic order, then generates a patch from that difference.
The first owner then makes an RPC call to the next owner instructing it to apply
the patch to its copy of the shard.
Once the next owner has successfully applied the patch, it adds that shard to the Anti-Entropy repair queue.
A list of "visited" nodes follows the repair through the list of owners.
Each owner will check the list to detect when the repair has cycled through all owners,
at which point the repair is finished.
### Hot shards
The Anti-Entropy service does its best to avoid hot shards (shards that are currently receiving writes)
because they change quickly.
While write replication between shard owner nodes (with a
[replication factor](/enterprise_influxdb/v1.9/concepts/glossary/#replication-factor)
greater than 1) typically happens in milliseconds, this slight difference is
still enough to cause the appearance of entropy where there is none.
Because the Anti-Entropy service repairs only cold shards, unexpected effects can occur.
Consider the following scenario:
1. A shard goes cold.
2. Anti-Entropy detects entropy.
3. Entropy is reported by the [Anti-Entropy `/status` API](/enterprise_influxdb/v1.9/administration/anti-entropy-api/#get-status) or with the `influxd-ctl entropy show` command.
4. Shard takes a write, gets compacted, or something else causes it to go hot.
_These actions are out of Anti-Entropy's control._
5. A repair is requested, but is ignored because the shard is now hot.
In this example, you would have to periodically request a repair of the shard
until it either shows as being in the queue, being repaired, or no longer in the list of shards with entropy.
## Configuration
The configuration settings for the Anti-Entropy service are described in [Anti-Entropy settings](/enterprise_influxdb/v1.9/administration/config-data-nodes#anti-entropy) section of the data node configuration.
To enable the Anti-Entropy service, change the default value of the `[anti-entropy].enabled = false` setting to `true` in the `influxdb.conf` file of each of your data nodes.
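For example, a data node's `influxdb.conf` might include (a sketch; `check-interval` is shown at its documented default):

```toml
[anti-entropy]
  # Enable the Anti-Entropy service (disabled by default)
  enabled = true
  # How often consistency checks run
  check-interval = "5m"
```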
## Command line tools for managing entropy
>**Note:** The Anti-Entropy service is disabled by default and must be enabled before using these commands.
The `influxd-ctl entropy` command enables you to manage entropy among shards in a cluster.
It includes the following subcommands:
#### `show`
Lists shards that are in an inconsistent state and in need of repair as well as
shards currently in the repair queue.
```bash
influxd-ctl entropy show
```
#### `repair`
Queues a shard for repair.
It requires a Shard ID which is provided in the [`show`](#show) output.
```bash
influxd-ctl entropy repair <shardID>
```
Repairing entropy in a shard is an asynchronous operation.
This command will return quickly as it only adds a shard to the repair queue.
Queuing shards for repair is idempotent.
There is no harm in making multiple requests to repair the same shard even if
it is already queued, currently being repaired, or not in need of repair.
#### `kill-repair`
Removes a shard from the repair queue.
It requires a Shard ID which is provided in the [`show`](#show) output.
```bash
influxd-ctl entropy kill-repair <shardID>
```
This only applies to shards in the repair queue.
It does not cancel repairs on nodes that are in the process of being repaired.
Once a repair has started, requests to cancel it are ignored.
> Stopping an entropy repair operation for a **missing** shard is not currently supported.
> It may be possible to stop repairs for missing shards with the
> [`influxd-ctl kill-copy-shard`](/enterprise_influxdb/v1.9/tools/influxd-ctl/#kill-copy-shard) command.
## InfluxDB Anti-Entropy API
The Anti-Entropy service uses an API for managing and monitoring entropy.
Details on the available API endpoints can be found in [The InfluxDB Anti-Entropy API](/enterprise_influxdb/v1.9/administration/anti-entropy-api).
## Use cases
Common use cases for the Anti-Entropy service include detecting and repairing entropy, replacing unresponsive data nodes, replacing data nodes for upgrades and maintenance, and eliminating entropy in active shards.
### Detecting and repairing entropy
Periodically, you may want to see if shards in your cluster have entropy or are
inconsistent with other shards in the shard group.
Use the `influxd-ctl entropy show` command to list all shards with detected entropy:
```bash
influxd-ctl entropy show
Entropy
==========
ID Database Retention Policy Start End Expires Status
21179 statsdb 1hour 2017-10-09 00:00:00 +0000 UTC 2017-10-16 00:00:00 +0000 UTC 2018-10-22 00:00:00 +0000 UTC diff
25165 statsdb 1hour 2017-11-20 00:00:00 +0000 UTC 2017-11-27 00:00:00 +0000 UTC 2018-12-03 00:00:00 +0000 UTC diff
```
Then use the `influxd-ctl entropy repair` command to add the shards with entropy
to the repair queue:
```bash
influxd-ctl entropy repair 21179
Repair Shard 21179 queued
influxd-ctl entropy repair 25165
Repair Shard 25165 queued
```
Check on the status of the repair queue with the `influxd-ctl entropy show` command:
```bash
influxd-ctl entropy show
Entropy
==========
ID Database Retention Policy Start End Expires Status
21179 statsdb 1hour 2017-10-09 00:00:00 +0000 UTC 2017-10-16 00:00:00 +0000 UTC 2018-10-22 00:00:00 +0000 UTC diff
25165 statsdb 1hour 2017-11-20 00:00:00 +0000 UTC 2017-11-27 00:00:00 +0000 UTC 2018-12-03 00:00:00 +0000 UTC diff
Queued Shards: [21179 25165]
```
### Replacing an unresponsive data node
If a data node suddenly disappears due to a catastrophic hardware failure or for any other reason, as soon as a new data node is online, the Anti-Entropy service will copy the correct shards to the new replacement node. The time it takes for the copying to complete is determined by the number of shards to be copied and how much data is stored in each.
_View the [Replacing Data Nodes](/enterprise_influxdb/v1.9/guides/replacing-nodes/#replace-data-nodes-in-an-influxdb-enterprise-cluster) documentation for instructions on replacing data nodes in your InfluxDB Enterprise cluster._
### Replacing a machine that is running a data node
Perhaps you are replacing a machine that is being decommissioned, upgrading hardware, or something else entirely.
The Anti-Entropy service will automatically copy shards to the new machines.
Once you have successfully run the `influxd-ctl update-data` command, you are free
to shut down the retired node without causing any interruption to the cluster.
The Anti-Entropy process will continue copying the appropriate shards from the
remaining replicas in the cluster.
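For example (the addresses are placeholders), pointing the cluster at the replacement machine might look like:

```bash
# Replace the retired data node's address with the new machine's address
influxd-ctl update-data data-node-old:8088 data-node-new:8088
```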
### Fixing entropy in active shards
In rare cases, the currently active shard, or the shard to which new data is
currently being written, may find itself with inconsistent data.
Because the Anti-Entropy process can't write to hot shards, you must stop writes to the new
shard using the [`influxd-ctl truncate-shards` command](/enterprise_influxdb/v1.9/tools/influxd-ctl/#truncate-shards),
then add the inconsistent shard to the entropy repair queue:
```bash
# Truncate hot shards
influxd-ctl truncate-shards
# Show shards with entropy
influxd-ctl entropy show
Entropy
==========
ID Database Retention Policy Start End Expires Status
21179 statsdb 1hour 2018-06-06 12:00:00 +0000 UTC 2018-06-06 23:44:12 +0000 UTC 2018-12-06 00:00:00 +0000 UTC diff
# Add the inconsistent shard to the repair queue
influxd-ctl entropy repair 21179
```
## Troubleshooting
### Queued repairs are not being processed
The primary reason a repair in the repair queue isn't being processed is that
the shard went "hot" after the repair was queued.
The Anti-Entropy service only repairs cold shards or shards that are not currently being written to.
If the shard is hot, the Anti-Entropy service will wait until it goes cold again before performing the repair.
If the shard is "old" and writes to it are part of a backfill process, you simply
have to wait until the backfill process is finished. If the shard is the active
shard, run `truncate-shards` to stop writes to active shards. This process is
outlined [above](#fixing-entropy-in-active-shards).
### Anti-Entropy log messages
Below are common messages output by Anti-Entropy along with what they mean.
#### `Checking status`
Indicates that the Anti-Entropy process has begun the [status check process](#detecting-entropy).
#### `Skipped shards`
Indicates that the Anti-Entropy process has skipped a status check on shards because they are currently [hot](#hot-shards).
View File
@ -0,0 +1,238 @@
---
title: InfluxDB Anti-Entropy API
description: >
Monitor and repair shards on InfluxDB Enterprise data nodes using the InfluxDB Anti-Entropy API.
menu:
enterprise_influxdb_1_9:
name: Anti-entropy API
weight: 70
parent: Use Anti-entropy service
aliases:
- /enterprise_influxdb/v1.9/administration/anti-entropy-api/
---
>**Note:** The Anti-Entropy API is available from the meta nodes and only when the Anti-Entropy service is enabled in the data node configuration settings. For information on the configuration settings, see
> [Anti-Entropy settings](/enterprise_influxdb/v1.9/administration/config-data-nodes/#anti-entropy-ae-settings).
Use the [Anti-Entropy service](/enterprise_influxdb/v1.9/administration/anti-entropy) in InfluxDB Enterprise to monitor and repair entropy in data nodes and their shards. To access the Anti-Entropy API and work with this service, use [`influxd-ctl entropy`](/enterprise_influxdb/v1.9/tools/influxd-ctl/#entropy) (also available on meta nodes).
The base URL is:
```text
http://localhost:8086/shard-repair
```
## GET `/status`
### Description
Lists shards that are in an inconsistent state and in need of repair.
### Parameters
| Name | Located in | Description | Required | Type |
| ---- | ---------- | ----------- | -------- | ---- |
| `local` | query | Limits status check to local shards on the data node handling this request | No | boolean |
### Responses
#### Headers
| Header name | Value |
|-------------|--------------------|
| `Accept` | `application/json` |
#### Status codes
| Code | Description | Type |
| ---- | ----------- | ------ |
| `200` | `Successful operation` | object |
### Examples
#### cURL request
```bash
curl -X GET "http://localhost:8086/shard-repair/status?local=true" -H "accept: application/json"
```
#### Request URL
```text
http://localhost:8086/shard-repair/status?local=true
```
### Responses
Example of server response value:
```json
{
"shards": [
{
"id": "1",
"database": "ae",
"retention_policy": "autogen",
"start_time": "-259200000000000",
"end_time": "345600000000000",
"expires": "0",
"status": "diff"
},
{
"id": "3",
"database": "ae",
"retention_policy": "autogen",
"start_time": "62640000000000000",
"end_time": "63244800000000000",
"expires": "0",
"status": "diff"
}
],
"queued_shards": [
"3",
"5",
"9"
],
"processing_shards": [
"3",
"9"
]
}
```
## POST `/repair`
### Description
Queues the specified shard for repair of the inconsistent state.
### Parameters
| Name | Located in | Description | Required | Type |
| ---- | ---------- | ----------- | -------- | ---- |
| `id` | query | ID of shard to queue for repair | Yes | integer |
### Responses
#### Headers
| Header name | Value |
| ----------- | ----- |
| `Accept` | `application/json` |
#### Status codes
| Code | Description |
| ---- | ----------- |
| `204` | `Successful operation` |
| `400` | `Bad request` |
| `500` | `Internal server error` |
### Examples
#### cURL request
```bash
curl -X POST "http://localhost:8086/shard-repair/repair?id=1" -H "accept: application/json"
```
#### Request URL
```text
http://localhost:8086/shard-repair/repair?id=1
```
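Combining `/status` and `/repair`, the following sketch (again assuming `jq` and a data node on `localhost:8086`) queues every shard currently reporting entropy for repair:

```bash
# Queue each shard with detected entropy for repair.
for id in $(curl -s "http://localhost:8086/shard-repair/status" \
  -H "accept: application/json" | jq -r '.shards[].id'); do
  curl -X POST "http://localhost:8086/shard-repair/repair?id=${id}" \
    -H "accept: application/json"
done
```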
## POST `/cancel-repair`
### Description
Removes the specified shard from the repair queue on data nodes.
### Parameters
| Name | Located in | Description | Required | Type |
| ---- | ---------- | ----------- | -------- | ---- |
| `id` | query | ID of shard to remove from repair queue | Yes | integer |
| `local` | query | Only remove shard from repair queue on node receiving the request | No | boolean |
### Responses
#### Headers
| Header name | Value |
|-------------|--------------------|
| `Accept` | `application/json` |
#### Status codes
| Code | Description |
| ---- | ----------- |
| `204` | `Successful operation` |
| `400` | `Bad request` |
| `500` | `Internal server error` |
### Examples
#### cURL request
```bash
curl -X POST "http://localhost:8086/shard-repair/cancel-repair?id=1&local=false" -H "accept: application/json"
```
#### Request URL
```text
http://localhost:8086/shard-repair/cancel-repair?id=1&local=false
```
## Models
### ShardStatus
| Name | Type | Required |
| ---- | ---- | -------- |
| `id` | string | No |
| `database` | string | No |
| `retention_policy` | string | No |
| `start_time` | string | No |
| `end_time` | string | No |
| `expires` | string | No |
| `status` | string | No |
### Examples
```json
{
"shards": [
{
"id": "1",
"database": "ae",
"retention_policy": "autogen",
"start_time": "-259200000000000",
"end_time": "345600000000000",
"expires": "0",
"status": "diff"
},
{
"id": "3",
"database": "ae",
"retention_policy": "autogen",
"start_time": "62640000000000000",
"end_time": "63244800000000000",
"expires": "0",
"status": "diff"
}
],
"queued_shards": [
"3",
"5",
"9"
],
"processing_shards": [
"3",
"9"
]
}
```

View File

@ -0,0 +1,500 @@
---
title: Authentication and authorization in InfluxDB Enterprise
description: >
Set up and manage authentication and authorization in InfluxDB Enterprise.
menu:
enterprise_influxdb_1_9:
name: Manage authentication and authorization
weight: 20
parent: Administration
---
This document covers setting up and managing authentication and authorization in InfluxDB Enterprise.
- [Authentication](#authentication)
- [Set up Authentication](#set-up-authentication)
- [Authenticate Requests](#authenticate-requests)
- [Authorization](#authorization)
- [User Types and Privileges](#user-types-and-privileges)
- [User Management Commands](#user-management-commands)
- [HTTP Errors](#authentication-and-authorization-http-errors)
{{% note %}}
Authentication and authorization should not be relied upon to prevent access and protect data from malicious actors.
If additional security or compliance features are desired, InfluxDB Enterprise should be run behind a third-party service.
If InfluxDB Enterprise is being deployed on a publicly accessible endpoint, we strongly recommend authentication be enabled. Otherwise the data will be
publicly available to any unauthenticated user.
{{% /note %}}
## Authentication
The InfluxDB API and the [`influx` CLI](/enterprise_influxdb/v1.9/tools/influx-cli/),
which connects to the database using the API,
include built-in authentication based on user credentials.
When you enable authentication, InfluxDB Enterprise only executes HTTP requests that are sent with valid credentials.
{{% note %}}
Authentication only occurs at the HTTP request scope.
Plugins do not currently have the ability to authenticate requests and service
endpoints (for example, Graphite, collectd, etc.) are not authenticated.
{{% /note %}}
### Set up authentication
1. **Create at least one [admin user](#admin-users)**.
See the [authorization section](#authorization) for how to create an admin user.
{{% note %}}
If you enable authentication and have no users, InfluxDB Enterprise will **not** enforce authentication
and will only accept the [query](#user-management-commands) that creates a new admin user.
{{% /note %}}
InfluxDB Enterprise will enforce authentication once there is an admin user.
2. **Enable authentication in your configuration file**
by setting the `auth-enabled` option to `true` in the `[http]` section:
```toml
[http]
enabled = true
bind-address = ":8086"
auth-enabled = true # Set to true
log-enabled = true
write-tracing = false
pprof-enabled = true
pprof-auth-enabled = true
debug-pprof-enabled = false
ping-auth-enabled = true
https-enabled = true
https-certificate = "/etc/ssl/influxdb.pem"
```
{{% note %}}
If `pprof-enabled` is set to `true`, set `pprof-auth-enabled` and `ping-auth-enabled`
to `true` to require authentication on profiling and ping endpoints.
{{% /note %}}
3. **Restart InfluxDB Enterprise**.
Once restarted, InfluxDB Enterprise checks user credentials on every request and only
processes requests that have valid credentials for an existing user.
### Authenticate requests
#### Authenticate with the InfluxDB API
There are two options for authenticating with the [InfluxDB API](/enterprise_influxdb/v1.9/tools/api/).
If you authenticate with both Basic Authentication **and** the URL query parameters,
the user credentials specified in the query parameters take precedence.
The queries in the following examples assume that the user is an [admin user](#admin-users).
See the section on [authorization](#authorization) for the different user types, their privileges, and more on user management.
> **Note:** InfluxDB Enterprise redacts passwords when you enable authentication.
##### Authenticate with Basic Authentication
```bash
curl -G http://localhost:8086/query \
-u todd:influxdb4ever \
--data-urlencode "q=SHOW DATABASES"
```
##### Authenticate with query parameters in the URL or request body
Set `u` as the username and `p` as the password.
###### Credentials as query parameters
```bash
curl -G "http://localhost:8086/query?u=todd&p=influxdb4ever" \
--data-urlencode "q=SHOW DATABASES"
```
###### Credentials in the request body
```bash
curl -G http://localhost:8086/query \
--data-urlencode "u=todd" \
--data-urlencode "p=influxdb4ever" \
--data-urlencode "q=SHOW DATABASES"
```
#### Authenticate with the CLI
There are three options for authenticating with the [CLI](/influxdb/v1.8/tools/shell/).
##### Authenticate with environment variables
Use the `INFLUX_USERNAME` and `INFLUX_PASSWORD` environment variables to provide
authentication credentials to the `influx` CLI.
```bash
export INFLUX_USERNAME=todd
export INFLUX_PASSWORD=influxdb4ever
echo $INFLUX_USERNAME $INFLUX_PASSWORD
todd influxdb4ever
influx
Connected to http://localhost:8086 version 1.4.x
InfluxDB shell 1.4.x
```
##### Authenticate with CLI flags
Use the `-username` and `-password` flags to provide authentication credentials
to the `influx` CLI.
```bash
influx -username todd -password influxdb4ever
Connected to http://localhost:8086 version 1.4.x
InfluxDB shell 1.4.x
```
##### Authenticate with credentials in the influx shell
Start the `influx` shell and run the `auth` command.
Enter your username and password when prompted.
```bash
$ influx
Connected to http://localhost:8086 version 1.8.x
InfluxDB shell 1.8.x
> auth
username: todd
password:
>
```
#### Authenticate using JWT tokens
For a more secure alternative to using passwords, include JWT tokens with requests to the InfluxDB API.
This is currently only possible through the [InfluxDB HTTP API](/influxdb/v1.8/tools/api/).
1. [Add a shared secret in your InfluxDB Enterprise configuration file](#add-a-shared-secret-in-your-influxdb-enterprise-configuration-file)
2. [Generate your JWT token](#generate-your-jwt-token)
3. [Include the token in HTTP requests](#include-the-token-in-http-requests)
##### Add a shared secret in your InfluxDB Enterprise configuration file
InfluxDB Enterprise uses the shared secret to encode the JWT signature.
By default, `shared-secret` is set to an empty string, in which case no JWT authentication takes place.
Add a custom shared secret in your [InfluxDB configuration file](/influxdb/v1.8/administration/config/#shared-secret).
The longer the secret string, the more secure it is:
```toml
[http]
shared-secret = "my super secret pass phrase"
```
Alternatively, to avoid keeping your secret phrase as plain text in your InfluxDB configuration file, set the value with the `INFLUXDB_HTTP_SHARED_SECRET` environment variable.
##### Generate your JWT token
Use an authentication service to generate a secure token using your InfluxDB username, an expiration time, and your shared secret.
There are online tools, such as [https://jwt.io/](https://jwt.io/), that will do this for you.
The payload (or claims) of the token must be in the following format:
```json
{
"username": "myUserName",
"exp": 1516239022
}
```
- **username** - The name of your InfluxDB user.
- **exp** - The expiration time of the token in UNIX epoch time.
For increased security, keep token expiration periods short.
For testing, you can manually generate UNIX timestamps using [https://www.unixtimestamp.com/index.php](https://www.unixtimestamp.com/index.php).
Encode the payload using your shared secret.
You can do this with either a JWT library in your own authentication server or by hand at [https://jwt.io/](https://jwt.io/).
The generated token follows this format: `<header>.<payload>.<signature>`
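For illustration only (use a proper JWT library in production), the following sketch signs a token with `openssl`, assuming the example shared secret above, the user `todd`, and a five-minute expiry:

```bash
# Build a base64url-encoded HS256 JWT by hand (illustrative sketch).
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

secret="my super secret pass phrase"   # must match shared-secret in influxdb.conf
header=$(printf '{"alg":"HS256","typ":"JWT"}' | b64url)
payload=$(printf '{"username":"todd","exp":%s}' "$(($(date +%s) + 300))" | b64url)
signature=$(printf '%s.%s' "$header" "$payload" \
  | openssl dgst -sha256 -hmac "$secret" -binary | b64url)
echo "${header}.${payload}.${signature}"
```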
##### Include the token in HTTP requests
Include your generated token as part of the `Authorization` header in HTTP requests.
Use the `Bearer` authorization scheme:
```
Authorization: Bearer <myToken>
```
{{% note %}}
Only unexpired tokens will successfully authenticate.
Be sure your token has not expired.
{{% /note %}}
###### Example query request with JWT authentication
```bash
curl -G "http://localhost:8086/query?db=demodb" \
--data-urlencode "q=SHOW DATABASES" \
--header "Authorization: Bearer <header>.<payload>.<signature>"
```
## Authenticate Telegraf requests to InfluxDB
Authenticating [Telegraf](/{{< latest "telegraf" >}}/) requests to an InfluxDB instance with
authentication enabled requires some additional steps.
In the Telegraf configuration file (`/etc/telegraf/telegraf.conf`), uncomment
and edit the `username` and `password` settings.
```toml
###############################################################################
# OUTPUT PLUGINS #
###############################################################################
# ...
[[outputs.influxdb]]
# ...
username = "example-username" # Provide your username
password = "example-password" # Provide your password
# ...
```
Restart Telegraf and you're all set!
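To keep credentials out of the configuration file, you can instead rely on Telegraf's environment-variable substitution (for example, `username = "${INFLUX_USERNAME}"` in `telegraf.conf`). A sketch, assuming a systemd-based install whose unit reads `/etc/default/telegraf`:

```bash
# Provide credentials through the environment file read by Telegraf's systemd unit
# (hypothetical values; adjust the path for your distribution).
printf 'INFLUX_USERNAME=example-username\nINFLUX_PASSWORD=example-password\n' \
  | sudo tee -a /etc/default/telegraf
sudo systemctl restart telegraf
```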
## Authorization
Authorization is only enforced once you've [enabled authentication](#set-up-authentication).
By default, authentication is disabled, all credentials are silently ignored, and all users have all privileges.
### User types and privileges
#### Admin users
Admin users have `READ` and `WRITE` access to all databases and full access to the following administrative queries:
##### Database management
- `CREATE DATABASE`
- `DROP DATABASE`
- `DROP SERIES`
- `DROP MEASUREMENT`
- `CREATE RETENTION POLICY`
- `ALTER RETENTION POLICY`
- `DROP RETENTION POLICY`
- `CREATE CONTINUOUS QUERY`
- `DROP CONTINUOUS QUERY`
For more information about these commands, see [Database management](/influxdb/v1.8/query_language/manage-database/) and
[Continuous queries](/influxdb/v1.8/query_language/continuous_queries/).
##### User management
- Admin user management
- [`CREATE USER`](#user-management-commands)
- [`GRANT ALL PRIVILEGES`](#grant-administrative-privileges-to-an-existing-user)
- [`REVOKE ALL PRIVILEGES`](#revoke-administrative-privileges-from-an-admin-user)
- [`SHOW USERS`](#show-all-existing-users-and-their-admin-status)
- Non-admin user management:
- [`CREATE USER`](#user-management-commands)
- [`GRANT [READ,WRITE,ALL]`](#grant-read-write-or-all-database-privileges-to-an-existing-user)
- [`REVOKE [READ,WRITE,ALL]`](#revoke-read-write-or-all-database-privileges-from-an-existing-user)
- General user management:
- [`SET PASSWORD`](#reset-a-users-password)
- [`DROP USER`](#drop-a-user)
See [below](#user-management-commands) for a complete discussion of the user management commands.
#### Non-admin users
Non-admin users can have one of the following three privileges per database:
- `READ`
- `WRITE`
- `ALL` (both `READ` and `WRITE` access)
`READ`, `WRITE`, and `ALL` privileges are controlled per user per database. A new non-admin user has no access to any database until they are specifically [granted privileges to a database](#grant-read-write-or-all-database-privileges-to-an-existing-user) by an admin user.
Non-admin users can [`SHOW`](/influxdb/v1.8/query_language/explore-schema/#show-databases) the databases on which they have `READ` and/or `WRITE` permissions.
### User management commands
#### Admin user management
When you enable HTTP authentication, InfluxDB requires you to create at least one admin user before you can interact with the system.
```sql
CREATE USER admin WITH PASSWORD '<password>' WITH ALL PRIVILEGES
```
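Because this statement is accepted before authentication is enforced, you can also run it over the HTTP API. A minimal sketch, assuming a fresh instance on `localhost:8086` and a placeholder password:

```bash
# Create the first admin user via the HTTP API (placeholder password shown).
curl -XPOST "http://localhost:8086/query" \
  --data-urlencode "q=CREATE USER admin WITH PASSWORD 'changeit' WITH ALL PRIVILEGES"
```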
##### Create another admin user
```sql
CREATE USER <username> WITH PASSWORD '<password>' WITH ALL PRIVILEGES
```
{{% note %}}
Repeating the exact `CREATE USER` statement is idempotent.
If any values change, the database will return a duplicate user error.
```sql
> CREATE USER todd WITH PASSWORD '123456' WITH ALL PRIVILEGES
> CREATE USER todd WITH PASSWORD '123456' WITH ALL PRIVILEGES
> CREATE USER todd WITH PASSWORD '123' WITH ALL PRIVILEGES
ERR: user already exists
> CREATE USER todd WITH PASSWORD '123456'
ERR: user already exists
> CREATE USER todd WITH PASSWORD '123456' WITH ALL PRIVILEGES
>
```
{{% /note %}}
##### `GRANT` administrative privileges to an existing user
```sql
GRANT ALL PRIVILEGES TO <username>
```
##### `REVOKE` administrative privileges from an admin user
```sql
REVOKE ALL PRIVILEGES FROM <username>
```
##### `SHOW` all existing users and their admin status
```sql
SHOW USERS
```
###### CLI Example
```sql
> SHOW USERS
user admin
todd false
paul true
hermione false
dobby false
```
#### Non-admin user management
##### `CREATE` a new non-admin user
```sql
CREATE USER <username> WITH PASSWORD '<password>'
```
###### CLI example
```sql
> CREATE USER todd WITH PASSWORD 'influxdb41yf3'
> CREATE USER alice WITH PASSWORD 'wonder\'land'
> CREATE USER "rachel_smith" WITH PASSWORD 'asdf1234!'
> CREATE USER "monitoring-robot" WITH PASSWORD 'XXXXX'
> CREATE USER "$savyadmin" WITH PASSWORD 'm3tr1cL0v3r'
>
```
{{% note %}}
##### Important notes about providing user credentials
- The user value must be wrapped in double quotes if it starts with a digit, is an InfluxQL keyword, contains a hyphen, or includes any special characters (for example: `!@#$%^&*()-`).
- The password [string](/influxdb/v1.8/query_language/spec/#strings) must be wrapped in single quotes.
Do not include the single quotes when authenticating requests.
We recommend avoiding the single quote (`'`) and backslash (`\`) characters in passwords.
For passwords that include these characters, escape the special character with a backslash (e.g., `\'`) when creating the password and when submitting authentication requests.
- Repeating the exact `CREATE USER` statement is idempotent. If any values change, the database will return a duplicate user error. See GitHub Issue [#6890](https://github.com/influxdata/influxdb/pull/6890) for details.
###### CLI example
```sql
> CREATE USER "todd" WITH PASSWORD '123456'
> CREATE USER "todd" WITH PASSWORD '123456'
> CREATE USER "todd" WITH PASSWORD '123'
ERR: user already exists
> CREATE USER "todd" WITH PASSWORD '123456'
> CREATE USER "todd" WITH PASSWORD '123456' WITH ALL PRIVILEGES
ERR: user already exists
> CREATE USER "todd" WITH PASSWORD '123456'
>
```
{{% /note %}}
##### `GRANT` `READ`, `WRITE` or `ALL` database privileges to an existing user
```sql
GRANT [READ,WRITE,ALL] ON <database_name> TO <username>
```
CLI examples:
`GRANT` `READ` access to `todd` on the `NOAA_water_database` database:
```sql
> GRANT READ ON "NOAA_water_database" TO "todd"
>
```
`GRANT` `ALL` access to `todd` on the `NOAA_water_database` database:
```sql
> GRANT ALL ON "NOAA_water_database" TO "todd"
>
```
##### `REVOKE` `READ`, `WRITE`, or `ALL` database privileges from an existing user
```sql
REVOKE [READ,WRITE,ALL] ON <database_name> FROM <username>
```
CLI examples:
`REVOKE` `ALL` privileges from `todd` on the `NOAA_water_database` database:
```sql
> REVOKE ALL ON "NOAA_water_database" FROM "todd"
>
```
`REVOKE` `WRITE` privileges from `todd` on the `NOAA_water_database` database:
```sql
> REVOKE WRITE ON "NOAA_water_database" FROM "todd"
>
```
>**Note:** If a user with `ALL` privileges has `WRITE` privileges revoked, they are left with `READ` privileges, and vice versa.
##### `SHOW` a user's database privileges
```sql
SHOW GRANTS FOR <user_name>
```
CLI example:
```sql
> SHOW GRANTS FOR "todd"
database privilege
NOAA_water_database WRITE
another_database_name READ
yet_another_database_name ALL PRIVILEGES
one_more_database_name NO PRIVILEGES
```
#### General admin and non-admin user management
##### Reset a user's password
```sql
SET PASSWORD FOR <username> = '<password>'
```
CLI example:
```sql
> SET PASSWORD FOR "todd" = 'influxdb4ever'
>
```
{{% note %}}
The password [string](/influxdb/v1.8/query_language/spec/#strings) must be wrapped in single quotes.
Do not include the single quotes when authenticating requests.
We recommend avoiding the single quote (`'`) and backslash (`\`) characters in passwords.
For passwords that include these characters, escape the special character with a backslash (e.g., `\'`) when creating the password and when submitting authentication requests.
{{% /note %}}
##### `DROP` a user
```sql
DROP USER <username>
```
CLI example:
```sql
> DROP USER "todd"
>
```
## Authentication and authorization HTTP errors
Requests with no authentication credentials or incorrect credentials yield the `HTTP 401 Unauthorized` response.
Requests by unauthorized users yield the `HTTP 403 Forbidden` response.
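To quickly check which case you're hitting, compare status codes with and without credentials (hypothetical user shown):

```bash
# Expect 401 without credentials once authentication is enabled...
curl -s -o /dev/null -w "%{http_code}\n" -G "http://localhost:8086/query" \
  --data-urlencode "q=SHOW DATABASES"
# ...and 200 with valid credentials.
curl -s -o /dev/null -w "%{http_code}\n" -G -u todd:influxdb4ever \
  "http://localhost:8086/query" --data-urlencode "q=SHOW DATABASES"
```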

View File

@ -0,0 +1,471 @@
---
title: Back up and restore InfluxDB Enterprise clusters
description: >
Back up and restore InfluxDB enterprise clusters in case of unexpected data loss.
aliases:
- /enterprise/v1.8/guides/backup-and-restore/
menu:
enterprise_influxdb_1_9:
name: Back up and restore
weight: 80
parent: Administration
---
## Overview
When deploying InfluxDB Enterprise in production environments, you should have a strategy and procedures for backing up and restoring your InfluxDB Enterprise clusters to be prepared for unexpected data loss.
The tools provided by InfluxDB Enterprise can be used to:
- Provide disaster recovery in case of unexpected events
- Migrate data to new environments or servers
- Restore clusters to a consistent state
- Debug cluster issues
Depending on the volume of data to be protected and your application requirements, InfluxDB Enterprise offers two methods, described below, for managing backups and restoring data:
- [Backup and restore utilities](#backup-and-restore-utilities) — For most applications
- [Exporting and importing data](#exporting-and-importing-data) — For large datasets
> **Note:** Use the [`backup` and `restore` utilities (InfluxDB OSS 1.5 and later)](/{{< latest "influxdb" "v1" >}}/administration/backup_and_restore/) to:
>
> - Restore InfluxDB Enterprise backup files to InfluxDB OSS instances.
> - Back up InfluxDB OSS data that can be restored in InfluxDB Enterprise clusters.
## Backup and restore utilities
InfluxDB Enterprise supports backing up and restoring data in a cluster,
a single database and retention policy, and single shards.
Most InfluxDB Enterprise applications can use the backup and restore utilities.
Use the `backup` and `restore` utilities to back up and restore between `influxd`
instances with the same versions or with only minor version differences.
For example, you can back up from 1.7.3 and restore on 1.8.2.
### Backup utility
A backup creates a copy of the [metastore](/enterprise_influxdb/v1.9/concepts/glossary/#metastore) and [shard](/enterprise_influxdb/v1.9/concepts/glossary/#shard) data at that point in time and stores the copy in the specified directory.
Or, back up **only the cluster metastore** using the `-strategy only-meta` backup option. For more information, see [perform a metastore only backup](#perform-a-metastore-only-backup).
All backups include a manifest, a JSON file describing what was collected during the backup.
The filenames reflect the UTC timestamp of when the backup was created, for example:
- Metastore backup: `20060102T150405Z.meta` (includes usernames and passwords)
- Shard data backup: `20060102T150405Z.<shard_id>.tar.gz`
- Manifest: `20060102T150405Z.manifest`
Backups can be full, metastore only, or incremental, and they are incremental by default:
- **Full backup**: Creates a copy of the metastore and shard data.
- **Incremental backup**: Creates a copy of the metastore and shard data that have changed since the last incremental backup. If there are no existing incremental backups, the system automatically performs a complete backup.
- **Metastore only backup**: Creates a copy of the metastore data only.
Restoring different types of backups requires different syntax.
To prevent issues with [restore](#restore-utility), keep full backups, metastore only backups, and incremental backups in separate directories.
>**Note:** The backup utility copies all data through the meta node that is used to
execute the backup. As a result, performance of a backup and restore is typically limited by the network IO of the meta node. Increasing the resources available to this meta node (such as resizing the EC2 instance) can significantly improve backup and restore performance.
#### Syntax
```bash
influxd-ctl [global-options] backup [backup-options] <path-to-backup-directory>
```
> **Note:** The `influxd-ctl backup` command exits with `0` for success and `1` for failure. If the backup fails, output can be directed to a log file to troubleshoot.
##### Global options
See the [`influxd-ctl` documentation](/enterprise_influxdb/v1.9/tools/influxd-ctl/#global-options)
for a complete list of the global `influxd-ctl` options.
##### Backup options
- `-db <string>`: name of the single database to back up
- `-from <TCP-address>`: the data node TCP address to prefer when backing up
- `-strategy`: select the backup strategy to apply during backup
  - `incremental`: _**(Default)**_ Back up only data added since the previous backup.
  - `full`: Perform a full backup. Same as `-full`.
  - `only-meta`: Perform a backup of metadata only (users, roles, databases, continuous queries, and retention policies). Shards are not exported.
- `-full`: perform a full backup. Deprecated in favor of `-strategy=full`.
- `-rp <string>`: the name of the single retention policy to back up (must specify `-db` with `-rp`)
- `-shard <unit>`: the ID of the single shard to back up
### Backup examples
Store the following incremental backups in different directories.
The first backup specifies `-db myfirstdb` and the second backup specifies
different options: `-db myfirstdb` and `-rp autogen`.
```bash
influxd-ctl backup -db myfirstdb ./myfirstdb-allrp-backup
influxd-ctl backup -db myfirstdb -rp autogen ./myfirstdb-autogen-backup
```
Store the following incremental backups in the same directory.
Both backups specify the same `-db` flag and the same database.
```bash
influxd-ctl backup -db myfirstdb ./myfirstdb-allrp-backup
influxd-ctl backup -db myfirstdb ./myfirstdb-allrp-backup
```
#### Perform an incremental backup
Perform an incremental backup into the current directory with the command below.
If there are any existing backups in the current directory, the system performs an incremental backup.
If there aren't any existing backups in the current directory, the system performs a backup of all data in InfluxDB.
```bash
# Syntax
influxd-ctl backup .
# Example
$ influxd-ctl backup .
Backing up meta data... Done. 421 bytes transferred
Backing up node 7ba671c7644b:8088, db telegraf, rp autogen, shard 4... Done. Backed up in 903.539567ms, 307712 bytes transferred
Backing up node bf5a5f73bad8:8088, db _internal, rp monitor, shard 1... Done. Backed up in 138.694402ms, 53760 bytes transferred
Backing up node 9bf0fa0c302a:8088, db _internal, rp monitor, shard 2... Done. Backed up in 101.791148ms, 40448 bytes transferred
Backing up node 7ba671c7644b:8088, db _internal, rp monitor, shard 3... Done. Backed up in 144.477159ms, 39424 bytes transferred
Backed up to . in 1.293710883s, transferred 441765 bytes
$ ls
20160803T222310Z.manifest 20160803T222310Z.s1.tar.gz 20160803T222310Z.s3.tar.gz
20160803T222310Z.meta 20160803T222310Z.s2.tar.gz 20160803T222310Z.s4.tar.gz
```
#### Perform a full backup
Perform a full backup into a specific directory with the command below.
The directory must already exist.
```bash
# Syntax
influxd-ctl backup -full <path-to-backup-directory>
# Example
$ influxd-ctl backup -full backup_dir
Backing up meta data... Done. 481 bytes transferred
Backing up node <hostname>:8088, db _internal, rp monitor, shard 1... Done. Backed up in 33.207375ms, 238080 bytes transferred
Backing up node <hostname>:8088, db telegraf, rp autogen, shard 2... Done. Backed up in 15.184391ms, 95232 bytes transferred
Backed up to backup_dir in 51.388233ms, transferred 333793 bytes
$ ls backup_dir
20170130T184058Z.manifest
20170130T184058Z.meta
20170130T184058Z.s1.tar.gz
20170130T184058Z.s2.tar.gz
```
#### Perform an incremental backup on a single database
Point at a remote meta server and back up only one database into a given directory (the directory must already exist):
```bash
# Syntax
influxd-ctl -bind <metahost>:8091 backup -db <db-name> <path-to-backup-directory>
# Example
$ influxd-ctl -bind 2a1b7a338184:8091 backup -db telegraf ./telegrafbackup
Backing up meta data... Done. 318 bytes transferred
Backing up node 7ba671c7644b:8088, db telegraf, rp autogen, shard 4... Done. Backed up in 997.168449ms, 399872 bytes transferred
Backed up to ./telegrafbackup in 1.002358077s, transferred 400190 bytes
$ ls ./telegrafbackup
20160803T222811Z.manifest 20160803T222811Z.meta 20160803T222811Z.s4.tar.gz
```
#### Perform a metastore only backup
Perform a metastore-only backup into a specific directory with the command below.
The directory must already exist.
```bash
# Syntax
influxd-ctl backup -strategy only-meta <path-to-backup-directory>
# Example
$ influxd-ctl backup -strategy only-meta backup_dir
Backing up meta data... Done. 481 bytes transferred
Backed up to backup_dir in 51.388233ms, transferred 481 bytes
$ ls backup_dir
20170130T184058Z.manifest
20170130T184058Z.meta
```
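Because incremental backups only copy data added since the previous backup, they lend themselves to a scheduled job. A minimal sketch of a cron-able script, assuming an existing `/backups/influxdb` directory and using the exit codes noted above for logging:

```bash
#!/usr/bin/env bash
# Nightly incremental backup with basic success/failure logging (illustrative sketch).
set -euo pipefail
BACKUP_DIR=/backups/influxdb          # assumed path; must already exist
LOG=/var/log/influxdb-backup.log      # assumed log location

if influxd-ctl backup "$BACKUP_DIR" >> "$LOG" 2>&1; then
  echo "$(date -u +%FT%TZ) backup OK" >> "$LOG"
else
  echo "$(date -u +%FT%TZ) backup FAILED" >> "$LOG"
  exit 1
fi
```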
### Restore utility
#### Disable anti-entropy (AE) before restoring a backup
> Before restoring a backup, stop the anti-entropy (AE) service (if enabled) on **each data node in the cluster, one at a time**.
>
> 1. Stop the `influxd` service.
> 2. Set `[anti-entropy].enabled` to `false` in the data node configuration file (by default, `influxdb.conf`).
> 3. Restart the `influxd` service and wait for the data node to receive read and write requests and for the [hinted handoff queue](/enterprise_influxdb/v1.9/concepts/clustering/#hinted-handoff) to drain.
> 4. Once AE is disabled on all data nodes and each node returns to a healthy state, you're ready to restore the backup. For details on how to restore your backup, see examples below.
> 5. After restoring the backup, restart AE services on each data node.
##### Restore a backup
Restore a backup to an existing cluster or a new cluster.
By default, a restore writes to databases using the backed-up data's [replication factor](/enterprise_influxdb/v1.9/concepts/glossary/#replication-factor).
An alternate replication factor can be specified with the `-newrf` flag when restoring a single database.
Restore supports both `-full` backups and incremental backups; the syntax for
a restore differs depending on the backup type.
##### Restores from an existing cluster to a new cluster
Restores from an existing cluster to a new cluster restore the existing cluster's
[users](/enterprise_influxdb/v1.9/concepts/glossary/#user), roles,
[databases](/enterprise_influxdb/v1.9/concepts/glossary/#database), and
[continuous queries](/enterprise_influxdb/v1.9/concepts/glossary/#continuous-query-cq) to
the new cluster.
They do not restore Kapacitor [subscriptions](/enterprise_influxdb/v1.9/concepts/glossary/#subscription).
In addition, restores to a new cluster drop any data in the new cluster's
`_internal` database and begin writing to that database anew.
The restore does not write the existing cluster's `_internal` database to
the new cluster.
#### Syntax to restore from incremental and metadata backups
Use the syntax below to restore an incremental or metadata backup to a new cluster or an existing cluster.
**The existing cluster must contain no data in the affected databases.**
Performing a restore from an incremental backup requires the path to the incremental backup's directory.
```bash
influxd-ctl [global-options] restore [restore-options] <path-to-backup-directory>
```
{{% note %}}
The existing cluster can have data in the `_internal` database (the database InfluxDB creates if
[internal monitoring](/platform/monitoring/influxdata-platform/tools/measurements-internal) is enabled).
The system automatically drops the `_internal` database when it performs a complete restore.
{{% /note %}}
##### Global options
See the [`influxd-ctl` documentation](/enterprise_influxdb/v1.9/tools/influxd-ctl/#global-options)
for a complete list of the global `influxd-ctl` options.
##### Restore options
- `-db <string>`: the name of the single database to restore
- `-list`: shows the contents of the backup
- `-newdb <string>`: the name of the new database to restore to (must specify with `-db`)
- `-newrf <int>`: the new replication factor to restore to (this is capped to the number of data nodes in the cluster)
- `-newrp <string>`: the name of the new retention policy to restore to (must specify with `-rp`)
- `-rp <string>`: the name of the single retention policy to restore
- `-shard <unit>`: the shard ID to restore
#### Syntax to restore from a full or manifest only backup
Use the syntax below to restore a full or manifest only backup to a new cluster or an existing cluster.
Note that the existing cluster must contain no data in the affected databases.*
Performing a restore requires the `-full` flag and the path to the backup's manifest file.
```bash
influxd-ctl [global-options] restore [options] -full <path-to-manifest-file>
```
\* The existing cluster can have data in the `_internal` database, the database
that the system creates by default.
The system automatically drops the `_internal` database when it performs a
complete restore.
##### Global options
See the [`influxd-ctl` documentation](/enterprise_influxdb/v1.9/tools/influxd-ctl/#global-options)
for a complete list of the global `influxd-ctl` options.
##### Restore options
- `-db <string>`: the name of the single database to restore
- `-list`: shows the contents of the backup
- `-newdb <string>`: the name of the new database to restore to (must specify with `-db`)
- `-newrf <int>`: the new replication factor to restore to (this is capped to the number of data nodes in the cluster)
- `-newrp <string>`: the name of the new retention policy to restore to (must specify with `-rp`)
- `-rp <string>`: the name of the single retention policy to restore
- `-shard <unit>`: the shard ID to restore
#### Examples
##### Restore from an incremental backup
```bash
# Syntax
influxd-ctl restore <path-to-backup-directory>
# Example
$ influxd-ctl restore my-incremental-backup/
Using backup directory: my-incremental-backup/
Using meta backup: 20170130T231333Z.meta
Restoring meta data... Done. Restored in 21.373019ms, 1 shards mapped
Restoring db telegraf, rp autogen, shard 2 to shard 2...
Copying data to <hostname>:8088... Copying data to <hostname>:8088... Done. Restored shard 2 into shard 2 in 61.046571ms, 588800 bytes transferred
Restored from my-incremental-backup/ in 83.892591ms, transferred 588800 bytes
```
##### Restore from a metadata backup
In this example, the `restore` command restores a metadata backup stored
in the `metadata-backup/` directory.
```bash
# Syntax
influxd-ctl restore <path-to-backup-directory>
# Example
$ influxd-ctl restore metadata-backup/
Using backup directory: metadata-backup/
Using meta backup: 20200101T000000Z.meta
Restoring meta data... Done. Restored in 21.373019ms, 1 shards mapped
Restored from metadata-backup/ in 19.2311ms, transferred 588 bytes
```
##### Restore from a `-full` backup
```bash
# Syntax
influxd-ctl restore -full <path-to-manifest-file>
# Example
$ influxd-ctl restore -full my-full-backup/20170131T020341Z.manifest
Using manifest: my-full-backup/20170131T020341Z.manifest
Restoring meta data... Done. Restored in 9.585639ms, 1 shards mapped
Restoring db telegraf, rp autogen, shard 2 to shard 2...
Copying data to <hostname>:8088... Copying data to <hostname>:8088... Done. Restored shard 2 into shard 2 in 48.095082ms, 569344 bytes transferred
Restored from my-full-backup in 58.58301ms, transferred 569344 bytes
```
{{% note %}}
Restoring from a full backup **does not** restore metadata.
To restore metadata, [restore a metadata backup](#restore-from-a-metadata-backup) separately.
{{% /note %}}
##### Restore from an incremental backup for a single database and give the database a new name
```bash
# Syntax
influxd-ctl restore -db <src> -newdb <dest> <path-to-backup-directory>
# Example
$ influxd-ctl restore -db telegraf -newdb restored_telegraf my-incremental-backup/
Using backup directory: my-incremental-backup/
Using meta backup: 20170130T231333Z.meta
Restoring meta data... Done. Restored in 8.119655ms, 1 shards mapped
Restoring db telegraf, rp autogen, shard 2 to shard 4...
Copying data to <hostname>:8088... Copying data to <hostname>:8088... Done. Restored shard 2 into shard 4 in 57.89687ms, 588800 bytes transferred
Restored from my-incremental-backup/ in 66.715524ms, transferred 588800 bytes
```
##### Restore from an incremental backup for a database and merge that database into an existing database
Your `telegraf` database was mistakenly dropped, but you have a recent backup so you've only lost a small amount of data.
If Telegraf is still running, it will recreate the `telegraf` database shortly after the database is dropped.
You might try to directly restore your `telegraf` backup just to find that you can't restore:
```bash
$ influxd-ctl restore -db telegraf my-incremental-backup/
Using backup directory: my-incremental-backup/
Using meta backup: 20170130T231333Z.meta
Restoring meta data... Error.
restore: operation exited with error: problem setting snapshot: database already exists
```
To work around this, you can restore your telegraf backup into a new database by specifying the `-db` flag for the source and the `-newdb` flag for the new destination:
```bash
$ influxd-ctl restore -db telegraf -newdb restored_telegraf my-incremental-backup/
Using backup directory: my-incremental-backup/
Using meta backup: 20170130T231333Z.meta
Restoring meta data... Done. Restored in 19.915242ms, 1 shards mapped
Restoring db telegraf, rp autogen, shard 2 to shard 7...
Copying data to <hostname>:8088... Copying data to <hostname>:8088... Done. Restored shard 2 into shard 7 in 36.417682ms, 588800 bytes transferred
Restored from my-incremental-backup/ in 56.623615ms, transferred 588800 bytes
```
Then, in the [`influx` client](/enterprise_influxdb/v1.9/tools/influx-cli/use-influx/), use an [`INTO` query](/enterprise_influxdb/v1.9/query_language/explore-data/#the-into-clause) to copy the data from the new database into the existing `telegraf` database:
```bash
$ influx
> USE restored_telegraf
Using database restored_telegraf
> SELECT * INTO telegraf..:MEASUREMENT FROM /.*/ GROUP BY *
name: result
------------
time written
1970-01-01T00:00:00Z 471
```
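After verifying that the data was copied, you can drop the temporary database, for example:

```bash
influx -execute 'DROP DATABASE restored_telegraf'
```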
#### Common issues with restore
##### Restore writes information not part of the original backup
If a [restore from an incremental backup](#syntax-to-restore-from-incremental-and-metadata-backups)
does not limit the restore to the same database, retention policy, and shard specified by the backup command,
the restore may appear to restore information that was not part of the original backup.
Backups consist of a shard data backup and a metastore backup.
The **shard data backup** contains the actual time series data: the measurements, tags, fields, and so on.
The **metastore backup** contains user information, database names, retention policy names, shard metadata, continuous queries, and subscriptions.
When the system creates a backup, the backup includes:
* the relevant shard data determined by the specified backup options
* all of the metastore information in the cluster regardless of the specified backup options
Because a backup always includes the complete metastore information, a restore that doesn't include the same options specified by the backup command may appear to restore data that were not targeted by the original backup.
The unintended data, however, include only the metastore information, not the shard data associated with that metastore information.
##### Restore a backup created prior to version 1.2.0
InfluxDB Enterprise introduced incremental backups in version 1.2.0.
To restore a backup created prior to version 1.2.0, be sure to follow the syntax
for [restoring from a full backup](#restore-from-a-full-backup).
## Exporting and importing data
For most InfluxDB Enterprise applications, the [backup and restore utilities](#backup-and-restore-utilities) provide the tools you need for your backup and restore strategy. However, in some cases, the standard backup and restore utilities may not adequately handle the volumes of data in your application.
As an alternative to the standard backup and restore utilities, use the InfluxDB `influx_inspect export` and `influx -import` commands to create backup and restore procedures for your disaster recovery and backup strategy. These commands can be executed manually or included in shell scripts that run the export and import operations at scheduled intervals (example below).
### Exporting data
Use the [`influx_inspect export` command](/{{< latest "influxdb" "v1" >}}/tools/influx_inspect#export) to export data in line protocol format from your InfluxDB Enterprise cluster. Options include:
- Exporting all, or specific, databases
- Filtering with starting and ending timestamps
- Using gzip compression for smaller files and faster exports
For details on optional settings and usage, see [`influx_inspect export` command](/{{< latest "influxdb" "v1" >}}/tools/influx_inspect#export).
In the following example, the database is exported filtered to include only one day and compressed for optimal speed and file size.
```bash
influx_inspect export -database myDB -compress -start 2019-05-19T00:00:00.000Z -end 2019-05-19T23:59:59.999Z
```
### Importing data
After exporting the data in line protocol format, you can import the data using the [`influx -import` CLI command](/{{< latest "influxdb" "v1" >}}/tools/use-influx/#import).
In the following example, the compressed data file is imported into the specified database.
```bash
influx -import -path=<path-to-export-file> -database myDB -compressed
```
For details on using the `influx -import` command, see [Import data from a file with -import](/{{< latest "influxdb" "v1" >}}/tools/use-influx/#import-data-from-a-file-with-import).
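As a rough sketch of how the export half of such a scheduled job might look (assuming GNU `date`, a 15-minute window, and an existing `/exports` directory):

```bash
#!/usr/bin/env bash
# Export the last 15 minutes of myDB as compressed line protocol (illustrative sketch).
set -euo pipefail
END=$(date -u +%Y-%m-%dT%H:%M:%SZ)
START=$(date -u -d '15 minutes ago' +%Y-%m-%dT%H:%M:%SZ)  # GNU date assumed
influx_inspect export -database myDB -compress \
  -start "$START" -end "$END" \
  -out "/exports/myDB_${END}.lp.gz"
```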
### Example
For an example of using the export and import approach for disaster recovery, see the Capital One presentation from InfluxDays 2019, ["Architecting for Disaster Recovery"](https://www.youtube.com/watch?v=LyQDhSdnm4A). In this presentation, Capital One discusses the following:
- Exporting data every 15 minutes from an active cluster to an AWS S3 bucket.
- Replicating the export file in the S3 bucket using the AWS S3 copy command.
- Importing data every 15 minutes from the AWS S3 bucket to a cluster available for disaster recovery.
- Advantages of the export-import approach over the standard backup and restore utilities for large volumes of data.
- Managing users and scheduled exports and imports with a custom administration tool.

View File

@ -0,0 +1,33 @@
---
title: Manage InfluxDB Enterprise clusters
description: >
Use the `influxd-ctl` and `influx` command line tools to manage InfluxDB Enterprise clusters and data.
aliases:
- /enterprise/v1.8/features/cluster-commands/
- /enterprise_influxdb/v1.9/features/cluster-commands/
menu:
enterprise_influxdb_1_9:
name: Manage clusters
weight: 40
parent: Administration
---
Use the following tools to manage and interact with your InfluxDB Enterprise clusters:
- To manage clusters and nodes, back up and restore data, and rebalance clusters, use the [`influxd-ctl` cluster management utility](#influxd-ctl-cluster-management-utility)
- To write and query data, use the [`influx` command line interface (CLI)](#influx-command-line-interface-cli)
## `influxd-ctl` cluster management utility
The [`influxd-ctl`](/enterprise_influxdb/v1.9/tools/influxd-ctl/) utility provides commands for managing your InfluxDB Enterprise clusters.
Use the `influxd-ctl` cluster management utility to manage your cluster nodes, back up and restore data, and rebalance clusters.
The `influxd-ctl` utility is available on all [meta nodes](/enterprise_influxdb/v1.9/concepts/glossary/#meta-node).
For more information, see [`influxd-ctl`](/enterprise_influxdb/v1.9/tools/influxd-ctl/).
## `influx` command line interface (CLI)
Use the `influx` command line interface (CLI) to write data to your cluster, query data interactively, and view query output in different formats.
The `influx` CLI is available on all [data nodes](/enterprise_influxdb/v1.9/concepts/glossary/#data-node).
See [InfluxDB command line interface (CLI/shell)](/enterprise_influxdb/v1.9/tools/use-influx/) for details on using the `influx` command line interface.

File diff suppressed because it is too large

View File

@ -0,0 +1,286 @@
---
title: Configure InfluxDB Enterprise meta nodes
description: >
  Configure InfluxDB Enterprise meta node settings and environment variables.
menu:
enterprise_influxdb_1_9:
name: Configure meta nodes
weight: 30
parent: Administration
---
* [Meta node configuration settings](#meta-node-configuration-settings)
* [Global options](#global-options)
* [Enterprise license `[enterprise]`](#enterprise)
* [Meta node `[meta]`](#meta)
* [TLS `[tls]`](#tls-settings)
## Meta node configuration settings
### Global options
#### `reporting-disabled = false`
InfluxData, the company, relies on reported data from running nodes primarily to
track the adoption rates of different InfluxDB versions.
These data help InfluxData support the continuing development of InfluxDB.
The `reporting-disabled` option toggles the reporting of data every 24 hours to
`usage.influxdata.com`.
Each report includes a randomly-generated identifier, OS, architecture,
InfluxDB version, and the number of databases, measurements, and unique series.
To disable reporting, set this option to `true`.
> **Note:** No data from user databases are ever transmitted.
#### `bind-address = ""`
This setting is not intended for use.
It will be removed in future versions.
#### `hostname = ""`
The hostname of the [meta node](/enterprise_influxdb/v1.9/concepts/glossary/#meta-node).
This must be resolvable and reachable by all other members of the cluster.
Environment variable: `INFLUXDB_HOSTNAME`
-----
### Enterprise license settings
#### `[enterprise]`
The `[enterprise]` section contains the parameters for the meta node's
registration with the [InfluxData portal](https://portal.influxdata.com/).
#### `license-key = ""`
The license key created for you on [InfluxData portal](https://portal.influxdata.com).
The meta node transmits the license key to
[portal.influxdata.com](https://portal.influxdata.com) over port 80 or port 443
and receives a temporary JSON license file in return.
The server caches the license file locally.
If your server cannot communicate with [https://portal.influxdata.com](https://portal.influxdata.com), you must use the [`license-path` setting](#license-path).
Use the same key for all nodes in the same cluster.
{{% warn %}}The `license-key` and `license-path` settings are mutually exclusive and one must remain set to the empty string.
{{% /warn %}}
> **Note:** You must restart meta nodes to update your configuration. For more information, see how to [renew or update your license key](/enterprise_influxdb/v1.9/administration/renew-license/).
Environment variable: `INFLUXDB_ENTERPRISE_LICENSE_KEY`
#### `license-path = ""`
The local path to the permanent JSON license file that you received from InfluxData
for instances that do not have access to the internet.
To obtain a license file, contact [sales@influxdb.com](mailto:sales@influxdb.com).
The license file must be saved on every server in the cluster, including meta nodes
and data nodes.
The file contains the JSON-formatted license, and must be readable by the `influxdb` user.
Each server in the cluster independently verifies its license.
{{% warn %}}
The `license-key` and `license-path` settings are mutually exclusive and one must remain set to the empty string.
{{% /warn %}}
> **Note:** You must restart meta nodes to update your configuration. For more information, see how to [renew or update your license key](/enterprise_influxdb/v1.9/administration/renew-license/).
Environment variable: `INFLUXDB_ENTERPRISE_LICENSE_PATH`
-----
### Meta node settings
#### `[meta]`
#### `dir = "/var/lib/influxdb/meta"`
The directory where cluster meta data is stored.
Environment variable: `INFLUXDB_META_DIR`
#### `bind-address = ":8089"`
The bind address (port) used for meta node communication.
For simplicity, InfluxData recommends using the same port on all meta nodes,
but this is not necessary.
Environment variable: `INFLUXDB_META_BIND_ADDRESS`
#### `http-bind-address = ":8091"`
The default address to bind the API to.
Environment variable: `INFLUXDB_META_HTTP_BIND_ADDRESS`
#### `https-enabled = false`
Determines whether meta nodes use HTTPS to communicate with each other. By default, HTTPS is disabled. We strongly recommend enabling HTTPS.
To enable HTTPS, set `https-enabled` to `true`, specify the path to the SSL certificate with `https-certificate`, and specify the path to the SSL private key with `https-private-key`.
Environment variable: `INFLUXDB_META_HTTPS_ENABLED`
#### `https-certificate = ""`
If HTTPS is enabled, specify the path to the SSL certificate.
Use either:
* PEM-encoded bundle with both the certificate and key (`[bundled-crt-and-key].pem`)
* Certificate only (`[certificate].crt`)
Environment variable: `INFLUXDB_META_HTTPS_CERTIFICATE`
#### `https-private-key = ""`
If HTTPS is enabled, specify the path to the SSL private key.
Use either:
* PEM-encoded bundle with both the certificate and key (`[bundled-crt-and-key].pem`)
* Private key only (`[private-key].key`)
Environment variable: `INFLUXDB_META_HTTPS_PRIVATE_KEY`
#### `https-insecure-tls = false`
Whether meta nodes will skip certificate validation when communicating with each other over HTTPS.
This is useful when testing with self-signed certificates.
Environment variable: `INFLUXDB_META_HTTPS_INSECURE_TLS`
#### `data-use-tls = false`
Whether to use TLS to communicate with data nodes.
#### `data-insecure-tls = false`
Whether meta nodes will skip certificate validation when communicating with data nodes over TLS.
This is useful when testing with self-signed certificates.
#### `gossip-frequency = "5s"`
The default frequency with which the node will gossip its known announcements.
#### `announcement-expiration = "30s"`
The default length of time an announcement is kept before it is considered too old.
#### `retention-autocreate = true`
Automatically create a default retention policy when creating a database.
#### `election-timeout = "1s"`
The amount of time in candidate state without a leader before we attempt an election.
#### `heartbeat-timeout = "1s"`
The amount of time in follower state without a leader before we attempt an election.
#### `leader-lease-timeout = "500ms"`
The leader lease timeout is the amount of time a Raft leader will remain leader
if it does not hear from a majority of nodes.
After the timeout the leader steps down to the follower state.
Clusters with high latency between nodes may want to increase this parameter to
avoid unnecessary Raft elections.
Environment variable: `INFLUXDB_META_LEADER_LEASE_TIMEOUT`
#### `commit-timeout = "50ms"`
The commit timeout is the amount of time a Raft node will tolerate between
commands before issuing a heartbeat to tell the leader it is alive.
The default setting should work for most systems.
Environment variable: `INFLUXDB_META_COMMIT_TIMEOUT`
#### `consensus-timeout = "30s"`
Timeout waiting for consensus before getting the latest Raft snapshot.
Environment variable: `INFLUXDB_META_CONSENSUS_TIMEOUT`
#### `cluster-tracing = false`
Cluster tracing toggles the logging of Raft logs on Raft nodes.
Enable this setting when debugging Raft consensus issues.
Environment variable: `INFLUXDB_META_CLUSTER_TRACING`
#### `logging-enabled = true`
Meta logging toggles the logging of messages from the meta service.
Environment variable: `INFLUXDB_META_LOGGING_ENABLED`
#### `pprof-enabled = true`
Enables the `/debug/pprof` endpoint for troubleshooting.
To disable, set the value to `false`.
Environment variable: `INFLUXDB_META_PPROF_ENABLED`
#### `lease-duration = "1m0s"`
The default duration of the leases that data nodes acquire from the meta nodes.
Leases automatically expire after the `lease-duration` is met.
Leases ensure that only one data node is running something at a given time.
For example, [continuous queries](/enterprise_influxdb/v1.9/concepts/glossary/#continuous-query-cq)
(CQs) use a lease so that all data nodes aren't running the same CQs at once.
For more details about `lease-duration` and its impact on continuous queries, see
[Configuration and operational considerations on a cluster](/enterprise_influxdb/v1.9/features/clustering-features/#configuration-and-operational-considerations-on-a-cluster).
Environment variable: `INFLUXDB_META_LEASE_DURATION`
#### `auth-enabled = false`
If true, HTTP endpoints require authentication.
This setting must have the same value as the data nodes' `meta.meta-auth-enabled` configuration.
#### `ldap-allowed = false`
Whether LDAP is allowed to be set.
If true, you will need to use `influxd ldap set-config` and set `enabled = true` to use LDAP authentication.
#### `shared-secret = ""`
The shared secret to be used by the public API for creating custom JWT authentication.
If you use this setting, set [`auth-enabled`](#auth-enabled-false) to `true`.
Environment variable: `INFLUXDB_META_SHARED_SECRET`
#### `internal-shared-secret = ""`
The shared secret used by the internal API for JWT authentication for
inter-node communication within the cluster.
Set this to a long pass phrase.
This value must be the same value as the
[`[meta] meta-internal-shared-secret`](/enterprise_influxdb/v1.9/administration/config-data-nodes#meta-internal-shared-secret) in the data node configuration file.
To use this option, set [`auth-enabled`](#auth-enabled-false) to `true`.
Environment variable: `INFLUXDB_META_INTERNAL_SHARED_SECRET`
### TLS settings
For more information, see [TLS settings for data nodes](/enterprise_influxdb/v1.9/administration/config-data-nodes#tls-settings).
#### Recommended "modern compatibility" cipher settings
```toml
ciphers = [ "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305",
"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305",
"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
]
min-version = "tls1.2"
max-version = "tls1.2"
```

View File

@ -0,0 +1,182 @@
---
title: Configure InfluxDB Enterprise clusters
description: >
  Learn about global options, meta node options, data node options, and other InfluxDB Enterprise configuration settings.
aliases:
- /enterprise/v1.8/administration/configuration/
menu:
enterprise_influxdb_1_9:
name: Configure clusters
weight: 10
parent: Administration
---
This page contains general information about configuring InfluxDB Enterprise clusters.
For complete listings and descriptions of the configuration settings, see:
* [Configure data nodes](/enterprise_influxdb/v1.9/administration/config-data-nodes)
* [Configure meta nodes](/enterprise_influxdb/v1.9/administration/config-meta-nodes)
## Use configuration files
### Display the default configurations
The following commands print out a TOML-formatted configuration with all
available options set to their default values.
#### Meta node configuration
```bash
influxd-meta config
```
#### Data node configuration
```bash
influxd config
```
#### Create a configuration file
On POSIX systems, generate a new configuration file by redirecting the output
of the command to a file.
New meta node configuration file:
```bash
influxd-meta config > /etc/influxdb/influxdb-meta-generated.conf
```
New data node configuration file:
```bash
influxd config > /etc/influxdb/influxdb-generated.conf
```
Preserve custom settings from older configuration files when generating a new
configuration file with the `-config` option.
For example, this overwrites any default configuration settings in the output
file (`/etc/influxdb/influxdb.conf.new`) with the configuration settings from
the file (`/etc/influxdb/influxdb.conf.old`) passed to `-config`:
```bash
influxd config -config /etc/influxdb/influxdb.conf.old > /etc/influxdb/influxdb.conf.new
```
#### Launch the process with a configuration file
There are two ways to launch the meta or data processes using your customized
configuration file.
* Point the process to the desired configuration file with the `-config` option.

    To start the meta node process with `/etc/influxdb/influxdb-meta-generated.conf`:

        influxd-meta -config /etc/influxdb/influxdb-meta-generated.conf

    To start the data node process with `/etc/influxdb/influxdb-generated.conf`:

        influxd -config /etc/influxdb/influxdb-generated.conf

* Set the environment variable `INFLUXDB_CONFIG_PATH` to the path of your
  configuration file and start the process.

    To set the `INFLUXDB_CONFIG_PATH` environment variable and launch the data
    process using `INFLUXDB_CONFIG_PATH` for the configuration file path:

        export INFLUXDB_CONFIG_PATH=/root/influxdb.generated.conf
        echo $INFLUXDB_CONFIG_PATH
        /root/influxdb.generated.conf
        influxd
If set, the command line `-config` path overrides any environment variable path.
If you do not supply a configuration file, InfluxDB uses an internal default
configuration (equivalent to the output of `influxd config` and `influxd-meta
config`).
{{% warn %}} As of version 1.3, if no configuration file is specified, the `influxd-meta` binary checks the `INFLUXDB_META_CONFIG_PATH` environment variable.
If that environment variable is set, its value is used as the configuration file path.
If unset, the binary checks the `~/.influxdb` and `/etc/influxdb` folders for an `influxdb-meta.conf` file and loads the first one it finds as the configuration file.
<br>
This matches the behavior that the open source and data node versions of InfluxDB already follow.
{{% /warn %}}
Configure InfluxDB using the configuration file (`influxdb.conf`) and environment variables.
The default value for each configuration setting is shown in the documentation.
Commented configuration options use the default value.
Configuration settings with a duration value support the following duration units:
- `ns` _(nanoseconds)_
- `us` or `µs` _(microseconds)_
- `ms` _(milliseconds)_
- `s` _(seconds)_
- `m` _(minutes)_
- `h` _(hours)_
- `d` _(days)_
- `w` _(weeks)_
### Environment variables
All configuration options can be specified in the configuration file or in
environment variables.
Environment variables override the equivalent options in the configuration
file.
If a configuration option is not specified in either the configuration file
or in an environment variable, InfluxDB uses its internal default
configuration.
In the sections below we name the relevant environment variable in the
description for the configuration setting.
Environment variables can be set in `/etc/default/influxdb-meta` and
`/etc/default/influxdb`.
> **Note:**
To set or override settings in a config section that allows multiple
configurations (any section with double brackets (`[[...]]`) in the header supports
multiple configurations), the desired configuration must be specified by ordinal
number.
For example, for the first set of `[[graphite]]` environment variables,
prefix the configuration setting name in the environment variable with the
relevant position number (in this case: `0`):
>
INFLUXDB_GRAPHITE_0_BATCH_PENDING
INFLUXDB_GRAPHITE_0_BATCH_SIZE
INFLUXDB_GRAPHITE_0_BATCH_TIMEOUT
INFLUXDB_GRAPHITE_0_BIND_ADDRESS
INFLUXDB_GRAPHITE_0_CONSISTENCY_LEVEL
INFLUXDB_GRAPHITE_0_DATABASE
INFLUXDB_GRAPHITE_0_ENABLED
INFLUXDB_GRAPHITE_0_PROTOCOL
INFLUXDB_GRAPHITE_0_RETENTION_POLICY
INFLUXDB_GRAPHITE_0_SEPARATOR
INFLUXDB_GRAPHITE_0_TAGS
INFLUXDB_GRAPHITE_0_TEMPLATES
INFLUXDB_GRAPHITE_0_UDP_READ_BUFFER
>
For the Nth Graphite configuration in the configuration file, the relevant
environment variables would be of the form `INFLUXDB_GRAPHITE_(N-1)_BATCH_PENDING`.
For each section of the configuration file the numbering restarts at zero.
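For example, the following sketch enables and overrides settings for the *second* `[[graphite]]` section (ordinal `1`) before starting a data node; the values shown are illustrative:

```bash
# Override settings for the second [[graphite]] section (ordinal 1).
# Values are examples only.
export INFLUXDB_GRAPHITE_1_ENABLED=true
export INFLUXDB_GRAPHITE_1_BIND_ADDRESS=":2103"
influxd -config /etc/influxdb/influxdb.conf
```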
### `GOMAXPROCS` environment variable
{{% note %}}
_**Note:**_ `GOMAXPROCS` cannot be set using the InfluxDB configuration file.
It can only be set as an environment variable.
{{% /note %}}
The `GOMAXPROCS` [Go language environment variable](https://golang.org/pkg/runtime/#hdr-Environment_Variables)
can be used to set the maximum number of CPUs that can execute simultaneously.
The default value of `GOMAXPROCS` is the number of CPUs
that are visible to the program *on startup*
(based on what the operating system considers to be a CPU).
For a 32-core machine, the `GOMAXPROCS` value would be `32`.
You can override this value with a smaller one,
which can be useful when you run InfluxDB
alongside other processes on the same machine
and want to ensure that the database doesn't negatively affect those processes.
{{% note %}}
_**Note:**_ Setting `GOMAXPROCS=1` eliminates all parallelization.
{{% /note %}}
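For example, to limit InfluxDB to 8 CPUs on a larger machine (the value is illustrative):

```bash
# Limit the process to 8 CPUs; the value is an example only.
export GOMAXPROCS=8
influxd -config /etc/influxdb/influxdb.conf
```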
@ -0,0 +1,211 @@
---
title: Configure LDAP authentication in InfluxDB Enterprise
description: >
Configure LDAP authentication in InfluxDB Enterprise and test LDAP connectivity.
menu:
enterprise_influxdb_1_9:
name: Configure LDAP authentication
weight: 40
parent: Administration
---
Configure InfluxDB Enterprise to use LDAP (Lightweight Directory Access Protocol) to:
- Validate user permissions
- Synchronize InfluxDB and LDAP so that LDAP doesn't need to be queried for each request
{{% note %}}
To configure InfluxDB Enterprise to support LDAP, all users must be managed in the remote LDAP service.
If LDAP is configured and enabled, users **must** authenticate through LDAP, including users who may have existed before enabling LDAP.
{{% /note %}}
## Configure LDAP for an InfluxDB Enterprise cluster
To use LDAP with an InfluxDB Enterprise cluster, do the following:
1. [Configure data nodes](#configure-data-nodes)
2. [Configure meta nodes](#configure-meta-nodes)
3. [Create, verify, and upload the LDAP configuration file](#create-verify-and-upload-the-ldap-configuration-file)
4. [Restart meta and data nodes](#restart-meta-and-data-nodes)
### Configure data nodes
Update the following settings in each data node configuration file (`/etc/influxdb/influxdb.conf`):
1. Under `[http]`, enable HTTP authentication by setting `auth-enabled` to `true`.
(Or set the corresponding environment variable `INFLUXDB_HTTP_AUTH_ENABLED` to `true`.)
2. Configure the HTTP shared secret to validate requests using JSON web tokens (JWT) and sign each HTTP payload with the secret and username.
Set the `[http]` configuration setting for `shared-secret`, or the corresponding environment variable `INFLUXDB_HTTP_SHARED_SECRET`.
3. If you're enabling authentication on meta nodes, you must also include the following configurations:
- `INFLUXDB_META_META_AUTH_ENABLED` environment variable, or `[http]` configuration setting `meta-auth-enabled`, is set to `true`.
This value must be the same value as the meta node's `meta.auth-enabled` configuration.
- `INFLUXDB_META_META_INTERNAL_SHARED_SECRET`, or the corresponding `[meta]` configuration setting `meta-internal-shared-secret`, is set to the same value as the meta node's `meta.internal-shared-secret`.
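Taken together, a data node configuration might include an excerpt like the following sketch (the secret value is a placeholder):

```toml
# Illustrative /etc/influxdb/influxdb.conf excerpt; the secret is a placeholder.
[http]
  auth-enabled = true
  shared-secret = "<shared-secret>"
```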
### Configure meta nodes
Update the following settings in each meta node configuration file (`/etc/influxdb/influxdb-meta.conf`):
1. Configure the meta node shared secret to validate requests using JSON web tokens (JWT) and sign each HTTP payload with the username and shared secret.
2. Set the `[meta]` configuration setting `internal-shared-secret` to `"<internal-shared-secret>"`.
(Or set the `INFLUXDB_META_INTERNAL_SHARED_SECRET` environment variable.)
3. Set the `[meta]` configuration setting `meta.ldap-allowed` to `true` on all meta nodes in your cluster.
(Or set the `INFLUXDB_META_LDAP_ALLOWED` environment variable.)
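The corresponding meta node configuration might include an excerpt like this sketch (the secret value is a placeholder):

```toml
# Illustrative /etc/influxdb/influxdb-meta.conf excerpt; the secret is a placeholder.
[meta]
  internal-shared-secret = "<internal-shared-secret>"
  ldap-allowed = true
```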
### Authenticate your connection to InfluxDB
To authenticate your InfluxDB connection, run the following command, replacing `username:password` with your credentials:
{{< keep-url >}}
```bash
curl -u username:password -XPOST "http://localhost:8086/..."
```
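For example, the following sketch authenticates and lists databases using the 1.x `/query` endpoint (the credentials and host are placeholders):

```bash
# List databases as an authenticated user; credentials and host are placeholders.
curl -u username:password -G "http://localhost:8086/query" \
  --data-urlencode "q=SHOW DATABASES"
```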
For more detail on authentication, see [Authentication and authorization in InfluxDB](/enterprise_influxdb/v1.9/administration/authentication_and_authorization/).
### Create, verify, and upload the LDAP configuration file
1. To create a sample LDAP configuration file, run the following command:
```bash
influxd-ctl ldap sample-config
```
2. Save the sample file and edit as needed for your LDAP server.
For detail, see the [sample LDAP configuration file](#sample-ldap-configuration) below.
> To use fine-grained authorization (FGA) with LDAP, you must map InfluxDB Enterprise roles to key-value pairs in the LDAP database.
For more information, see [Fine-grained authorization in InfluxDB Enterprise](/enterprise_influxdb/v1.9/guides/fine-grained-authorization/).
The InfluxDB admin user doesn't include permissions for InfluxDB Enterprise roles.
3. Restart all meta and data nodes in your InfluxDB Enterprise cluster to load your updated configuration.
On each **meta** node, run:
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[sysvinit](#)
[systemd](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```sh
service influxdb-meta restart
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```sh
sudo systemctl restart influxdb-meta
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
On each **data** node, run:
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[sysvinit](#)
[systemd](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```sh
service influxdb restart
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```sh
sudo systemctl restart influxdb
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
4. To verify your LDAP configuration, run:
```bash
influxd-ctl ldap verify -ldap-config /path/to/ldap.toml
```
5. To load your LDAP configuration file, run the following command:
```bash
influxd-ctl ldap set-config /path/to/ldap.toml
```
## Sample LDAP configuration
The following is a sample configuration file that connects to a publicly available LDAP server.
A `DN` ("distinguished name") uniquely identifies an entry and describes its position in the directory information tree (DIT) hierarchy.
The DN of an LDAP entry is similar to a file path on a file system.
For example, the DN `cn=read-only-admin,dc=example,dc=com` identifies the entry named `read-only-admin` under the domain components `example` and `com`.
`DNs` refers to multiple DN entries.
{{% truncate %}}
```toml
enabled = true

[[servers]]
host = "<LDAPserver>"
port = 389
# Security mode for LDAP connection to this server.
# The default and recommended mode is "starttls". This uses an initial unencrypted connection
# and upgrades to TLS as the first action against the server,
# per the LDAPv3 standard.
# Other options are "starttls+insecure" to behave the same as starttls
# but skip server certificate verification, or "none" to use an unencrypted connection.
security = "starttls"
# Credentials to use when searching for a user or group.
bind-dn = "cn=read-only-admin,dc=example,dc=com"
bind-password = "password"
# Base DNs to use when applying the search-filter to discover an LDAP user.
search-base-dns = [
"dc=example,dc=com",
]
# LDAP filter to discover a user's DN.
# %s will be replaced with the provided username.
search-filter = "(uid=%s)"
# On Active Directory you might use "(sAMAccountName=%s)".
# Base DNs to use when searching for groups.
group-search-base-dns = ["dc=example,dc=com"]
# LDAP filter to identify groups that a user belongs to.
# %s will be replaced with the user's DN.
group-membership-search-filter = "(&(objectClass=groupOfUniqueNames)(uniqueMember=%s))"
# On Active Directory you might use "(&(objectClass=group)(member=%s))".
# Attribute to use to determine the "group" in the group-mappings section.
group-attribute = "ou"
# On Active Directory you might use "cn".
# LDAP filter to search for a group with a particular name.
# This is used when warming the cache to load group membership.
group-search-filter = "(&(objectClass=groupOfUniqueNames)(cn=%s))"
# On Active Directory you might use "(&(objectClass=group)(cn=%s))".
# Attribute of a group that contains the DNs of the group's members.
group-member-attribute = "uniqueMember"
# On Active Directory you might use "member".
# Create an administrator role in InfluxDB and then log in as a member of the admin LDAP group. Only members of a group with the administrator role can complete admin tasks.
# For example, if tesla is the only member of the `italians` group, you must log in as tesla/password.
admin-groups = ["italians"]
# These two roles would have to be created by hand if you want these LDAP group memberships to do anything.
[[servers.group-mappings]]
group = "mathematicians"
role = "arithmetic"
[[servers.group-mappings]]
group = "scientists"
role = "laboratory"
```
{{% /truncate %}}
@ -0,0 +1,133 @@
---
title: Log and trace InfluxDB Enterprise operations
description: >
Learn about logging locations, redirecting HTTP request logging, structured logging, and tracing.
menu:
enterprise_influxdb_1_9:
name: Log and trace
weight: 90
parent: Administration
---
* [Logging locations](#logging-locations)
* [Redirect HTTP access logging](#redirect-http-access-logging)
* [Structured logging](#structured-logging)
* [Tracing](#tracing)
InfluxDB writes log output, by default, to `stderr`.
Depending on your use case, this log information can be written to another location.
Some service managers may override this default.
## Logging locations
### Run InfluxDB directly
If you run InfluxDB directly, using `influxd`, all logs will be written to `stderr`.
You may redirect this log output as you would any output to `stderr` like so:
```bash
influxd-meta 2>$HOME/my_log_file # Meta nodes
influxd 2>$HOME/my_log_file # Data nodes
influx-enterprise 2>$HOME/my_log_file # Enterprise Web
```
### Launched as a service
#### sysvinit
If InfluxDB was installed using a pre-built package, and then launched
as a service, `stderr` is redirected to
`/var/log/influxdb/<node-type>.log`, and all log data will be written to
that file. You can override this location by setting the variable
`STDERR` in the file `/etc/default/<node-type>`.
For example, if on a data node `/etc/default/influxdb` contains:
```bash
STDERR=/dev/null
```
all log data will be discarded. You can similarly direct output to
`stdout` by setting `STDOUT` in the same file. Output to `stdout` is
sent to `/dev/null` by default when InfluxDB is launched as a service.
InfluxDB must be restarted to pick up any changes to `/etc/default/<node-type>`.
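For example, the following sketch (for a data node) writes process logs to a custom file and discards `stdout`; the paths are illustrative:

```bash
# Illustrative /etc/default/influxdb entries; paths are examples.
STDERR=/var/log/influxdb/influxdb.log
STDOUT=/dev/null
```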
##### Meta nodes
For meta nodes, the `<node-type>` is `influxdb-meta`.
The default log file is `/var/log/influxdb/influxdb-meta.log`.
The service configuration file is `/etc/default/influxdb-meta`.
##### Data nodes
For data nodes, the `<node-type>` is `influxdb`.
The default log file is `/var/log/influxdb/influxdb.log`.
The service configuration file is `/etc/default/influxdb`.
##### Enterprise Web
For Enterprise Web nodes, the `<node-type>` is `influx-enterprise`.
The default log file is `/var/log/influxdb/influx-enterprise.log`.
The service configuration file is `/etc/default/influx-enterprise`.
#### systemd
Starting with version 1.0, InfluxDB on systemd systems no longer
writes files to `/var/log/<node-type>.log` by default, and now uses the
system configured default for logging (usually `journald`). On most
systems, the logs will be directed to the systemd journal and can be
accessed with the command:
```bash
sudo journalctl -u <node-type>.service
```
Please consult the systemd journald documentation for configuring
journald.
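For example, the following sketch follows a data node's log in real time and shows only entries from the current boot (both are standard `journalctl` options):

```bash
# Follow the data node log in real time.
sudo journalctl -u influxdb.service -f

# Show only entries from the current boot.
sudo journalctl -u influxdb.service -b
```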
##### Meta nodes
For meta nodes, the `<node-type>` is `influxdb-meta`.
The default log command is `sudo journalctl -u influxdb-meta.service`.
The service configuration file is `/etc/default/influxdb-meta`.
##### Data nodes
For data nodes, the `<node-type>` is `influxdb`.
The default log command is `sudo journalctl -u influxdb.service`.
The service configuration file is `/etc/default/influxdb`.
##### Enterprise Web
For Enterprise Web nodes, the `<node-type>` is `influx-enterprise`.
The default log command is `sudo journalctl -u influx-enterprise.service`.
The service configuration file is `/etc/default/influx-enterprise`.
### Use logrotate
You can use [logrotate](http://manpages.ubuntu.com/manpages/cosmic/en/man8/logrotate.8.html)
to rotate the log files generated by InfluxDB on systems where logs are written to flat files.
If using the package install on a sysvinit system, the config file for logrotate is installed in `/etc/logrotate.d`.
You can view the file [here](https://github.com/influxdb/influxdb/blob/master/scripts/logrotate).
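As a sketch, a logrotate entry for a data node log might look like the following; the path and rotation options are illustrative, not the contents of the shipped file:

```
# Illustrative /etc/logrotate.d/influxdb entry; options are examples.
/var/log/influxdb/influxdb.log {
    daily
    rotate 7
    missingok
    notifempty
    compress
    copytruncate
}
```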
## Redirect HTTP access logging
InfluxDB 1.5 introduces the option to log HTTP request traffic separately from other InfluxDB log output. When HTTP request logging is enabled, the HTTP logs are intermingled with internal InfluxDB logging by default. Redirecting the HTTP request log entries to a separate file makes both log files easier to read, monitor, and debug.
See [Redirecting HTTP request logging](/enterprise_influxdb/v1.9/administration/logs/#redirecting-http-access-logging) in the InfluxDB OSS documentation.
## Structured logging
With InfluxDB 1.5, structured logging is supported, enabling machine-readable and more developer-friendly log output formats. The two new structured log formats, `logfmt` and `json`, provide easier filtering and searching with external tools and simplify integration of InfluxDB logs with Splunk, Papertrail, Elasticsearch, and other third-party tools.
See [Structured logging](/enterprise_influxdb/v1.9/administration/logs/#structured-logging) in the InfluxDB OSS documentation.
## Tracing
Starting in InfluxDB 1.5, logging is enhanced to provide tracing of important InfluxDB operations. Tracing is useful for error reporting and discovering performance bottlenecks.
See [Tracing](/enterprise_influxdb/v1.9/administration/logs/#tracing) in the InfluxDB OSS documentation.