Merge branch 'master' of github.com:influxdata/docs-v2 into bitbucket
commit e8dc189921
@@ -4,7 +4,7 @@ jobs:
docker:
- image: circleci/node:erbium
environment:
HUGO_VERSION: "0.59.1"
HUGO_VERSION: "0.81.0"
S3DEPLOY_VERSION: "2.3.5"
steps:
- checkout

@@ -23,7 +23,7 @@ jobs:
command: ./deploy/ci-install-s3deploy.sh
- run:
name: Install NodeJS dependencies
command: sudo yarn global add postcss-cli@8.3.0 autoprefixer@9.8.6 redoc-cli
command: sudo yarn global add postcss-cli@8.3.0 autoprefixer@9.8.6 redoc-cli@0.9.13
- run:
name: Generate API documentation
command: cd api-docs && bash generate-api-docs.sh
@@ -0,0 +1,25 @@
---
name: Bug report
about: Report an issue with the InfluxData documentation.
title: ''
labels: ''
assignees: ''
---

_Describe the issue here._

##### Relevant URLs
- _Provide relevant URLs (the doc page in question, project issues, community threads, etc.)_

<!--
For software issues (bugs, unexpected behavior, etc.) in specific projects,
create an issue in the appropriate repository:

- InfluxDB OSS issues at https://github.com/influxdata/influxdb
- Telegraf issues at https://github.com/influxdata/telegraf
- Chronograf issues at https://github.com/influxdata/chronograf
- Kapacitor issues at https://github.com/influxdata/kapacitor
- Flux issues at https://github.com/influxdata/flux

For issues with InfluxDB Cloud or InfluxDB Enterprise, contact support@influxdata.com.
-->
@@ -0,0 +1,37 @@
---
name: New feature
about: Submit new InfluxData product features that need to be documented.
title: ''
labels: ''
assignees: ''
---

**PR:** _Provide PR URL(s) for this feature (if available)_

_Describe the new feature here._

<!--
Include pertinent details, such as:

- What the feature does and why it is useful
- How to use the feature (via CLI, UI, API)
- Specific code examples (used/tested)
- Tips or tricks (hot keys/shortcuts)
-->

##### Relevant URLs
- _Provide relevant URLs (issues, community threads, existing doc pages, etc.)_

<!--
IMPORTANT
1. Apply product labels to this issue as applicable. For example, if a feature
   is included in both InfluxDB open source (OSS) and Cloud, add both
   `InfluxDB v2` and `InfluxDB Cloud` labels.

2. For features tied to a specific product release, add a milestone using the
   following convention:

   <product-name> <semantic-version>

   Examples: InfluxDB 2.0.5, Telegraf 1.18.0
-->
@@ -0,0 +1,12 @@
---
name: Proposal
about: Propose changes to InfluxData documentation content, structure, layout, etc.
title: ''
labels: Proposal
assignees: ''
---

_Describe your proposal here._

##### Relevant URLs
- _Provide relevant URLs_
@@ -480,19 +480,34 @@ Each expandable block needs a label that users can click to expand or collpase t
Pass the label as a string to the shortcode.

```md
{{% expand "Lable 1"}}
{{% expand "Label 1" %}}
Markdown content associated with label 1.
{{% /expand %}}

{{% expand "Lable 2"}}
{{% expand "Label 2" %}}
Markdown content associated with label 2.
{{% /expand %}}

{{% expand "Lable 3"}}
{{% expand "Label 3" %}}
Markdown content associated with label 3.
{{% /expand %}}
```

Use the optional `{{< expand-wrapper >}}` shortcode around a group of `{{% expand %}}`
shortcodes to ensure proper spacing around the expandable elements:

```md
{{< expand-wrapper >}}
{{% expand "Label 1" %}}
Markdown content associated with label 1.
{{% /expand %}}

{{% expand "Label 2" %}}
Markdown content associated with label 2.
{{% /expand %}}
{{< /expand-wrapper >}}
```

### Generate a list of children articles
Section landing pages often contain just a list of articles with links and descriptions for each.
This can be cumbersome to maintain as content is added.
README.md

@@ -23,16 +23,22 @@ including our GPG key, can be found at https://www.influxdata.com/how-to-report-
2. **Install Hugo**

The InfluxData documentation uses [Hugo](https://gohugo.io/), a static site generator built in Go.
The InfluxData documentation utilizes Hugo's asset pipeline and requires the extended version of Hugo.
See the Hugo documentation for information about how to [download and install Hugo](https://gohugo.io/getting-started/installing/).

3. **Install NodeJS & Asset Pipeline Tools**
_**Note:** The most recent version of Hugo tested with this documentation is **0.81.0**._

3. **Install NodeJS, Yarn, & Asset Pipeline Tools**

This project uses tools written in NodeJS to build and process stylesheets and javascript.
In order for assets to build correctly, [install NodeJS](https://nodejs.org/en/download/)
and run the following command to install the necessary tools:
To successfully build assets:

```
npm i -g postcss-cli autoprefixer
1. [Install NodeJS](https://nodejs.org/en/download/)
2. [Install Yarn](https://classic.yarnpkg.com/en/docs/install/)
3. Run the following command to install the necessary tools:

```sh
sudo yarn global add postcss-cli@8.3.0 autoprefixer@9.8.6
```

4. **Start the Hugo server**
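The details of this step are trimmed from the diff; as a rough, generic sketch (not the README's exact wording) of what starting the development server looks like once Hugo extended and the tools above are installed:

```sh
# From the repository root, run Hugo's built-in development server.
# Hugo serves the site locally (by default at http://localhost:1313/)
# and rebuilds pages as files change.
hugo server
```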
|
@ -6749,6 +6749,7 @@ components:
|
|||
Expression:
|
||||
oneOf:
|
||||
- $ref: "#/components/schemas/ArrayExpression"
|
||||
- $ref: "#/components/schemas/DictExpression"
|
||||
- $ref: "#/components/schemas/FunctionExpression"
|
||||
- $ref: "#/components/schemas/BinaryExpression"
|
||||
- $ref: "#/components/schemas/CallExpression"
|
||||
|
@ -6781,6 +6782,27 @@ components:
|
|||
type: array
|
||||
items:
|
||||
$ref: "#/components/schemas/Expression"
|
||||
DictExpression:
|
||||
description: Used to create and directly specify the elements of a dictionary
|
||||
type: object
|
||||
properties:
|
||||
type:
|
||||
$ref: "#/components/schemas/NodeType"
|
||||
elements:
|
||||
description: Elements of the dictionary
|
||||
type: array
|
||||
items:
|
||||
$ref: "#/components/schemas/DictItem"
|
||||
DictItem:
|
||||
description: A key/value pair in a dictionary
|
||||
type: object
|
||||
properties:
|
||||
type:
|
||||
$ref: "#/components/schemas/NodeType"
|
||||
key:
|
||||
$ref: "#/components/schemas/Expression"
|
||||
val:
|
||||
$ref: "#/components/schemas/Expression"
|
||||
FunctionExpression:
|
||||
description: Function expression
|
||||
type: object
|
||||
|
@ -9745,6 +9767,167 @@ components:
|
|||
format: float
|
||||
legendOrientationThreshold:
|
||||
type: integer
|
||||
GeoViewLayer:
|
||||
type: object
|
||||
oneOf:
|
||||
- $ref: "#/components/schemas/GeoCircleViewLayer"
|
||||
- $ref: "#/components/schemas/GeoHeatMapViewLayer"
|
||||
- $ref: "#/components/schemas/GeoPointMapViewLayer"
|
||||
- $ref: "#/components/schemas/GeoTrackMapViewLayer"
|
||||
GeoViewLayerProperties:
|
||||
type: object
|
||||
required: [type]
|
||||
properties:
|
||||
type:
|
||||
type: string
|
||||
enum: [heatmap, circleMap, pointMap, trackMap]
|
||||
GeoCircleViewLayer:
|
||||
allOf:
|
||||
- $ref: "#/components/schemas/GeoViewLayerProperties"
|
||||
- type: object
|
||||
required: [radiusField, radiusDimension, colorField, colorDimension, colors]
|
||||
properties:
|
||||
radiusField:
|
||||
type: string
|
||||
description: Radius field
|
||||
radiusDimension:
|
||||
$ref: '#/components/schemas/Axis'
|
||||
colorField:
|
||||
type: string
|
||||
description: Circle color field
|
||||
colorDimension:
|
||||
$ref: '#/components/schemas/Axis'
|
||||
colors:
|
||||
description: Colors define color encoding of data into a visualization
|
||||
type: array
|
||||
items:
|
||||
$ref: "#/components/schemas/DashboardColor"
|
||||
radius:
|
||||
description: Maximum radius size in pixels
|
||||
type: integer
|
||||
interpolateColors:
|
||||
description: Interpolate circle color based on displayed value
|
||||
type: boolean
|
||||
GeoPointMapViewLayer:
|
||||
allOf:
|
||||
- $ref: "#/components/schemas/GeoViewLayerProperties"
|
||||
- type: object
|
||||
required: [colorField, colorDimension, colors]
|
||||
properties:
|
||||
colorField:
|
||||
type: string
|
||||
description: Marker color field
|
||||
colorDimension:
|
||||
$ref: '#/components/schemas/Axis'
|
||||
colors:
|
||||
description: Colors define color encoding of data into a visualization
|
||||
type: array
|
||||
items:
|
||||
$ref: "#/components/schemas/DashboardColor"
|
||||
isClustered:
|
||||
description: Cluster close markers together
|
||||
type: boolean
|
||||
GeoTrackMapViewLayer:
|
||||
allOf:
|
||||
- $ref: "#/components/schemas/GeoViewLayerProperties"
|
||||
- type: object
|
||||
required: [trackWidth, speed, randomColors, trackPointVisualization]
|
||||
properties:
|
||||
trackWidth:
|
||||
description: Width of the track
|
||||
type: integer
|
||||
speed:
|
||||
description: Speed of the track animation
|
||||
type: integer
|
||||
randomColors:
|
||||
description: Assign different colors to different tracks
|
||||
type: boolean
|
||||
colors:
|
||||
description: Colors define color encoding of data into a visualization
|
||||
type: array
|
||||
items:
|
||||
$ref: "#/components/schemas/DashboardColor"
|
||||
GeoHeatMapViewLayer:
|
||||
allOf:
|
||||
- $ref: "#/components/schemas/GeoViewLayerProperties"
|
||||
- type: object
|
||||
required: [intensityField, intensityDimension, radius, blur, colors]
|
||||
properties:
|
||||
intensityField:
|
||||
type: string
|
||||
description: Intensity field
|
||||
intensityDimension:
|
||||
$ref: '#/components/schemas/Axis'
|
||||
radius:
|
||||
description: Radius size in pixels
|
||||
type: integer
|
||||
blur:
|
||||
description: Blur for heatmap points
|
||||
type: integer
|
||||
colors:
|
||||
description: Colors define color encoding of data into a visualization
|
||||
type: array
|
||||
items:
|
||||
$ref: "#/components/schemas/DashboardColor"
|
||||
GeoViewProperties:
|
||||
type: object
|
||||
required: [type, shape, queries, note, showNoteWhenEmpty, center, zoom, allowPanAndZoom, detectCoordinateFields, layers]
|
||||
properties:
|
||||
type:
|
||||
type: string
|
||||
enum: [geo]
|
||||
queries:
|
||||
type: array
|
||||
items:
|
||||
$ref: "#/components/schemas/DashboardQuery"
|
||||
shape:
|
||||
type: string
|
||||
enum: ['chronograf-v2']
|
||||
center:
|
||||
description: Coordinates of the center of the map
|
||||
type: object
|
||||
required: [lat, lon]
|
||||
properties:
|
||||
lat:
|
||||
description: Latitude of the center of the map
|
||||
type: number
|
||||
format: double
|
||||
lon:
|
||||
description: Longitude of the center of the map
|
||||
type: number
|
||||
format: double
|
||||
zoom:
|
||||
description: Zoom level used for initial display of the map
|
||||
type: number
|
||||
format: double
|
||||
minimum: 1
|
||||
maximum: 28
|
||||
allowPanAndZoom:
|
||||
description: If true, map zoom and pan controls are enabled on the dashboard view
|
||||
type: boolean
|
||||
default: true
|
||||
detectCoordinateFields:
|
||||
description: If true, search results are automatically regrouped so that lon, lat, and value are treated as columns
|
||||
type: boolean
|
||||
default: true
|
||||
mapStyle:
|
||||
description: Defines the map type (for example, regular or satellite)
|
||||
type: string
|
||||
note:
|
||||
type: string
|
||||
showNoteWhenEmpty:
|
||||
description: If true, will display note when empty
|
||||
type: boolean
|
||||
colors:
|
||||
description: Colors define color encoding of data into a visualization
|
||||
type: array
|
||||
items:
|
||||
$ref: "#/components/schemas/DashboardColor"
|
||||
layers:
|
||||
description: List of individual layers shown in the map
|
||||
type: array
|
||||
items:
|
||||
$ref: "#/components/schemas/GeoViewLayer"
|
||||
Axes:
|
||||
description: The viewport for a View's visualizations
|
||||
type: object
|
||||
|
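The `GeoViewProperties` and `GeoViewLayer` schemas added in this hunk describe the new map cell type. For orientation only — the endpoint path is the standard dashboard cell-view update in the InfluxDB v2 API, and the IDs, token, and field names below are placeholders, not values from this diff — a request that sets a circle-map geo view might look like:

```sh
# Hypothetical sketch: set a dashboard cell's view to a "geo" visualization.
# Dashboard/cell IDs, the token, and field names are placeholders.
curl -X PATCH "http://localhost:8086/api/v2/dashboards/000000000000000a/cells/000000000000000b/view" \
  -H "Authorization: Token $INFLUX_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Vehicle positions",
    "properties": {
      "type": "geo",
      "shape": "chronograf-v2",
      "queries": [],
      "note": "",
      "showNoteWhenEmpty": false,
      "center": {"lat": 40.7, "lon": -74.0},
      "zoom": 6,
      "allowPanAndZoom": true,
      "detectCoordinateFields": true,
      "layers": [
        {
          "type": "circleMap",
          "radiusField": "magnitude",
          "radiusDimension": {},
          "colorField": "magnitude",
          "colorDimension": {},
          "colors": []
        }
      ]
    }
  }'
```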
@ -9916,6 +10099,7 @@ components:
|
|||
- $ref: "#/components/schemas/HeatmapViewProperties"
|
||||
- $ref: "#/components/schemas/MosaicViewProperties"
|
||||
- $ref: "#/components/schemas/BandViewProperties"
|
||||
- $ref: "#/components/schemas/GeoViewProperties"
|
||||
View:
|
||||
required:
|
||||
- name
|
||||
|
@ -11557,7 +11741,7 @@ components:
|
|||
type: string
|
||||
operator:
|
||||
type: string
|
||||
enum: ["equal", "notequal", "equalregex", "notequalregex"]
|
||||
enum: ["equal"]
|
||||
StatusRule:
|
||||
type: object
|
||||
properties:
|
||||
|
|
|
@ -114,6 +114,17 @@ paths:
|
|||
- $ref: "#/components/parameters/TraceSpan"
|
||||
- $ref: "#/components/parameters/AuthUserV1"
|
||||
- $ref: "#/components/parameters/AuthPassV1"
|
||||
- in: header
|
||||
name: Accept
|
||||
schema:
|
||||
type: string
|
||||
description: Specifies how query results should be encoded in the response. **Note:** When using `application/csv`, query results include epoch timestamps instead of RFC3339 timestamps.
|
||||
default: application/json
|
||||
enum:
|
||||
- application/json
|
||||
- application/csv
|
||||
- text/csv
|
||||
- application/x-msgpack
|
||||
- in: header
|
||||
name: Accept-Encoding
|
||||
description: The Accept-Encoding request HTTP header advertises which content encoding, usually a compression algorithm, the client is able to understand.
|
||||
|
@ -154,17 +165,19 @@ paths:
|
|||
type: string
|
||||
description: Specifies the request's trace ID.
|
||||
content:
|
||||
application/csv:
|
||||
schema:
|
||||
$ref: "#/components/schemas/InfluxQLCSVResponse"
|
||||
text/csv:
|
||||
schema:
|
||||
type: string
|
||||
example: >
|
||||
name,tags,time,test_field,test_tag
|
||||
test_measurement,,1603740794286107366,1,tag_value
|
||||
test_measurement,,1603740870053205649,2,tag_value
|
||||
test_measurement,,1603741221085428881,3,tag_value
|
||||
text/plain:
|
||||
$ref: "#/components/schemas/InfluxQLCSVResponse"
|
||||
application/json:
|
||||
schema:
|
||||
$ref: "#/components/schemas/InfluxQLResponse"
|
||||
application/x-msgpack:
|
||||
schema:
|
||||
type: string
|
||||
format: binary
|
||||
"429":
|
||||
description: Token is temporarily over quota. The Retry-After header describes when to try the read again.
|
||||
headers:
|
||||
|
@ -233,6 +246,14 @@ components:
|
|||
items:
|
||||
type: array
|
||||
items: {}
|
||||
InfluxQLCSVResponse:
|
||||
type: string
|
||||
example: >
|
||||
name,tags,time,test_field,test_tag
|
||||
test_measurement,,1603740794286107366,1,tag_value
|
||||
test_measurement,,1603740870053205649,2,tag_value
|
||||
test_measurement,,1603741221085428881,3,tag_value
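The new `Accept` header parameter and `InfluxQLCSVResponse` schema above cover CSV output from the 1.x-compatibility query endpoint. A hedged sketch of such a request (host, database, query, and token are placeholders); note that, per the description above, CSV results return epoch timestamps rather than RFC3339:

```sh
# Hypothetical sketch: query the 1.x-compatibility endpoint and request CSV.
curl --get "http://localhost:8086/query" \
  --header "Authorization: Token $INFLUX_TOKEN" \
  --header "Accept: application/csv" \
  --data-urlencode "db=mydb" \
  --data-urlencode "q=SELECT * FROM test_measurement"
```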
|
||||
|
||||
Error:
|
||||
properties:
|
||||
code:
|
||||
|
|
|
@ -6749,6 +6749,7 @@ components:
|
|||
Expression:
|
||||
oneOf:
|
||||
- $ref: "#/components/schemas/ArrayExpression"
|
||||
- $ref: "#/components/schemas/DictExpression"
|
||||
- $ref: "#/components/schemas/FunctionExpression"
|
||||
- $ref: "#/components/schemas/BinaryExpression"
|
||||
- $ref: "#/components/schemas/CallExpression"
|
||||
|
@ -6781,6 +6782,27 @@ components:
|
|||
type: array
|
||||
items:
|
||||
$ref: "#/components/schemas/Expression"
|
||||
DictExpression:
|
||||
description: Used to create and directly specify the elements of a dictionary
|
||||
type: object
|
||||
properties:
|
||||
type:
|
||||
$ref: "#/components/schemas/NodeType"
|
||||
elements:
|
||||
description: Elements of the dictionary
|
||||
type: array
|
||||
items:
|
||||
$ref: "#/components/schemas/DictItem"
|
||||
DictItem:
|
||||
description: A key/value pair in a dictionary
|
||||
type: object
|
||||
properties:
|
||||
type:
|
||||
$ref: "#/components/schemas/NodeType"
|
||||
key:
|
||||
$ref: "#/components/schemas/Expression"
|
||||
val:
|
||||
$ref: "#/components/schemas/Expression"
|
||||
FunctionExpression:
|
||||
description: Function expression
|
||||
type: object
|
||||
|
@ -11557,7 +11579,7 @@ components:
|
|||
type: string
|
||||
operator:
|
||||
type: string
|
||||
enum: ["equal", "notequal", "equalregex", "notequalregex"]
|
||||
enum: ["equal"]
|
||||
StatusRule:
|
||||
type: object
|
||||
properties:
|
||||
|
|
|
@ -114,6 +114,17 @@ paths:
|
|||
- $ref: "#/components/parameters/TraceSpan"
|
||||
- $ref: "#/components/parameters/AuthUserV1"
|
||||
- $ref: "#/components/parameters/AuthPassV1"
|
||||
- in: header
|
||||
name: Accept
|
||||
schema:
|
||||
type: string
|
||||
description: Specifies how query results should be encoded in the response. **Note:** When using `application/csv`, query results include epoch timestamps instead of RFC3339 timestamps.
|
||||
default: application/json
|
||||
enum:
|
||||
- application/json
|
||||
- application/csv
|
||||
- text/csv
|
||||
- application/x-msgpack
|
||||
- in: header
|
||||
name: Accept-Encoding
|
||||
description: The Accept-Encoding request HTTP header advertises which content encoding, usually a compression algorithm, the client is able to understand.
|
||||
|
@ -154,17 +165,19 @@ paths:
|
|||
type: string
|
||||
description: Specifies the request's trace ID.
|
||||
content:
|
||||
application/csv:
|
||||
schema:
|
||||
$ref: "#/components/schemas/InfluxQLCSVResponse"
|
||||
text/csv:
|
||||
schema:
|
||||
type: string
|
||||
example: >
|
||||
name,tags,time,test_field,test_tag
|
||||
test_measurement,,1603740794286107366,1,tag_value
|
||||
test_measurement,,1603740870053205649,2,tag_value
|
||||
test_measurement,,1603741221085428881,3,tag_value
|
||||
text/plain:
|
||||
$ref: "#/components/schemas/InfluxQLCSVResponse"
|
||||
application/json:
|
||||
schema:
|
||||
$ref: "#/components/schemas/InfluxQLResponse"
|
||||
application/x-msgpack:
|
||||
schema:
|
||||
type: string
|
||||
format: binary
|
||||
"429":
|
||||
description: Token is temporarily over quota. The Retry-After header describes when to try the read again.
|
||||
headers:
|
||||
|
@ -233,6 +246,14 @@ components:
|
|||
items:
|
||||
type: array
|
||||
items: {}
|
||||
InfluxQLCSVResponse:
|
||||
type: string
|
||||
example: >
|
||||
name,tags,time,test_field,test_tag
|
||||
test_measurement,,1603740794286107366,1,tag_value
|
||||
test_measurement,,1603740870053205649,2,tag_value
|
||||
test_measurement,,1603741221085428881,3,tag_value
|
||||
|
||||
Error:
|
||||
properties:
|
||||
code:
|
||||
|
|
|
@ -20,10 +20,7 @@ var elementWhiteList = [
|
|||
"a.url-trigger"
|
||||
]
|
||||
|
||||
$('.article a[href^="#"]:not(' + elementWhiteList + ')').click(function (e) {
|
||||
e.preventDefault();
|
||||
|
||||
var target = this.hash;
|
||||
function scrollToAnchor(target) {
|
||||
var $target = $(target);
|
||||
|
||||
$('html, body').stop().animate({
|
||||
|
@ -31,6 +28,11 @@ $('.article a[href^="#"]:not(' + elementWhiteList + ')').click(function (e) {
|
|||
}, 400, 'swing', function () {
|
||||
window.location.hash = target;
|
||||
});
|
||||
}
|
||||
|
||||
$('.article a[href^="#"]:not(' + elementWhiteList + ')').click(function (e) {
|
||||
e.preventDefault();
|
||||
scrollToAnchor(this.hash);
|
||||
});
|
||||
|
||||
///////////////////////////// Left Nav Interactions /////////////////////////////
|
||||
|
@ -80,6 +82,29 @@ function tabbedContent(container, tab, content) {
|
|||
tabbedContent('.code-tabs-wrapper', '.code-tabs p a', '.code-tab-content');
|
||||
tabbedContent('.tabs-wrapper', '.tabs p a', '.tab-content');
|
||||
|
||||
//////////////////////// Activate Tabs with Query Params ////////////////////////
|
||||
|
||||
const queryParams = new URLSearchParams(window.location.search);
|
||||
var anchor = window.location.hash
|
||||
|
||||
tab = $('<textarea />').html(queryParams.get('t')).text();
|
||||
|
||||
if (tab !== "") {
|
||||
var targetTab = $('.tabs a:contains("' + tab + '")')
|
||||
targetTab.click()
|
||||
if (anchor !== "") { scrollToAnchor(anchor) }
|
||||
}
|
||||
|
||||
$('.tabs p a').click(function() {
|
||||
if ($(this).is(':not(":first-child")')) {
|
||||
queryParams.set('t', $(this).html())
|
||||
window.history.replaceState({}, '', `${location.pathname}?${queryParams}${anchor}`);
|
||||
} else {
|
||||
queryParams.delete('t')
|
||||
window.history.replaceState({}, '', `${location.pathname}${anchor}`);
|
||||
}
|
||||
})
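The script added above stores the active tab in a `t` query parameter and re-applies it (plus any `#` anchor) on page load. For illustration only, with hypothetical URLs:

```
https://docs.influxdata.com/example-page/?t=Python#install   activates the "Python" tab, then scrolls to #install
https://docs.influxdata.com/example-page/#install            default (first) tab, so no t parameter is set
```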
|
||||
|
||||
/////////////////////////////// Truncate Content ///////////////////////////////
|
||||
|
||||
$(".truncate-toggle").click(function(e) {
|
||||
|
|
|
@ -89,6 +89,10 @@
|
|||
box-shadow: 1px 3px 10px $article-shadow;
|
||||
}
|
||||
|
||||
ul + p > img {
|
||||
margin-top: 1.5rem;
|
||||
}
|
||||
|
||||
hr {
|
||||
border-width: 1px 0 0;
|
||||
border-color: $article-hr;
|
||||
|
|
|
@ -7,7 +7,6 @@
|
|||
font-weight: $medium;
|
||||
color: $g20-white;
|
||||
box-shadow: 2px 2px 6px rgba($g2-kevlar, .35);
|
||||
z-index: 10;
|
||||
|
||||
// temp styles for animation
|
||||
transition: margin .3s ease-out;
|
||||
|
@ -65,3 +64,23 @@
|
|||
margin-top: 2.5rem
|
||||
}
|
||||
}
|
||||
|
||||
///////////////////////////////// Media Queries ////////////////////////////////
|
||||
|
||||
@include media(small) {
|
||||
#callout-url-selector {
|
||||
top: .55rem;
|
||||
right: 5.15rem;
|
||||
|
||||
p:after {
|
||||
top: .15rem;
|
||||
right: -16px;
|
||||
border-width: 7px 0 7px 8px;
|
||||
border-color: transparent transparent transparent #C231D9;
|
||||
}
|
||||
|
||||
&.start-position {
|
||||
margin-top: 2.5rem
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
|
@ -169,6 +169,16 @@ pre[class*="language-"] {
|
|||
.sd , /* Literal.String.Doc */
|
||||
.w /* Text.Whitespace */
|
||||
{ font-style: italic }
|
||||
|
||||
// Javascript / Flux specific styles (duration values)
|
||||
.language-js {
|
||||
.mi + .nx { color: $article-code-accent5; }
|
||||
}
|
||||
|
||||
// SQL / InfluxQL specific styles (duration values)
|
||||
.language-sql {
|
||||
.mi + .n { color: $article-code-accent5; }
|
||||
}
|
||||
}
|
||||
|
||||
.note {
|
||||
|
|
|
@ -1,9 +1,13 @@
|
|||
// Styles for accordion-like expandable content blocks
|
||||
|
||||
.expand-wrapper {
|
||||
margin: 2rem 0 2rem;
|
||||
}
|
||||
|
||||
.expand {
|
||||
border-top: 1px solid $article-hr;
|
||||
padding: .75rem 0;
|
||||
&:last-of-type { border-bottom: 1px solid $article-hr; }
|
||||
&:last-of-type, &:only-child { border-bottom: 1px solid $article-hr; }
|
||||
}
|
||||
|
||||
.expand-label {
|
||||
|
@ -47,3 +51,7 @@
|
|||
&:before, &:after { transform: rotate(180deg); }
|
||||
}
|
||||
}
|
||||
|
||||
.expand-content {
|
||||
padding-top: 1rem;
|
||||
}
|
||||
|
|
|
@ -49,7 +49,7 @@ table {
|
|||
}
|
||||
}
|
||||
|
||||
#flags, #global-flags {
|
||||
#flags:not(.no-shorthand), #global-flags {
|
||||
& + table {
|
||||
td:nth-child(2) code { margin-left: -2rem; }
|
||||
}
|
||||
|
|
|
@ -0,0 +1,34 @@
|
|||
baseURL = "https://test2.docs.influxdata.com/"
|
||||
languageCode = "en-us"
|
||||
title = "InfluxDB Documentation"
|
||||
|
||||
# Git history information for lastMod-type functionality
|
||||
enableGitInfo = true
|
||||
|
||||
# Syntax Highlighting
|
||||
pygmentsCodefences = true
|
||||
pygmentsUseClasses = true
|
||||
|
||||
# Preserve case in article tags
|
||||
preserveTaxonomyNames = true
|
||||
|
||||
# Generate a robots.txt
|
||||
enableRobotsTXT = true
|
||||
|
||||
# Custom staging params
|
||||
[params]
|
||||
environment = "staging"
|
||||
|
||||
# Markdown rendering options
|
||||
[blackfriday]
|
||||
hrefTargetBlank = true
|
||||
smartDashes = false
|
||||
|
||||
[taxonomies]
|
||||
"influxdb/v2.0/tag" = "influxdb/v2.0/tags"
|
||||
"influxdb/cloud/tag" = "influxdb/cloud/tags"
|
||||
|
||||
[markup]
|
||||
[markup.goldmark]
|
||||
[markup.goldmark.renderer]
|
||||
unsafe = true
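The staging configuration above mirrors the production config but with a staging `baseURL` and `environment = "staging"`. The diff does not show where this file lives, so the path below is hypothetical; a sketch of building or serving against it:

```sh
# Hypothetical usage; replace the path with the config file's real location.
hugo --config path/to/staging-config.toml          # one-off build
hugo server --config path/to/staging-config.toml   # live-reload dev server
```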
|
|
@ -12,6 +12,9 @@ pygmentsUseClasses = true
|
|||
# Preserve case in article tags
|
||||
preserveTaxonomyNames = true
|
||||
|
||||
# Generate a robots.txt
|
||||
enableRobotsTXT = true
|
||||
|
||||
# Markdown rendering options
|
||||
[blackfriday]
|
||||
hrefTargetBlank = true
|
||||
|
|
|
@ -40,7 +40,7 @@ Chronograf offers a UI for [Kapacitor](https://github.com/influxdata/kapacitor),
|
|||
|
||||
* Create and delete databases and retention policies
|
||||
* View currently-running queries and stop inefficient queries from overloading your system
|
||||
* Create, delete, and assign permissions to users (Chronograf supports [InfluxDB OSS](/{{< latest "influxdb" "v1" >}}/query_language/authentication_and_authorization/#authorization) and InfluxDB Enterprise user management)
|
||||
* Create, delete, and assign permissions to users (Chronograf supports [InfluxDB OSS 1.x](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#authorization) and InfluxDB Enterprise user management)
|
||||
|
||||
|
||||
### Multi-organization and multi-user support
|
||||
|
|
|
@ -58,7 +58,7 @@ Restart the InfluxDB service for your configuration changes to take effect:
|
|||
|
||||
### Step 3: Create an admin user.
|
||||
|
||||
Because authentication is enabled, you need to create an [admin user](/{{< latest "influxdb" "v1" >}}/query_language/authentication_and_authorization/#user-types-and-privileges) before you can do anything else in the database.
|
||||
Because authentication is enabled, you need to create an [admin user](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#user-types-and-privileges) before you can do anything else in the database.
|
||||
Run the `curl` command below to create an admin user, replacing:
|
||||
|
||||
* `localhost` with the IP or hostname of your InfluxDB OSS instance or one of your InfluxDB Enterprise data nodes
|
||||
|
@ -92,7 +92,7 @@ On the **Chronograf Admin** page:
|
|||
![InfluxDB OSS user management](/img/chronograf/1-6-admin-usermanagement-oss.png)
|
||||
|
||||
InfluxDB users are either admin users or non-admin users.
|
||||
See InfluxDB's [authentication and authorization](/{{< latest "influxdb" "v1" >}}/query_language/authentication_and_authorization/#user-types-and-privileges) documentation for more information about those user types.
|
||||
See InfluxDB's [authentication and authorization](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#user-types-and-privileges) documentation for more information about those user types.
|
||||
|
||||
> ***Note:*** Chronograf currently does not support assigning InfluxDB database `READ` or `WRITE` access to non-admin users.
|
||||
>This is a known issue.
|
||||
|
@ -156,13 +156,13 @@ Assign permissions and roles to both admin and non-admin users.
|
|||
#### AddRemoveNode
|
||||
Permission to add or remove nodes from a cluster.
|
||||
|
||||
**Relevant `influxd-ctl` arguments**:
|
||||
[`add-data`](/{{< latest "enterprise_influxdb" >}}/features/cluster-commands/#add-data),
|
||||
[`add-meta`](/{{< latest "enterprise_influxdb" >}}/features/cluster-commands/#add-meta),
|
||||
[`join`](/{{< latest "enterprise_influxdb" >}}/features/cluster-commands/#join),
|
||||
[`remove-data`](/{{< latest "enterprise_influxdb" >}}/features/cluster-commands/#remove-data),
|
||||
[`remove-meta`](/{{< latest "enterprise_influxdb" >}}/features/cluster-commands/#remove-meta), and
|
||||
[`leave`](/{{< latest "enterprise_influxdb" >}}/features/cluster-commands/#leave)
|
||||
**Relevant `influxd-ctl` commands**:
|
||||
[`add-data`](/{{< latest "enterprise_influxdb" >}}/administration/cluster-commands/#add-data),
|
||||
[`add-meta`](/{{< latest "enterprise_influxdb" >}}/administration/cluster-commands/#add-meta),
|
||||
[`join`](/{{< latest "enterprise_influxdb" >}}/administration/cluster-commands/#join),
|
||||
[`remove-data`](/{{< latest "enterprise_influxdb" >}}/administration/cluster-commands/#remove-data),
|
||||
[`remove-meta`](/{{< latest "enterprise_influxdb" >}}/administration/cluster-commands/#remove-meta), and
|
||||
[`leave`](/{{< latest "enterprise_influxdb" >}}/administration/cluster-commands/#leave)
|
||||
|
||||
**Pages in Chronograf that require this permission**: NA
|
||||
|
||||
|
@ -170,7 +170,7 @@ Permission to add or remove nodes from a cluster.
|
|||
Permission to copy shards.
|
||||
|
||||
**Relevant `influxd-ctl` arguments**:
|
||||
[copy-shard](/{{< latest "enterprise_influxdb" >}}/features/cluster-commands/#copy-shard)
|
||||
[copy-shard](/{{< latest "enterprise_influxdb" >}}/administration/cluster-commands/#copy-shard)
|
||||
|
||||
**Pages in Chronograf that require this permission**: NA
|
||||
|
||||
|
@ -189,15 +189,15 @@ Permission to create databases, create [retention policies](/{{< latest "influxd
|
|||
Permission to manage users and roles; create users, drop users, grant admin status to users, grant permissions to users, revoke admin status from users, revoke permissions from users, change user passwords, view user permissions, and view users and their admin status.
|
||||
|
||||
**Relevant InfluxQL queries**:
|
||||
[`CREATE USER`](/{{< latest "influxdb" "v1" >}}/query_language/authentication_and_authorization/#user-management-commands),
|
||||
[`DROP USER`](/{{< latest "influxdb" "v1" >}}/query_language/authentication_and_authorization/#general-admin-and-non-admin-user-management),
|
||||
[`GRANT ALL PRIVILEGES`](/{{< latest "influxdb" "v1" >}}/query_language/authentication_and_authorization/#user-management-commands),
|
||||
[`GRANT [READ,WRITE,ALL]`](/{{< latest "influxdb" "v1" >}}/query_language/authentication_and_authorization/#non-admin-user-management),
|
||||
[`REVOKE ALL PRIVILEGES`](/{{< latest "influxdb" "v1" >}}/query_language/authentication_and_authorization/#user-management-commands),
|
||||
[`REVOKE [READ,WRITE,ALL]`](/{{< latest "influxdb" "v1" >}}/query_language/authentication_and_authorization/#non-admin-user-management),
|
||||
[`SET PASSWORD`](/{{< latest "influxdb" "v1" >}}/query_language/authentication_and_authorization/#general-admin-and-non-admin-user-management),
|
||||
[`SHOW GRANTS`](/{{< latest "influxdb" "v1" >}}/query_language/authentication_and_authorization/#non-admin-user-management), and
|
||||
[`SHOW USERS`](/{{< latest "influxdb" "v1" >}}/query_language/authentication_and_authorization/#user-management-commands)
|
||||
[`CREATE USER`](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#user-management-commands),
|
||||
[`DROP USER`](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#general-admin-and-non-admin-user-management),
|
||||
[`GRANT ALL PRIVILEGES`](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#user-management-commands),
|
||||
[`GRANT [READ,WRITE,ALL]`](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#non-admin-user-management),
|
||||
[`REVOKE ALL PRIVILEGES`](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#user-management-commands),
|
||||
[`REVOKE [READ,WRITE,ALL]`](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#non-admin-user-management),
|
||||
[`SET PASSWORD`](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#general-admin-and-non-admin-user-management),
|
||||
[`SHOW GRANTS`](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#non-admin-user-management), and
|
||||
[`SHOW USERS`](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#user-management-commands)
|
||||
|
||||
**Pages in Chronograf that require this permission**: Data Explorer, Dashboards, Users and Roles on the Admin page
|
||||
|
||||
|
|
|
@ -8,7 +8,7 @@ aliases:
|
|||
- /chronograf/v1.6/guides/transition-web-admin-interface/
|
||||
---
|
||||
|
||||
Versions 1.3 and later of [InfluxDB](/{{< latest "influxdb" "v1" >}}/) and [InfluxDB Enterprise](/enterprise/latest/) do not support the [web admin interface](/{{< latest "influxdb" "v1" >}}/tools/web_admin/), the builtin user interface for writing and querying data in InfluxDB.
|
||||
Versions 1.3 and later of [InfluxDB](/{{< latest "influxdb" "v1" >}}/) and [InfluxDB Enterprise](/{{< latest "enterprise_influxdb" >}}/) do not support the [web admin interface](https://archive.docs.influxdata.com/influxdb/v1.2/tools/web_admin/), the built-in user interface for writing and querying data in InfluxDB.
|
||||
Chronograf replaces the web admin interface with improved tooling for querying data, writing data, and database management.
|
||||
|
||||
The following sections describe the Chronograf features that relate to the web admin interface:
|
||||
|
|
|
@ -11,7 +11,7 @@ Chronograf gives you the ability to view, search, filter, visualize, and analyze
|
|||
This helps to recognize and diagnose patterns, then quickly dive into logged events that lead up to events.
|
||||
|
||||
## Logging setup
|
||||
Logs data is a first class citizen in InfluxDB and is populated using available log-related [Telegraf input plugins](/{{< latest "telegraf" >}}/plugins/inputs/):
|
||||
Logs data is a first class citizen in InfluxDB and is populated using available log-related [Telegraf input plugins](/{{< latest "telegraf" >}}/plugins/#input-plugins):
|
||||
|
||||
[syslog](https://github.com/influxdata/telegraf/tree/release-1.7/plugins/inputs/syslog)
|
||||
|
||||
|
|
|
@ -80,7 +80,7 @@ Next, start the InfluxDB process:
|
|||
|
||||
#### Step 4: Create an admin user
|
||||
|
||||
Create an [admin user](/{{< latest "influxdb" "v1" >}}/query_language/authentication_and_authorization/#user-types-and-privileges) on your InfluxDB instance.
|
||||
Create an [admin user](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#user-types-and-privileges) on your InfluxDB instance.
|
||||
Because you enabled authentication, you must perform this step before moving on to the next section.
|
||||
Run the command below to create an admin user, replacing `chronothan` and `supersecret` with your own username and password.
|
||||
Note that the password requires single quotes.
|
||||
|
|
|
@ -38,7 +38,7 @@ Chronograf offers a UI for [Kapacitor](https://github.com/influxdata/kapacitor),
|
|||
|
||||
* Create and delete databases and retention policies
|
||||
* View currently-running queries and stop inefficient queries from overloading your system
|
||||
* Create, delete, and assign permissions to users (Chronograf supports [InfluxDB OSS](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#authorization) and InfluxEnterprise user management)
|
||||
* Create, delete, and assign permissions to users (Chronograf supports [InfluxDB OSS](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#authorization) and InfluxDB Enterprise user management)
|
||||
|
||||
|
||||
### Multi-organization and multi-user support
|
||||
|
|
|
@ -919,7 +919,7 @@ In versions 1.3.1+, installing a new version of Chronograf automatically clears
|
|||
|
||||
### Features
|
||||
|
||||
* Add line-protocol proxy for InfluxDB/InfluxEnterprise Cluster data sources
|
||||
* Add line-protocol proxy for InfluxDB/InfluxDB Enterprise Cluster data sources
|
||||
* Add `:dashboardTime:` to support cell-specific time ranges on dashboards
|
||||
* Add support for enabling and disabling [TICKscripts that were created outside Chronograf](/chronograf/v1.7/guides/advanced-kapacitor/#tickscript-management)
|
||||
* Allow users to delete Kapacitor configurations
|
||||
|
|
|
@ -19,4 +19,4 @@ Enter the HTTP bind address of one of your cluster's meta nodes into that input
|
|||
Note that the example above assumes that you do not have authentication enabled.
|
||||
If you have authentication enabled, the form requires username and password information.
|
||||
|
||||
For details about monitoring InfluxEnterprise clusters, see [Monitoring InfluxDB Enterprise clusters](/chronograf/v1.7/guides/monitoring-influxenterprise-clusters).
|
||||
For details about monitoring InfluxDB Enterprise clusters, see [Monitoring InfluxDB Enterprise clusters](/chronograf/v1.7/guides/monitoring-influxenterprise-clusters).
|
||||
|
|
|
@ -23,9 +23,9 @@ The **Chronograf Admin** provides InfluxDB user management for InfluxDB OSS and
|
|||
## Enabling authentication
|
||||
|
||||
Follow the steps below to enable authentication.
|
||||
The steps are the same for InfluxDB OSS instances and InfluxEnterprise clusters.
|
||||
The steps are the same for InfluxDB OSS instances and InfluxDB Enterprise clusters.
|
||||
|
||||
> ***InfluxEnterprise clusters:***
|
||||
> ***InfluxDB Enterprise clusters:***
|
||||
> Repeat the first three steps for each data node in a cluster.
|
||||
|
||||
### Step 1: Enable authentication.
|
||||
|
@ -60,7 +60,7 @@ Restart the InfluxDB service for your configuration changes to take effect:
|
|||
Because authentication is enabled, you need to create an [admin user](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#user-types-and-privileges) before you can do anything else in the database.
|
||||
Run the `curl` command below to create an admin user, replacing:
|
||||
|
||||
* `localhost` with the IP or hostname of your InfluxDB OSS instance or one of your InfluxEnterprise data nodes
|
||||
* `localhost` with the IP or hostname of your InfluxDB OSS instance or one of your InfluxDB Enterprise data nodes
|
||||
* `chronothan` with your own username
|
||||
* `supersecret` with your own password (note that the password requires single quotes)
|
||||
|
||||
|
@ -273,8 +273,8 @@ Permission to create, drop, and view [subscriptions](/{{< latest "influxdb" "v1"
|
|||
Permission to view cluster statistics and diagnostics.
|
||||
|
||||
**Relevant InfluxQL queries**:
|
||||
[`SHOW DIAGNOSTICS`](/influxdb/administration/server_monitoring/#show-diagnostics) and
|
||||
[`SHOW STATS`](/influxdb/administration/server_monitoring/#show-stats)
|
||||
[`SHOW DIAGNOSTICS`](/{{< latest "influxdb" "v1" >}}/administration/server_monitoring/#show-diagnostics) and
|
||||
[`SHOW STATS`](/{{< latest "influxdb" "v1" >}}/administration/server_monitoring/#show-stats)
|
||||
|
||||
**Pages in Chronograf that require this permission**: Data Explorer, Dashboards
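For context, the two statements linked above are plain InfluxQL and can be run directly against InfluxDB 1.x; a hedged example over the HTTP API (host shown without authentication):

```sh
# Hypothetical sketch: run the monitoring queries this permission covers.
curl --get "http://localhost:8086/query" --data-urlencode "q=SHOW DIAGNOSTICS"
curl --get "http://localhost:8086/query" --data-urlencode "q=SHOW STATS"
```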
|
||||
|
||||
|
|
|
@ -46,7 +46,7 @@ Chronograf will use this secret to generate the JWT Signature for all access tok
|
|||
TOKEN_SECRET=<mysecret>
|
||||
```
|
||||
|
||||
> ***InfluxEnterprise clusters:*** If you are running multiple Chronograf servers in a high availability configuration,
|
||||
> ***InfluxDB Enterprise clusters:*** If you are running multiple Chronograf servers in a high availability configuration,
|
||||
> set the `TOKEN_SECRET` environment variable on each server to ensure that users can stay logged in.
|
||||
|
||||
### JWKS Signature Verification (optional)
|
||||
|
|
|
@ -8,7 +8,7 @@ aliases:
|
|||
- /chronograf/v1.7/guides/transition-web-admin-interface/
|
||||
---
|
||||
|
||||
Versions 1.3 and later of [InfluxDB](/{{< latest "influxdb" "v1" >}}/) and [InfluxEnterprise](/enterprise/latest/) do not support the web admin interface, the previous built-in user interface for writing and querying data in InfluxDB.
|
||||
Versions 1.3 and later of [InfluxDB](/{{< latest "influxdb" "v1" >}}/) and [InfluxDB Enterprise](/{{< latest "enterprise_influxdb" >}}/) do not support the web admin interface, the previous built-in user interface for writing and querying data in InfluxDB.
|
||||
Chronograf replaces the web admin interface with improved tooling for querying data, writing data, and database management.
|
||||
|
||||
The following sections describe the Chronograf features that relate to the web admin interface:
|
||||
|
@ -86,8 +86,8 @@ The `Admin` page allows users to:
|
|||
* View, create, and delete users
|
||||
* Change user passwords
|
||||
* Assign and remove permissions to or from a user
|
||||
* Create, edit, and delete roles (available in InfluxEnterprise only)
|
||||
* Assign and remove roles to or from a user (available in InfluxEnterprise only)
|
||||
* Create, edit, and delete roles (available in InfluxDB Enterprise only)
|
||||
* Assign and remove roles to or from a user (available in InfluxDB Enterprise only)
|
||||
|
||||
![Chronograf User Management i](/img/chronograf/1-6-g-admin-chronousers1.png)
|
||||
|
||||
|
|
|
@ -11,7 +11,7 @@ Chronograf gives you the ability to view, search, filter, visualize, and analyze
|
|||
This helps to recognize and diagnose patterns, then quickly dive into logged events that lead up to events.
|
||||
|
||||
## Logging setup
|
||||
Logs data is a first class citizen in InfluxDB and is populated using available log-related [Telegraf input plugins](/{{< latest "telegraf" >}}/plugins/inputs/):
|
||||
Logs data is a first class citizen in InfluxDB and is populated using available log-related [Telegraf input plugins](/{{< latest "telegraf" >}}/plugins/#input-plugins):
|
||||
|
||||
[syslog](https://github.com/influxdata/telegraf/tree/release-1.7/plugins/inputs/syslog)
|
||||
|
||||
|
|
|
@ -10,15 +10,15 @@ menu:
|
|||
|
||||
---
|
||||
|
||||
[InfluxEnterprise](/{{< latest "enterprise_influxdb" >}}/) offers high availability and a highly scalable clustering solution for your time series data needs.
|
||||
[InfluxDB Enterprise](/{{< latest "enterprise_influxdb" >}}/) offers high availability and a highly scalable clustering solution for your time series data needs.
|
||||
Use Chronograf to assess your cluster's health and to monitor the infrastructure behind your project.
|
||||
|
||||
This guide offers step-by-step instructions for using Chronograf, [InfluxDB](/{{< latest "influxdb" "v1" >}}/), and [Telegraf](/{{< latest "telegraf" >}}/) to monitor data nodes in your InfluxEnterprise cluster.
|
||||
This guide offers step-by-step instructions for using Chronograf, [InfluxDB](/{{< latest "influxdb" "v1" >}}/), and [Telegraf](/{{< latest "telegraf" >}}/) to monitor data nodes in your InfluxDB Enterprise cluster.
|
||||
|
||||
## Requirements
|
||||
|
||||
You have a fully-functioning InfluxEnterprise cluster with authentication enabled.
|
||||
See the InfluxEnterprise documentation for
|
||||
You have a fully-functioning InfluxDB Enterprise cluster with authentication enabled.
|
||||
See the InfluxDB Enterprise documentation for
|
||||
[detailed setup instructions](/{{< latest "enterprise_influxdb" >}}/production_installation/).
|
||||
This guide uses an InfluxDB Enterprise cluster with three meta nodes and three data nodes; the steps are also applicable to other cluster configurations.
|
||||
|
||||
|
@ -34,7 +34,7 @@ Before we begin, here's an overview of the final monitoring setup:
|
|||
|
||||
![Architecture diagram](/img/chronograf/1-6-cluster-diagram.png)
|
||||
|
||||
The diagram above shows an InfluxEnterprise cluster that consists of three meta nodes (M) and three data nodes (D).
|
||||
The diagram above shows an InfluxDB Enterprise cluster that consists of three meta nodes (M) and three data nodes (D).
|
||||
Each data node has its own [Telegraf](/{{< latest "telegraf" >}}/) instance (T).
|
||||
|
||||
Each Telegraf instance is configured to collect node CPU, disk, and memory data using the Telegraf [system stats](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/system) input plugin.
|
||||
|
@ -80,7 +80,7 @@ Next, start the InfluxDB process:
|
|||
|
||||
#### Step 4: Create an admin user
|
||||
|
||||
Create an [admin user](/{{< latest "influxdb" "v1" >}}/query_language/authentication_and_authorization/#user-types-and-privileges) on your InfluxDB instance.
|
||||
Create an [admin user](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#user-types-and-privileges) on your InfluxDB instance.
|
||||
Because you enabled authentication, you must perform this step before moving on to the next section.
|
||||
Run the command below to create an admin user, replacing `chronothan` and `supersecret` with your own username and password.
|
||||
Note that the password requires single quotes.
|
||||
|
|
|
@ -33,7 +33,7 @@ InfluxQL is a SQL-like query language you can use to interact with data in Influ
|
|||
|
||||
## Explore data with Flux
|
||||
|
||||
Flux is InfluxData's new functional data scripting language designed for querying, analyzing, and acting on time series data. To learn more about Flux, see [Getting started with Flux](/flux/v0.7/introduction/getting-started).
|
||||
Flux is InfluxData's new functional data scripting language designed for querying, analyzing, and acting on time series data. To learn more about Flux, see [Getting started with Flux](/{{< latest "influxdb" "v2" >}}/query-data/get-started).
|
||||
|
||||
> ***Note:*** Flux v0.7 is a technical preview included with [InfluxDB v1.7](/influxdb/v1.7). It is still in active development and many functions provided by InfluxQL and TICKscript have yet to be implemented.
|
||||
|
||||
|
|
|
@ -65,7 +65,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/
|
|||
|
||||
## apache
|
||||
|
||||
**Required Telegraf plugin:** [Apache input plugin](/{{< latest "telegraf" >}}/plugins/inputs/#apache-http-server)
|
||||
**Required Telegraf plugin:** [Apache input plugin](/{{< latest "telegraf" >}}/plugins/#apache)
|
||||
|
||||
`apache.json`
|
||||
|
||||
|
@ -75,7 +75,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/
|
|||
|
||||
## consul
|
||||
|
||||
**Required Telegraf plugin:** [Consul input plugin](/{{< latest "telegraf" >}}/plugins/inputs/#consul)
|
||||
**Required Telegraf plugin:** [Consul input plugin](/{{< latest "telegraf" >}}/plugins/#consul)
|
||||
|
||||
`consul_http.json`
|
||||
|
||||
|
@ -95,7 +95,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/
|
|||
|
||||
## docker
|
||||
|
||||
**Required Telegraf plugin:** [Docker input plugin](/{{< latest "telegraf" >}}/plugins/inputs/#docker)
|
||||
**Required Telegraf plugin:** [Docker input plugin](/{{< latest "telegraf" >}}/plugins/#docker)
|
||||
|
||||
`docker.json`
|
||||
|
||||
|
@ -115,7 +115,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/
|
|||
|
||||
## elasticsearch
|
||||
|
||||
**Required Telegraf plugin:** [Elasticsearch input plugin](/{{< latest "telegraf" >}}/plugins/inputs/#elasticsearch)
|
||||
**Required Telegraf plugin:** [Elasticsearch input plugin](/{{< latest "telegraf" >}}/plugins/#elasticsearch)
|
||||
|
||||
`elasticsearch.json`
|
||||
|
||||
|
@ -132,7 +132,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/
|
|||
|
||||
## haproxy
|
||||
|
||||
**Required Telegraf plugin:** [HAProxy input plugin](/{{< latest "telegraf" >}}/plugins/inputs/#haproxy)
|
||||
**Required Telegraf plugin:** [HAProxy input plugin](/{{< latest "telegraf" >}}/plugins/#haproxy)
|
||||
|
||||
`haproxy.json`
|
||||
|
||||
|
@ -154,7 +154,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/
|
|||
|
||||
## iis
|
||||
|
||||
**Required Telegraf plugin:** [Windows Performance Counters input plugin](/telegraf/v1.8/plugins/inputs/#windows-performance-counters)
|
||||
**Required Telegraf plugin:** [Windows Performance Counters input plugin](/{{< latest "telegraf" >}}/plugins/#win_perf_counters)
|
||||
|
||||
`win_websvc.json`
|
||||
|
||||
|
@ -162,7 +162,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/
|
|||
|
||||
## influxdb
|
||||
|
||||
**Required Telegraf plugin:** [InfluxDB input plugin](/{{< latest "telegraf" >}}/plugins/inputs/#influxdb-v-1)
|
||||
**Required Telegraf plugin:** [InfluxDB input plugin](/{{< latest "telegraf" >}}/plugins/#influxdb)
|
||||
|
||||
`influxdb_database.json`
|
||||
|
||||
|
@ -207,7 +207,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/
|
|||
|
||||
## Memcached (`memcached`)
|
||||
|
||||
**Required Telegraf plugin:** [Memcached input plugin](/{{< latest "telegraf" >}}/plugins/inputs/#memcached)
|
||||
**Required Telegraf plugin:** [Memcached input plugin](/{{< latest "telegraf" >}}/plugins/#memcached)
|
||||
|
||||
`memcached.json`
|
||||
|
||||
|
@ -227,7 +227,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/
|
|||
|
||||
## mesos
|
||||
|
||||
**Required Telegraf plugin:** [Mesos input plugin](/{{< latest "telegraf" >}}/plugins/inputs/#mesos)
|
||||
**Required Telegraf plugin:** [Mesos input plugin](/{{< latest "telegraf" >}}/plugins/#mesos)
|
||||
|
||||
`mesos.json`
|
||||
|
||||
|
@ -242,7 +242,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/
|
|||
|
||||
## mongodb
|
||||
|
||||
**Required Telegraf plugin:** [MongoDB input plugin](/{{< latest "telegraf" >}}/plugins/inputs/#mongodb)
|
||||
**Required Telegraf plugin:** [MongoDB input plugin](/{{< latest "telegraf" >}}/plugins/#mongodb)
|
||||
|
||||
`mongodb.json`
|
||||
|
||||
|
@ -254,7 +254,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/
|
|||
|
||||
## mysql
|
||||
|
||||
**Required Telegraf plugin:** [MySQL input plugin](/{{< latest "telegraf" >}}/plugins/inputs/#mysql)
|
||||
**Required Telegraf plugin:** [MySQL input plugin](/{{< latest "telegraf" >}}/plugins/#mysql)
|
||||
|
||||
`mysql.json`
|
||||
|
||||
|
@ -265,7 +265,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/
|
|||
|
||||
## nginx
|
||||
|
||||
**Required Telegraf plugin:** [NGINX input plugin](/{{< latest "telegraf" >}}/plugins/inputs/#nginx)
|
||||
**Required Telegraf plugin:** [NGINX input plugin](/{{< latest "telegraf" >}}/plugins/#nginx)
|
||||
|
||||
`nginx.json`
|
||||
|
||||
|
@ -276,7 +276,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/
|
|||
|
||||
## nsq
|
||||
|
||||
**Required Telegraf plugin:** [NSQ input plugin](/{{< latest "telegraf" >}}/plugins/inputs/#nsq)
|
||||
**Required Telegraf plugin:** [NSQ input plugin](/{{< latest "telegraf" >}}/plugins/#nsq)
|
||||
|
||||
`nsq_channel.json`
|
||||
|
||||
|
@ -297,7 +297,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/
|
|||
|
||||
## phpfpm
|
||||
|
||||
**Required Telegraf plugin:** [PHPfpm input plugin](/{{< latest "telegraf" >}}/plugins/inputs/#php-fpm)
|
||||
**Required Telegraf plugin:** [PHPfpm input plugin](/{{< latest "telegraf" >}}/plugins/#phpfpm)
|
||||
|
||||
`phpfpm.json`
|
||||
|
||||
|
@ -309,7 +309,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/
|
|||
|
||||
## ping
|
||||
|
||||
**Required Telegraf plugin:** [Ping input plugin](/{{< latest "telegraf" >}}/plugins/inputs/#ping)
|
||||
**Required Telegraf plugin:** [Ping input plugin](/{{< latest "telegraf" >}}/plugins/#ping)
|
||||
|
||||
`ping.json`
|
||||
|
||||
|
@ -318,7 +318,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/
|
|||
|
||||
## postgresql
|
||||
|
||||
**Required Telegraf plugin:** [PostgreSQL input plugin](/{{< latest "telegraf" >}}/plugins/inputs/#postgresql)
|
||||
**Required Telegraf plugin:** [PostgreSQL input plugin](/{{< latest "telegraf" >}}/plugins/#postgresql)
|
||||
|
||||
`postgresql.json`
|
||||
|
||||
|
@ -329,7 +329,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/
|
|||
|
||||
## rabbitmq
|
||||
|
||||
**Required Telegraf plugin:** [RabbitMQ input plugin](/{{< latest "telegraf" >}}/plugins/inputs/#rabbitmq)
|
||||
**Required Telegraf plugin:** [RabbitMQ input plugin](/{{< latest "telegraf" >}}/plugins/#rabbitmq)
|
||||
|
||||
`rabbitmq.json`
|
||||
|
||||
|
@ -340,7 +340,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/
|
|||
|
||||
## redis
|
||||
|
||||
**Required Telegraf plugin:** [Redis input plugin](/{{< latest "telegraf" >}}/plugins/inputs/#redis)
|
||||
**Required Telegraf plugin:** [Redis input plugin](/{{< latest "telegraf" >}}/plugins/#redis)
|
||||
|
||||
|
||||
`redis.json`
|
||||
|
@ -352,7 +352,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/
|
|||
|
||||
## riak
|
||||
|
||||
**Required Telegraf plugin:** [Riak input plugin](/{{< latest "telegraf" >}}/plugins/inputs/#riak)
|
||||
**Required Telegraf plugin:** [Riak input plugin](/{{< latest "telegraf" >}}/plugins/#riak)
|
||||
|
||||
|
||||
`riak.json`
|
||||
|
@ -371,7 +371,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/
|
|||
|
||||
### cpu
|
||||
|
||||
**Required Telegraf plugin:** [CPU input plugin](/{{< latest "telegraf" >}}/plugins/inputs/#cpu)
|
||||
**Required Telegraf plugin:** [CPU input plugin](/{{< latest "telegraf" >}}/plugins/#cpu)
|
||||
|
||||
`cpu.json`
|
||||
|
||||
|
@ -381,13 +381,13 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/
|
|||
|
||||
`disk.json`
|
||||
|
||||
**Required Telegraf plugin:** [Disk input plugin](/{{< latest "telegraf" >}}/plugins/inputs/#disk)
|
||||
**Required Telegraf plugin:** [Disk input plugin](/{{< latest "telegraf" >}}/plugins/#disk)
|
||||
|
||||
* "System - Disk used %"
|
||||
|
||||
### diskio
|
||||
|
||||
**Required Telegraf plugin:** [DiskIO input plugin](/{{< latest "telegraf" >}}/plugins/inputs/#diskio)
|
||||
**Required Telegraf plugin:** [DiskIO input plugin](/{{< latest "telegraf" >}}/plugins/#diskio)
|
||||
|
||||
`diskio.json`
|
||||
|
||||
|
@ -396,7 +396,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/
|
|||
|
||||
### mem
|
||||
|
||||
**Required Telegraf plugin:** [Mem input plugin](/{{< latest "telegraf" >}}/plugins/inputs/#mem)
|
||||
**Required Telegraf plugin:** [Mem input plugin](/{{< latest "telegraf" >}}/plugins/#mem)
|
||||
|
||||
`mem.json`
|
||||
|
||||
|
@ -404,7 +404,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/
|
|||
|
||||
### net
|
||||
|
||||
**Required Telegraf plugin:** [Net input plugin](/{{< latest "telegraf" >}}/plugins/inputs/#net)
|
||||
**Required Telegraf plugin:** [Net input plugin](/{{< latest "telegraf" >}}/plugins/#net)
|
||||
|
||||
`net.json`
|
||||
|
||||
|
@ -413,7 +413,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/
|
|||
|
||||
### netstat
|
||||
|
||||
**Required Telegraf plugin:** [Netstat input plugin](/{{< latest "telegraf" >}}/plugins/inputs/#netstat)
|
||||
**Required Telegraf plugin:** [Netstat input plugin](/{{< latest "telegraf" >}}/plugins/#netstat)
|
||||
|
||||
`netstat.json`
|
||||
|
||||
|
@ -422,7 +422,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/
|
|||
|
||||
### processes
|
||||
|
||||
**Required Telegraf plugin:** [Processes input plugin](/{{< latest "telegraf" >}}/plugins/inputs/#processes)
|
||||
**Required Telegraf plugin:** [Processes input plugin](/{{< latest "telegraf" >}}/plugins/#processes)
|
||||
|
||||
`processes.json`
|
||||
|
||||
|
@ -430,7 +430,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/
|
|||
|
||||
### procstat
|
||||
|
||||
**Required Telegraf plugin:** [Procstat input plugin](/{{< latest "telegraf" >}}/plugins/inputs/#procstat)
|
||||
**Required Telegraf plugin:** [Procstat input plugin](/{{< latest "telegraf" >}}/plugins/#procstat)
|
||||
|
||||
`procstat.json`
|
||||
|
||||
|
@ -439,7 +439,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/
|
|||
|
||||
### system
|
||||
|
||||
**Required Telegraf plugin:** [Procstat input plugin](/{{< latest "telegraf" >}}/plugins/inputs/#procstat)
|
||||
**Required Telegraf plugin:** [Procstat input plugin](/{{< latest "telegraf" >}}/plugins/#procstat)
|
||||
|
||||
`load.json`
|
||||
|
||||
|
@ -447,7 +447,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/
|
|||
|
||||
## varnish
|
||||
|
||||
**Required Telegraf plugin:** [Varnish](/{{< latest "telegraf" >}}/plugins/inputs/#varnish)
|
||||
**Required Telegraf plugin:** [Varnish](/{{< latest "telegraf" >}}/plugins/#varnish)
|
||||
|
||||
`varnish.json`
|
||||
|
||||
|
@ -456,7 +456,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/
|
|||
|
||||
## win_system
|
||||
|
||||
**Required Telegraf plugin:** [Windows Performance Counters input plugin](/{{< latest "telegraf" >}}/plugins/inputs/#windows-performance-counters)
|
||||
**Required Telegraf plugin:** [Windows Performance Counters input plugin](/{{< latest "telegraf" >}}/plugins/#win_perf_counters)
|
||||
|
||||
`win_cpu.json`
|
||||
|
||||
|
|
|
@ -72,7 +72,7 @@ sudo yum localinstall chronograf-<version#>.x86_64.rpm
|
|||
2. Fill out the form with the following details:
|
||||
* **Connection String**: Enter the hostname or IP of the machine that InfluxDB is running on, and be sure to include InfluxDB's default port `8086`.
|
||||
* **Connection Name**: Enter a name for your connection string.
|
||||
* **Username** and **Password**: These fields can remain blank unless you've [enabled authentication](/influxdb/v1.7/administration/authentication_and_authorization.md) in InfluxDB.
|
||||
* **Username** and **Password**: These fields can remain blank unless you've [enabled authentication](/influxdb/v1.7/administration/authentication_and_authorization) in InfluxDB.
|
||||
* **Telegraf Database Name**: Optionally, enter a name for your Telegraf database. The default name is Telegraf.
|
||||
3. Click **Add Source**.
|
||||
|
||||
|
|
|
@ -7,9 +7,9 @@ menu:
|
|||
parent: Troubleshooting
|
||||
---
|
||||
|
||||
## How do I connect Chronograf to an InfluxEnterprise cluster?
|
||||
## How do I connect Chronograf to an InfluxDB Enterprise cluster?
|
||||
|
||||
The connection details form requires additional information when connecting Chronograf to an [InfluxEnterprise cluster](/{{< latest "enterprise_influxdb" >}}/).
|
||||
The connection details form requires additional information when connecting Chronograf to an [InfluxDB Enterprise cluster](/{{< latest "enterprise_influxdb" >}}/).
|
||||
|
||||
When you enter the InfluxDB HTTP bind address in the `Connection String` input, Chronograf automatically checks if that InfluxDB instance is a data node.
|
||||
If it is a data node, Chronograf automatically adds the `Meta Service Connection URL` input to the connection details form.
|
||||
|
@ -19,4 +19,4 @@ Enter the HTTP bind address of one of your cluster's meta nodes into that input
|
|||
|
||||
Note that the example above assumes that you do not have authentication enabled.
|
||||
If you have authentication enabled, the form requires username and password information.
|
||||
For more details about monitoring an InfluxEnterprise cluster, see the [Monitor an InfluxEnterprise Cluster](/chronograf/v1.7/guides/monitoring-influxenterprise-clusters/) guide.
|
||||
For more details about monitoring an InfluxDB Enterprise cluster, see the [Monitor an InfluxDB Enterprise Cluster](/chronograf/v1.7/guides/monitoring-influxenterprise-clusters/) guide.
|
||||
|
|
|
@ -42,7 +42,7 @@ Chronograf offers a UI for [Kapacitor](https://github.com/influxdata/kapacitor),
|
|||
|
||||
* Create and delete databases and retention policies
|
||||
* View currently-running queries and stop inefficient queries from overloading your system
|
||||
* Create, delete, and assign permissions to users (Chronograf supports [InfluxDB OSS](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#authorization) and InfluxEnterprise user management)
|
||||
* Create, delete, and assign permissions to users (Chronograf supports [InfluxDB OSS](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#authorization) and InfluxDB Enterprise user management)
|
||||
|
||||
|
||||
### Multi-organization and multi-user support
|
||||
|
|
|
@ -8,29 +8,30 @@ menu:
|
|||
parent: About the project
|
||||
---
|
||||
|
||||
## v1.8.10 [2020-02-04]
|
||||
## v1.8.10 [2021-02-08]
|
||||
|
||||
### Features
|
||||
|
||||
- Add the ability to set the active InfluxDB database and retention policy for InfluxQL commands, including `DROP MEASUREMENT`, `DROP SERIES FROM`, and `DELETE FROM`. For example:
|
||||
- Add the ability to set the active InfluxDB database and retention policy for InfluxQL commands. Now, in Chronograf Data Explorer, if you select a metaquery template (InfluxQL command) that requires you to specify an active database, such as `DROP MEASUREMENT`, `DROP SERIES FROM`, and `DELETE FROM`, the `USE` command is prepended to your InfluxQL command as follows:
|
||||
|
||||
```sh
|
||||
```
|
||||
USE "db_name"; DROP MEASUREMENT "measurement_name"
|
||||
USE "db_name"; DROP SERIES FROM "measurement_name" WHERE "tag" = 'value'
|
||||
USE "db_name"; DELETE FROM "measurement_name" WHERE "tag" = 'value' AND time < '2020-01-01'
|
||||
```
|
||||
|
||||
- Upgrade to Axios 0.21.1.
|
||||
- Add support for Bitbucket `emails` endpoint with generic OAuth. For more information, see [Bitbucket documentation](https://developer.atlassian.com/bitbucket/api/2/reference/resource/user/emails) and how to [configure Chronograf to authenticate with OAuth 2.0](/chronograf/v1.8/administration/managing-security/#configure-chronograf-to-authenticate-with-oauth-2-0).
|
||||
|
||||
### Bug Fixes
|
||||
|
||||
- Repair ARMv5 build.
|
||||
- Upgrade to Axios 0.21.1.
|
||||
- Stop async executions on unmounted LogsPage.
|
||||
- Repair dashboard import to remap sources in variables.
|
||||
- UI updates:
|
||||
- Ignore databases that cannot be read. Now, the Admin page correctly displays all databases the user has permissions to.
|
||||
- Improve Send to Dashboard feedback on the Data Explorer page.
|
||||
- Ignore databases that cannot be read. Now, the Admin page correctly displays all databases that the user has permissions to.
|
||||
- Improve the Send to Dashboard feedback on the Data Explorer page.
|
||||
- Log Viewer updates:
|
||||
- Avoid endless networking loop.
|
||||
- Show timestamp with full nanosecond precision.
|
||||
|
@ -155,7 +156,7 @@ TLS1.2 is now the default minimum required TLS version. If you have clients that
|
|||
|
||||
### Features
|
||||
|
||||
- Update to [Flux v0.65.0](/flux/v0.65/about_the_project/releasenotes-changelog/#v0-65-0-2020-03-27).
|
||||
- Update to Flux v0.65.0.
|
||||
|
||||
### Bug Fixes
|
||||
|
||||
|
@ -1112,7 +1113,7 @@ In versions 1.3.1+, installing a new version of Chronograf automatically clears
|
|||
|
||||
### Features
|
||||
|
||||
* Add line-protocol proxy for InfluxDB/InfluxEnterprise Cluster data sources
|
||||
* Add line-protocol proxy for InfluxDB/InfluxDB Enterprise Cluster data sources
|
||||
* Add `:dashboardTime:` to support cell-specific time ranges on dashboards
|
||||
* Add support for enabling and disabling [TICKscripts that were created outside Chronograf](/chronograf/v1.8/guides/advanced-kapacitor/#tickscript-management)
|
||||
* Allow users to delete Kapacitor configurations
|
||||
|
|
|
@ -19,4 +19,4 @@ Enter the HTTP bind address of one of your cluster's meta nodes into that input
|
|||
Note that the example above assumes that you do not have authentication enabled.
|
||||
If you have authentication enabled, the form requires username and password information.
|
||||
|
||||
For details about monitoring InfluxEnterprise clusters, see [Monitoring InfluxDB Enterprise clusters](/{{chronograf/v1.8/guides/monitoring-influxenterprise-clusters).
|
||||
For details about monitoring InfluxDB Enterprise clusters, see [Monitoring InfluxDB Enterprise clusters](/chronograf/v1.8/guides/monitoring-influxenterprise-clusters).
|
||||
|
|
|
@ -32,9 +32,9 @@ interfaces, CLIs, or APIs to complete administrative tasks.
|
|||
## Enable authentication
|
||||
|
||||
Follow the steps below to enable authentication.
|
||||
The steps are the same for InfluxDB OSS instances and InfluxEnterprise clusters.
|
||||
The steps are the same for InfluxDB OSS instances and InfluxDB Enterprise clusters.
|
||||
|
||||
> ***InfluxEnterprise clusters:***
|
||||
> ***InfluxDB Enterprise clusters:***
|
||||
> Repeat the first three steps for each data node in a cluster.
|
||||
|
||||
### Step 1: Enable authentication.
|
||||
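For reference, authentication is controlled by the `auth-enabled` setting in the `[http]` section of the InfluxDB configuration file. A minimal sketch (the file path and exact layout are assumptions; adjust for your install):

```toml
# /etc/influxdb/influxdb.conf
[http]
  auth-enabled = true
```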
|
@ -69,7 +69,7 @@ Restart the InfluxDB service for your configuration changes to take effect:
|
|||
Because authentication is enabled, you need to create an [admin user](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#user-types-and-privileges) before you can do anything else in the database.
|
||||
Run the `curl` command below to create an admin user, replacing:
|
||||
|
||||
* `localhost` with the IP or hostname of your InfluxDB OSS instance or one of your InfluxEnterprise data nodes
|
||||
* `localhost` with the IP or hostname of your InfluxDB OSS instance or one of your InfluxDB Enterprise data nodes
|
||||
* `chronothan` with your own username
|
||||
* `supersecret` with your own password (note that the password requires single quotes)
|
||||
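A sketch of that command, assuming the default port `8086` and the placeholder values above:

```sh
curl -XPOST "http://localhost:8086/query" \
  --data-urlencode "q=CREATE USER chronothan WITH PASSWORD 'supersecret' WITH ALL PRIVILEGES"
```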
|
||||
|
@ -282,8 +282,8 @@ Permission to create, drop, and view [subscriptions](/{{< latest "influxdb" "v1"
|
|||
Permission to view cluster statistics and diagnostics.
|
||||
|
||||
**Relevant InfluxQL queries**:
|
||||
[`SHOW DIAGNOSTICS`](/influxdb/administration/server_monitoring/#show-diagnostics) and
|
||||
[`SHOW STATS`](/influxdb/administration/server_monitoring/#show-stats)
|
||||
[`SHOW DIAGNOSTICS`](/{{< latest "influxdb" "v1" >}}/administration/server_monitoring/#show-diagnostics) and
|
||||
[`SHOW STATS`](/{{< latest "influxdb" "v1" >}}/administration/server_monitoring/#show-stats)
|
||||
|
||||
**Pages in Chronograf that require this permission**: Data Explorer, Dashboards
|
||||
|
||||
|
|
|
@ -49,7 +49,7 @@ Chronograf will use this secret to generate the JWT Signature for all access tok
|
|||
```
|
||||
|
||||
{{% note %}}
|
||||
***InfluxEnterprise clusters:*** If you are running multiple Chronograf servers in a high availability configuration,
|
||||
***InfluxDB Enterprise clusters:*** If you are running multiple Chronograf servers in a high availability configuration,
|
||||
set the `TOKEN_SECRET` environment variable on each server to ensure that users can stay logged in.
|
||||
{{% /note %}}
|
||||
|
||||
|
|
|
@ -11,7 +11,7 @@ Chronograf gives you the ability to view, search, filter, visualize, and analyze
|
|||
This helps to recognize and diagnose patterns, then quickly dive into logged events that lead up to events.
|
||||
|
||||
## Logging setup
|
||||
Logs data is a first class citizen in InfluxDB and is populated using available log-related [Telegraf input plugins](/{{< latest "telegraf" >}}/plugins/inputs/):
|
||||
Log data is a first-class citizen in InfluxDB and is populated using available log-related [Telegraf input plugins](/{{< latest "telegraf" >}}/plugins/#input-plugins):
|
||||
|
||||
[syslog](https://github.com/influxdata/telegraf/tree/release-1.8/plugins/inputs/syslog)
|
||||
|
||||
|
|
|
@ -10,15 +10,15 @@ menu:
|
|||
|
||||
---
|
||||
|
||||
[InfluxEnterprise](/{{< latest "enterprise_influxdb" >}}/) offers high availability and a highly scalable clustering solution for your time series data needs.
|
||||
[InfluxDB Enterprise](/{{< latest "enterprise_influxdb" >}}/) offers high availability and a highly scalable clustering solution for your time series data needs.
|
||||
Use Chronograf to assess your cluster's health and to monitor the infrastructure behind your project.
|
||||
|
||||
This guide offers step-by-step instructions for using Chronograf, [InfluxDB](/{{< latest "influxdb" "v1" >}}/), and [Telegraf](/{{< latest "telegraf" >}}/) to monitor data nodes in your InfluxEnterprise cluster.
|
||||
This guide offers step-by-step instructions for using Chronograf, [InfluxDB](/{{< latest "influxdb" "v1" >}}/), and [Telegraf](/{{< latest "telegraf" >}}/) to monitor data nodes in your InfluxDB Enterprise cluster.
|
||||
|
||||
## Requirements
|
||||
|
||||
You have a fully-functioning InfluxEnterprise cluster with authentication enabled.
|
||||
See the InfluxEnterprise documentation for
|
||||
You have a fully-functioning InfluxDB Enterprise cluster with authentication enabled.
|
||||
See the InfluxDB Enterprise documentation for
|
||||
[detailed setup instructions](/{{< latest "enterprise_influxdb" >}}/production_installation/).
|
||||
This guide uses an InfluxDB Enterprise cluster with three meta nodes and three data nodes; the steps are also applicable to other cluster configurations.
|
||||
|
||||
|
@ -34,7 +34,7 @@ Before we begin, here's an overview of the final monitoring setup:
|
|||
|
||||
![Architecture diagram](/img/chronograf/1-6-cluster-diagram.png)
|
||||
|
||||
The diagram above shows an InfluxEnterprise cluster that consists of three meta nodes (M) and three data nodes (D).
|
||||
The diagram above shows an InfluxDB Enterprise cluster that consists of three meta nodes (M) and three data nodes (D).
|
||||
Each data node has its own [Telegraf](/{{< latest "telegraf" >}}/) instance (T).
|
||||
|
||||
Each Telegraf instance is configured to collect node CPU, disk, and memory data using the Telegraf [system stats](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/system) input plugin.
|
||||
|
@ -80,7 +80,7 @@ Next, start the InfluxDB process:
|
|||
|
||||
#### Step 4: Create an admin user
|
||||
|
||||
Create an [admin user](/{{< latest "influxdb" "v1" >}}/query_language/authentication_and_authorization/#user-types-and-privileges) on your InfluxDB instance.
|
||||
Create an [admin user](/{{< latest "influxdb" "v1" >}}/administration/authentication_and_authorization/#user-types-and-privileges) on your InfluxDB instance.
|
||||
Because you enabled authentication, you must perform this step before moving on to the next section.
|
||||
Run the command below to create an admin user, replacing `chronothan` and `supersecret` with your own username and password.
|
||||
Note that the password requires single quotes.
|
||||
|
|
|
@ -42,7 +42,7 @@ For more information, see [InfluxQL support](/influxdb/cloud/query-data/influxql
|
|||
|
||||
## Explore data with Flux
|
||||
|
||||
Flux is InfluxData's new functional data scripting language designed for querying, analyzing, and acting on time series data. To learn more about Flux, see [Getting started with Flux](/flux/v0.7/introduction/getting-started).
|
||||
Flux is InfluxData's new functional data scripting language designed for querying, analyzing, and acting on time series data. To learn more about Flux, see [Getting started with Flux](/{{< latest "influxdb" "v2" >}}/query-data/get-started).
|
||||
|
||||
1. Open the Data Explorer and click **Add a Query**.
|
||||
2. To the right of the source dropdown above the graph placeholder, select **Flux** as the source type.
|
||||
|
|
|
@ -70,7 +70,8 @@ sudo yum localinstall chronograf-<version#>.x86_64.rpm
|
|||
2. Fill out the form with the following details:
|
||||
* **Connection String**: Enter the hostname or IP of the machine that InfluxDB is running on, and be sure to include InfluxDB's default port `8086`.
|
||||
* **Connection Name**: Enter a name for your connection string.
|
||||
* **Username** and **Password**: These fields can remain blank unless you've [enabled authentication](/influxdb/v1.8/administration/authentication_and_authorization) in InfluxDB. Chronograf user accounts and credentials should be different than credentials used for InfluxDB, to ensure distinct permissions can be applied. For example, you may want to set up a Chronograf to run as a service account with read-only permissions to InfluxDB. For more information, see how to [manage InfluxDB users in Chronograf] and [manage Chronograf users](/chronograf/v1.8/administration/managing-chronograf-users/).
|
||||
* **Username** and **Password**: These fields can remain blank unless you've [enabled authentication](/influxdb/v1.8/administration/authentication_and_authorization) in InfluxDB. Chronograf user accounts and credentials should be different than credentials used for InfluxDB, to ensure distinct permissions can be applied. For example, you may want to set up Chronograf to run as a service account with read-only permissions to InfluxDB. For more information, see how to [manage InfluxDB users in Chronograf](/chronograf/v1.8/administration/managing-influxdb-users/) and [manage Chronograf users](/chronograf/v1.8/administration/managing-chronograf-users/).
|
||||
|
||||
* **Telegraf Database Name**: Optionally, enter a name for your Telegraf database. The default name is Telegraf.
|
||||
3. Click **Add Source**.
|
||||
|
||||
|
|
|
@ -17,11 +17,12 @@ chronoctl add-superadmin [flags]
|
|||
```
|
||||
|
||||
## Flags
|
||||
| Flag | Description | Input type |
|
||||
| :--------------------- | :---------------------------------------------------------------------------------------------------- | :--------: |
|
||||
| `-b`, `--bolt-path` | Full path to boltDB file (e.g. './chronograf-v1.db')" env:"BOLT_PATH" default:"chronograf-v1.db" | string |
|
||||
| `-i`, `--id` | User ID for an existing user | uint64 |
|
||||
| `-n`, `--name` | User's name. Must be Oauth-able email address or username. | |
|
||||
| `-p`, `--provider` | Name of the Auth provider (e.g. Google, GitHub, auth0, or generic) | string |
|
||||
| `-s`, `--scheme` | Authentication scheme that matches auth provider (default:oauth2) | string |
|
||||
| `-o`, `--orgs` | A comma-separated list of organizations that the user should be added to (default:"default") | string |
|
||||
|
||||
| Flag | | Description | Input type |
|
||||
|:---- |:----------------- | :---------------------------------------------------------------------------------------------------- | :--------: |
|
||||
| `-b` | `--bolt-path` | Full path to the BoltDB file (e.g. `./chronograf-v1.db`). Environment variable: `BOLT_PATH`. Default: `chronograf-v1.db`. | string |
|
||||
| `-i` | `--id` | User ID for an existing user | uint64 |
|
||||
| `-n` | `--name` | User's name. Must be Oauth-able email address or username. | |
|
||||
| `-p` | `--provider` | Name of the Auth provider (e.g. Google, GitHub, auth0, or generic) | string |
|
||||
| `-s` | `--scheme` | Authentication scheme that matches auth provider (default:oauth2) | string |
|
||||
| `-o` | `--orgs` | A comma-separated list of organizations that the user should be added to (default:"default") | string |
|
||||
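A hypothetical invocation that promotes an existing OAuth user, using only the flags listed above (the BoltDB path, email address, and provider are placeholders):

```sh
chronoctl add-superadmin \
  --bolt-path /var/lib/chronograf/chronograf-v1.db \
  --name user@example.com \
  --provider github \
  --scheme oauth2 \
  --orgs default
```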
|
|
|
@ -18,6 +18,6 @@ chronoctl list-users [flags]
|
|||
```
|
||||
|
||||
## Flags
|
||||
| Flag | Description | Input type |
|
||||
| :--------------------- | :---------------------------------------------------------------------------------------------------- | :--------: |
|
||||
| `--b`, `--bolt-path` | Full path to boltDB file (e.g. './chronograf-v1.db')" env:"BOLT_PATH" (default:chronograf-v1.db) | string |
|
||||
| Flag | | Description | Input type |
|
||||
| :---- |:----------- | :------------------------------------------------------------ | :--------: |
|
||||
| `-b` | `--bolt-path` | Full path to the BoltDB file (e.g. `./chronograf-v1.db`). Environment variable: `BOLT_PATH`. Default: `chronograf-v1.db`. | string |
|
||||
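A hypothetical invocation (the BoltDB path is a placeholder):

```sh
chronoctl list-users --bolt-path /var/lib/chronograf/chronograf-v1.db
```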
|
|
|
@ -20,7 +20,7 @@ chronoctl migrate [flags]
|
|||
```
|
||||
|
||||
## Flags
|
||||
| Flag | Description | Input type |
|
||||
|:---- |:----------- |:----------: |
|
||||
| `-f`, `--from` | Full path to BoltDB file or etcd (e.g. 'bolt:///path/to/chronograf-v1.db' or 'etcd://user:pass@localhost:2379 (default: chronograf-v1.db) | string |
|
||||
| `-t`, `--to` | Full path to BoltDB file or etcd (e.g. 'bolt:///path/to/chronograf-v1.db' or 'etcd://user:pass@localhost:2379 (default: etcd://localhost:2379) | string |
|
||||
| Flag | | Description | Input type |
|
||||
|:---- |:--- |:----------- |:----------: |
|
||||
| `-f` | `--from` | Full path to BoltDB file or etcd (e.g. `bolt:///path/to/chronograf-v1.db` or `etcd://user:pass@localhost:2379`; default: `chronograf-v1.db`) | string |
|
||||
| `-t` | `--to` | Full path to BoltDB file or etcd (e.g. `bolt:///path/to/chronograf-v1.db` or `etcd://user:pass@localhost:2379`; default: `etcd://localhost:2379`) | string |
|
||||
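A hypothetical invocation that moves data from a local BoltDB file to etcd (both endpoints are placeholders):

```sh
chronoctl migrate \
  --from bolt:///var/lib/chronograf/chronograf-v1.db \
  --to etcd://localhost:2379
```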
|
|
|
@ -8,9 +8,9 @@ menu:
|
|||
parent: Troubleshoot
|
||||
---
|
||||
|
||||
## How do I connect Chronograf to an InfluxEnterprise cluster?
|
||||
## How do I connect Chronograf to an InfluxDB Enterprise cluster?
|
||||
|
||||
The connection details form requires additional information when connecting Chronograf to an [InfluxEnterprise cluster](/{{< latest "enterprise_influxdb" >}}/).
|
||||
The connection details form requires additional information when connecting Chronograf to an [InfluxDB Enterprise cluster](/{{< latest "enterprise_influxdb" >}}/).
|
||||
|
||||
When you enter the InfluxDB HTTP bind address in the `Connection String` input, Chronograf automatically checks if that InfluxDB instance is a data node.
|
||||
If it is a data node, Chronograf automatically adds the `Meta Service Connection URL` input to the connection details form.
|
||||
|
@ -20,4 +20,4 @@ Enter the HTTP bind address of one of your cluster's meta nodes into that input
|
|||
|
||||
Note that the example above assumes that you do not have authentication enabled.
|
||||
If you have authentication enabled, the form requires username and password information.
|
||||
For more details about monitoring an InfluxEnterprise cluster, see the [Monitor an InfluxEnterprise Cluster](/chronograf/v1.8/guides/monitoring-influxenterprise-clusters/) guide.
|
||||
For more details about monitoring an InfluxDB Enterprise cluster, see the [Monitor an InfluxDB Enterprise Cluster](/chronograf/v1.8/guides/monitoring-influxenterprise-clusters/) guide.
|
||||
|
|
|
@ -317,7 +317,7 @@ Restored from my-incremental-backup/ in 66.715524ms, transferred 588800 bytes
|
|||
|
||||
In this example, our `telegraf` database was mistakenly dropped, but you have a recent backup so you've only lost a small amount of data.
|
||||
|
||||
If [Telegraf](/telegraf/v1.5/) is still running, it will recreate the `telegraf` database shortly after the database is dropped.
|
||||
If [Telegraf](/{{< latest "telegraf" >}}/) is still running, it will recreate the `telegraf` database shortly after the database is dropped.
|
||||
You might try to directly restore your `telegraf` backup just to find that you can't restore.
|
||||
|
||||
```
|
||||
|
|
|
@ -153,7 +153,7 @@ Error: authorization failed.
|
|||
|
||||
Adds a data node to a cluster.
|
||||
By default, `influxd-ctl` adds the specified data node to the local meta node's cluster.
|
||||
Use `add-data` instead of the [`join` argument](#join) when performing a [production installation](/enterprise_influxdb/v1.5/production_installation/data_node_installation/) of an InfluxEnterprise cluster.
|
||||
Use `add-data` instead of the [`join` argument](#join) when performing a [production installation](/enterprise_influxdb/v1.5/production_installation/data_node_installation/) of an InfluxDB Enterprise cluster.
|
||||
|
||||
#### Syntax
|
||||
|
||||
|
@ -191,7 +191,7 @@ Added data node 3 at cluster-data-node:8088
|
|||
|
||||
Adds a meta node to a cluster.
|
||||
By default, `influxd-ctl` adds the specified meta node to the local meta node's cluster.
|
||||
Use `add-meta` instead of the [`join` argument](#join) when performing a [Production Installation](/enterprise_influxdb/v1.5/production_installation/meta_node_installation/) of an InfluxEnterprise cluster.
|
||||
Use `add-meta` instead of the [`join` argument](#join) when performing a [Production Installation](/enterprise_influxdb/v1.5/production_installation/meta_node_installation/) of an InfluxDB Enterprise cluster.
|
||||
|
||||
Resources: [Production installation](/enterprise_influxdb/v1.5/production_installation/data_node_installation/)
|
||||
|
||||
|
@ -368,7 +368,7 @@ cluster-data-node-02:8088 cluster-data-node-03:8088 telegraf autogen 34
|
|||
|
||||
Joins a meta node and/or data node to a cluster.
|
||||
By default, `influxd-ctl` joins the local meta node and/or data node into a new cluster.
|
||||
Use `join` instead of the [`add-meta`](#add-meta) or [`add-data`](#add-data) arguments when performing a [QuickStart Installation](/enterprise_influxdb/v1.5/quickstart_installation/cluster_installation/) of an InfluxEnterprise cluster.
|
||||
Use `join` instead of the [`add-meta`](#add-meta) or [`add-data`](#add-data) arguments when performing a [QuickStart Installation](/enterprise_influxdb/v1.5/quickstart_installation/cluster_installation/) of an InfluxDB Enterprise cluster.
|
||||
|
||||
#### Syntax
|
||||
|
||||
|
@ -507,7 +507,7 @@ Killed shard copy 39 from cluster-data-node-02:8088 to cluster-data-node-03:8088
|
|||
### `leave`
|
||||
|
||||
Removes a meta node and/or data node from the cluster.
|
||||
Use `leave` instead of the [`remove-meta`](#remove-meta) and [`remove-data`](#remove-data) arguments if you set up your InfluxEnterprise cluster with the [QuickStart Installation](/enterprise_influxdb/v1.5/quickstart_installation/cluster_installation/) process.
|
||||
Use `leave` instead of the [`remove-meta`](#remove-meta) and [`remove-data`](#remove-data) arguments if you set up your InfluxDB Enterprise cluster with the [QuickStart Installation](/enterprise_influxdb/v1.5/quickstart_installation/cluster_installation/) process.
|
||||
|
||||
{{% warn %}}The `leave` argument is destructive; it erases all metastore information from meta nodes and all data from data nodes.
|
||||
Use `leave` only if you want to *permanently* remove a node from a cluster.
|
||||
|
@ -589,7 +589,7 @@ Successfully left cluster
|
|||
### `remove-data`
|
||||
|
||||
Removes a data node from a cluster.
|
||||
Use `remove-data` instead of the [`leave`](#leave) argument if you set up your InfluxEnterprise cluster with the [Production Installation](/enterprise_influxdb/v1.5/production_installation/) process.
|
||||
Use `remove-data` instead of the [`leave`](#leave) argument if you set up your InfluxDB Enterprise cluster with the [Production Installation](/enterprise_influxdb/v1.5/production_installation/) process.
|
||||
|
||||
{{% warn %}}The `remove-data` argument is destructive; it erases all data from the specified data node.
|
||||
Use `remove-data` only if you want to *permanently* remove a data node from a cluster.
|
||||
|
@ -624,7 +624,7 @@ Removed data node at cluster-data-node-03:8088
|
|||
### `remove-meta`
|
||||
|
||||
Removes a meta node from the cluster.
|
||||
Use `remove-meta` instead of the [`leave`](#leave) command if you set up your InfluxEnterprise cluster with the [Production Installation](/enterprise_influxdb/v1.5/production_installation/) process.
|
||||
Use `remove-meta` instead of the [`leave`](#leave) command if you set up your InfluxDB Enterprise cluster with the [Production Installation](/enterprise_influxdb/v1.5/production_installation/) process.
|
||||
|
||||
{{% warn %}}The `remove-meta` argument is destructive; it erases all metastore information from the specified meta node.
|
||||
Use `remove-meta` only if you want to *permanently* remove a meta node from a cluster.
|
||||
|
|
|
@ -194,7 +194,7 @@ Environment variable: `INFLUXDB_HOSTNAME`
|
|||
## [enterprise]
|
||||
|
||||
The `[enterprise]` section contains the parameters for the meta node's
|
||||
registration with the [InfluxEnterprise License Portal](https://portal.influxdata.com/).
|
||||
registration with the [InfluxDB Enterprise License Portal](https://portal.influxdata.com/).
|
||||
|
||||
### license-key = ""
|
||||
|
||||
|
@ -457,7 +457,7 @@ Environment variable: `INFLUXDB_GOSSIP_FREQUENCY`
|
|||
## [enterprise]
|
||||
|
||||
The `[enterprise]` section contains the parameters for the meta node's
|
||||
registration with the [InfluxEnterprise License Portal](https://portal.influxdata.com/).
|
||||
registration with the [InfluxDB Enterprise License Portal](https://portal.influxdata.com/).
|
||||
|
||||
### license-key = ""
|
||||
|
||||
|
@ -767,7 +767,7 @@ Environment variable: `INFLUXDB_SHARD_PRECREATION_ADVANCE_PERIOD`
|
|||
|
||||
By default, InfluxDB writes system monitoring data to the `_internal` database. If that database does not exist, InfluxDB creates it automatically. The `DEFAULT` retention policy on the `_internal` database is seven days. To change the default seven-day retention policy, you must [create](/influxdb/v1.5/query_language/database_management/#retention-policy-management) it.
|
||||
|
||||
For InfluxDB Enterprise production systems, InfluxData recommends including a dedicated InfluxDB (OSS) monitoring instance for monitoring InfluxEnterprise cluster nodes.
|
||||
For InfluxDB Enterprise production systems, InfluxData recommends including a dedicated InfluxDB (OSS) monitoring instance for monitoring InfluxDB Enterprise cluster nodes.
|
||||
|
||||
* On the dedicated InfluxDB monitoring instance, set `store-enabled = false` to avoid potential performance and storage issues (see the sketch below).
|
||||
* On each InfluxDB cluster node, install a Telegraf input plugin and Telegraf output plugin configured to report data to the dedicated InfluxDB monitoring instance.
|
||||
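A minimal sketch of the first point, assuming the `[monitor]` section of the dedicated monitoring instance's configuration file:

```toml
# On the dedicated InfluxDB (OSS) monitoring instance only
[monitor]
  store-enabled = false
```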
|
@ -909,7 +909,7 @@ Environment variable: `INFLUXDB_HTTP_MAX_CONNECTION_LIMIT`
|
|||
|
||||
See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#shared-secret).
|
||||
|
||||
This setting is required and must match on each data node if the cluster is using the InfluxEnterprise Web Console.
|
||||
This setting is required and must match on each data node if the cluster is using the InfluxDB Enterprise Web Console.
|
||||
|
||||
Environment variable: `INFLUXDB_HTTP_SHARED_SECRET`
|
||||
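A minimal sketch, assuming the `[http]` section of each data node's configuration file (the secret value is a placeholder and must be identical on every data node):

```toml
[http]
  shared-secret = "long-random-shared-secret"
```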
|
||||
|
|
|
@ -40,7 +40,7 @@ Resources:
|
|||
## Secure your Host
|
||||
|
||||
### Ports
|
||||
For InfluxEnterprise Data Nodes, close all ports on each host except for port `8086`.
|
||||
For InfluxDB Enterprise Data Nodes, close all ports on each host except for port `8086`.
|
||||
You can also use a proxy to port `8086`. By default, data nodes and meta nodes communicate with each other over ports `8088`, `8089`, and `8091`.
|
||||
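One hypothetical way to express this on a Linux host with `ufw` (the peer subnet is a placeholder; adapt to your firewall tooling):

```sh
sudo ufw default deny incoming
sudo ufw allow 8086/tcp                                              # client HTTP API
sudo ufw allow from 10.0.0.0/24 to any port 8088,8089,8091 proto tcp # intra-cluster traffic
```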
|
||||
For InfluxDB Enterprise, [backing up and restoring](/enterprise_influxdb/v1.5/administration/backup-and-restore/) is performed from the meta nodes.
|
||||
|
|
|
@ -231,4 +231,4 @@ rk-upgrading-03:8091 1.5.4_c1.5.4
|
|||
```
|
||||
|
||||
If you have any issues upgrading your cluster, please do not hesitate to contact InfluxData Support at the email address
|
||||
provided to you when you received your InfluxEnterprise license.
|
||||
provided to you when you received your InfluxDB Enterprise license.
|
||||
|
|
|
@ -9,11 +9,11 @@ menu:
|
|||
parent: Concepts
|
||||
---
|
||||
|
||||
This document describes in detail how clustering works in InfluxEnterprise. It starts with a high level description of the different components of a cluster and then delves into the implementation details.
|
||||
This document describes in detail how clustering works in InfluxDB Enterprise. It starts with a high level description of the different components of a cluster and then delves into the implementation details.
|
||||
|
||||
## Architectural overview
|
||||
|
||||
An InfluxEnterprise installation consists of three separate software processes: Data nodes, Meta nodes, and the Enterprise Web server. To run an InfluxDB cluster, only the meta and data nodes are required. Communication within a cluster looks like this:
|
||||
An InfluxDB Enterprise installation consists of three separate software processes: Data nodes, Meta nodes, and the Enterprise Web server. To run an InfluxDB cluster, only the meta and data nodes are required. Communication within a cluster looks like this:
|
||||
|
||||
{{< diagram >}}
|
||||
flowchart TB
|
||||
|
@ -66,7 +66,7 @@ On disk, the data is always organized by `<database>/<retention_policy>/<shard_i
|
|||
|
||||
## Optimal server counts
|
||||
|
||||
When creating a cluster you'll need to choose how meta and data nodes to configure and connect. You can think of InfluxEnterprise as two separate clusters that communicate with each other: a cluster of meta nodes and one of data nodes. The number of meta nodes is driven by the number of meta node failures they need to be able to handle, while the number of data nodes scales based on your storage and query needs.
|
||||
When creating a cluster, you'll need to choose how many meta and data nodes to configure and connect. You can think of InfluxDB Enterprise as two separate clusters that communicate with each other: a cluster of meta nodes and one of data nodes. The number of meta nodes is driven by the number of meta node failures they need to be able to handle, while the number of data nodes scales based on your storage and query needs.
|
||||
|
||||
The consensus protocol requires a quorum to perform any operation, so there should always be an odd number of meta nodes. For almost all use cases, 3 meta nodes is the correct number, and such a cluster operates normally even with the permanent loss of 1 meta node.
|
||||
|
||||
|
@ -116,7 +116,7 @@ The important thing to note is how failures are handled. In the case of failures
|
|||
|
||||
### Hinted handoff
|
||||
|
||||
Hinted handoff is how InfluxEnterprise deals with data node outages while writes are happening. Hinted handoff is essentially a durable disk based queue. When writing at `any`, `one` or `quorum` consistency, hinted handoff is used when one or more replicas return an error after a success has already been returned to the client. When writing at `all` consistency, writes cannot return success unless all nodes return success. Temporarily stalled or failed writes may still go to the hinted handoff queues but the cluster would have already returned a failure response to the write. The receiving node creates a separate queue on disk for each data node (and shard) it cannot reach.
|
||||
Hinted handoff is how InfluxDB Enterprise deals with data node outages while writes are happening. Hinted handoff is essentially a durable disk based queue. When writing at `any`, `one` or `quorum` consistency, hinted handoff is used when one or more replicas return an error after a success has already been returned to the client. When writing at `all` consistency, writes cannot return success unless all nodes return success. Temporarily stalled or failed writes may still go to the hinted handoff queues but the cluster would have already returned a failure response to the write. The receiving node creates a separate queue on disk for each data node (and shard) it cannot reach.
|
||||
|
||||
Let's again use the example of a write coming to `D` that should go to shard `1` on `A` and `B`. If we specified a consistency level of `one` and node `A` returns success, `D` immediately returns success to the client even though the write to `B` is still in progress.
|
||||
|
||||
|
|
|
@ -70,8 +70,8 @@ query capacity within the cluster.
|
|||
|
||||
## web console
|
||||
|
||||
Legacy user interface for the InfluxEnterprise cluster.
|
||||
Legacy user interface for the InfluxDB Enterprise cluster.
|
||||
|
||||
This has been deprecated and the suggestion is to use [Chronograf](/{{< latest "chronograf" >}}/introduction/).
|
||||
|
||||
If you are transitioning from the Enterprise Web Console to Chronograf and helpful [transition guide](/{{< latest "chronograf" >}}/guides/transition-web-admin-interface/) is available.
|
||||
If you are transitioning from the Enterprise Web Console to Chronograf, see how to [transition from the InfluxDB Web Admin Interface](/chronograf/v1.7/guides/transition-web-admin-interface/).
|
||||
|
|
|
@ -43,7 +43,7 @@ view Chronograf.
|
|||
Roles are groups of permissions.
|
||||
A single role can belong to several cluster accounts.
|
||||
|
||||
InfluxEnterprise clusters have two built-in roles:
|
||||
InfluxDB Enterprise clusters have two built-in roles:
|
||||
|
||||
#### Global Admin
|
||||
|
||||
|
@ -60,7 +60,7 @@ permissions to:
|
|||
* Rebalance
|
||||
|
||||
### Permissions
|
||||
InfluxEnterprise clusters have 16 permissions:
|
||||
InfluxDB Enterprise clusters have 16 permissions:
|
||||
|
||||
#### View Admin
|
||||
Permission to view or edit admin screens.
|
||||
|
@ -114,7 +114,7 @@ The following table describes permissions required to execute the associated dat
|
|||
|Determined by type of select statement|SelectStatement|
|
||||
|
||||
### Statement to Permission
|
||||
The following table describes database statements and the permissions required to execute them. It also describes whether these permissions apply just to InfluxDB (Database) or InfluxEnterprise (Cluster).
|
||||
The following table describes database statements and the permissions required to execute them. It also describes whether these permissions apply just to InfluxDB (Database) or InfluxDB Enterprise (Cluster).
|
||||
|
||||
|Statement|Permissions|Scope|
|
||||
|---|---|---|
|
||||
|
|
|
@ -8,30 +8,30 @@ menu:
|
|||
---
|
||||
|
||||
This guide describes how to enable HTTPS for InfluxDB Enterprise.
|
||||
Setting up HTTPS secures the communication between clients and the InfluxEnterprise
|
||||
Setting up HTTPS secures the communication between clients and the InfluxDB Enterprise
|
||||
server,
|
||||
and, in some cases, HTTPS verifies the authenticity of the InfluxEnterprise server to
|
||||
and, in some cases, HTTPS verifies the authenticity of the InfluxDB Enterprise server to
|
||||
clients.
|
||||
|
||||
If you plan on sending requests to InfluxEnterprise over a network, we
|
||||
If you plan on sending requests to InfluxDB Enterprise over a network, we
|
||||
[strongly recommend](/enterprise_influxdb/v1.5/administration/security/)
|
||||
that you set up HTTPS.
|
||||
|
||||
## Requirements
|
||||
|
||||
To set up HTTPS with InfluxEnterprise, you'll need an existing or new InfluxEnterprise instance
|
||||
To set up HTTPS with InfluxDB Enterprise, you'll need an existing or new InfluxDB Enterprise instance
|
||||
and a Transport Layer Security (TLS) certificate (also known as a Secure Sockets Layer (SSL) certificate).
|
||||
InfluxEnterprise supports three types of TLS/SSL certificates:
|
||||
InfluxDB Enterprise supports three types of TLS/SSL certificates:
|
||||
|
||||
* **Single domain certificates signed by a [Certificate Authority](https://en.wikipedia.org/wiki/Certificate_authority)**
|
||||
|
||||
These certificates provide cryptographic security to HTTPS requests and allow clients to verify the identity of the InfluxEnterprise server.
|
||||
With this certificate option, every InfluxEnterprise instance requires a unique single domain certificate.
|
||||
These certificates provide cryptographic security to HTTPS requests and allow clients to verify the identity of the InfluxDB Enterprise server.
|
||||
With this certificate option, every InfluxDB Enterprise instance requires a unique single domain certificate.
|
||||
|
||||
* **Wildcard certificates signed by a Certificate Authority**
|
||||
|
||||
These certificates provide cryptographic security to HTTPS requests and allow clients to verify the identity of the InfluxDB server.
|
||||
Wildcard certificates can be used across multiple InfluxEnterprise instances on different servers.
|
||||
Wildcard certificates can be used across multiple InfluxDB Enterprise instances on different servers.
|
||||
|
||||
* **Self-signed certificates**
|
||||
|
||||
|
@ -39,13 +39,13 @@ InfluxEnterprise supports three types of TLS/SSL certificates:
|
|||
Unlike CA-signed certificates, self-signed certificates only provide cryptographic security to HTTPS requests.
|
||||
They do not allow clients to verify the identity of the InfluxDB server.
|
||||
We recommend using a self-signed certificate if you are unable to obtain a CA-signed certificate.
|
||||
With this certificate option, every InfluxEnterprise instance requires a unique self-signed certificate.
|
||||
With this certificate option, every InfluxDB Enterprise instance requires a unique self-signed certificate.
|
||||
|
||||
Regardless of your certificate's type, InfluxEnterprise supports certificates composed of
|
||||
Regardless of your certificate's type, InfluxDB Enterprise supports certificates composed of
|
||||
a private key file (`.key`) and a signed certificate file (`.crt`) file pair, as well as certificates
|
||||
that combine the private key file and the signed certificate file into a single bundled file (`.pem`).
|
||||
|
||||
The following two sections outline how to set up HTTPS with InfluxEnterprise [using a CA-signed
|
||||
The following two sections outline how to set up HTTPS with InfluxDB Enterprise [using a CA-signed
|
||||
certificate](#setup-https-with-a-ca-signed-certificate) and [using a self-signed certificate](#setup-https-with-a-self-signed-certificate)
|
||||
on Ubuntu 16.04.
|
||||
Specific steps may be different for other operating systems.
|
||||
|
@ -130,14 +130,14 @@ Second, Configure the Data Nodes to use HTTPS when communicating with the Meta N
|
|||
meta-tls-enabled = true
|
||||
```
|
||||
|
||||
#### Step 5: Restart InfluxEnterprise
|
||||
#### Step 5: Restart InfluxDB Enterprise
|
||||
|
||||
Restart the InfluxEnterprise meta node processes for the configuration changes to take effect:
|
||||
Restart the InfluxDB Enterprise meta node processes for the configuration changes to take effect:
|
||||
```
|
||||
sudo systemctl start influxdb-meta
|
||||
```
|
||||
|
||||
Restart the InfluxEnterprise data node processes for the configuration changes to take effect:
|
||||
Restart the InfluxDB Enterprise data node processes for the configuration changes to take effect:
|
||||
```
|
||||
sudo systemctl restart influxdb
|
||||
```
|
||||
|
@ -169,7 +169,7 @@ enterprise-meta-03:8091 1.x.y-c1.x.z
|
|||
```
|
||||
|
||||
|
||||
Next, verify that HTTPS is working by connecting to InfluxEnterprise with the [CLI tool](/influxdb/v1.5/tools/shell/):
|
||||
Next, verify that HTTPS is working by connecting to InfluxDB Enterprise with the [CLI tool](/influxdb/v1.5/tools/shell/):
|
||||
```
|
||||
influx -ssl -host <domain_name>.com
|
||||
```
|
||||
|
@ -181,7 +181,7 @@ InfluxDB shell version: 1.x.y
|
|||
>
|
||||
```
|
||||
|
||||
That's it! You've successfully set up HTTPS with InfluxEnterprise.
|
||||
That's it! You've successfully set up HTTPS with InfluxDB Enterprise.
|
||||
|
||||
## Setup HTTPS with a Self-Signed Certificate
|
||||
|
||||
|
@ -189,7 +189,7 @@ That's it! You've successfully set up HTTPS with InfluxEnterprise.
|
|||
|
||||
The following command generates a private key file (`.key`) and a self-signed
|
||||
certificate file (`.crt`) which remain valid for the specified `NUMBER_OF_DAYS`.
|
||||
It outputs those files to InfluxEnterprise's default certificate file paths and gives them
|
||||
It outputs those files to InfluxDB Enterprise's default certificate file paths and gives them
|
||||
the required permissions.
|
||||
|
||||
```
|
||||
|
@ -273,14 +273,14 @@ Second, Configure the Data Nodes to use HTTPS when communicating with the Meta N
|
|||
meta-insecure-tls = true
|
||||
```
|
||||
|
||||
#### Step 4: Restart InfluxEnterprise
|
||||
#### Step 4: Restart InfluxDB Enterprise
|
||||
|
||||
Restart the InfluxEnterprise meta node processes for the configuration changes to take effect:
|
||||
Restart the InfluxDB Enterprise meta node processes for the configuration changes to take effect:
|
||||
```
|
||||
sudo systemctl restart influxdb-meta
|
||||
```
|
||||
|
||||
Restart the InfluxEnterprise data node processes for the configuration changes to take effect:
|
||||
Restart the InfluxDB Enterprise data node processes for the configuration changes to take effect:
|
||||
```
|
||||
sudo systemctl restart influxdb
|
||||
```
|
||||
|
@ -312,7 +312,7 @@ enterprise-meta-03:8091 1.x.y-c1.x.z
|
|||
```
|
||||
|
||||
|
||||
Next, verify that HTTPS is working by connecting to InfluxEnterprise with the [CLI tool](/influxdb/v1.5/tools/shell/):
|
||||
Next, verify that HTTPS is working by connecting to InfluxDB Enterprise with the [CLI tool](/influxdb/v1.5/tools/shell/):
|
||||
```
|
||||
influx -ssl -unsafeSsl -host <domain_name>.com
|
||||
```
|
||||
|
@ -324,12 +324,12 @@ InfluxDB shell version: 1.x.y
|
|||
>
|
||||
```
|
||||
|
||||
That's it! You've successfully set up HTTPS with InfluxEnterprise.
|
||||
That's it! You've successfully set up HTTPS with InfluxDB Enterprise.
|
||||
|
||||
|
||||
## Connect Telegraf to a secured InfluxEnterprise instance
|
||||
## Connect Telegraf to a secured InfluxDB Enterprise instance
|
||||
|
||||
Connecting [Telegraf](/telegraf/v1.5/) to an InfluxEnterprise instance that's using
|
||||
Connecting [Telegraf](/{{< latest "telegraf" >}}/) to an InfluxDB Enterprise instance that's using
|
||||
HTTPS requires some additional steps.
|
||||
|
||||
In Telegraf's configuration file (`/etc/telegraf/telegraf.conf`), under the OUTPUT PLUGINS section, edit the `urls`
|
||||
|
@ -348,7 +348,7 @@ setting and set it to `true`.
|
|||
|
||||
# Configuration for influxdb server to send metrics to
|
||||
[[outputs.influxdb]]
|
||||
## The full HTTP or UDP endpoint URL for your InfluxEnterprise instance.
|
||||
## The full HTTP or UDP endpoint URL for your InfluxDB Enterprise instance.
|
||||
## Multiple urls can be specified as part of the same cluster,
|
||||
## this means that only ONE of the urls will be written to each interval.
|
||||
# urls = ["udp://localhost:8089"] # UDP endpoint example
|
||||
|
|
|
@ -32,16 +32,16 @@ of three or more meta nodes and zero or more data nodes. If you need instruction
|
|||
Please note that this migration process:
|
||||
|
||||
* Deletes all data from any data nodes that are already part of the InfluxDB Enterprise cluster
|
||||
* Will transfer all users from the OSS instance to the InfluxEnterprise Cluster*
|
||||
* Will transfer all users from the OSS instance to the InfluxDB Enterprise Cluster*
|
||||
* Requires downtime for writes and reads for the OSS instance
|
||||
|
||||
{{% warn %}}
|
||||
If you're using an InfluxDB Enterprise cluster version prior to 0.7.4, the
|
||||
following steps will **not** transfer users from the OSS instance to the
|
||||
InfluxEnterprise Cluster.
|
||||
InfluxDB Enterprise Cluster.
|
||||
{{% /warn %}}
|
||||
|
||||
In addition, please refrain from creating a Global Admin user in the InfluxEnterprise Web Console before implementing these steps. If you’ve already created a Global Admin user, contact InfluxData Support.
|
||||
In addition, please refrain from creating a Global Admin user in the InfluxDB Enterprise Web Console before implementing these steps. If you’ve already created a Global Admin user, contact InfluxData Support.
|
||||
|
||||
## Modify the `/etc/hosts` file
|
||||
|
||||
|
@ -184,7 +184,7 @@ Note: it may take a few minutes before the existing data become available in the
|
|||
|
||||
### 1. Add any data nodes that you removed from cluster back into the cluster
|
||||
|
||||
From a **meta** node in the InfluxEnterprise Cluster, run:
|
||||
From a **meta** node in the InfluxDB Enterprise Cluster, run:
|
||||
```
|
||||
influxd-ctl add-data <the-hostname>:8088
|
||||
```
|
||||
|
|
|
@ -10,7 +10,7 @@ menu:
|
|||
parent: Introduction
|
||||
---
|
||||
|
||||
Now that you successfully [installed and set up](/enterprise_influxdb/v1.5/introduction/meta_node_installation/) InfluxDB Enterprise, you can configure Chronograf for [monitoring InfluxDB Enterprise clusters](/{{< latest "chronograf" >}}/guides/monitor-an-influxenterprise-cluster/).
|
||||
Now that you successfully [installed and set up](/enterprise_influxdb/v1.5/introduction/meta_node_installation/) InfluxDB Enterprise, you can configure Chronograf for [monitoring InfluxDB Enterprise clusters](/{{< latest "chronograf" >}}/guides/monitoring-influxenterprise-clusters/).
|
||||
|
||||
See [Getting started with Chronograf](/{{< latest "chronograf" >}}/introduction/getting-started/) to learn more about using Chronograf with the InfluxData time series platform.
|
||||
|
||||
|
|
|
@ -46,13 +46,13 @@ If you alter the default ports in the configuration file(s), ensure the configur
|
|||
|
||||
#### Synchronize time between hosts
|
||||
|
||||
InfluxEnterprise uses hosts' local time in UTC to assign timestamps to data and for coordination purposes.
|
||||
InfluxDB Enterprise uses hosts' local time in UTC to assign timestamps to data and for coordination purposes.
|
||||
Use the Network Time Protocol (NTP) to synchronize time between hosts.
|
||||
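On systemd-based Linux hosts (an assumption about the host OS), one way to enable and verify NTP synchronization:

```sh
sudo timedatectl set-ntp true   # enable synchronization via systemd-timesyncd
timedatectl status              # look for "System clock synchronized: yes"
```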
|
||||
#### Use SSDs
|
||||
|
||||
Clusters require sustained availability of 1000-2000 IOPS from the attached storage.
|
||||
SANs must guarantee at least 1000 IOPS is always available to InfluxEnterprise
|
||||
SANs must guarantee at least 1000 IOPS is always available to InfluxDB Enterprise
|
||||
nodes or they may not be sufficient.
|
||||
SSDs are strongly recommended, and we have had no reports of IOPS contention from any customers running on SSDs.
|
||||
|
||||
|
|
|
@ -8,9 +8,9 @@ menu:
|
|||
---
|
||||
|
||||
The Production Installation process is designed for users looking to deploy
|
||||
InfluxEnterprise in a production environment.
|
||||
InfluxDB Enterprise in a production environment.
|
||||
|
||||
If you wish to evaluate InfluxEnterprise in a non-production
|
||||
If you wish to evaluate InfluxDB Enterprise in a non-production
|
||||
environment, feel free to follow the instructions outlined in the
|
||||
[QuickStart installation](/enterprise_influxdb/v1.5/quickstart_installation) section.
|
||||
Please note that if you install InfluxDB Enterprise with the QuickStart Installation process you
|
||||
|
@ -20,6 +20,6 @@ process before using the product in a production environment.
|
|||
|
||||
## Production installation
|
||||
|
||||
Follow the links below to get up and running with InfluxEnterprise.
|
||||
Follow the links below to get up and running with InfluxDB Enterprise.
|
||||
|
||||
{{< children hlevel="h2" >}}
|
||||
|
|
|
@ -8,19 +8,19 @@ menu:
|
|||
---
|
||||
|
||||
The QuickStart installation process is designed for users looking to quickly
|
||||
get up and running with InfluxEnterprise and for users who are looking to
|
||||
get up and running with InfluxDB Enterprise and for users who are looking to
|
||||
evaluate the product.
|
||||
|
||||
The QuickStart installation process **is not** designed for use
|
||||
in a production environment.
|
||||
Follow the instructions outlined in the [Production installation](/enterprise_influxdb/v1.5/production_installation/) section
|
||||
if you wish to use InfluxDB Enterprise in a production environment.
|
||||
Please note that if you install InfluxEnterprise with the QuickStart Installation process you
|
||||
Please note that if you install InfluxDB Enterprise with the QuickStart Installation process you
|
||||
will need to reinstall InfluxDB Enterprise with the Production Installation
|
||||
process before using the product in a production environment.
|
||||
|
||||
## QuickStart installation
|
||||
|
||||
Follow the links below to get up and running with InfluxEnterprise.
|
||||
Follow the links below to get up and running with InfluxDB Enterprise.
|
||||
|
||||
{{< children hlevel="h2" >}}
|
||||
|
|
|
@ -55,7 +55,7 @@ setting in the meta node and data node configuration files.
|
|||
|
||||
#### Load balancer
|
||||
|
||||
InfluxEnterprise does not function as a load balancer.
|
||||
InfluxDB Enterprise does not function as a load balancer.
|
||||
You will need to configure your own load balancer to send client traffic to the
|
||||
data nodes on port `8086` (the default port for the [HTTP API](/influxdb/v1.5/tools/api/)).
|
||||
|
||||
|
|
|
@ -81,7 +81,7 @@ Note that for some [write consistency](/enterprise_influxdb/v1.5/concepts/cluste
|
|||
[stats] 2016/10/18 10:35:21 error writing count stats for FOO_grafana: partial write
|
||||
```
|
||||
|
||||
The `_internal` database collects per-node and also cluster-wide information about the InfluxEnterprise cluster. The cluster metrics are replicated to other nodes using `consistency=all`. For a [write consistency](/enterprise_influxdb/v1.5/concepts/clustering/#write-consistency) of `all`, InfluxDB returns a write error (500) for the write attempt even if the points are successfully queued in hinted handoff. Thus, if there are points still in hinted handoff, the `_internal` writes will fail the consistency check and log the error, even though the data is in the durable hinted handoff queue and should eventually persist.
|
||||
The `_internal` database collects per-node and also cluster-wide information about the InfluxDB Enterprise cluster. The cluster metrics are replicated to other nodes using `consistency=all`. For a [write consistency](/enterprise_influxdb/v1.5/concepts/clustering/#write-consistency) of `all`, InfluxDB returns a write error (500) for the write attempt even if the points are successfully queued in hinted handoff. Thus, if there are points still in hinted handoff, the `_internal` writes will fail the consistency check and log the error, even though the data is in the durable hinted handoff queue and should eventually persist.
|
||||
|
||||
|
||||
## Why am I seeing `queue is full` errors in my data node logs?
|
||||
|
|
|
@ -86,7 +86,7 @@ A "flapping" dashboard means data visualizations changing when data is refreshed
|
|||
and pulled from a node with entropy (inconsistent data).
|
||||
It is the visual manifestation of getting [different results from the same query](#different-results-for-the-same-query).
|
||||
|
||||
<img src="/img/kapacitor/flapping-dashboard.gif" alt="Flapping dashboard" style="width:100%; max-width:800px">
|
||||
<img src="/img/enterprise/1-6-flapping-dashboard.gif" alt="Flapping dashboard" style="width:100%; max-width:800px">
|
||||
|
||||
## Technical details
|
||||
|
||||
|
|
|
@ -278,7 +278,7 @@ Restored from my-incremental-backup/ in 66.715524ms, transferred 588800 bytes
|
|||
|
||||
Your `telegraf` database was mistakenly dropped, but you have a recent backup so you've only lost a small amount of data.
|
||||
|
||||
If [Telegraf](/telegraf/v1.7/) is still running, it will recreate the `telegraf` database shortly after the database is dropped.
|
||||
If [Telegraf](/{{< latest "telegraf" >}}/) is still running, it will recreate the `telegraf` database shortly after the database is dropped.
|
||||
You might try to directly restore your `telegraf` backup just to find that you can't restore:
|
||||
|
||||
```
|
||||
|
|
|
@ -73,4 +73,4 @@ Legacy user interface for the InfluxDB Enterprise.
|
|||
|
||||
This has been deprecated and the suggestion is to use [Chronograf](/{{< latest "chronograf" >}}/introduction/).
|
||||
|
||||
If you are transitioning from the Enterprise Web Console to Chronograf and helpful [transition guide](/{{< latest "chronograf" >}}/guides/transition-web-admin-interface/) is available.
|
||||
If you are transitioning from the Enterprise Web Console to Chronograf, see how to [transition from the InfluxDB Web Admin Interface](/chronograf/v1.7/guides/transition-web-admin-interface/).
|
||||
|
|
|
@ -21,4 +21,4 @@ Learn how to deploy a cluster on the cloud provider of your choice:
|
|||
|
||||
- [GCP](/enterprise_influxdb/v1.7/install-and-deploy/google-cloud-platform/)
|
||||
- [AWS](/enterprise_influxdb/v1.7/install-and-deploy/aws/)
|
||||
- [Azure](/enterprise_influxdb/v1.7/install-and-deploy/deploying/azure/)
|
||||
- [Azure](/enterprise_influxdb/v1.7/install-and-deploy/azure/)
|
||||
|
|
|
@ -10,7 +10,7 @@ menu:
|
|||
parent: Introduction
|
||||
---
|
||||
|
||||
Now that you successfully [installed and set up](/enterprise_influxdb/v1.6/introduction/meta_node_installation/) InfluxDB Enterprise, use [Chronograf to setup your cluster as a data source.](/{{< latest "chronograf" >}}/guides/monitor-an-influxenterprise-cluster/)
|
||||
Now that you successfully [installed and set up](/enterprise_influxdb/v1.6/introduction/meta_node_installation/) InfluxDB Enterprise, use [Chronograf to set up your cluster as a data source](/{{< latest "chronograf" >}}/guides/monitoring-influxenterprise-clusters/).
|
||||
|
||||
More details on leveraging [Chronograf and getting started are available.](/{{< latest "chronograf" >}}/introduction/getting-started/)
|
||||
|
||||
|
|
|
@ -91,7 +91,7 @@ A "flapping" dashboard means data visualizations change when data is refreshed
|
|||
and pulled from a node with entropy (inconsistent data).
|
||||
It is the visual manifestation of getting [different results from the same query](#different-results-for-the-same-query).
|
||||
|
||||
<img src="/img/kapacitor/flapping-dashboard.gif" alt="Flapping dashboard" style="width:100%; max-width:800px">
|
||||
<img src="/img/enterprise/1-6-flapping-dashboard.gif" alt="Flapping dashboard" style="width:100%; max-width:800px">
|
||||
|
||||
## Technical details
|
||||
|
||||
|
|
|
@ -24,7 +24,7 @@ To choose a strategy that best suits your use case, we recommend considering you
|
|||
|
||||
- [Backup and restore utilities](#backup-and-restore-utilities) (suits **most InfluxDB Enterprise applications**)
|
||||
- [Export and import commands](#export-and-import-commands) (best for **backfill or recovering shards as files**)
|
||||
- [Take AWS snapshots as backup](/backup-and-restore/#take-aws-snapshots-as-backup) (optimal **convenience if budget permits**)
|
||||
- [Take AWS snapshots as backup](#take-aws-snapshots-as-backup) (optimal **convenience if budget permits**)
|
||||
- [Run two clusters in separate AWS regions](#run-two-clusters-in-separate-aws-regions) (also optimal **convenience if budget permits**, more custom work upfront)
|
||||
|
||||
> Test your backup and restore strategy for all applicable scenarios.
|
||||
|
|
|
@ -252,7 +252,8 @@ Environment variable: `INFLUXDB_DATA_CACHE_SNAPSHOT_WRITE_COLD_DURATION`
|
|||
#### `max-concurrent-compactions = 0`
|
||||
|
||||
The maximum number of concurrent full and level compactions that can run at one time.
|
||||
A value of `0` results in 50% of `runtime.GOMAXPROCS(0)` used at runtime.
|
||||
A value of `0` (unlimited compactions) results in 50% of `runtime.GOMAXPROCS(0)` used at runtime,
|
||||
so when 50% of the CPUs aren't available, compactions are limited.
|
||||
Any number greater than `0` limits compactions to that value.
|
||||
This setting does not apply to cache snapshotting.
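For example, to cap compactions explicitly rather than rely on the default, you can set the value in the `[data]` section of the configuration file or export the corresponding environment variable (a sketch; the variable name assumes the same `INFLUXDB_DATA_*` mapping shown for the other settings in this section):

```sh
# Assumption: the [data] max-concurrent-compactions setting maps to
# INFLUXDB_DATA_MAX_CONCURRENT_COMPACTIONS, following the pattern above.
export INFLUXDB_DATA_MAX_CONCURRENT_COMPACTIONS=4
```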
|
||||
|
||||
|
|
|
@ -61,7 +61,7 @@ The number of data nodes in a cluster **must be evenly divisible by the replicat
|
|||
|
||||
> **Important:** If the replication factor isn't evenly divisible into the number of data nodes, data may be distributed unevenly across the cluster and cause poor performance. Likewise, decreasing the replication factor (fewer copies of data in a cluster) may reduce performance.
|
||||
|
||||
Related entries: [cluster](/influxdb/v0.10/concepts/glossary/#cluster), [duration](/influxdb/v1.7/concepts/glossary/#duration), [node](/influxdb/v1.7/concepts/glossary/#node),
|
||||
Related entries: [duration](/influxdb/v1.7/concepts/glossary/#duration), [node](/influxdb/v1.7/concepts/glossary/#node),
|
||||
[retention policy](/influxdb/v1.7/concepts/glossary/#retention-policy-rp)
|
||||
|
||||
## web console
|
||||
|
@ -70,4 +70,4 @@ Legacy user interface for the InfluxDB Enterprise.
|
|||
|
||||
This has been deprecated and the suggestion is to use [Chronograf](/{{< latest "chronograf" >}}/introduction/).
|
||||
|
||||
If you are transitioning from the Enterprise Web Console to Chronograf and helpful [transition guide](/{{< latest "chronograf" >}}/guides/transition-web-admin-interface/) is available.
|
||||
If you are transitioning from the Enterprise Web Console to Chronograf, see how to [transition from the InfluxDB Web Admin Interface](/chronograf/v1.7/guides/transition-web-admin-interface/).
|
||||
|
|
|
@ -156,6 +156,9 @@ Consult your CA if you are unsure about how to use these files.
|
|||
|
||||
# Use a separate private key location.
|
||||
https-private-key = "influxdb-data.key"
|
||||
|
||||
# If using a self-signed certificate:
|
||||
https-insecure-tls = true
|
||||
```
|
||||
|
||||
3. Configure the data nodes to use HTTPS when communicating with the meta nodes.
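Once HTTPS is enabled on a node, a quick sanity check is to hit its `/ping` endpoint over TLS (a sketch; the hostname is a placeholder, and `-k` is only appropriate while testing a self-signed certificate):

```sh
# Expect an HTTP 204 status if the node is serving HTTPS correctly.
curl -k -s -o /dev/null -w "%{http_code}\n" https://influxdb-data-01:8086/ping
```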
|
||||
|
|
|
@ -18,7 +18,7 @@ This guide requires the following:
|
|||
- Microsoft Azure account with access to the [Azure Marketplace](https://azuremarketplace.microsoft.com/).
|
||||
- SSH access to cluster instances.
|
||||
|
||||
To deploy InfluxDB Enterprise clusters on platforms other than Azure, see [Deploy InfluxDB Enterprise](/enterprise_influxdb/v1.8/install-and-deploy/_index).
|
||||
To deploy InfluxDB Enterprise clusters on platforms other than Azure, see [Deploy InfluxDB Enterprise](/enterprise_influxdb/v1.8/install-and-deploy/).
|
||||
|
||||
## Deploy a cluster
|
||||
|
||||
|
|
|
@ -10,7 +10,7 @@ menu:
|
|||
parent: Introduction
|
||||
---
|
||||
|
||||
Now that you successfully [installed and set up](/enterprise_influxdb/v1.7/introduction/meta_node_installation/) InfluxDB Enterprise, use [Chronograf to setup your cluster as a data source.](/{{< latest "chronograf" >}}/guides/monitor-an-influxenterprise-cluster/)
|
||||
Now that you successfully [installed and set up](/enterprise_influxdb/v1.7/introduction/meta_node_installation/) InfluxDB Enterprise, use [Chronograf to set up your cluster as a data source](/{{< latest "chronograf" >}}/guides/monitoring-influxenterprise-cluster/).
|
||||
|
||||
For more details, see [Getting started with Chronograf](/{{< latest "chronograf" >}}/introduction/getting-started/).
|
||||
|
||||
|
|
|
@ -0,0 +1,17 @@
|
|||
---
|
||||
title: InfluxDB Enterprise tools
|
||||
description: >
|
||||
Learn more about available tools for working with InfluxDB Enterprise.
|
||||
menu:
|
||||
enterprise_influxdb_1_7:
|
||||
name: Tools
|
||||
weight: 70
|
||||
---
|
||||
|
||||
Use the following tools to work with InfluxDB Enterprise:
|
||||
|
||||
{{< children >}}
|
||||
|
||||
## InfluxDB open source tools
|
||||
Tools built for InfluxDB OSS v1.8 also work with InfluxDB Enterprise v1.7.
|
||||
For more information, see [InfluxDB tools](/influxdb/v1.7/tools/).
|
|
@ -0,0 +1,79 @@
|
|||
---
|
||||
title: Use Grafana with InfluxDB Enterprise
|
||||
seotitle: Use Grafana with InfluxDB Enterprise v1.7
|
||||
description: >
|
||||
Configure Grafana to query and visualize data from InfluxDB Enterprise v1.7.
|
||||
menu:
|
||||
enterprise_influxdb_1_7:
|
||||
name: Grafana
|
||||
weight: 60
|
||||
parent: Tools
|
||||
canonical: /{{< latest "influxdb" >}}/tools/grafana/
|
||||
---
|
||||
|
||||
Use [Grafana](https://grafana.com/) or [Grafana Cloud](https://grafana.com/products/cloud/)
|
||||
to visualize data from your **InfluxDB Enterprise v1.7** instance.
|
||||
|
||||
{{% note %}}
|
||||
The instructions in this guide require **Grafana Cloud** or **Grafana v7.1+**.
|
||||
For information about using InfluxDB with other versions of Grafana,
|
||||
see the [Grafana documentation](https://grafana.com/docs/grafana/v7.0/features/datasources/influxdb/).
|
||||
{{% /note %}}
|
||||
|
||||
1. [Set up an InfluxDB Enterprise cluster](/enterprise_influxdb/v1.7/install-and-deploy/).
|
||||
2. [Sign up for Grafana Cloud](https://grafana.com/products/cloud/) or
|
||||
[download and install Grafana](https://grafana.com/grafana/download).
|
||||
3. Visit your **Grafana Cloud user interface** (UI) or, if running Grafana locally,
|
||||
[start Grafana](https://grafana.com/docs/grafana/latest/installation/) and visit
|
||||
`http://localhost:3000` in your browser.
|
||||
4. In the left navigation of the Grafana UI, hover over the gear
|
||||
icon to expand the **Configuration** section. Click **Data Sources**.
|
||||
5. Click **Add data source**.
|
||||
6. Select **InfluxDB** from the list of available data sources.
|
||||
7. On the **Data Source configuration page**, enter a **name** for your InfluxDB data source.
|
||||
8. Under **Query Language**, select one of the following:
|
||||
|
||||
{{< tabs-wrapper >}}
|
||||
{{% tabs %}}
|
||||
[InfluxQL](#)
|
||||
[Flux](#)
|
||||
{{% /tabs %}}
|
||||
{{% tab-content %}}
|
||||
## Configure Grafana to use InfluxQL
|
||||
|
||||
With **InfluxQL** selected as the query language in your InfluxDB data source settings:
|
||||
|
||||
1. Under **HTTP**, enter the following:
|
||||
|
||||
- **URL**: Your **InfluxDB Enterprise URL** or **load balancer URL**.
|
||||
|
||||
```sh
|
||||
http://localhost:8086/
|
||||
```
|
||||
- **Access**: Server (default)
|
||||
|
||||
2. Under **InfluxDB Details**, enter the following:
|
||||
|
||||
- **Database**: your database name
|
||||
- **User**: your InfluxDB Enterprise username _(if [authentication is enabled](/influxdb/v1.7/administration/authentication_and_authorization/))_
|
||||
- **Password**: your InfluxDB Enterprise password _(if [authentication is enabled](/influxdb/v1.7/administration/authentication_and_authorization/))_
|
||||
- **HTTP Method**: select **GET** or **POST** _(for differences between the two,
|
||||
see the [query HTTP endpoint documentation](/influxdb/v1.7/tools/api/#query-http-endpoint))_
|
||||
|
||||
3. Provide a **[Min time interval](https://grafana.com/docs/grafana/latest/datasources/influxdb/#min-time-interval)**
|
||||
(default is 10s).
|
||||
|
||||
{{< img-hd src="/img/enterprise/1-7-tools-grafana-influxql.png" />}}
|
||||
|
||||
4. Click **Save & Test**. Grafana attempts to connect to InfluxDB Enterprise and returns
|
||||
the result of the test.
|
||||
{{% /tab-content %}}
|
||||
|
||||
{{% tab-content %}}
|
||||
## Configure Grafana to use Flux
|
||||
To query InfluxDB Enterprise using Flux from Grafana, **upgrade to InfluxDB Enterprise 1.8.1+**:
|
||||
|
||||
- [Upgrade to InfluxDB Enterprise 1.8.x](/enterprise_influxdb/v1.8/administration/upgrading/)
|
||||
- [Use Grafana with InfluxDB Enterprise 1.8](/enterprise_influxdb/v1.8/tools/grafana/).
|
||||
{{% /tab-content %}}
|
||||
{{< /tabs-wrapper >}}
|
|
@ -9,6 +9,25 @@ menu:
|
|||
parent: About the project
|
||||
---
|
||||
|
||||
## v1.8.4 [2020-02-08]
|
||||
|
||||
The InfluxDB Enterprise 1.8.4 release builds on the InfluxDB OSS 1.8.4 release.
|
||||
For details on changes incorporated from the InfluxDB OSS release, see
|
||||
[InfluxDB OSS release notes](/influxdb/v1.8/about_the_project/releasenotes-changelog/#v1-8-4-unreleased).
|
||||
|
||||
> **Note:** InfluxDB Enterprise 1.8.3 was not released. Bug fixes intended for 1.8.3 were rolled into InfluxDB Enterprise 1.8.4.
|
||||
|
||||
### Features
|
||||
|
||||
#### Update your InfluxDB Enterprise license without restarting data nodes
|
||||
|
||||
Add the ability to [renew or update your license key or file](/enterprise_influxdb/v1.8/administration/renew-license/) without restarting data nodes.
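For example, after the updated license is in place on a data node, the linked procedure signals `influxd` to reload its configuration instead of restarting the service:

```sh
# Tell the influxd process to re-read its configuration (and license)
# without restarting the data node.
killall -s HUP influxd
```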
|
||||
### Bug fixes
|
||||
|
||||
- Wrap TCP mux–based HTTP server with a function that adds custom headers.
|
||||
- Correct output for `influxd-ctl show shards`.
|
||||
- Properly encode/decode `control.Shard.Err`.
|
||||
|
||||
## v1.8.2 [2020-08-24]
|
||||
|
||||
The InfluxDB Enterprise 1.8.2 release builds on the InfluxDB OSS 1.8.2 and 1.8.1 releases.
|
||||
|
|
|
@ -4,6 +4,7 @@ description: >
|
|||
Use the `influxd-ctl` and `influx` command line tools to manage InfluxDB Enterprise clusters and data.
|
||||
aliases:
|
||||
- /enterprise/v1.8/features/cluster-commands/
|
||||
- /enterprise_influxdb/v1.8/features/cluster-commands/
|
||||
menu:
|
||||
enterprise_influxdb_1_8:
|
||||
name: Manage clusters
|
||||
|
|
|
@ -94,9 +94,7 @@ The `license-key` and `license-path` settings are
|
|||
mutually exclusive and one must remain set to the empty string.
|
||||
{{% /warn %}}
|
||||
|
||||
InfluxData recommends performing rolling restarts on the nodes after the license key update.
|
||||
Restart one meta, data, or Enterprise service at a time and wait for it to come back up successfully.
|
||||
The cluster should remain unaffected as long as only one node is restarting at a time as long as there are two or more data nodes.
|
||||
> **Note:** You must trigger data nodes to reload your configuration. For more information, see how to [renew or update your license key](/enterprise_influxdb/v1.8/administration/renew-license/).
|
||||
|
||||
Environment variable: `INFLUXDB_ENTERPRISE_LICENSE_KEY`
|
||||
|
||||
|
@ -108,9 +106,8 @@ Contact [sales@influxdb.com](mailto:sales@influxdb.com) if a license file is req
|
|||
|
||||
The license file should be saved on every server in the cluster, including Meta, Data, and Enterprise nodes.
|
||||
The file contains the JSON-formatted license, and must be readable by the `influxdb` user. Each server in the cluster independently verifies its license.
|
||||
InfluxData recommends performing rolling restarts on the nodes after the license file update.
|
||||
Restart one meta, data, or Enterprise service at a time and wait for it to come back up successfully.
|
||||
The cluster should remain unaffected as long as only one node is restarting at a time as long as there are two or more data nodes.
|
||||
|
||||
> **Note:** You must trigger data nodes to reload your configuration. For more information, see how to [renew or update your license key](/enterprise_influxdb/v1.8/administration/renew-license/).
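Because the file must be readable by the `influxdb` user on every node, it can help to set ownership and permissions explicitly when distributing it (a sketch only; the path below is a placeholder for wherever `license-path` points):

```sh
# Placeholder path -- use the location referenced by license-path.
sudo chown influxdb:influxdb /etc/influxdb/influxdb-license.json
sudo chmod 400 /etc/influxdb/influxdb-license.json
```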
|
||||
|
||||
{{% warn %}}
|
||||
Use the same license file for all nodes in the same cluster.
|
||||
|
@ -253,7 +250,8 @@ Environment variable: `INFLUXDB_DATA_CACHE_SNAPSHOT_WRITE_COLD_DURATION`
|
|||
#### `max-concurrent-compactions = 0`
|
||||
|
||||
The maximum number of concurrent full and level compactions that can run at one time.
|
||||
A value of `0` results in 50% of `runtime.GOMAXPROCS(0)` used at runtime.
|
||||
A value of `0` (unlimited compactions) results in 50% of `runtime.GOMAXPROCS(0)` used at runtime,
|
||||
so when 50% of the CPUs aren't available, compactions are limited.
|
||||
Any number greater than `0` limits compactions to that value.
|
||||
This setting does not apply to cache snapshotting.
|
||||
|
||||
|
|
|
@ -13,6 +13,7 @@ menu:
|
|||
* [Global options](#global-options)
|
||||
* [Enterprise license `[enterprise]`](#enterprise)
|
||||
* [Meta node `[meta]`](#meta)
|
||||
* [TLS `[tls]`](#tls-settings)
|
||||
|
||||
## Meta node configuration settings
|
||||
|
||||
|
@ -47,7 +48,6 @@ Environment variable: `INFLUXDB_HOSTNAME`
|
|||
-----
|
||||
|
||||
### Enterprise license settings
|
||||
|
||||
#### `[enterprise]`
|
||||
|
||||
The `[enterprise]` section contains the parameters for the meta node's
|
||||
|
@ -66,10 +66,7 @@ Use the same key for all nodes in the same cluster.
|
|||
{{% warn %}}The `license-key` and `license-path` settings are mutually exclusive and one must remain set to the empty string.
|
||||
{{% /warn %}}
|
||||
|
||||
InfluxData recommends performing rolling restarts on the nodes after the license key update.
|
||||
Restart one meta node or data node service at a time and wait for it to come back up successfully.
|
||||
The cluster should remain unaffected as long as only one node is restarting at a
|
||||
time as long as there are two or more data nodes.
|
||||
> **Note:** You must restart meta nodes to update your configuration. For more information, see how to [renew or update your license key](/enterprise_influxdb/v1.8/administration/renew-license/).
|
||||
|
||||
Environment variable: `INFLUXDB_ENTERPRISE_LICENSE_KEY`
|
||||
|
||||
|
@ -88,17 +85,11 @@ Each server in the cluster independently verifies its license.
|
|||
The `license-key` and `license-path` settings are mutually exclusive and one must remain set to the empty string.
|
||||
{{% /warn %}}
|
||||
|
||||
InfluxData recommends performing rolling restarts on the nodes after the
|
||||
license file update.
|
||||
Restart one meta node or data node service at a time and wait for it to come back
|
||||
up successfully.
|
||||
The cluster should remain unaffected as long as only one node is restarting at a
|
||||
time as long as there are two or more data nodes.
|
||||
> **Note:** You must restart meta nodes to update your configuration. For more information, see how to [renew or update your license key](/enterprise_influxdb/v1.8/administration/renew-license/).
|
||||
|
||||
Environment variable: `INFLUXDB_ENTERPRISE_LICENSE_PATH`
|
||||
|
||||
-----
|
||||
|
||||
### Meta node settings
|
||||
|
||||
#### `[meta]`
|
||||
|
@ -272,3 +263,24 @@ This value must be the same value as the
|
|||
To use this option, set [`auth-enabled`](#auth-enabled-false) to `true`.
|
||||
|
||||
Environment variable: `INFLUXDB_META_INTERNAL_SHARED_SECRET`
|
||||
|
||||
### TLS settings
|
||||
|
||||
For more information, see [TLS settings for data nodes](/enterprise_influxdb/v1.8/administration/config-data-nodes#tls-settings).
|
||||
|
||||
#### Recommended "modern compatibility" cipher settings
|
||||
|
||||
```toml
|
||||
ciphers = [ "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305",
|
||||
"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305",
|
||||
"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
|
||||
"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
|
||||
"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
|
||||
"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
|
||||
]
|
||||
|
||||
min-version = "tls1.2"
|
||||
|
||||
max-version = "tls1.2"
|
||||
|
||||
```
|
||||
|
|
|
@ -0,0 +1,22 @@
|
|||
---
|
||||
title: Renew or update a license key or file
|
||||
description: >
|
||||
Renew or update a license key or file for your InfluxDB enterprise cluster.
|
||||
menu:
|
||||
enterprise_influxdb_1_8:
|
||||
name: Renew a license
|
||||
weight: 50
|
||||
parent: Administration
|
||||
---
|
||||
|
||||
Use this procedure to renew or update an existing license key or file, switch from a license key to a license file, or switch from a license file to a license key.
|
||||
|
||||
> **Note:** To request a new license to renew or expand your InfluxDB Enterprise cluster, contact [sales@influxdb.com](mailto:sales@influxdb.com).
|
||||
|
||||
To update a license key or file, do the following:
|
||||
|
||||
1. If you are switching from a license key to a license file (or vice versa), delete your existing license key or file.
|
||||
2. **Add the license key or file** to your [meta nodes](/enterprise_influxdb/v1.8/administration/config-meta-nodes/#enterprise-license-settings) and [data nodes](/enterprise_influxdb/v1.8/administration/config-data-nodes/#enterprise-license-settings) configuration settings. For more information, see [how to configure InfluxDB Enterprise clusters](/enterprise_influxdb/v1.8/administration/configuration/).
|
||||
3. **On each meta node**, run `service influxdb-meta restart`, and wait for the meta node service to come back up successfully before restarting the next meta node.
|
||||
The cluster should remain unaffected as long as only one node is restarting at a time.
|
||||
4. **On each data node**, run `killall -s HUP influxd` to signal the `influxd` process to reload its configuration file.
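Putting steps 3 and 4 together, a single pass over the cluster might look like the following sketch (restart one meta node at a time, and adjust the service commands for your init system):

```sh
# On each meta node, one at a time -- wait for it to rejoin before continuing:
service influxdb-meta restart

# Then, on each data node, signal influxd to reload its configuration:
killall -s HUP influxd
```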
|
|
@ -74,4 +74,4 @@ Legacy user interface for the InfluxDB Enterprise.
|
|||
|
||||
This has been deprecated and the suggestion is to use [Chronograf](/{{< latest "chronograf" >}}/introduction/).
|
||||
|
||||
If you are transitioning from the Enterprise Web Console to Chronograf and helpful [transition guide](/{{< latest "chronograf" >}}/guides/transition-web-admin-interface/) is available.
|
||||
If you are transitioning from the Enterprise Web Console to Chronograf, see how to [transition from the InfluxDB Web Admin Interface](/chronograf/v1.7/guides/transition-web-admin-interface/).
|
||||
|
|
|
@ -163,6 +163,9 @@ Consult your CA if you are unsure about how to use these files.
|
|||
|
||||
# Use a separate private key location.
|
||||
https-private-key = "influxdb-data.key"
|
||||
|
||||
# If using a self-signed certificate:
|
||||
https-insecure-tls = true
|
||||
```
|
||||
|
||||
3. Configure the data nodes to use HTTPS when communicating with the meta nodes.
|
||||
|
|
|
@ -14,52 +14,60 @@ menu:
|
|||
Migrate a running instance of InfluxDB open source (OSS) to an InfluxDB Enterprise cluster.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- An InfluxDB OSS instance running **InfluxDB 1.7.10 or later**.
|
||||
- An InfluxDB Enterprise cluster running **InfluxDB Enterprise 1.7.10 or later**
|
||||
- Your **OSS and Enterprise version is the same**, for example, InfluxDB 1.8 and InfluxDB Enterprise 1.8.
|
||||
- Network accessibility between the OSS instances and all data and meta nodes.
|
||||
|
||||
{{% warn %}}
|
||||
**Migrating does the following:**
|
||||
|
||||
- Deletes data in existing InfluxDB Enterprise data nodes
|
||||
- Deletes data in existing InfluxDB Enterprise data nodes (not applicable if you're migrating to a new cluster)
|
||||
- Transfers all users from the OSS instance to the InfluxDB Enterprise cluster
|
||||
- Requires downtime for the OSS instance
|
||||
{{% /warn %}}
|
||||
|
||||
## To migrate to InfluxDB Enterprise
|
||||
## Migrate to InfluxDB Enterprise
|
||||
|
||||
Complete the following tasks:
|
||||
|
||||
1. [Upgrade InfluxDB to the latest version](#upgrade-influxdb-to-the-latest-version)
|
||||
2. [Set up InfluxDB Enterprise meta nodes](#set-up-influxdb-enterprise-meta-nodes)
|
||||
3. [Set up InfluxDB Enterprise data nodes](#set-up-influxdb-enterprise-data-nodes)
|
||||
4. [Upgrade the InfluxDB binary on your OSS instance](#upgrade-the-influxdb-oss-instance-to-influxdb-enterprise)
|
||||
5. [Add the upgraded OSS instance to the InfluxDB Enterprise cluster](#add-the-new-data-node-to-the-cluster)
|
||||
6. [Add existing data nodes back to the cluster](#add-existing-data-nodes-back-to-the-cluster)
|
||||
7. [Rebalance the cluster](#rebalance-the-cluster)
|
||||
4. Do one of the following:
|
||||
- [Migrate a data set with zero downtime](#migrate-a-data-set-with-zero-downtime). We recommend using this method to create a portable backup first. This method lets you move data between OSS and Enterprise as you're testing the migration.
|
||||
- [Migrate a data set with downtime](#migrate-a-data-set-with-downtime). Note, with this method, you cannot move data from Enterprise back to OSS. This method is useful if you're not able to run a portable backup. Some reasons you may not be able to create a portable backup:
|
||||
- Data set exceeds a certain size
|
||||
- Hardware requirements aren't available
|
||||
- Time constraints (large data sets increase the time needed to back up data)
|
||||
|
||||
## Upgrade InfluxDB to the latest version
|
||||
Upgrade InfluxDB to the latest stable version before proceeding.
|
||||
### Upgrade InfluxDB to the latest version
|
||||
|
||||
Upgrade InfluxDB OSS and InfluxDB Enterprise to the latest stable version. Make sure the OSS and Enterprise version is the same.
|
||||
|
||||
- [Upgrade InfluxDB OSS](/{{< latest "influxdb" "v1" >}}/administration/upgrading/)
|
||||
- [Upgrade InfluxDB Enterprise](/enterprise_influxdb/v1.8/administration/upgrading/)
|
||||
|
||||
## Set up InfluxDB Enterprise meta nodes
|
||||
### Set up InfluxDB Enterprise meta nodes
|
||||
|
||||
Set up all meta nodes in your InfluxDB Enterprise cluster.
|
||||
For information about installing and setting up meta nodes, see
|
||||
[Install meta nodes](/enterprise_influxdb/v1.8/install-and-deploy/production_installation/meta_node_installation/).
|
||||
[Install meta nodes](/enterprise_influxdb/v1.8/install-and-deploy/production_installation/meta_node_installation).
|
||||
|
||||
{{% note %}}
|
||||
#### Add the OSS instance to the meta /etc/hosts files
|
||||
|
||||
When [modifying the `/etc/hosts` file](/enterprise_influxdb/v1.8/install-and-deploy/production_installation/meta_node_installation/#step-1-add-appropriate-dns-entries-for-each-of-your-servers)
|
||||
on each meta node, include the IP and host name of your InfluxDB OSS instance so
|
||||
meta nodes can communicate with the OSS instance.
|
||||
on each meta node, include the IP and host name of your InfluxDB OSS instance so meta nodes can communicate with the OSS instance.
|
||||
{{% /note %}}
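A minimal sketch of such an entry (the IP address and hostname below are placeholders for your OSS instance):

```sh
# Append the OSS instance to /etc/hosts on each meta node.
echo "203.0.113.10   influxdb-oss-01" | sudo tee -a /etc/hosts
```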
|
||||
|
||||
## Set up InfluxDB Enterprise data nodes
|
||||
If you don't have any existing data nodes in your InfluxDB Enterprise cluster,
|
||||
[skip to the next step](#upgrade-the-influxdb-oss-instance-to-influxdb-enterprise).
|
||||
### Set up InfluxDB Enterprise data nodes
|
||||
|
||||
If you don't have any existing data nodes in your InfluxDB Enterprise cluster,
|
||||
skip this step.
|
||||
|
||||
#### For each existing data node:
|
||||
|
||||
### For each existing data node:
|
||||
1. **Remove the data node from the InfluxDB Enterprise cluster**
|
||||
|
||||
From a **meta** node in your InfluxDB Enterprise cluster, run:
|
||||
|
@ -97,7 +105,46 @@ If you don't have any existing data nodes in your InfluxDB Enterprise cluster,
|
|||
On each **data** node, add the IP and hostname of the OSS instance to the
|
||||
`/etc/hosts` file to allow the data node to communicate with the OSS instance.
|
||||
|
||||
## Upgrade the InfluxDB OSS instance to InfluxDB Enterprise
|
||||
### Migrate a data set with zero downtime
|
||||
|
||||
1. Take a portable backup from OSS:
|
||||
|
||||
```sh
|
||||
influxd backup -portable -host <IP address>:8088 /tmp/mysnapshot
|
||||
```
|
||||
|
||||
For more information, see [`-backup`](/influxdb/latest/administration/backup_and_restore/#backup)
|
||||
2. Restore the backup on the cluster by running the following:
|
||||
|
||||
```sh
|
||||
influxd restore -portable [ -host <host:port> ] <path-to-backup-files>
|
||||
```
|
||||
|
||||
For more information, see [`-restore`](/influxdb/latest/administration/backup_and_restore/#restore)
|
||||
3. Dual write to both OSS and Enterprise. See [Write data with the InfluxDB API](https://docs.influxdata.com/influxdb/v1.8/guides/write_data/). This keeps both the OSS instance and the cluster active for testing and acceptance work.
|
||||
4. [Export data from OSS](/enterprise_influxdb/latest/administration/backup-and-restore/#exporting-data) from the time the backup was taken to the time the dual write started.
|
||||
For example, if you took the backup at 2020-07-19T00:00:00.000Z and started writing data to Enterprise at 2020-07-19T23:59:59.999Z, you could run the following command:
|
||||
|
||||
```sh
|
||||
influx_inspect export -compress -start 2020-07-19T00:00:00.000Z -end 2020-07-19T23:59:59.999Z
|
||||
```
|
||||
|
||||
For more information, see [`-export`](/influxdb/latest/tools/influx_inspect#export).
|
||||
5. [Import data into Enterprise](/enterprise_influxdb/latest/administration/backup-and-restore/#importing-data) (see the import sketch after this list).
|
||||
6. Verify data is successfully migrated. To review your data, see how to:
|
||||
- [Query data with the InfluxDB API](https://docs.influxdata.com/influxdb/latest/guides/query_data/#sidebar)
|
||||
- [View data in Chronograf](/chronograf/latest/)
|
||||
7. [Stop writes and remove OSS](#stop-writes-and-remove-oss).
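As referenced in step 5 above, the compressed export from step 4 can be loaded into the cluster with the `influx` CLI import option (a sketch; the host and file path are placeholders, and the flags assume the 1.x CLI):

```sh
# Import the compressed line-protocol export into the Enterprise cluster.
influx -import -path=/tmp/telegraf-export.gz -compressed -host cluster-data-node -port 8086
```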
|
||||
|
||||
### Migrate a data set with downtime
|
||||
|
||||
1. [Stop writes and remove OSS](#stop-writes-and-remove-oss)
|
||||
2. [Back up OSS configuration](#back-up-oss-configuration)
|
||||
3. [Add the upgraded OSS instance to the InfluxDB Enterprise cluster](#add-the-new-data-node-to-the-cluster)
|
||||
4. [Add existing data nodes back to the cluster](#add-existing-data-nodes-back-to-the-cluster)
|
||||
5. [Rebalance the cluster](#rebalance-the-cluster)
|
||||
|
||||
#### Stop writes and remove OSS
|
||||
|
||||
1. **Stop all writes to the InfluxDB OSS instance**
|
||||
2. **Stop the `influxdb` service on the InfluxDB OSS instance**
|
||||
|
@ -144,12 +191,14 @@ sudo yum remove influxdb
|
|||
{{% /code-tab-content %}}
|
||||
{{< /code-tabs-wrapper >}}
|
||||
|
||||
4. **Back up your InfluxDB OSS configuration file**
|
||||
#### Back up your InfluxDB OSS configuration file
|
||||
|
||||
1. **Back up your InfluxDB OSS configuration file**
|
||||
|
||||
If you have custom configuration settings for InfluxDB OSS, back up and save your configuration file.
|
||||
**Without a backup, you'll lose custom configuration settings when updating the InfluxDB binary.**
|
||||
|
||||
5. **Update the InfluxDB binary**
|
||||
2. **Update the InfluxDB binary**
|
||||
|
||||
> Updating the InfluxDB binary overwrites the existing configuration file.
|
||||
> To keep custom settings, back up your configuration file.
|
||||
|
@ -173,7 +222,7 @@ sudo yum localinstall influxdb-data-1.8.2-c1.8.2.x86_64.rpm
|
|||
{{% /code-tab-content %}}
|
||||
{{< /code-tabs-wrapper >}}
|
||||
|
||||
6. **Update the configuration file**
|
||||
3. **Update the configuration file**
|
||||
|
||||
In `/etc/influxdb/influxdb.conf`, set:
|
||||
|
||||
|
@ -203,12 +252,12 @@ Transfer any custom settings from the backup of your OSS configuration file
|
|||
to the new Enterprise configuration file.
|
||||
{{% /note %}}
|
||||
|
||||
7. **Update the `/etc/hosts` file**
|
||||
4. **Update the `/etc/hosts` file**
|
||||
|
||||
Add all meta and data nodes to the `/etc/hosts` file to allow the OSS instance
|
||||
to communicate with other nodes in the InfluxDB Enterprise cluster.
|
||||
|
||||
8. **Start the data node**
|
||||
5. **Start the data node**
|
||||
|
||||
{{< code-tabs-wrapper >}}
|
||||
{{% code-tabs %}}
|
||||
|
@ -227,27 +276,28 @@ sudo systemctl start influxdb
|
|||
{{% /code-tab-content %}}
|
||||
{{< /code-tabs-wrapper >}}
|
||||
|
||||
#### Add the new data node to the cluster
|
||||
|
||||
## Add the new data node to the cluster
|
||||
After you upgrade your OSS instance to InfluxDB Enterprise, add the node to your Enterprise cluster.
|
||||
|
||||
From a **meta** node in the cluster, run:
|
||||
- From a **meta** node in the cluster, run:
|
||||
|
||||
```bash
|
||||
influxd-ctl add-data <new-data-node-hostname>:8088
|
||||
```
|
||||
|
||||
It should output:
|
||||
The output should look like:
|
||||
|
||||
```bash
|
||||
Added data node y at new-data-node-hostname:8088
|
||||
```
|
||||
|
||||
## Add existing data nodes back to the cluster
|
||||
#### Add existing data nodes back to the cluster
|
||||
|
||||
If you removed any existing data nodes from your InfluxDB Enterprise cluster,
|
||||
add them back to the cluster.
|
||||
|
||||
From a **meta** node in the InfluxDB Enterprise cluster, run the following for
|
||||
1. From a **meta** node in the InfluxDB Enterprise cluster, run the following for
|
||||
**each data node**:
|
||||
|
||||
```bash
|
||||
|
@ -260,7 +310,7 @@ It should output:
|
|||
Added data node y at the-hostname:8088
|
||||
```
|
||||
|
||||
Verify that all nodes are now members of the cluster as expected:
|
||||
2. Verify that all nodes are now members of the cluster as expected:
|
||||
|
||||
```bash
|
||||
influxd-ctl show
|
||||
|
@ -271,6 +321,7 @@ node with other data nodes in the cluster.
|
|||
It may take a few minutes before the existing data is available.
|
||||
|
||||
## Rebalance the cluster
|
||||
|
||||
1. Use the [ALTER RETENTION POLICY](/influxdb/v1.8/query_language/manage-database/#modify-retention-policies-with-alter-retention-policy)
|
||||
statement to increase the [replication factor](/enterprise_influxdb/v1.8/concepts/glossary/#replication-factor)
|
||||
on all existing retention policies to the number of data nodes in your cluster.
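A hedged example of such a statement, issued through the `influx` CLI (the database name, retention policy name, and replication factor of 3 are placeholders for your own values):

```sh
# Raise the replication factor of the "autogen" policy on "telegraf" to 3.
influx -execute 'ALTER RETENTION POLICY "autogen" ON "telegraf" REPLICATION 3'
```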
|
||||
|
|
|
@ -2,6 +2,8 @@
|
|||
title: Deploy an InfluxDB Enterprise cluster on Azure Cloud Platform
|
||||
description: >
|
||||
Deploy an InfluxDB Enterprise cluster on Microsoft Azure cloud computing service.
|
||||
aliases:
|
||||
- /enterprise_influxdb/v1.8/install-and-deploy/azure/
|
||||
menu:
|
||||
enterprise_influxdb_1_8:
|
||||
name: Azure
|
||||
|
|
|
@ -103,7 +103,7 @@ sudo dpkg -i influxdb-meta_1.8.2-c1.8.2_amd64.deb
|
|||
|
||||
```
|
||||
wget https://dl.influxdata.com/enterprise/releases/influxdb-meta-1.8.2_c1.8.2.x86_64.rpm
|
||||
sudo yum localinstall influxdb-meta-1.8.2-c1.8.2.x86_64.rpm
|
||||
sudo yum localinstall influxdb-meta-1.8.2_c1.8.2.x86_64.rpm
|
||||
```
|
||||
|
||||
##### Verify the authenticity of release download (recommended)
|
||||
|
@ -126,7 +126,7 @@ For added security, follow these steps to verify the signature of your InfluxDB
|
|||
3. Verify the signature with `gpg --verify`:
|
||||
|
||||
```
|
||||
gpg --verify influxdb-meta-1.8.2-c1.8.2.x86_64.rpm.asc influxdb-meta-1.8.2-c1.8.2.x86_64.rpm
|
||||
gpg --verify influxdb-meta-1.8.2_c1.8.2.x86_64.rpm.asc influxdb-meta-1.8.2_c1.8.2.x86_64.rpm
|
||||
```
|
||||
|
||||
The output from this command should include the following:
|
||||
|
|
|
@ -11,7 +11,7 @@ menu:
|
|||
parent: Introduction
|
||||
---
|
||||
|
||||
Now that you successfully [installed and set up](/enterprise_influxdb/v1.8/introduction/meta_node_installation/) InfluxDB Enterprise, use [Chronograf to setup your cluster as a data source.](/{{< latest "chronograf" >}}/guides/monitor-an-influxenterprise-cluster/)
|
||||
Now that you successfully [installed and set up](/enterprise_influxdb/v1.8/introduction/meta_node_installation/) InfluxDB Enterprise, use [Chronograf to set up your cluster as a data source](/{{< latest "chronograf" >}}/guides/monitoring-influxenterprise-cluster/).
|
||||
|
||||
For more details, see [Getting started with Chronograf](/{{< latest "chronograf" >}}/introduction/getting-started/).
|
||||
|
||||
|
|
|
@ -0,0 +1,17 @@
|
|||
---
|
||||
title: InfluxDB Enterprise tools
|
||||
description: >
|
||||
Learn more about available tools for working with InfluxDB Enterprise.
|
||||
menu:
|
||||
enterprise_influxdb_1_8:
|
||||
name: Tools
|
||||
weight: 70
|
||||
---
|
||||
|
||||
Use the following tools to work with InfluxDB Enterprise:
|
||||
|
||||
{{< children >}}
|
||||
|
||||
## InfluxDB open source tools
|
||||
Tools built for InfluxDB OSS v1.8 also work with InfluxDB Enterprise v1.8.
|
||||
For more information, see [InfluxDB tools](/influxdb/v1.8/tools/).
|
|
@ -0,0 +1,128 @@
|
|||
---
|
||||
title: Use Grafana with InfluxDB Enterprise
|
||||
seotitle: Use Grafana with InfluxDB Enterprise v1.8
|
||||
description: >
|
||||
Configure Grafana to query and visualize data from InfluxDB Enterprise v1.8.
|
||||
menu:
|
||||
enterprise_influxdb_1_8:
|
||||
name: Grafana
|
||||
weight: 60
|
||||
parent: Tools
|
||||
canonical: /{{< latest "influxdb" >}}/tools/grafana/
|
||||
---
|
||||
|
||||
Use [Grafana](https://grafana.com/) or [Grafana Cloud](https://grafana.com/products/cloud/)
|
||||
to visualize data from your **InfluxDB Enterprise v1.8** instance.
|
||||
|
||||
{{% note %}}
|
||||
#### Required
|
||||
- The instructions in this guide require **Grafana Cloud** or **Grafana v7.1+**.
|
||||
For information about using InfluxDB with other versions of Grafana,
|
||||
see the [Grafana documentation](https://grafana.com/docs/grafana/v7.0/features/datasources/influxdb/).
|
||||
- To use **Flux**, use **InfluxDB Enterprise 1.8.1+** and [enable Flux](/influxdb/v1.8/flux/installation/)
|
||||
in your InfluxDB data node configuration file.
|
||||
{{% /note %}}
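If Flux is not yet enabled on your data nodes, one way to turn it on before restarting the service is via the environment variable that corresponds to the `[http]` `flux-enabled` setting (an assumption based on the standard setting-to-variable mapping; see the linked Flux installation page for the authoritative steps):

```sh
# Assumption: INFLUXDB_HTTP_FLUX_ENABLED maps to the [http] flux-enabled setting.
export INFLUXDB_HTTP_FLUX_ENABLED=true
```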
|
||||
|
||||
1. [Set up an InfluxDB Enterprise cluster](/enterprise_influxdb/v1.8/install-and-deploy/).
|
||||
2. [Sign up for Grafana Cloud](https://grafana.com/products/cloud/) or
|
||||
[download and install Grafana](https://grafana.com/grafana/download).
|
||||
3. Visit your **Grafana Cloud user interface** (UI) or, if running Grafana locally,
|
||||
[start Grafana](https://grafana.com/docs/grafana/latest/installation/) and visit
|
||||
`http://localhost:3000` in your browser.
|
||||
4. In the left navigation of the Grafana UI, hover over the gear
|
||||
icon to expand the **Configuration** section. Click **Data Sources**.
|
||||
5. Click **Add data source**.
|
||||
6. Select **InfluxDB** from the list of available data sources.
|
||||
7. On the **Data Source configuration page**, enter a **name** for your InfluxDB data source.
|
||||
8. Under **Query Language**, select one of the following:
|
||||
|
||||
{{< tabs-wrapper >}}
|
||||
{{% tabs %}}
|
||||
[InfluxQL](#)
|
||||
[Flux](#)
|
||||
{{% /tabs %}}
|
||||
<!--------------------------- BEGIN INFLUXQL CONTENT -------------------------->
|
||||
{{% tab-content %}}
|
||||
## Configure Grafana to use InfluxQL
|
||||
|
||||
With **InfluxQL** selected as the query language in your InfluxDB data source settings:
|
||||
|
||||
1. Under **HTTP**, enter the following:
|
||||
|
||||
- **URL**: Your **InfluxDB Enterprise URL** or **load balancer URL**.
|
||||
|
||||
```sh
|
||||
http://localhost:8086/
|
||||
```
|
||||
- **Access**: Server (default)
|
||||
|
||||
2. Under **InfluxDB Details**, enter the following:
|
||||
|
||||
- **Database**: your database name
|
||||
- **User**: your InfluxDB Enterprise username _(if [authentication is enabled](/influxdb/v1.8/administration/authentication_and_authorization/))_
|
||||
- **Password**: your InfluxDB Enterprise password _(if [authentication is enabled](/influxdb/v1.8/administration/authentication_and_authorization/))_
|
||||
- **HTTP Method**: select **GET** or **POST** _(for differences between the two,
|
||||
see the [query HTTP endpoint documentation](/influxdb/v1.8/tools/api/#query-http-endpoint))_
|
||||
|
||||
3. Provide a **[Min time interval](https://grafana.com/docs/grafana/latest/datasources/influxdb/#min-time-interval)**
|
||||
(default is 10s).
|
||||
|
||||
{{< img-hd src="/img/enterprise/1-7-tools-grafana-influxql.png" />}}
|
||||
|
||||
4. Click **Save & Test**. Grafana attempts to connect to InfluxDB Enterprise and returns
|
||||
the result of the test.
|
||||
{{% /tab-content %}}
|
||||
<!---------------------------- END INFLUXQL CONTENT --------------------------->
|
||||
<!----------------------------- BEGIN FLUX CONTENT ---------------------------->
|
||||
{{% tab-content %}}
|
||||
## Configure Grafana to use Flux
|
||||
|
||||
With **Flux** selected as the query language in your InfluxDB data source,
|
||||
configure your InfluxDB connection:
|
||||
|
||||
1. Ensure [Flux is enabled](/influxdb/v1.8/flux/installation/) in InfluxDB Enterprise data nodes.
|
||||
|
||||
2. Under **Connection**, enter the following:
|
||||
|
||||
- **URL**: Your **InfluxDB Enterprise URL** or **load balancer URL**.
|
||||
|
||||
```sh
|
||||
http://localhost:8086/
|
||||
```
|
||||
|
||||
- **Organization**: Provide an arbitrary value.
|
||||
- **Token**: Provide your InfluxDB Enterprise username and password using the following syntax:
|
||||
|
||||
```sh
|
||||
# Syntax
|
||||
username:password
|
||||
|
||||
# Example
|
||||
johndoe:mY5uP3rS3crE7pA5Sw0Rd
|
||||
```
|
||||
|
||||
We recommend [enabling authentication](/influxdb/v1.8/administration/authentication_and_authorization/)
|
||||
on all InfluxDB Enterprise clusters. If you choose to leave authentication disabled,
|
||||
leave this field blank.
|
||||
|
||||
- **Default Bucket**: Provide a default database and retention policy combination
|
||||
using the following syntax:
|
||||
|
||||
```sh
|
||||
# Syntax
|
||||
database-name/retention-policy-name
|
||||
|
||||
# Examples
|
||||
example-db/example-rp
|
||||
telegraf/autogen
|
||||
```
|
||||
|
||||
- **Min time interval**: [Grafana minimum time interval](https://grafana.com/docs/grafana/latest/features/datasources/influxdb/#min-time-interval).
|
||||
|
||||
{{< img-hd src="/img/enterprise/1-8-tools-grafana-flux.png" />}}
|
||||
|
||||
3. Click **Save & Test**. Grafana attempts to connect to InfluxDB Enterprise and returns
|
||||
the result of the test.
|
||||
{{% /tab-content %}}
|
||||
<!------------------------------ END FLUX CONTENT ----------------------------->
|
||||
{{< /tabs-wrapper >}}
|
|
@ -9,6 +9,7 @@ menu:
|
|||
name: Manage multiple users
|
||||
aliases:
|
||||
- /influxdb/v2.0/account-management/multi-user/
|
||||
- /influxdb/cloud/users/
|
||||
---
|
||||
|
||||
{{< cloud-name >}} accounts support multiple users in an organization.
|
||||
|
|
|
@ -98,7 +98,7 @@ To use the `influx` CLI to manage and interact with your InfluxDB Cloud instance
|
|||
|
||||
Click the following button to download and install `influx` CLI for macOS.
|
||||
|
||||
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb2_client_2.0.3_darwin_amd64.tar.gz" download>influx CLI (macOS)</a>
|
||||
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb2-client-2.0.4-darwin-amd64.tar.gz" download>influx CLI (macOS)</a>
|
||||
|
||||
#### Step 2: Unpackage the influx binary
|
||||
|
||||
|
@ -110,7 +110,7 @@ or run the following command in a macOS command prompt application such
|
|||
|
||||
```sh
|
||||
# Unpackage contents to the current working directory
|
||||
tar zxvf ~/Downloads/influxdb2_client_2.0.3_darwin_amd64.tar.gz
|
||||
tar zxvf ~/Downloads/influxdb2-client-2.0.4-darwin-amd64.tar.gz
|
||||
```
|
||||
|
||||
#### Step 3: (Optional) Place the binary in your $PATH
|
||||
|
@ -122,7 +122,7 @@ prefix the executable with `./` to run in place. If the binary is on your $PATH,
|
|||
|
||||
```sh
|
||||
# Copy the influx binary to your $PATH
|
||||
sudo cp influxdb2_client_2.0.3_darwin_amd64/influx /usr/local/bin/
|
||||
sudo cp influxdb2-client-2.0.4-darwin-amd64/influx /usr/local/bin/
|
||||
```
|
||||
|
||||
{{% note %}}
|
||||
|
@ -166,8 +166,8 @@ To see all available `influx` commands, type `influx -h` or check out [influx -
|
|||
|
||||
Click one of the following buttons to download and install the `influx` CLI appropriate for your chipset.
|
||||
|
||||
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb2_client_2.0.3_linux_amd64.tar.gz" download >influx CLI (amd64)</a>
|
||||
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb2_client_2.0.3_linux_arm64.tar.gz" download >influx CLI (arm)</a>
|
||||
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb2-client-2.0.4-linux-amd64.tar.gz" download >influx CLI (amd64)</a>
|
||||
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb2-client-2.0.4-linux-arm64.tar.gz" download >influx CLI (arm)</a>
|
||||
|
||||
#### Step 2: Unpackage the influx binary
|
||||
|
||||
|
@ -175,7 +175,7 @@ Click one of the following buttons to download and install the `influx` CLI appr
|
|||
|
||||
```sh
|
||||
# Unpackage contents to the current working directory
|
||||
tar xvfz influxdb_client_2.0.3_linux_amd64.tar.gz
|
||||
tar xvfz influxdb-client-2.0.4-linux-amd64.tar.gz
|
||||
```
|
||||
|
||||
#### Step 3: (Optional) Place the binary in your $PATH
|
||||
|
@ -187,7 +187,7 @@ prefix the executable with `./` to run in place. If the binary is on your $PATH,
|
|||
|
||||
```sh
|
||||
# Copy the influx and influxd binary to your $PATH
|
||||
sudo cp influxdb_client_2.0.3_linux_amd64/influx /usr/local/bin/
|
||||
sudo cp influxdb-client-2.0.4-linux-amd64/influx /usr/local/bin/
|
||||
```
|
||||
|
||||
{{% note %}}
|
||||
|
|
|
@ -11,225 +11,4 @@ related:
|
|||
- /influxdb/cloud/monitor-alert/checks/
|
||||
---
|
||||
|
||||
Send an alert email using a third party service, such as [SendGrid](https://sendgrid.com/), [Amazon Simple Email Service (SES)](https://aws.amazon.com/ses/), [Mailjet](https://www.mailjet.com/), or [Mailgun](https://www.mailgun.com/). To send an alert email, complete the following steps:
|
||||
|
||||
1. [Create a check](/influxdb/cloud/monitor-alert/checks/create/#create-a-check-in-the-influxdb-ui) to identify the data to monitor and the status to alert on.
|
||||
2. Set up your preferred email service (sign up, retrieve API credentials, and send test email):
|
||||
- **SendGrid**: See [Getting Started With the SendGrid API](https://sendgrid.com/docs/API_Reference/api_getting_started.html)
|
||||
- **AWS Simple Email Service (SES)**: See [Using the Amazon SES API](https://docs.aws.amazon.com/ses/latest/DeveloperGuide/send-email.html). Your AWS SES request, including the `url` (endpoint), authentication, and the structure of the request may vary. For more information, see [Amazon SES API requests](https://docs.aws.amazon.com/ses/latest/DeveloperGuide/using-ses-api-requests.html) and [Authenticating requests to the Amazon SES API](https://docs.aws.amazon.com/ses/latest/DeveloperGuide/using-ses-api-authentication.html).
|
||||
- **Mailjet**: See [Getting Started with Mailjet](https://dev.mailjet.com/email/guides/getting-started/)
|
||||
- **Mailgun**: See [Mailgun Signup](https://signup.mailgun.com/new/signup)
|
||||
3. [Create an alert email task](#create-an-alert-email-task) to call your email service and send an alert email.
|
||||
|
||||
{{% note %}}
|
||||
In the procedure below, we use the **Task** page in the InfluxDB UI (user interface) to create a task. Explore other ways to [create a task](/influxdb/cloud/process-data/manage-tasks/create-task/).
|
||||
{{% /note %}}
|
||||
|
||||
### Create an alert email task
|
||||
|
||||
1. In the InfluxDB UI, select **Tasks** in the navigation menu on the left.
|
||||
|
||||
{{< nav-icon "tasks" >}}
|
||||
|
||||
2. Click **{{< icon "plus" >}} Create Task**, and then select **New Task**.
|
||||
3. In the **Name** field, enter a descriptive name, for example, **Send alert email**, and then enter how often to run the task in the **Every** field, for example, `10m`. For more detail, such as using cron syntax or including an offset, see [Task configuration options](/influxdb/cloud/process-data/task-options/).
|
||||
|
||||
4. In the right panel, enter the following detail in your **task script** (see [examples below](#examples)):
|
||||
- Import the [Flux HTTP package](/influxdb/cloud/reference/flux/stdlib/http/).
|
||||
- (Optional) Store your API key as a secret for reuse. First, [add your API key as a secret](/influxdb/cloud/security/secrets/manage-secrets/add/), and then import the [Flux InfluxDB Secrets package](/influxdb/cloud/reference/flux/stdlib/secrets/).
|
||||
- Query the `statuses` measurement in the `_monitoring` bucket to retrieve all statuses generated by your check.
|
||||
- Set the time range to monitor; use the same interval that the task is scheduled to run. For example, `range (start: -task.every)`.
|
||||
- Set the `_level` to alert on, for example, `crit`, `warn`, `info`, or `ok`.
|
||||
- Use the `map()` function to evaluate the criteria to send an alert using `http.post()`.
|
||||
- Specify your email service `url` (endpoint), include applicable request `headers`, and verify your request `data` format follows the format specified for your email service.
|
||||
|
||||
#### Examples
|
||||
|
||||
{{< tabs-wrapper >}}
|
||||
{{% tabs %}}
|
||||
[SendGrid](#)
|
||||
[AWS SES](#)
|
||||
[Mailjet](#)
|
||||
[Mailgun](#)
|
||||
{{% /tabs %}}
|
||||
|
||||
<!-------------------------------- BEGIN SendGrid -------------------------------->
|
||||
{{% tab-content %}}
|
||||
|
||||
The example below uses the SendGrid API to send an alert email when more than 3 critical statuses occur since the previous task run.
|
||||
|
||||
```js
|
||||
import "http"
|
||||
|
||||
// Import the Secrets package if you store your API key as a secret.
|
||||
// For detail on how to do this, see Step 4 above.
|
||||
import "influxdata/influxdb/secrets"
|
||||
|
||||
// Retrieve the secret if applicable. Otherwise, skip this line
|
||||
// and add the API key as the Bearer token in the Authorization header.
|
||||
SENDGRID_APIKEY = secrets.get(key: "SENDGRID_APIKEY")
|
||||
|
||||
numberOfCrits = from(bucket: "_monitoring")
|
||||
|> range(start: -task.every)
|
||||
|> filter(fn: (r) => r.measurement == "statuses" and r.level == "crit")
|
||||
|> count()
|
||||
|
||||
numberOfCrits
|
||||
|> map(fn: (r) => (if r._value > 3 then {
|
||||
r with _value: http.post(
|
||||
url: "https://api.sendgrid.com/v3/mail/send",
|
||||
headers: {"Content-Type": "application/json", Authorization: "Bearer ${SENDGRID_APIKEY}"},
|
||||
data: bytes(v: "{
|
||||
\"personalizations\": [{
|
||||
\"to\": [{
|
||||
\"email\": \”jane.doe@example.com\"}],
|
||||
\"subject\": \”InfluxData critical alert\"
|
||||
}],
|
||||
\"from\": {\"email\": \"john.doe@example.com\"},
|
||||
\"content\": [{
|
||||
\"type\": \"text/plain\",
|
||||
\"value\": \”Example alert text\"
|
||||
}]
|
||||
}\""))} else {r with _value: 0}))
|
||||
```
|
||||
|
||||
{{% /tab-content %}}
|
||||
|
||||
<!-------------------------------- BEGIN AWS SES -------------------------------->
|
||||
{{% tab-content %}}
|
||||
|
||||
The example below uses the AWS SES API v2 to send an alert email when more than 3 critical statuses occur since the last task run.
|
||||
|
||||
{{% note %}}
|
||||
Your AWS SES request, including the `url` (endpoint), authentication, and the structure of the request may vary. For more information, see [Amazon SES API requests](https://docs.aws.amazon.com/ses/latest/DeveloperGuide/using-ses-api-requests.html) and [Authenticating requests to the Amazon SES API](https://docs.aws.amazon.com/ses/latest/DeveloperGuide/using-ses-api-authentication.html). We recommend signing your AWS API requests using the [Signature Version 4 signing process](https://docs.aws.amazon.com/general/latest/gr/signing_aws_api_requests.html).
|
||||
{{% /note %}}
|
||||
|
||||
```js
|
||||
import "http"
|
||||
|
||||
// Import the Secrets package if you store your API credentials as secrets.
|
||||
// For detail on how to do this, see Step 4 above.
|
||||
import "influxdata/influxdb/secrets"
|
||||
|
||||
// Retrieve the secrets if applicable. Otherwise, skip this line
|
||||
// and add the API key as the Bearer token in the Authorization header.
|
||||
AWS_AUTH_ALGORITHM = secrets.get(key: "AWS_AUTH_ALGORITHM")
|
||||
AWS_CREDENTIAL = secrets.get(key: "AWS_CREDENTIAL")
|
||||
AWS_SIGNED_HEADERS = secrets.get(key: "AWS_SIGNED_HEADERS")
|
||||
AWS_CALCULATED_SIGNATURE = secrets.get(key: "AWS_CALCULATED_SIGNATURE")
|
||||
|
||||
numberOfCrits = from(bucket: "_monitoring")
|
||||
|> range(start: -task.every)
|
||||
|> filter(fn: (r) => (r.measurement == "statuses" and r._level == "crit")
|
||||
|> count()
|
||||
|
||||
numberOfCrits
|
||||
|> map(fn: (r) => (if r._value > 3 then {
|
||||
r with _value: http.post(
|
||||
url: "https://email.your-aws-region.amazonaws.com/sendemail/v2/email/outbound-emails",
|
||||
headers: {"Content-Type": "application/json", Authorization: "Bearer ${AWS_AUTH_ALGORITHM}${AWS_CREDENTIAL}${AWS_SIGNED_HEADERS}${AWS_CALCULATED_SIGNATURE}"},
|
||||
data: bytes(v: "{
|
||||
\"personalizations\": [{
|
||||
\"to\": [{
|
||||
\"email\": \”jane.doe@example.com\"}],
|
||||
\"subject\": \”InfluxData critical alert\"
|
||||
}],
|
||||
\"from\": {\"email\": \"john.doe@example.com\"},
|
||||
\"content\": [{
|
||||
\"type\": \"text/plain\",
|
||||
\"value\": \”Example alert text\"
|
||||
}]
|
||||
}\""))} else {r with _value: 0}))
|
||||
```
|
||||
|
||||
For details on the request syntax, see [SendEmail API v2 reference](https://docs.aws.amazon.com/ses/latest/APIReference-V2/API_SendEmail.html).
|
||||
|
||||
{{% /tab-content %}}
|
||||
|
||||
<!-------------------------------- BEGIN Mailjet ------------------------------->
|
||||
{{% tab-content %}}
|
||||
|
||||
The example below uses the Mailjet Send API to send an alert email when more than 3 critical statuses occur since the last task run.
|
||||
|
||||
{{% note %}}
|
||||
To view your Mailjet API credentials, sign in to Mailjet and open the [API Key Management page](https://app.mailjet.com/account/api_keys).
|
||||
{{% /note %}}
|
||||
|
||||
```js
|
||||
import "http"
|
||||
|
||||
// Import the Secrets package if you store your API keys as secrets.
|
||||
// For detail on how to do this, see Step 4 above.
|
||||
import "influxdata/influxdb/secrets"
|
||||
|
||||
// Retrieve the secrets if applicable. Otherwise, skip this line
|
||||
// and add the API keys as Basic credentials in the Authorization header.
|
||||
MAILJET_APIKEY = secrets.get(key: "MAILJET_APIKEY")
|
||||
MAILJET_SECRET_APIKEY = secrets.get(key: "MAILJET_SECRET_APIKEY")
|
||||
|
||||
numberOfCrits = from(bucket: "_monitoring")
|
||||
|> range(start: -task.every)
|
||||
|> filter(fn: (r) => (r.measurement == "statuses" and "r.level" == "crit")
|
||||
|> count()
|
||||
|
||||
numberOfCrits
|
||||
|> map(fn: (r) => (if r._value > 3 then {
|
||||
r with _value: http.post(
|
||||
url: "https://api.mailjet.com/v3.1/send",
|
||||
headers: {"Content-type": "application/json", Authorization: "Basic ${MAILJET_APIKEY}:${MAILJET_SECRET_APIKEY}"},
|
||||
data: bytes(v: "{
|
||||
\"Messages\": [{
|
||||
\"From\": {\"Email\": \”jane.doe@example.com\"},
|
||||
\"To\": [{\"Email\": \"john.doe@example.com\"]},
|
||||
\"Subject\": \”InfluxData critical alert\",
|
||||
\"TextPart\": \”Example alert text\"
|
||||
\"HTMLPart\": `"<h3>Hello, to review critical alerts, review your <a href=\"https://www.example-dashboard.com/\">Critical Alert Dashboard</a></h3>}]}'
|
||||
}\""))} else {r with _value: 0}))
|
||||
```
|
||||
|
||||
{{% /tab-content %}}
|
||||
|
||||
<!-------------------------------- BEGIN Mailgun ---------------------------->
|
||||
|
||||
{{% tab-content %}}
|
||||
|
||||
The example below uses the Mailgun API to send an alert email when more than 3 critical statuses occur since the last task run.
|
||||
|
||||
{{% note %}}
|
||||
To view your Mailgun API keys, sign in to Mailjet and open [Account Security - API security](https://app.mailgun.com/app/account/security/api_keys). Mailgun requires that a domain be specified via Mailgun. A domain is automatically created for you when you first set up your account. You must include this domain in your `url` endpoint (for example, `https://api.mailgun.net/v3/YOUR_DOMAIN` or `https://api.eu.mailgun.net/v3/YOUR_DOMAIN`. If you're using a free version of Mailgun, you can set up a maximum of five authorized recipients (to receive email alerts) for your domain. To view your Mailgun domains, sign in to Mailgun and view the [Domains page](https://app.mailgun.com/app/sending/domains).
|
||||
{{% /note %}}
|
||||
|
||||
```js
|
||||
import "http"
|
||||
|
||||
// Import the Secrets package if you store your API key as a secret.
|
||||
// For detail on how to do this, see Step 4 above.
|
||||
import "influxdata/influxdb/secrets"
|
||||
|
||||
// Retrieve the secret if applicable. Otherwise, skip this line
|
||||
// and add the API key as the Bearer token in the Authorization header.
|
||||
MAILGUN_APIKEY = secrets.get(key: "MAILGUN_APIKEY")
|
||||
|
||||
numberOfCrits = from(bucket: "_monitoring")
|
||||
|> range(start: -task.every)
|
||||
|> filter(fn: (r) => (r["_measurement"] == "statuses"))
|
||||
|> filter(fn: (r) => (r["_level"] == "crit"))
|
||||
|> count()
|
||||
|
||||
numberOfCrits
|
||||
|> map(fn: (r) =>
|
||||
(if r._value > 1 then {r with _value: http.post(
|
||||
url: "https://api.mailgun.net/v3/YOUR_DOMAIN/messages",
|
||||
headers: {"Content-type": "application/json", Authorization: "Basic api:${MAILGUN_APIKEY}"},
|
||||
data: bytes(v: "{
|
||||
\"from\": \"Username <mailgun@YOUR_DOMAIN_NAME>\",
|
||||
\"to\"=\"YOU@YOUR_DOMAIN_NAME\",
|
||||
\"to\"=\"email@example.com\",
|
||||
\"subject\"=\"Critical InfluxData alert\",
|
||||
\"text\"=\"You have critical alerts to review\"
|
||||
}\""))} else {r with _value: 0}))
|
||||
```
|
||||
|
||||
{{% /tab-content %}}
|
||||
|
||||
{{< /tabs-wrapper >}}
|
||||
{{< duplicate-oss >}}
|
|
@ -27,7 +27,7 @@ If you change a bucket name, be sure to update the bucket in the above places as
|
|||
|
||||
{{< nav-icon "data" >}}
|
||||
|
||||
2. Click **Settings** under the bucket you want to rename.
|
||||
2. Click **Settings** to the right of the bucket you want to rename.
|
||||
3. Click **Rename**.
|
||||
3. Review the information in the window that appears and click **I understand, let's rename my bucket**.
|
||||
4. Update the bucket's name and click **Change Bucket Name**.
|
||||
|
@ -39,8 +39,15 @@ If you change a bucket name, be sure to update the bucket in the above places as
|
|||
{{< nav-icon "data" >}}
|
||||
|
||||
2. Click **Settings** next to the bucket you want to update.
|
||||
3. In the window that appears, edit the bucket's retention policy.
|
||||
4. Click **Save Changes**.
|
||||
3. In the window that appears, under **Delete data**, select a retention period:
|
||||
|
||||
- **Never**: data in the bucket is retained indefinitely.
|
||||
- **Older Than**: select a predefined retention period from the dropdown menu.
|
||||
|
||||
{{% note %}}
|
||||
Use the [`influx bucket update` command](#update-a-buckets-retention-policy) to set a custom retention policy.
|
||||
{{% /note %}}
|
||||
5. Click **Save Changes**.
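As the note above mentions (and as the CLI section below covers in more detail), a custom retention period is set with `influx bucket update`. A minimal sketch, where the bucket ID is a placeholder and the flags should be confirmed with `influx bucket update -h` for your CLI version:

```sh
# Set a custom 30-day retention period on a bucket by ID.
influx bucket update --id <bucket-id> --retention 30d
```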
|
||||
|
||||
## Update a bucket using the influx CLI
|
||||
|
||||
|
|
|
@ -14,135 +14,4 @@ menu:
|
|||
weight: 101
|
||||
---
|
||||
|
||||
An **InfluxDB task** is a scheduled Flux script that takes a stream of input data, modifies or analyzes
|
||||
it in some way, then stores the modified data in a new bucket or performs other actions.
|
||||
|
||||
This article walks through writing a basic InfluxDB task that downsamples
|
||||
data and stores it in a new bucket.
|
||||
|
||||
## Components of a task
|
||||
Every InfluxDB task needs the following four components.
|
||||
Their form and order can vary, but they are all essential parts of a task.
|
||||
|
||||
- [Task options](#define-task-options)
|
||||
- [A data source](#define-a-data-source)
|
||||
- [Data processing or transformation](#process-or-transform-your-data)
|
||||
- [A destination](#define-a-destination)
|
||||
|
||||
_[Skip to the full example task script](#full-example-task-script)_
|
||||
|
||||
## Define task options
|
||||
Task options define specific information about the task.
|
||||
The example below illustrates how task options are defined in your Flux script:
|
||||
|
||||
```js
|
||||
option task = {
|
||||
name: "cqinterval15m",
|
||||
every: 1h,
|
||||
offset: 0m,
|
||||
concurrency: 1,
|
||||
retry: 5
|
||||
}
|
||||
```
|
||||
|
||||
_See [Task configuration options](/influxdb/cloud/process-data/task-options) for detailed information
|
||||
about each option._
|
||||
|
||||
{{% note %}}
|
||||
When creating a task in the InfluxDB user interface (UI), task options are defined in form fields.
|
||||
{{% /note %}}
|
||||
|
||||
## Define a data source
|
||||
Define a data source using Flux's [`from()` function](/influxdb/cloud/reference/flux/stdlib/built-in/inputs/from/)
|
||||
or any other [Flux input functions](/influxdb/cloud/reference/flux/stdlib/built-in/inputs/).
|
||||
|
||||
For convenience, consider creating a variable that includes the sourced data with
|
||||
the required time range and any relevant filters.
|
||||
|
||||
```js
|
||||
data = from(bucket: "telegraf/default")
|
||||
|> range(start: -task.every)
|
||||
|> filter(fn: (r) =>
|
||||
r._measurement == "mem" and
|
||||
r.host == "myHost"
|
||||
)
|
||||
```
|
||||
|
||||
{{% note %}}
|
||||
#### Using task options in your Flux script
|
||||
Task options are passed as part of a `task` option record and can be referenced in your Flux script.
|
||||
In the example above, the time range is defined as `-task.every`.
|
||||
|
||||
`task.every` is dot notation that references the `every` property of the `task` option record.
|
||||
`every` is defined as `1h`, therefore `-task.every` equates to `-1h`.
|
||||
|
||||
Using task options to define values in your Flux script can make reusing your task easier.
|
||||
{{% /note %}}
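As a quick illustration (not part of the task itself), with `every: 1h` set in the task options, the following two queries cover the same time range:

```js
// With every: 1h in the task options, these two range() calls are equivalent.
dataA = from(bucket: "telegraf/default") |> range(start: -task.every)
dataB = from(bucket: "telegraf/default") |> range(start: -1h)
```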
|
||||
|
||||
## Process or transform your data
|
||||
The purpose of tasks is to process or transform data in some way.
|
||||
What exactly happens and what form the output data takes is up to you and your
|
||||
specific use case.
|
||||
|
||||
The example below illustrates a task that downsamples data by calculating the average of set intervals.
|
||||
It uses the `data` variable defined [above](#define-a-data-source) as the data source.
|
||||
It then windows the data into 5-minute intervals and calculates the average of each
|
||||
window using the [`aggregateWindow()` function](/influxdb/cloud/reference/flux/stdlib/built-in/transformations/aggregates/aggregatewindow/).
|
||||
|
||||
```js
|
||||
data
|
||||
|> aggregateWindow(
|
||||
every: 5m,
|
||||
fn: mean
|
||||
)
|
||||
```
|
||||
|
||||
_See [Common tasks](/influxdb/cloud/process-data/common-tasks) for examples of tasks commonly used with InfluxDB._
|
||||
|
||||
## Define a destination
|
||||
In most task use cases, once data is transformed, it needs to be sent and stored somewhere.
|
||||
This could be a separate bucket or another measurement.
|
||||
|
||||
The example below uses Flux's [`to()` function](/influxdb/cloud/reference/flux/stdlib/built-in/outputs/to)
|
||||
to send the transformed data to another bucket:
|
||||
|
||||
```js
|
||||
// ...
|
||||
|> to(bucket: "telegraf_downsampled", org: "my-org")
|
||||
```
|
||||
|
||||
{{% note %}}
|
||||
To write data to InfluxDB, the output must include `_time`, `_measurement`, `_field`, and `_value` columns.
|
||||
{{% /note %}}
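If a transformation drops or changes one of these columns, you can restore or set it before writing. For example, here is a minimal sketch that uses Flux's `set()` function to write the downsampled data under a different measurement name; the `mem_5m` name is illustrative, not from the original:

```js
// Sketch: set the _measurement column explicitly before writing the results.
data
    |> aggregateWindow(every: 5m, fn: mean)
    |> set(key: "_measurement", value: "mem_5m")
    |> to(bucket: "telegraf_downsampled", org: "my-org")
```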
|
||||
|
||||
## Full example task script
|
||||
Below is a task script that combines all of the components described above:
|
||||
|
||||
```js
|
||||
// Task options
|
||||
option task = {
|
||||
name: "cqinterval15m",
|
||||
every: 1h,
|
||||
offset: 0m,
|
||||
concurrency: 1,
|
||||
retry: 5
|
||||
}
|
||||
|
||||
// Data source
|
||||
data = from(bucket: "telegraf/default")
|
||||
|> range(start: -task.every)
|
||||
|> filter(fn: (r) =>
|
||||
r._measurement == "mem" and
|
||||
r.host == "myHost"
|
||||
)
|
||||
|
||||
data
|
||||
// Data transformation
|
||||
|> aggregateWindow(
|
||||
every: 5m,
|
||||
fn: mean
|
||||
)
|
||||
// Data destination
|
||||
|> to(bucket: "telegraf_downsampled")
|
||||
|
||||
```
|
||||
{{< duplicate-oss >}}
|
|
@ -53,6 +53,15 @@ The InfluxDB UI provides multiple ways to create a task:
|
|||
See [Task options](/influxdb/cloud/process-data/task-options) for detailed information about each option.
|
||||
5. Select a token to use from the **Token** dropdown.
|
||||
6. In the right panel, enter your task script.
|
||||
|
||||
{{% note %}}
|
||||
##### Leave out the option task assignment
|
||||
When creating a _new_ task in the InfluxDB Task UI, leave out the `option task`
|
||||
assignment that defines [task options](/influxdb/cloud/process-data/task-options/).
|
||||
The InfluxDB UI injects this code using settings specified in the **Task options**
|
||||
fields in the left panel when you save the task (see the example script after these steps).
|
||||
{{% /note %}}
|
||||
|
||||
7. Click **Save** in the upper right.
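For reference, here is a minimal sketch of a script you might paste into the UI editor; the bucket names are hypothetical. It starts directly with the query and contains no `option task` assignment, since the UI injects that block from the **Task options** fields:

```js
// No option task = {...} block here; the UI adds it when you save the task.
from(bucket: "example-bucket")
    |> range(start: -task.every)
    |> filter(fn: (r) => r._measurement == "cpu")
    |> aggregateWindow(every: 5m, fn: mean)
    |> to(bucket: "example-downsampled")
```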
|
||||
|
||||
### Import a task
|
||||
|
|
|
@ -12,101 +12,4 @@ weight: 105
|
|||
influxdb/cloud/tags: [tasks, flux]
|
||||
---
|
||||
|
||||
Task options define specific information about a task.
|
||||
They are set in a Flux script or in the InfluxDB user interface (UI).
|
||||
The following task options are available (see the combined example after this list):
|
||||
|
||||
- [name](#name)
|
||||
- [every](#every)
|
||||
- [cron](#cron)
|
||||
- [offset](#offset)
|
||||
- [concurrency](#concurrency)
|
||||
- [retry](#retry)
|
||||
|
||||
{{% note %}}
|
||||
`every` and `cron` are mutually exclusive, but at least one is required.
|
||||
{{% /note %}}
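For reference, here is a minimal sketch that combines several of these options in a single `option task` record; the values are illustrative:

```js
option task = {
  name: "example-task", // required
  every: 1h,            // use either every or cron, not both
  offset: 10m,
  concurrency: 1,
  retry: 2
}
```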
|
||||
|
||||
## name
|
||||
The name of the task. _**Required**_.
|
||||
|
||||
_**Data type:** String_
|
||||
|
||||
```js
|
||||
option task = {
|
||||
name: "taskName",
|
||||
// ...
|
||||
}
|
||||
```
|
||||
|
||||
## every
|
||||
The interval at which the task runs.
|
||||
|
||||
_**Data type:** Duration_
|
||||
|
||||
```js
|
||||
option task = {
|
||||
// ...
|
||||
every: 1h,
|
||||
}
|
||||
```
|
||||
|
||||
{{% note %}}
|
||||
In the InfluxDB UI, the **Interval** field sets this option.
|
||||
{{% /note %}}
|
||||
|
||||
## cron
|
||||
The [cron expression](https://en.wikipedia.org/wiki/Cron#Overview) that
|
||||
defines the schedule on which the task runs.
|
||||
Cron scheduling is based on system time.
|
||||
|
||||
_**Data type:** String_
|
||||
|
||||
```js
|
||||
option task = {
|
||||
// ...
|
||||
cron: "0 * * * *",
|
||||
}
|
||||
```
|
||||
|
||||
## offset
|
||||
Delays the execution of the task but preserves the original time range.
|
||||
For example, if a task is to run on the hour, a `10m` offset will delay it to 10
|
||||
minutes after the hour, but all time ranges defined in the task are relative to
|
||||
the originally scheduled execution time, not the delayed start time.
|
||||
A common use case is offsetting execution to account for data that may arrive late.
|
||||
|
||||
_**Data type:** Duration_
|
||||
|
||||
```js
|
||||
option task = {
|
||||
// ...
|
||||
offset: "0 * * * *",
|
||||
}
|
||||
```
|
||||
|
||||
## concurrency
|
||||
The number of task executions that can run concurrently.
|
||||
If the concurrency limit is reached, all subsequent executions are queued until
|
||||
other running task executions complete.
|
||||
|
||||
_**Data type:** Integer_
|
||||
|
||||
```js
|
||||
option task = {
|
||||
// ...
|
||||
concurrency: 2,
|
||||
}
|
||||
```
|
||||
|
||||
## retry
|
||||
The number of times to retry the task before it is considered failed.
|
||||
|
||||
_**Data type:** Integer_
|
||||
|
||||
```js
|
||||
option task = {
|
||||
// ...
|
||||
retry: 2,
|
||||
}
|
||||
```
|
||||
{{< duplicate-oss >}}
|
|
@ -1,5 +1,5 @@
|
|||
---
|
||||
title: Use the `influx query` command
|
||||
title: Use the influx query command
|
||||
description: Use the influx CLI to query InfluxDB data.
|
||||
weight: 204
|
||||
menu:
|
||||
|
@ -7,6 +7,8 @@ menu:
|
|||
name: Use the influx CLI
|
||||
parent: Execute queries
|
||||
influxdb/cloud/tags: [query]
|
||||
related:
|
||||
- /influxdb/cloud/reference/cli/influx/query/
|
||||
---
|
||||
|
||||
{{< duplicate-oss >}}
|
|
@ -87,6 +87,6 @@ The [Execute queries](/influxdb/cloud/query-data/execute-queries) guide walks th
|
|||
the different tools available for querying InfluxDB with Flux.
|
||||
|
||||
<div class="page-nav-btns">
|
||||
<a class="btn prev" href="/v2.0/query-data/">Introduction to Flux</a>
|
||||
<a class="btn next" href="/v2.0/query-data/get-started/query-influxdb/">Query InfluxDB with Flux</a>
|
||||
<a class="btn prev" href="/influxdb/cloud/query-data/">Introduction to Flux</a>
|
||||
<a class="btn next" href="/influxdb/cloud/query-data/get-started/query-influxdb/">Query InfluxDB with Flux</a>
|
||||
</div>
|
||||
|
|