Merge branch 'master' into fix/run-task

pull/3875/head
sunbryely-influxdata 2022-06-17 09:34:19 -07:00 committed by GitHub
commit e44c471b27
326 changed files with 16944 additions and 3864 deletions

.gitignore

@ -9,3 +9,5 @@ node_modules
/content/influxdb/*/api/*.html
/api-docs/redoc-static.html*
.vscode/*
.idea
package-lock.json


@ -1,6 +1,7 @@
{
"cSpell.words": [
"CLOCKFACE",
"compactible",
"dbrp",
"downsample",
"eastus",


@ -386,6 +386,12 @@ This shortcode must be closed with `{{% /tabs %}}`.
**Note**: The `%` characters used in this shortcode indicate that the contents should be processed as Markdown.
The `{{% tabs %}}` shortcode has an optional `style` argument that lets you
assign CSS classes to the tabs' HTML container. The following classes are available:
- **small**: Tab buttons are smaller and don't scale to fit the width.
- **even-wrap**: Prevents uneven tab widths when tabs are forced to wrap.
`{{% tab-content %}}`
This shortcode creates a container for a content block.
Each content block in the tab group needs to be wrapped in this shortcode.
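For example, a minimal tab group that uses the `small` style might look like the following sketch (this assumes the `{{< tabs-wrapper >}}` container shortcode used for tabbed content; labels and content are placeholders):

~~~md
{{< tabs-wrapper >}}
{{% tabs "small" %}}
[First tab](#)
[Second tab](#)
{{% /tabs %}}
{{% tab-content %}}
Content for the first tab.
{{% /tab-content %}}
{{% tab-content %}}
Content for the second tab.
{{% /tab-content %}}
{{< /tabs-wrapper >}}
~~~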
@ -1108,6 +1114,52 @@ Use the `{{< caps >}}` shortcode to format text to match those buttons.
Click {{< caps >}}Add Data{{< /caps >}}
```
#### Code callouts
Use the `{{< code-callout >}}` shortcode to highlight and emphasize a specific
piece of code in a code block. Provide the string to highlight in the code block.
Include a syntax identifier for the code block so the called-out code is styled properly.
~~~md
{{< code-callout "03a2bbf46249a000" >}}
```sh
http://localhost:8086/orgs/03a2bbf46249a000/...
```
{{< /code-callout >}}
~~~
#### InfluxDB University banners
Use the `{{< influxdbu >}}` shortcode to add an InfluxDB University banner that
points to the InfluxDB University site or a specific course.
Use the default banner template, a predefined course template, or fully customize
the content of the banner.
```html
<!-- Default banner -->
{{< influxdbu >}}
<!-- Predefined course banner -->
{{< influxdbu "influxdb-101" >}}
<!-- Custom banner -->
{{< influxdbu title="Course title" summary="Short course summary." action="Take the course" link="https://university.influxdata.com/" >}}
```
##### Course templates
Use one of the following course templates:
- influxdb-101
- telegraf-102
- flux-103
##### Custom banner content
Use the following shortcode parameters to customize the content of the InfluxDB
University banner:
- **title**: Course or banner title
- **summary**: Short description shown under the title
- **action**: Text of the button
- **link**: URL the button links to
### Reference content
The InfluxDB documentation is "task-based," meaning content primarily focuses on
what a user is **doing**, not what they are **using**.

File diff suppressed because it is too large.


@ -2,14 +2,20 @@ openapi: 3.0.0
info:
title: InfluxDB Cloud v1 compatibility API documentation
version: 0.1.0
description: |
The InfluxDB 1.x compatibility `/write` and `/query` endpoints work with
InfluxDB 1.x client libraries and third-party integrations like Grafana
and others.
description: >
The InfluxDB 1.x compatibility /write and /query endpoints work with
InfluxDB 1.x client libraries and third-party integrations like Grafana and
others.
If you want to use the latest InfluxDB `/api/v2` API instead,
see the [InfluxDB v2 API documentation](/influxdb/cloud/api/).
If you want to use the latest InfluxDB /api/v2 API instead, see the
[InfluxDB v2 API documentation](/influxdb/cloud/api/).
This documentation is generated from the
[InfluxDB OpenAPI
specification](https://raw.githubusercontent.com/influxdata/openapi/master/contracts/swaggerV1Compat.yml).
servers:
- url: /
paths:


@ -32,7 +32,7 @@ weight: 102
v1frontmatter="---
title: InfluxDB $titleVersion v1 compatibility API documentation
description: >
The InfluxDB v1 compatibility API provides a programmatic interface for interactions with InfluxDB $titleVersion using InfluxDB v1.x compatibly endpoints.
The InfluxDB v1 compatibility API provides a programmatic interface for interactions with InfluxDB $titleVersion using InfluxDB v1.x compatibility endpoints.
layout: api
menu:
$menu:
@ -58,6 +58,7 @@ weight: 304
--options.sortPropsAlphabetically \
--options.menuToggle \
--options.hideHostname \
--options.hideDownloadButton \
--options.noAutoAuth \
--templateOptions.version="$version" \
--templateOptions.titleVersion="$titleVersion" \
@ -69,6 +70,7 @@ weight: 304
--title="InfluxDB $titleVersion v1 compatibility API documentation" \
--options.sortPropsAlphabetically \
--options.menuToggle \
--options.hideDownloadButton \
--options.hideHostname \
--options.noAutoAuth \
--templateOptions.version="$version" \


@ -99,7 +99,7 @@ function postProcess() {
version="$2"
apiVersion="$3"
openapiCLI="@redocly/openapi-cli"
openapiCLI=" @redocly/cli"
npx --version


@ -1,3 +1,7 @@
title: InfluxDB Cloud API Service
description: |
The InfluxDB v2 API provides a programmatic interface for all interactions with InfluxDB. Access the InfluxDB API using the `/api/v2/` endpoint.
The InfluxDB v2 API provides a programmatic interface for all interactions with InfluxDB.
Access the InfluxDB API using the `/api/v2/` endpoint.
This documentation is generated from the
[InfluxDB OpenAPI specification](https://raw.githubusercontent.com/influxdata/openapi/master/contracts/ref/cloud.yml).


@ -1 +1,8 @@
title: InfluxDB Cloud v1 compatibility API documentation
description: |
The InfluxDB 1.x compatibility /write and /query endpoints work with InfluxDB 1.x client libraries and third-party integrations like Grafana and others.
If you want to use the latest InfluxDB /api/v2 API instead, see the [InfluxDB v2 API documentation](/influxdb/cloud/api/).
This documentation is generated from the
[InfluxDB OpenAPI specification](https://raw.githubusercontent.com/influxdata/openapi/master/contracts/swaggerV1Compat.yml).


@ -1,4 +1,7 @@
title: InfluxDB OSS API Service
version: 2.0.0
version: 2.2.0
description: |
The InfluxDB v2 API provides a programmatic interface for all interactions with InfluxDB. Access the InfluxDB API using the `/api/v2/` endpoint.
This documentation is generated from the
[InfluxDB OpenAPI specification](https://raw.githubusercontent.com/influxdata/openapi/docs-release/influxdb-oss/contracts/ref/oss.yml).


@ -1 +1,9 @@
title: InfluxDB OSS v1 compatibility API documentation
version: 2.2.0 v1 compatibility
description: |
The InfluxDB 1.x compatibility /write and /query endpoints work with InfluxDB 1.x client libraries and third-party integrations like Grafana and others.
If you want to use the latest InfluxDB /api/v2 API instead, see the [InfluxDB v2 API documentation](/influxdb/v2.2/api/).
This documentation is generated from the
[InfluxDB OpenAPI specification](https://raw.githubusercontent.com/influxdata/openapi/docs-release/influxdb-oss/contracts/swaggerV1Compat.yml).

File diff suppressed because it is too large.


@ -1,15 +1,21 @@
openapi: 3.0.0
info:
title: InfluxDB OSS v1 compatibility API documentation
version: 0.1.0
description: |
The InfluxDB 1.x compatibility `/write` and `/query` endpoints work with
InfluxDB 1.x client libraries and third-party integrations like Grafana
and others.
version: 2.2.0 v1 compatibility
description: >
The InfluxDB 1.x compatibility /write and /query endpoints work with
InfluxDB 1.x client libraries and third-party integrations like Grafana and
others.
If you want to use the latest InfluxDB `/api/v2` API instead,
see the [InfluxDB v2 API documentation](/influxdb/cloud/api/).
If you want to use the latest InfluxDB /api/v2 API instead, see the
[InfluxDB v2 API documentation](/influxdb/v2.2/api/).
This documentation is generated from the
[InfluxDB OpenAPI
specification](https://raw.githubusercontent.com/influxdata/openapi/docs-release/influxdb-oss/contracts/swaggerV1Compat.yml).
servers:
- url: /
paths:


@ -1,12 +1,25 @@
///////////////////////////// Make headers linkable /////////////////////////////
$(".article--content h2, \
.article--content h3, \
.article--content h4, \
.article--content h5, \
.article--content h6" ).each(function() {
var link = "<a href=\"#" + $(this).attr("id") + "\"></a>"
$(this).wrapInner( link );
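// Headings that are eligible for anchor links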
var headingWhiteList = $("\
.article--content h2, \
.article--content h3, \
.article--content h4, \
.article--content h5, \
.article--content h6 \
");
var headingBlackList = ("\
.influxdbu-banner h4 \
");
headingElements = headingWhiteList.not(headingBlackList);
headingElements.each(function() {
function getLink(element) {
return ((element.attr('href') === undefined ) ? $(element).attr("id") : element.attr('href'))
}
var link = "<a href=\"#" + getLink($(this)) + "\"></a>"
$(this).wrapInner( link );
})
///////////////////////////////// Smooth Scroll /////////////////////////////////


@ -264,10 +264,13 @@ $(window).focus(function() {
////////////////////////// Modal window interactions ///////////////////////////
////////////////////////////////////////////////////////////////////////////////
// Toggle the URL selector modal window
function toggleModal() {
$(".modal").fadeToggle(200).toggleClass("open")
}
// General modal window interactions are controlled in modals.js
// Open the InfluxDB URL selector modal
$(".url-trigger").click(function(e) {
e.preventDefault()
toggleModal('#influxdb-url-list')
})
// Set the selected URL radio buttons to :checked
function setRadioButtons() {
@ -276,12 +279,6 @@ function setRadioButtons() {
$('input[name="influxdb-oss-url"][value="' + currentUrls.oss + '"]').prop("checked", true)
}
// Toggle modal window on click
$("#modal-close, .modal-overlay, .url-trigger").click(function(e) {
e.preventDefault()
toggleModal()
})
// Add checked to fake-radio if cluster is selected on page load
if ($("ul.clusters label input").is(":checked")) {

assets/js/modals.js

@ -0,0 +1,20 @@
////////////////////////////////////////////////////////////////////////////////
/////////////////////// General modal window interactions //////////////////////
////////////////////////////////////////////////////////////////////////////////
// Toggle the URL selector modal window
function toggleModal(modalID="") {
if ($(".modal").hasClass("open")) {
$(".modal").fadeOut(200).removeClass("open");
$(".modal-content").delay(400).hide(0);
} else {
$(".modal").fadeIn(200).addClass("open");
$(`${modalID}.modal-content`).show();
}
}
// Close modal window on click
$("#modal-close, .modal-overlay").click(function(e) {
e.preventDefault()
toggleModal()
})


@ -0,0 +1,97 @@
/*
* This file controls the interactions and life-cycles of the page feedback
* buttons and modal.
*/
// Collect data from the page path
const pathArr = location.pathname.split('/').slice(1, -1)
const pageData = {
host: location.hostname,
path: location.pathname,
product: pathArr[0],
version: (/^v\d/.test(pathArr[1]) || pathArr[1] === "cloud" ? pathArr[1].replace(/^v/, '') : "n/a"),
}
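// Example: a path of "/influxdb/v2.2/write-data/" yields product "influxdb" and version "2.2"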
// Hijack form submission and send feedback data to be stored.
// Called by onSubmit in each feedback form.
function submitFeedbackForm(formID) {
// Collect form data, structure as an object, and remove fname honeypot
const formData = new FormData(document.forms[formID]);
const formDataObj = Object.fromEntries(formData.entries());
const {fname, ...feedbackData} = formDataObj;
// Build lp fields from form data
let fields = "";
for (let key in feedbackData) {
// Strip out newlines and escape double quotes if the field key is "feedback"
if (key == "feedback-text") {
fields += key + '="' + feedbackData[key].replace(/(\r\n|\n+|\r+)/gm, " ").replace(/(\")/gm, '\\"') + '",';
} else {
fields += key + "=" + feedbackData[key] + ",";
}
}
fields = fields.substring(0, fields.length -1);
// Build lp using page data and the fields string
const lp = `feedback,host=${pageData.host},path=${pageData.path},product=${pageData.product},version=${pageData.version} ${fields}`
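// Example result (hypothetical field values): feedback,host=docs.influxdata.com,path=/influxdb/v2.2/,product=influxdb,version=2.2 feedback-text="Helpful page"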
// Use a honeypot form field to detect a bot
// If the value of the honeypot field is greater than 0, the submitter is a bot
function isBot() {
const honeypot = formData.get('fname');
return (honeypot.length > 0)
}
// If the submitter is not a bot, send the feedback data
if (!isBot()) {
xhr = new XMLHttpRequest();
xhr.open('POST', 'https://j32dswat7l.execute-api.us-east-1.amazonaws.com/prod');
xhr.setRequestHeader('X-Requested-With', 'XMLHttpRequest');
xhr.setRequestHeader('Access-Control-Allow-Origin', `${location.protocol}//${location.host}`);
xhr.setRequestHeader('Content-Type', 'text/plain; charset=utf-8');
xhr.setRequestHeader('Accept', 'application/json');
xhr.send(lp);
}
return false;
}
/////////////////////////////// Click life-cycles //////////////////////////////
// Trigger the lifecycle of page feedback (yes/no) radio select buttons
function submitLifeCycle() {
$('.helpful .loader-wrapper').fadeIn(200);
$('.helpful #thank-you').delay(800).fadeIn(200);
$('.helpful .loader-wrapper').delay(1000).hide(0);
}
// Submit the feedback form and close the feedback modal window.
// Called by onclick in the page-feedback modal submit button.
function submitLifeCycleAndClose() {
submitFeedbackForm('pagefeedbacktext');
$('.modal #page-feedback .loader-wrapper').css('display', 'flex').hide().fadeIn(200);
$('.modal #page-feedback #thank-you').css('display', 'flex').hide().delay(800).fadeIn(200);
$('.modal #page-feedback textarea').css('box-shadow', 'none')
$('.modal #page-feedback .loader-wrapper').delay(1000).hide(0);
setTimeout(function() {toggleModal()}, 1800);
return false;
}
//////////////////////////////// Event triggers ////////////////////////////////
// Submit page feedback (yes/no) on radio select and trigger life cycle
$('#pagefeedback input[type=radio]').change(function() {
$('form#pagefeedback').submit();
submitLifeCycle()
})
// Toggle the feedback modal when user selects that the page is not helpful
$('#pagefeedback #not-helpful input[type=radio]').click(function() {
setTimeout(function() {toggleModal('#page-feedback')}, 400);
})
// Toggle the feedback modal when user selects that the page is not helpful
$('.modal #no-thanks').click(function() {
toggleModal();
})


@ -78,6 +78,13 @@
&:hover {
color: $article-link-hover;
}
&.help-link {
display: inline-block;
width: 1rem;
height: 1rem;
border-radius: 50%;
background: $article-bg;
}
}
strong {
@ -116,6 +123,7 @@
"article/flex",
"article/flux",
"article/html-diagrams",
"article/influxdbu",
"article/keybinding",
"article/list-filters",
"article/lists",


@ -21,6 +21,9 @@ a {
flex-grow: 1;
}
// Used to hide honeypot form fields
.bowlofsweets {display: none;}
////////////////////////////////////////////////////////////////////////////////
///////////////////////////////// MEDIA QUERIES ////////////////////////////////
////////////////////////////////////////////////////////////////////////////////


@ -0,0 +1,26 @@
.loader,
.loader:after {
border-radius: 50%;
width: 10em;
height: 10em;
}
.loader {
font-size: 3px;
position: relative;
border-top: 1.1em solid rgba($article-text, 0.1);
border-right: 1.1em solid rgba($article-text, 0.1);
border-bottom: 1.1em solid rgba($article-text, 0.1);
border-left: 1.1em solid $b-dodger;
transform: translateZ(0);
animation: load8 .6s infinite linear;
}
@keyframes load8 {
0% {
-webkit-transform: rotate(0deg);
transform: rotate(0deg);
}
100% {
-webkit-transform: rotate(360deg);
transform: rotate(360deg);
}
}


@ -0,0 +1,127 @@
.modal {
display: none;
padding: 1rem;
position: fixed;
top: 0;
width: 100%;
height: 100%;
z-index: 100;
.modal-overlay {
position: absolute;
top:0;
left:0;
width: 100%;
height: 100%;
@include gradient($grad-Miyazakisky);
opacity: .85;
}
.modal-wrapper {
display: flex;
justify-content: center;
align-items: flex-start;
}
.modal-body {
position: relative;
display: flex;
overflow: hidden;
width: 100%;
max-width: 650px;
max-height: 97.5vh;
margin-top: 10vh;
padding: .75rem 2rem 1.5rem;
border-radius: $radius * 1.5;
background: $article-bg;
color: $article-text;
font-size: 1rem;
transition: margin .4s;
}
&.open {
.modal-body { margin-top: 0; }
}
#modal-close {
position: absolute;
padding: .25rem;
top: 1rem;
right: 1rem;
color: rgba($article-text, .5);
transition: color .2s;
text-decoration: none;
&:hover {
color: $article-text;
}
}
.modal-content{
display: none;
overflow: visible;
width: 100%;
h3 {
color: $article-heading;
font-weight: $medium;
font-size: 1.4rem;
margin-bottom: 1rem;
}
h4 {
color: $article-heading;
font-weight: $medium;
margin: 1rem 0 .5rem $radius;
}
h5 {
margin: .5rem 0 0;
color: $article-bold;
}
p,li {
margin: .25rem 0;
line-height: 1.5rem;
strong {
font-weight: $medium;
color: $article-bold;
}
&.note {
padding-top: 1.25rem;
margin-top: 1.5rem;
color: rgba($article-text, .5);
border-top: 1px solid rgba($article-text, .25);
font-size: .9rem;
font-style: italic;
}
}
a {
color: $article-link;
font-weight: $medium;
text-decoration: none;
transition: color .2s;
&:hover {
color: $article-link-hover;
}
}
}
@import "modals/url-selector";
@import "modals/page-feedback";
}
///////////////////////////////// MEDIA QUERIES ////////////////////////////////
@include media(small) {
.modal {
padding: .5rem;
overflow: scroll;
.modal-body {
padding: .5rem 1.5rem 1.5rem;
}
}
}


@ -1,405 +0,0 @@
.modal {
display: none;
padding: 1rem;
position: fixed;
top: 0;
width: 100%;
height: 100%;
z-index: 100;
.modal-overlay {
position: absolute;
top:0;
left:0;
width: 100%;
height: 100%;
@include gradient($grad-Miyazakisky);
opacity: .85;
}
.modal-wrapper {
display: flex;
justify-content: center;
align-items: flex-start;
}
.modal-body {
position: relative;
display: flex;
overflow: hidden;
max-width: 650px;
max-height: 97.5vh;
margin-top: 10vh;
padding: .75rem 2rem 1.5rem;
border-radius: $radius * 1.5;
background: $article-bg;
color: $article-text;
font-size: 1rem;
transition: margin .4s;
}
&.open {
.modal-body { margin-top: 0; }
}
#modal-close {
position: absolute;
padding: .25rem;
top: 1rem;
right: 1rem;
color: rgba($article-text, .5);
transition: color .2s;
text-decoration: none;
&:hover {
color: $article-text;
}
}
.modal-content{
overflow: auto;
h3 {
color: $article-heading;
font-weight: $medium;
font-size: 1.4rem;
margin-bottom: 1rem;
}
h4 {
color: $article-heading;
font-weight: $medium;
margin: 1rem 0 .5rem $radius;
}
h5 {
margin: .5rem 0 0;
color: $article-bold;
}
p,li {
margin: .25rem 0;
line-height: 1.5rem;
strong {
font-weight: $medium;
color: $article-bold;
}
&.note {
padding-top: 1.25rem;
margin-top: 1.5rem;
color: rgba($article-text, .5);
border-top: 1px solid rgba($article-text, .25);
font-size: .9rem;
font-style: italic;
}
}
a {
color: $article-link;
font-weight: $medium;
text-decoration: none;
transition: color .2s;
&:hover {
color: $article-link-hover;
}
}
}
.products {
display: flex;
flex-direction: column;
flex-wrap: wrap;
flex-grow: 1;
justify-content: flex-start;
}
.product {
.providers{
display: flex;
flex-wrap: wrap;
padding: .5rem 1rem;
background: rgba($article-text, .05);
border-radius: $radius;
.provider {
flex-grow: 1;
&:not(:last-child) {margin-right: 1rem;}
}
ul {
margin: .5rem .5rem .5rem 0;
padding: 0;
list-style: none;
&.clusters {
padding-left: 1.75rem;
}
}
p.region {
.fake-radio {
position: relative;
display: inline-block;
height: 1.15em;
width: 1.15em;
margin: 0 0.3rem 0 0.1rem;
border-radius: $radius;
border: 1.5px solid transparent;
background: rgba($article-text, 0.05);
border: 1.5px solid rgba($article-text, 0.2);
vertical-align: text-top;
cursor: pointer;
&:after {
content: "";
position: absolute;
display: block;
height: .5rem;
width: .5rem;
top: .23rem;
left: .23rem;
border-radius: 50%;
background: rgba($article-text, .3);
opacity: 0;
transition: opacity .2s;
}
&.checked:after {
opacity: 1;
}
}
}
}
}
li.custom {
display: flex;
align-items: center;
}
#custom-url {
display: inline-block;
width: 100%;
padding-left: .5rem;
position: relative;
&:after {
display: none;
content: attr(data-message);
position: absolute;
top: -1.8rem;
right: 0;
font-size: .85rem;
font-weight: $medium;
color: $r-fire;
}
&.error {
&:after { display: block; }
input#custom-url-field {
border-color: $r-fire;
&:focus {
border-color: $r-fire;
box-shadow: 1px 1px 10px rgba($r-fire,0.5);
}
}
}
input {
&#custom-url-field {
font-family: $rubik;
font-weight: $medium;
background: $modal-field-bg;
border-radius: $radius;
border: 1px solid $sidebar-search-bg;
padding: .5em;
width: 100%;
color: $sidebar-search-text;
transition-property: border, box-shadow;
transition-duration: .2s;
box-shadow: 2px 2px 6px $sidebar-search-shadow;
&:focus {
outline: none;
border-color: $sidebar-search-highlight;
box-shadow: 1px 1px 10px rgba($sidebar-search-highlight, .5);
border-radius: $radius;
}
&::placeholder {
color: rgba($sidebar-search-text, .45);
font-weight: normal;
font-style: italic;
}
}
}
}
.radio {
position: relative;
display: inline-block;
height: 1.15em;
width: 1.15em;
background: rgba($article-text, .05);
margin: 0 .3rem 0 .1rem;
vertical-align: text-top;
border-radius: $radius;
cursor: pointer;
border: 1.5px solid rgba($article-text, .2);
user-select: none;
}
input[type='radio'] {
margin-right: -1.1rem ;
padding: 0;
vertical-align: top;
opacity: 0;
cursor: pointer;
& + .radio:after {
content: "";
display: block;
position: absolute;
height: .5rem;
width: .5rem;
border-radius: 50%;
background: $article-link;
top: 50%;
left: 50%;
opacity: 0;
transform: scale(2) translate(-20%, -20%);
transition: all .2s;
}
&:checked + .radio:after {
opacity: 1;
transform: scale(1) translate(-50%, -50%);
}
}
}
td,label,li {
&:after {
display: inline;
vertical-align: middle;
font-style: italic;
font-weight: $medium;
font-size: .75em;
margin-left: .35rem;
padding: .1rem .3rem .12rem .32rem;
line-height: .75rem;
border-radius: 1rem;
}
&.beta:after {
content: "beta";
color: $g20-white;
@include gradient($grad-blue);
}
}
label:after {
margin-left: .15rem;
}
/////////////////////////// InfluxDB Preference Tabs ///////////////////////////
#pref-tabs {
padding: 0;
margin: 0 0 -5px;
list-style: none;
display: flex;
justify-content: space-between;
align-items: center;
}
.pref-tab {
padding: .75rem 1.25rem;
margin-right: 5px;
text-align: center;
font-weight: bold;
width: 49%;
color: rgba($article-text, .7);
background: rgba($article-text, .05);
border-radius: $radius;
cursor: pointer;
transition: color .2s;
&:last-child {
margin-right: 0;
}
&:hover {
color: $article-link;
}
&.active {
color: $g20-white;
@include gradient($article-btn-gradient);
}
span.ephemeral { display: inline; }
span.abbr:after {
display: none;
content: ".";
}
}
.product {
&.active { display: block; }
&.inactive { display: none; }
}
///////////////////////////// InfluxDB URL Triggers ////////////////////////////
.article--content {
.select-url {
margin: -2.5rem 0 1rem;
text-align: right;
display: none;
}
.url-trigger {
padding: .25rem .5rem;
display: inline-block;
font-size: .85rem;
font-style: italic;
color: rgba($article-tab-code-text, .5);
background: $article-code-bg;
border-radius: 0 0 $radius $radius;
&:before {
content: "\e924";
display: inline-block;
margin-right: .35rem;
font-family: "icomoon-v2";
font-style: normal;
font-size: .8rem;
}
&:hover {
color: $article-tab-code-text;
}
}
li .url-trigger { padding: 0rem .5rem; }
.code-tab-content {
.select-url{margin-top: -3.25rem}
}
}
///////////////////////////////// MEDIA QUERIES ////////////////////////////////
@include media(small) {
.modal {
padding: .5rem;
overflow: scroll;
.modal-body {
padding: .5rem 1.5rem 1.5rem;
}
}
.pref-tab {
span.ephemeral { display: none; }
span.abbr:after { display: inline; }
}
}


@ -48,3 +48,39 @@ a.btn {
font-size: 1.1rem;
}
}
///////////////////////////// InfluxDB URL Triggers ////////////////////////////
.select-url {
margin: -2.5rem 0 1rem;
text-align: right;
display: none;
}
.url-trigger {
padding: .25rem .5rem;
display: inline-block;
font-size: .85rem;
font-style: italic;
color: rgba($article-tab-code-text, .5);
background: $article-code-bg;
border-radius: 0 0 $radius $radius;
&:before {
content: "\e924";
display: inline-block;
margin-right: .35rem;
font-family: "icomoon-v2";
font-style: normal;
font-size: .8rem;
}
&:hover {
color: $article-tab-code-text;
}
}
li .url-trigger { padding: 0rem .5rem; }
.code-tab-content {
.select-url{margin-top: -3.25rem}
}


@ -64,9 +64,9 @@ pre {
border-radius: $radius;
overflow-x: scroll;
overflow-y: hidden;
font-size: 1rem;
code {
padding: 0;
font-size: 1rem;
line-height: 1.7rem;
white-space: pre;
}


@ -1,3 +1,5 @@
////////////////////////// Support and Feedback Block //////////////////////////
.feedback {
display: flex;
justify-content: space-between;
@ -61,6 +63,10 @@
color: $article-text !important;
background: $feedback-btn-bg !important;
&:after{
@include gradient($article-btn-gradient);
}
&:hover {
color: $g20-white !important;
}
@ -84,6 +90,77 @@
}
}
///////////////////////////// Page Helpful Section /////////////////////////////
.helpful {
position: relative;
display: flex;
flex-direction: row;
justify-content: space-between;
p {margin-bottom: 0;}
label.radio-btns {
position: relative;
display: inline-block;
min-width: 4rem;
padding: .5rem 1rem;
font-size: .95rem;
font-weight: $medium;
text-align: center;
color: $article-bold;
border-radius: 3px;
background: rgba($article-text, .1);
cursor: pointer;
z-index: 1;
&:after{
content: "";
display: block;
position: absolute;
margin: 0;
padding: 0;
top: 0;
left: 0;
width: 100%;
height: 100%;
border-radius: 3px;
min-width: 4rem;
z-index: -1;
opacity: 0;
transition: opacity .2s, color .2s;
}
&#helpful:after {@include gradient($grad-green-shade)}
&#not-helpful:after {@include gradient($grad-red)}
&:hover {
color: $g20-white;
&:after {opacity: 1}
}
}
input[type='radio'] {
display: none;
}
.loader-wrapper, #thank-you {
position: absolute;
display: none;
top: 0;
left: 0;
width: 100%;
height: 100%;
background-color: $article-bg;
}
.loader-wrapper {
z-index: 5;
.loader {margin: 0 auto;}
}
#thank-you {z-index: 10; p {text-align: center;}}
}
///////////////////////////////// Media Queries ////////////////////////////////
@include media(medium) {


@ -9,11 +9,15 @@
box-shadow: 1px 3px 10px $article-shadow;
& > ul { padding: 0; margin: 0;
li { line-height: 2rem; color: $article-code; }
ul { padding-left: 2rem;
ul {
padding-left: 2rem;
margin: 0;
li {
position: relative;
margin: 0;
line-height: 2rem;
margin: 0 0 0 -1.45rem;
padding-left: 1.45rem;
line-height: 2.5rem;
border-left: 1px solid $article-code;
&:before {
content: "";
@ -21,6 +25,7 @@
width: 1rem;
height: .25rem;
margin-right: .55rem;
margin-left: -1.45rem;
border-top: 1px solid $article-code;
}
&:last-child {
@ -32,7 +37,7 @@
padding: 0;
left: 0;
top: 0;
height: 1.1rem;
height: 1.4rem;
border-left: 1px solid $article-code;
}
}


@ -0,0 +1,84 @@
.influxdbu-banner {
background-color: $br-dark-blue;
margin: 2.5rem 0 3rem;
padding: 2.5rem;
border-radius: 1.5rem;
box-shadow: 2px 2px 8px $article-shadow;
background-image: url('/svgs/home-bg-circle-right.svg');
background-size: cover;
display: flex;
justify-content: space-between;
align-items: center;
.influxdbu-logo {
max-width: 170px;
margin: 0 0 1rem;
box-shadow: none;
}
.banner-content {
margin-right: 1rem;
max-width: 65%;
h4 {
margin-top: -1.75rem;
font-size: 1.5rem;
font-style: normal;
color: $g20-white;
}
p {
margin-bottom: 0;
color: $g20-white;
strong { color: $g20-white; }
}
}
.banner-cta {
position: relative;
a {
display: block;
position: relative;
padding: 1rem 1.5rem;
color: $g20-white;
text-align: center;
border-radius: $radius;
@include gradient($grad-burningDusk);
z-index: 1;
&:after {
content: "";
position: absolute;
padding: 0;
top: 0;
right: 0;
width: 100%;
height: 100%;
border-radius: $radius;
@include gradient($grad-coolDusk, 270deg);
transition: opacity .2s;
z-index: -1;
opacity: 0;
}
&:hover:after {opacity: 1;}
}
}
}
////////////////////////////////////////////////////////////////////////////////
///////////////////////////////// MEDIA QUERIES ////////////////////////////////
////////////////////////////////////////////////////////////////////////////////
@include media(small) {
.influxdbu-banner {
flex-direction: column;
.banner-content {
max-width: 100%;
h4 {margin-top: -1.25rem;}
}
.banner-cta {
margin-top: 1.75rem;
width: 100%;
}
}
}


@ -2,6 +2,9 @@
ol, ul {
padding-left: 1.6rem;
margin: 1rem 0 1.5rem 0;
ol, ul {margin: 0;}
}
ul {
@ -10,6 +13,16 @@ ul {
content: "" !important;
display: none;
}
ol {
list-style: revert;
li::marker {
font-weight: bold;
color: $article-bold;
}
}
}
ol {
@ -35,15 +48,12 @@ ol {
ul {
counter-reset: item;
& > ol {counter-reset: item;}
}
}
}
& > ol,
& > ul {
margin: 1rem 0 1.5rem 0;
}
li {
margin: .25rem 0;
&:not(:last-child) {


@ -16,6 +16,7 @@
font-weight: $medium;
padding: .65rem 1.25rem;
display: inline-block;
white-space: nowrap;
text-align: center;
color: $article-tab-text !important;
border-radius: $radius;
@ -59,6 +60,10 @@
padding: .35rem 1rem;
}
}
// Suggested style for tabs that have uneven button widths because of wrapping.
&.even-wrap a {
flex-basis: 25%;
}
}
.code-tabs {


@ -36,14 +36,40 @@
color: $article-code-accent7;
}
}
&.external {
margin: 0 0 0 -.25rem;
display: inline-block;
padding: .1rem .75rem;
background-color: rgba($article-text, .1);
border-radius: 1rem;
font-size: .9rem;
font-weight: $medium;
}
a.external {
position: relative;
margin: 0 0 0 -.25rem;
display: inline-block;
padding: .4rem .85rem;
background-color: rgba($article-text, .1);
border-radius: 1rem;
font-size: .9rem;
font-weight: $medium;
color: $article-text;
&:after {
content: "?";
position: absolute;
display: block;
top: -3px;
right: -3px;
padding: 0.1rem 0.3rem .04rem;
color: $g20-white;
@include gradient($article-btn-gradient);
border-radius: 50%;
font-weight: bold;
line-height: .8rem;
font-size: .7rem;
box-shadow: 1px 1px 4px $sidebar-search-shadow;
opacity: 0;
transition: opacity .2s;
}
&:hover {
color: $article-text;
&:after {opacity: 1;}
}
}


@ -0,0 +1,112 @@
////////////////////// Styles for the page feedback modal //////////////////////
$button-padding: .65rem 1.1rem;
.form-buttons {
display: flex;
justify-content: end;
margin-top: 1rem;
}
textarea {
resize: vertical;
font-family: $proxima;
font-weight: $medium;
background: $sidebar-search-bg;
border-radius: $radius;
border: 1px solid $sidebar-search-bg;
margin-top: 1rem;
padding: .5em;
width: 100%;
height: 8rem;
color: $sidebar-search-text;
transition-property: border, box-shadow;
transition-duration: .2s;
box-shadow: 2px 2px 6px $sidebar-search-shadow;
&:focus {
outline: none;
border-color: $sidebar-search-highlight;
box-shadow: 1px 1px 10px rgba($sidebar-search-highlight, .5);
border-radius: $radius;
}
&::placeholder {
color: rgba($sidebar-search-text, .45);
font-weight: normal;
font-style: italic;
}
}
input[type='submit'] {
padding: $button-padding;
@include gradient($article-btn-gradient);
border: none;
border-radius: $radius;
color: $g20-white;
font-weight: $medium;
opacity: 1;
transition: opacity .2s;
z-index: 1;
&:hover {opacity: 0;}
}
.submit-wrapper {
position: relative;
@include gradient($article-btn-gradient-hover);
border-radius: $radius;
color: $g20-white;
font-weight: $medium;
&:before{
content: "Submit";
position: absolute;
pointer-events: none;
top: 0;
left: 0;
padding: $button-padding;
z-index: 0;
}
}
#no-thanks {
margin-right: .5rem;
padding: $button-padding;
background: rgba($article-text, .1);
color: rgba($article-bold, .65);
border-radius: $radius;
cursor: pointer;
transition: color .2s;
&:hover {color: $article-bold;}
}
.lifecycle-wrapper {
position: relative;
}
.loader-wrapper, #thank-you {
position: absolute;
display: none;
justify-content: center;
align-items: center;
top: 0;
left: 0;
width: 100%;
height: 100%;
background-color: $article-bg;
}
.loader-wrapper {
z-index: 5;
.loader {margin: 0 auto;}
}
#thank-you {
z-index: 10;
font-size: 1.2rem;
font-style: italic;
font-weight: $medium;
color: rgba($article-text, .65);
p {
text-align: center;
}
}


@ -0,0 +1,254 @@
/////////////////// Styles for the InfluxDB URL selector modal //////////////////
.products {
display: flex;
flex-direction: column;
flex-wrap: wrap;
flex-grow: 1;
justify-content: flex-start;
}
.product {
.providers{
display: flex;
flex-wrap: wrap;
padding: .5rem 1rem;
background: rgba($article-text, .05);
border-radius: $radius;
.provider {
flex-grow: 1;
&:not(:last-child) {margin-right: 1rem;}
}
ul {
margin: .5rem .5rem .5rem 0;
padding: 0;
list-style: none;
&.clusters {
padding-left: 1.75rem;
}
}
p.region {
.fake-radio {
position: relative;
display: inline-block;
height: 1.15em;
width: 1.15em;
margin: 0 0.3rem 0 0.1rem;
border-radius: $radius;
border: 1.5px solid transparent;
background: rgba($article-text, 0.05);
border: 1.5px solid rgba($article-text, 0.2);
vertical-align: text-top;
cursor: pointer;
&:after {
content: "";
position: absolute;
display: block;
height: .5rem;
width: .5rem;
top: .23rem;
left: .23rem;
border-radius: 50%;
background: rgba($article-text, .3);
opacity: 0;
transition: opacity .2s;
}
&.checked:after {
opacity: 1;
}
}
}
}
}
li.custom {
display: flex;
align-items: center;
}
#custom-url {
display: inline-block;
width: 100%;
padding-left: .5rem;
position: relative;
&:after {
display: none;
content: attr(data-message);
position: absolute;
top: -1.8rem;
right: 0;
font-size: .85rem;
font-weight: $medium;
color: $r-fire;
}
&.error {
&:after { display: block; }
input#custom-url-field {
border-color: $r-fire;
&:focus {
border-color: $r-fire;
box-shadow: 1px 1px 10px rgba($r-fire,0.5);
}
}
}
input {
&#custom-url-field {
font-family: $proxima;
font-weight: $medium;
background: $modal-field-bg;
border-radius: $radius;
border: 1px solid $sidebar-search-bg;
padding: .5em;
width: 100%;
color: $sidebar-search-text;
transition-property: border, box-shadow;
transition-duration: .2s;
box-shadow: 2px 2px 6px $sidebar-search-shadow;
&:focus {
outline: none;
border-color: $sidebar-search-highlight;
box-shadow: 1px 1px 10px rgba($sidebar-search-highlight, .5);
border-radius: $radius;
}
&::placeholder {
color: rgba($sidebar-search-text, .45);
font-weight: normal;
font-style: italic;
}
}
}
}
.radio {
position: relative;
display: inline-block;
height: 1.15em;
width: 1.15em;
background: rgba($article-text, .05);
margin: 0 .3rem 0 .1rem;
vertical-align: text-top;
border-radius: $radius;
cursor: pointer;
border: 1.5px solid rgba($article-text, .2);
user-select: none;
}
input[type='radio'] {
margin-right: -1.1rem ;
padding: 0;
vertical-align: top;
opacity: 0;
cursor: pointer;
& + .radio:after {
content: "";
display: block;
position: absolute;
height: .5rem;
width: .5rem;
border-radius: 50%;
background: $article-link;
top: 50%;
left: 50%;
opacity: 0;
transform: scale(2) translate(-20%, -20%);
transition: all .2s;
}
&:checked + .radio:after {
opacity: 1;
transform: scale(1) translate(-50%, -50%);
}
}
td,label,li {
&:after {
display: inline;
vertical-align: middle;
font-style: italic;
font-weight: $medium;
font-size: .75em;
margin-left: .35rem;
padding: .1rem .3rem .12rem .32rem;
line-height: .75rem;
border-radius: 1rem;
}
&.beta:after {
content: "beta";
color: $g20-white;
@include gradient($grad-blue);
}
}
label:after {
margin-left: .15rem;
}
/////////////////////////// InfluxDB Preference Tabs ///////////////////////////
#pref-tabs {
padding: 0;
margin: 0 0 -5px;
list-style: none;
display: flex;
justify-content: space-between;
align-items: center;
}
.pref-tab {
padding: .75rem 1.25rem;
margin-right: 5px;
text-align: center;
font-weight: bold;
width: 49%;
color: rgba($article-text, .7);
background: rgba($article-text, .05);
border-radius: $radius;
cursor: pointer;
transition: color .2s;
&:last-child {
margin-right: 0;
}
&:hover {
color: $article-link;
}
&.active {
color: $g20-white;
@include gradient($article-btn-gradient);
}
span.ephemeral { display: inline; }
span.abbr:after {
display: none;
content: ".";
}
}
.product {
&.active { display: block; }
&.inactive { display: none; }
}
///////////////////////////////// MEDIA QUERIES ////////////////////////////////
@include media(small) {
.pref-tab {
span.ephemeral { display: none; }
span.abbr:after { display: inline; }
}
}


@ -85,6 +85,7 @@
.plugin-card {
.github-link { @include gradient($telegraf-btn-gradient); }
a.external:after { @include gradient($telegraf-btn-gradient); }
&:hover .github-link { @include gradient($telegraf-btn-gradient); }
}


@ -23,7 +23,8 @@
"layouts/algolia-search-overrides",
"layouts/landing",
"layouts/error-page",
"layouts/url-selector",
"layouts/modals",
"layouts/loading-spinner",
"layouts/feature-callouts",
"layouts/v1-overrides",
"layouts/notifications",


@ -21,7 +21,7 @@ hrefTargetBlank = true
smartDashes = false
[taxonomies]
"influxdb/v2.2/tag" = "influxdb/v2.1/tags"
"influxdb/v2.2/tag" = "influxdb/v2.2/tags"
"influxdb/v2.1/tag" = "influxdb/v2.1/tags"
"influxdb/v2.0/tag" = "influxdb/v2.0/tags"
"influxdb/cloud/tag" = "influxdb/cloud/tags"


@ -517,7 +517,7 @@ TLS_CERTIFICATE=my.crt TLS_PRIVATE_KEY=my.key chronograf
#### Docker example with environment variables
```sh
docker run -v /host/path/to/certs:/certs -e TLS_CERTIFICATE=/certs/my.crt -e TLS_PRIVATE_KEY=/certs/my.key quay.io/influxdb/chronograf:latest
docker run -v /host/path/to/certs:/certs -e TLS_CERTIFICATE=/certs/my.crt -e TLS_PRIVATE_KEY=/certs/my.key chronograf:{{< current-version >}}
```
### Testing with self-signed certificates


@ -482,7 +482,7 @@ TLS_CERTIFICATE=my.crt TLS_PRIVATE_KEY=my.key chronograf
#### Docker example with environment variables
```sh
docker run -v /host/path/to/certs:/certs -e TLS_CERTIFICATE=/certs/my.crt -e TLS_PRIVATE_KEY=/certs/my.key quay.io/influxdb/chronograf:latest
docker run -v /host/path/to/certs:/certs -e TLS_CERTIFICATE=/certs/my.crt -e TLS_PRIVATE_KEY=/certs/my.key chronograf:{{< current-version >}}
```
### Testing with self-signed certificates


@ -521,7 +521,7 @@ When configured, users can use HTTPS to securely communicate with your Chronogra
Using HTTPS helps guard against nefarious agents sniffing the JWT and using it to spoof a valid user against the Chronograf server.
{{% /note %}}
### Configuring TLS for Chronograf
### Configure TLS for Chronograf
Chronograf server has command line and environment variable options to specify the certificate and key files.
The server reads and parses a public/private key pair from these files.
@ -531,30 +531,62 @@ All Chronograf command line options have corresponding environment variables.
**To configure Chronograf to support TLS:**
1. Specify the certificate file using the `TLS_CERTIFICATE` environment variable (or the `--cert` CLI option).
2. Specify the key file using the `TLS_PRIVATE_KEY` environment variable (or `--key` CLI option).
1. Specify the certificate file using the `TLS_CERTIFICATE` environment variable or the `--cert` CLI option.
2. Specify the key file using the `TLS_PRIVATE_KEY` environment variable or `--key` CLI option.
{{% note %}}
{{% note %}}
If both the TLS certificate and key are in the same file, specify them using the `TLS_CERTIFICATE` environment variable (or the `--cert` CLI option).
{{% /note %}}
{{% /note %}}
3. _(Optional)_ To specify which TLS cipher suites to allow, use the `TLS_CIPHERS` environment variable or the `--tls-ciphers` CLI option.
Chronograf supports all cipher suites in the
[Go `crypto/tls` package](https://golang.org/pkg/crypto/tls/#pkg-constants)
and, by default, allows them all.
4. _(Optional)_ To specify the minimum and maximum TLS versions to allow, use the
`TLS_MIN_VERSION` and `TLS_MAX_VERSION` environment variables or the
`--tls-min-version` and `--tls-max-version` CLI options.
By default, the minimum TLS version allowed is `tls1.2` and the maximum version is
unlimited.
#### Example with CLI options
```sh
chronograf --cert=my.crt --key=my.key
chronograf \
--cert=my.crt \
--key=my.key \
--tls-ciphers=TLS_RSA_WITH_AES_256_CBC_SHA,TLS_AES_128_GCM_SHA256 \
--tls-min-version=tls1.2 \
--tls-max-version=tls1.3
```
#### Example with environment variables
```sh
TLS_CERTIFICATE=my.crt TLS_PRIVATE_KEY=my.key chronograf
TLS_CERTIFICATE=my.crt \
TLS_PRIVATE_KEY=my.key \
TLS_CIPHERS=TLS_RSA_WITH_AES_256_CBC_SHA,TLS_AES_128_GCM_SHA256 \
TLS_MIN_VERSION=tls1.2 \
TLS_MAX_VERSION=tls1.3 \
chronograf
```
#### Docker example with environment variables
```sh
docker run -v /host/path/to/certs:/certs -e TLS_CERTIFICATE=/certs/my.crt -e TLS_PRIVATE_KEY=/certs/my.key quay.io/influxdb/chronograf:latest
docker run \
-v /host/path/to/certs:/certs \
-e TLS_CERTIFICATE=/certs/my.crt \
-e TLS_PRIVATE_KEY=/certs/my.key \
-e TLS_CIPHERS=TLS_RSA_WITH_AES_256_CBC_SHA,TLS_AES_128_GCM_SHA256 \
-e TLS_MIN_VERSION=tls1.2 \
-e TLS_MAX_VERSION=tls1.3 \
chronograf:{{< current-version >}}
```
### Testing with self-signed certificates
In a production environment you should not use self-signed certificates, but for testing it is fast to create your own certificates.
### Test with self-signed certificates
To test your setup, you can use a self-signed certificate.
{{% warn %}}
Don't use self-signed certificates in production environments.
{{% /warn %}}
To create a certificate and key in one file with OpenSSL:


@ -12,12 +12,12 @@ menu:
To enhance security, configure Chronograf to authenticate and authorize with [OAuth 2.0](https://oauth.net/) and use TLS/HTTPS.
(Basic authentication with username and password is also available.)
* [Configure Chronograf to authenticate with OAuth 2.0](#configure-chronograf-to-authenticate-with-oauth-20)
- [Configure Chronograf to authenticate with OAuth 2.0](#configure-chronograf-to-authenticate-with-oauth-20)
1. [Generate a Token Secret](#generate-a-token-secret)
2. [Set configurations for your OAuth provider](#set-configurations-for-your-oauth-provider)
3. [Configure authentication duration](#configure-authentication-duration)
* [Configure Chronograf to authenticate with a username and password](#configure-chronograf-to-authenticate-with-a-username-and-password)
* [Configure TLS (Transport Layer Security) and HTTPS](#configure-tls-transport-layer-security-and-https)
- [Configure Chronograf to authenticate with a username and password](#configure-chronograf-to-authenticate-with-a-username-and-password)
- [Configure TLS (Transport Layer Security) and HTTPS](#configure-tls-transport-layer-security-and-https)
## Configure Chronograf to authenticate with OAuth 2.0
@ -51,6 +51,7 @@ Chronograf will use this secret to generate the JWT Signature for all access tok
1. Generate a high-entropy pseudo-random string.
For example, to do this with OpenSSL, run this command:
```sh
openssl rand -base64 256 | tr -d '\n'
```
@ -377,7 +378,7 @@ export HEROKU_ORGS=hill-valley-preservation-sociey,the-pinheads
Set the following environment variables in `/etc/default/chronograf`:
```
```txt
GENERIC_TOKEN_URL=https://login.microsoftonline.com/<<TENANT-ID>>/oauth2/token
TENANT=<<TENANT-ID>>
GENERIC_NAME=AzureAD
@ -540,10 +541,10 @@ Use of the TLS cryptographic protocol provides server authentication, data confi
When configured, users can use HTTPS to securely communicate with your Chronograf applications.
{{% note %}}
Using HTTPS helps guard against nefarious agents sniffing the JWT and using it to spoof a valid user against the Chronograf server.
HTTPS helps prevent nefarious agents stealing the JWT and using it to spoof a valid user against the server.
{{% /note %}}
### Configuring TLS for Chronograf
### Configure TLS for Chronograf
Chronograf server has command line and environment variable options to specify the certificate and key files.
The server reads and parses a public/private key pair from these files.
@ -551,32 +552,64 @@ The files must contain PEM-encoded data.
All Chronograf command line options have corresponding environment variables.
**To configure Chronograf to support TLS:**
To configure Chronograf to support TLS, do the following:
1. Specify the certificate file using the `TLS_CERTIFICATE` environment variable (or the `--cert` CLI option).
2. Specify the key file using the `TLS_PRIVATE_KEY` environment variable (or `--key` CLI option).
1. Specify the certificate file using the `TLS_CERTIFICATE` environment variable or the `--cert` CLI option.
2. Specify the key file using the `TLS_PRIVATE_KEY` environment variable or `--key` CLI option.
{{% note %}}
{{% note %}}
If both the TLS certificate and key are in the same file, specify them using the `TLS_CERTIFICATE` environment variable (or the `--cert` CLI option).
{{% /note %}}
{{% /note %}}
3. _(Optional)_ To specify which TLS cipher suites to allow, use the `TLS_CIPHERS` environment variable or the `--tls-ciphers` CLI option.
Chronograf supports all cipher suites in the
[Go `crypto/tls` package](https://golang.org/pkg/crypto/tls/#pkg-constants)
and, by default, allows them all.
4. _(Optional)_ To specify the minimum and maximum TLS versions to allow, use the
`TLS_MIN_VERSION` and `TLS_MAX_VERSION` environment variables or the
`--tls-min-version` and `--tls-max-version` CLI options.
By default, the minimum TLS version allowed is `tls1.2` and the maximum version is
unlimited.
#### Example with CLI options
```sh
chronograf --cert=my.crt --key=my.key
chronograf \
--cert=my.crt \
--key=my.key \
--tls-ciphers=TLS_RSA_WITH_AES_256_CBC_SHA,TLS_AES_128_GCM_SHA256 \
--tls-min-version=tls1.2 \
--tls-max-version=tls1.3
```
#### Example with environment variables
```sh
TLS_CERTIFICATE=my.crt TLS_PRIVATE_KEY=my.key chronograf
TLS_CERTIFICATE=my.crt \
TLS_PRIVATE_KEY=my.key \
TLS_CIPHERS=TLS_RSA_WITH_AES_256_CBC_SHA,TLS_AES_128_GCM_SHA256 \
TLS_MIN_VERSION=tls1.2 \
TLS_MAX_VERSION=tls1.3 \
chronograf
```
#### Docker example with environment variables
```sh
docker run -v /host/path/to/certs:/certs -e TLS_CERTIFICATE=/certs/my.crt -e TLS_PRIVATE_KEY=/certs/my.key quay.io/influxdb/chronograf:latest
docker run \
-v /host/path/to/certs:/certs \
-e TLS_CERTIFICATE=/certs/my.crt \
-e TLS_PRIVATE_KEY=/certs/my.key \
-e TLS_CIPHERS=TLS_RSA_WITH_AES_256_CBC_SHA,TLS_AES_128_GCM_SHA256 \
-e TLS_MIN_VERSION=tls1.2 \
-e TLS_MAX_VERSION=tls1.3 \
chronograf:{{< current-version >}}
```
### Testing with self-signed certificates
In a production environment you should not use self-signed certificates, but for testing it is fast to create your own certificates.
### Test with self-signed certificates
To test your setup, you can use a self-signed certificate.
{{% warn %}}
Don't use self-signed certificates in production environments.
{{% /warn %}}
To create a certificate and key in one file with OpenSSL:


@ -82,6 +82,22 @@ From the **Alert Rules** page in Chronograf:
7. Click **Save Rule**.
## Enable and disable alert rules
To enable or disable alerts, click **{{< icon "alert" "v2">}} Alerting** in the left navigation bar and select **Alert Rules**.
- To enable an alert rule, locate the rule and click the **Task Enabled** box. A blue dot indicates the task is enabled, and a message confirms the rule was successfully enabled.
- To disable an alert rule, click the **Task Enabled** box. The blue dot disappears, and a message confirms the alert was successfully disabled.
## Delete alert rules
To delete an alert, click **{{< icon "alert" "v2">}} Alerting** in the left navigation bar and select **Alert Rules**.
1. Locate the alert you want to delete, and then hover over the **Task Enabled** box. A **Delete** button appears to the right.
2. Click **Delete** to delete the rule.
**Note:** Deleting a rule cannot be undone; the rule is removed permanently.
## View alert history
Chronograf lets you view your alert history on the **Alert History** page.


@ -46,7 +46,7 @@ For more information, see [InfluxQL support](/influxdb/cloud/query-data/influxql
## Explore data with Flux
Flux is InfluxData's new functional data scripting language designed for querying, analyzing, and acting on time series data. To learn more about Flux, see [Getting started with Flux](/{{< latest "influxdb" "v2" >}}/query-data/get-started).
Flux is InfluxData's new functional data scripting language designed for querying, analyzing, and acting on time series data. The **Script Builder** lets you build a complete Flux script scoped to a selected time range. View new tag keys and tag values based on already selected tag keys and tag values. Search for key names and values. To learn more about Flux, see [Getting started with Flux](/{{< latest "influxdb" "v2" >}}/query-data/get-started).
1. Open the Data Explorer by clicking **Explore** in the left navigation bar.
2. Select **Flux** as the source type.


@ -18,15 +18,15 @@ If you have not set up your meta nodes, please visit
[Installing meta nodes](/enterprise_influxdb/v1.7/install-and-deploy/production_installation/meta_node_installation/).
Completing the following steps without meta nodes can cause serious problems for your cluster.
<br>
# Data node setup description and requirements
The Production Installation process sets up two [data nodes](/enterprise_influxdb/v1.7/concepts/glossary#data-node)
and each data node runs on its own server.
You **must** have a minimum of two data nodes in a cluster.
InfluxDB Enterprise clusters require at least two data nodes for high availability and redundancy.
<br>
Note: that there is no requirement for each data node to run on its own
`hh`, `wal`, `data`, and `meta` directories are required on all data nodes and are created as part of the installation process.
**Note:** There is no requirement for each data node to run on its own
server. However, best practices are to deploy each data node on a dedicated server.
See the
@ -72,16 +72,19 @@ Ultimately, use entries similar to the following (hostnames and domain IP addres
| A | ```enterprise-data-01.mydomain.com``` | ```<Data_1_IP>``` |
| A | ```enterprise-data-02.mydomain.com``` | ```<Data_2_IP>``` |
> **Verification steps:**
>
{{% note %}}
**Verification steps:**
Before proceeding with the installation, verify on each meta and data server that the other
servers are resolvable. Here is an example set of shell commands using `ping`:
>
```bash
ping -qc 1 enterprise-meta-01
ping -qc 1 enterprise-meta-02
ping -qc 1 enterprise-meta-03
ping -qc 1 enterprise-data-01
ping -qc 1 enterprise-data-02
```
{{% /note %}}
We highly recommend that each server be able to resolve the IP from the hostname alone as shown here.
Resolve any connectivity issues before proceeding with the installation.
@ -114,26 +117,26 @@ For added security, follow these steps to verify the signature of your InfluxDB
1. Download and import InfluxData's public key:
```
```bash
curl -s https://repos.influxdata.com/influxdb.key | gpg --import
```
2. Download the signature file for the release by adding `.asc` to the download URL.
For example:
```
```bash
wget https://dl.influxdata.com/enterprise/releases/influxdb-data-{{< latest-patch >}}_c{{< latest-patch >}}.x86_64.rpm.asc
```
3. Verify the signature with `gpg --verify`:
```
```bash
gpg --verify influxdb-data-{{< latest-patch >}}_c{{< latest-patch >}}.x86_64.rpm.asc influxdb-data-{{< latest-patch >}}_c{{< latest-patch >}}.x86_64.rpm
```
The output from this command should include the following:
```
```bash
gpg: Good signature from "InfluxDB Packaging Service <support@influxdb.com>" [unknown]
```
@ -207,16 +210,18 @@ On systemd systems, enter:
sudo systemctl start influxdb
```
> **Verification steps:**
>
Check to see that the process is running by entering:
>
ps aux | grep -v grep | grep influxdb
>
You should see output similar to:
>
influxdb 2706 0.2 7.0 571008 35376 ? Sl 15:37 0:16 /usr/bin/influxd -config /etc/influxdb/influxdb.conf
**Verification steps:**
Check to see that the process is running by entering:
```bash
ps aux | grep -v grep | grep influxdb
```
You should see output similar to:
```bash
influxdb 2706 0.2 7.0 571008 35376 ? Sl 15:37 0:16 /usr/bin/influxd -config /etc/influxdb/influxdb.conf
```
If you do not see the expected output, the process is either not launching or is exiting prematurely. Check the [logs](/enterprise_influxdb/v1.7/administration/logs/) for error messages and verify the previous setup steps are complete.
@ -251,25 +256,27 @@ to the cluster.
> **Verification steps:**
>
Issue the following command on any meta node:
>
influxd-ctl show
>
```bash
influxd-ctl show
```
The expected output is:
>
Data Nodes
==========
ID TCP Address Version
4 enterprise-data-01:8088 {{< latest-patch >}}-c{{< latest-patch >}}
5 enterprise-data-02:8088 {{< latest-patch >}}-c{{< latest-patch >}}
>
Meta Nodes
==========
TCP Address Version
enterprise-meta-01:8091 {{< latest-patch >}}-c{{< latest-patch >}}
enterprise-meta-02:8091 {{< latest-patch >}}-c{{< latest-patch >}}
enterprise-meta-03:8091 {{< latest-patch >}}-c{{< latest-patch >}}
```bash
Data Nodes
==========
ID TCP Address Version Labels
4 cluster-node-01:8088 1.7.x-c1.7.x {}
5 cluster-node-02:8088 1.7.x-c1.7.x {}
Meta Nodes
==========
TCP Address Version Labels
cluster-node-01:8091 1.7.x-c1.7.x {}
cluster-node-02:8091 1.7.x-c1.7.x {}
cluster-node-03:8091 1.7.x-c1.7.x {}
```
The output should include every data node that was added to the cluster.
The first data node added should have `ID=N`, where `N` is equal to one plus the number of meta nodes.


@ -24,8 +24,9 @@ The Production Installation process sets up two [data nodes](/enterprise_influxd
and each data node runs on its own server.
You **must** have a minimum of two data nodes in a cluster.
InfluxDB Enterprise clusters require at least two data nodes for high availability and redundancy.
<br>
Note: that there is no requirement for each data node to run on its own
`hh`, `wal`, `data`, and `meta` directories are required on all data nodes and are created as part of the installation process.
**Note:** There is no requirement for each data node to run on its own
server. However, best practices are to deploy each data node on a dedicated server.
See the
@ -211,19 +212,17 @@ On systemd systems, enter:
sudo systemctl start influxdb
```
{{% note %}}
**Verification steps:**
Check to see that the process is running by entering:
ps aux | grep -v grep | grep influxdb
```bash
ps aux | grep -v grep | grep influxdb
```
You should see output similar to:
influxdb 2706 0.2 7.0 571008 35376 ? Sl 15:37 0:16 /usr/bin/influxd -config /etc/influxdb/influxdb.conf
{{% /note %}}
```bash
influxdb 2706 0.2 7.0 571008 35376 ? Sl 15:37 0:16 /usr/bin/influxd -config /etc/influxdb/influxdb.conf
```
If you do not see the expected output, the process is either not launching or is exiting prematurely. Check the [logs](/enterprise_influxdb/v1.8/administration/logs/) for error messages and verify the previous setup steps are complete.
If you see the expected output, repeat for the remaining data nodes.
@ -252,29 +251,28 @@ The expected output is:
Added data node y at enterprise-data-0x:8088
```
{{% note %}}
**Verification steps:**
Issue the following command on any meta node:
influxd-ctl show
```bash
influxd-ctl show
```
The expected output is:
```bash
Data Nodes
==========
ID TCP Address Version
4 enterprise-data-01:8088 {{< latest-patch >}}-c{{< latest-patch >}}
5 enterprise-data-02:8088 {{< latest-patch >}}-c{{< latest-patch >}}
Data Nodes
==========
ID TCP Address Version Labels
4 cluster-node-01:8088 1.8.x-c1.8.x {}
5 cluster-node-02:8088 1.8.x-c1.8.x {}
Meta Nodes
==========
TCP Address Version
enterprise-meta-01:8091 {{< latest-patch >}}-c{{< latest-patch >}}
enterprise-meta-02:8091 {{< latest-patch >}}-c{{< latest-patch >}}
enterprise-meta-03:8091 {{< latest-patch >}}-c{{< latest-patch >}}
{{% /note %}}
Meta Nodes
==========
TCP Address Version Labels
cluster-node-01:8091 1.8.x-c1.8.x {}
cluster-node-02:8091 1.8.x-c1.8.x {}
cluster-node-03:8091 1.8.x-c1.8.x {}
```
The output should include every data node that was added to the cluster.
The first data node added should have `ID=N`, where `N` is equal to one plus the number of meta nodes.


@ -30,7 +30,7 @@ The following are the most frequently overlooked requirements when installing a
- [Ensure connectivity between machines](#ensure-connectivity-between-machines)
- [Synchronize time between hosts](#synchronize-time-between-hosts)
- [Use SSDs](#use-ssds)
- [Do not use NFS](#do-not-use-nfs-mounts)
- [Do not use NFS or NFS-based services](#do-not-use-nfs-or-nfs-based-services)
- [Disable swap](#disable-swap)
- [Use three and only three meta nodes](#use-three-and-only-three-meta-nodes)
- [Meta and data nodes are fully independent](#meta-and-data-nodes-are-fully-independent)
@ -56,8 +56,13 @@ SANs must guarantee at least 1000 IOPS is always available to InfluxDB Enterpris
nodes or they may not be sufficient.
SSDs are strongly recommended, and we have had no reports of IOPS contention from any customers running on SSDs.
#### Do not use NFS
For disk storage, use block devices only. InfluxDB Enterprise does not support NFS (Network File System)-mounted devices.
#### Do not use NFS or NFS-based services
For disk storage, use block devices only.
InfluxDB Enterprise does **not** support NFS (Network File System)-mounted devices
or services such as [AWS EFS](https://aws.amazon.com/efs/),
[Google Filestore](https://cloud.google.com/filestore), or
[Azure files](https://azure.microsoft.com/en-us/services/storage/files/).
#### Disable swap

View File

@ -9,6 +9,38 @@ menu:
parent: About the project
---
## 1.9.7 [2022-06-06]
{{% warn %}}
An edge case regression was introduced into this version that may cause a constant build-up of hinted handoff if writes are rejected due to malformed requests. We're reverting to InfluxDB Enterprise 1.9.6 as the official stable version. If you experience write errors and hinted handoff growth, we recommend reverting to 1.9.6 or upgrading to 1.9.8 when released.
{{% /warn %}}
<!--
### Features
- Expose passive node feature to influxd-ctl and the API.
- Throttle inter-node data replication, both incoming writes and hinted hand-off, when errors are encountered.
#### Flux updates
- Add [http requests package](/{{< latest "flux" >}}/stdlib/experimental/http/requests/).
- Add [isType()](/{{< latest "flux" >}}/stdlib/types/istype/) function.
- Add [display()](/{{< latest "flux" >}}/stdlib/universe/display/) function.
- Enhancements to the following functions: [increase()](/{{< latest "flux" >}}/stdlib/universe/increase/), [sort()](/{{< latest "flux" >}}/stdlib/universe/sort/), [derivative()](/{{< latest "flux" >}}/stdlib/universe/derivative/), [union()](/{{< latest "flux" >}}/stdlib/universe/union/), [timeShift()](/{{< latest "flux" >}}/stdlib/universe/timeshift/), vectorization to applicable functions such as [map()](/{{< latest "flux" >}}/stdlib/universe/map/).
- Add TCP connection pooling to [mqtt.publish()](/{{< latest "flux" >}}/stdlib/experimental/mqtt/publish/) function when called in a map() function.
### Bug fixes
- Fix race condition causing `influxd-ctl restore` command to fail.
#### Error Messaging
- Improve error messaging for `max series per database exceeded` error.
- Improve influxd-ctl error messages when invalid JSON is received.
- Add detail to `error creating subscription` message.
- `DROP SHARD` now successfully ignores "shard not found" errors.
### Maintenance updates
- Upgrade to Go 1.17.9
- Update to [Flux v0.161.0](/flux/v0.x/release-notes/#v01610-2022-03-24).
-->
## 1.9.6 [2022-02-16]
{{% note %}} InfluxDB Enterprise offerings are no longer available on AWS, Azure, and GCP marketplaces. Please [contact Sales](https://www.influxdata.com/contact-sales/) to request a license key to [install InfluxDB Enterprise in your own environment](/enterprise_influxdb/v1.9/introduction/installation/).
@ -39,7 +71,6 @@ menu:
#### Data
- Adjust shard start and end times to avoid overlaps in existing shards. This resolves issues with existing shards (truncated or not) that have a different shard duration than the current default.
- `DROP SHARD` now successfully ignores "shard not found errors."
#### Errors

View File

@ -27,10 +27,12 @@ Depending on the volume of data to be protected and your application requirement
- [Backup and restore utilities](#backup-and-restore-utilities) — For most applications
- [Exporting and importing data](#exporting-and-importing-data) — For large datasets
> **Note:** Use the [`backup` and `restore` utilities (InfluxDB OSS 1.5 and later)](/enterprise_influxdb/v1.9/administration/backup-and-restore/) to:
>
> - Restore InfluxDB Enterprise backup files to InfluxDB OSS instances.
> - Back up InfluxDB OSS data that can be restored in InfluxDB Enterprise clusters.
{{% note %}}
Use the [`backup` and `restore` utilities (InfluxDB OSS 1.5 and later)](/enterprise_influxdb/v1.9/administration/backup-and-restore/) to:
- Restore InfluxDB Enterprise backup files to InfluxDB OSS instances.
- Back up InfluxDB OSS data that can be restored in InfluxDB Enterprise clusters.
{{% /note %}}
## Backup and restore utilities
@ -40,7 +42,7 @@ Most InfluxDB Enterprise applications can use the backup and restore utilities.
Use the `backup` and `restore` utilities to back up and restore between `influxd`
instances with the same versions or with only minor version differences.
For example, you can backup from {{< latest-patch version="1.8" >}} and restore on {{< latest-patch >}}.
For example, you can back up from {{< latest-patch minorVersionOffset=-1 >}} and restore on {{< latest-patch >}}.
### Backup utility

View File

@ -9,3 +9,5 @@ menu:
---
{{< children >}}
{{< influxdbu title="Configuring InfluxDB Enterprise: Best Practices" summary="Learn about best practices and optimization techniques for InfluxDB Enterprise in this **free** InfluxDB University course." action="Take the course" link="https://university.influxdata.com/courses/configuring-influxdb-enterprise-best-practices-tutorial/" >}}

View File

@ -742,11 +742,9 @@ Environment variable: `INFLUXDB_HINTED_HANDOFF_RETRY_INTERVAL`
#### `retry-max-interval`
Default is `"10s"`.
Default is `"200s"`.
The maximum interval after which the hinted handoff retries a write after the write fails.
The `retry-max-interval` option is no longer in use and will be removed from the configuration file in a future release.
Changing the `retry-max-interval` setting has no effect on your cluster.
Environment variable: `INFLUXDB_HINTED_HANDOFF_RETRY_MAX_INTERVAL`

View File

@ -18,8 +18,9 @@ Configure InfluxDB Enterprise to use LDAP (Lightweight Directory Access Protocol
- Synchronize InfluxDB and LDAP so each LDAP request doesn't need to be queried
{{% note %}}
To configure InfluxDB Enterprise to support LDAP, all users must be managed in the remote LDAP service.
If LDAP is configured and enabled, users **must** authenticate through LDAP, including users who may have existed before enabling LDAP.
LDAP **requires** JWT authentication. For more information, see [Configure authentication using JWT tokens](/enterprise_influxdb/v1.9/administration/configure/security/authentication/#configure-authentication-using-jwt-tokens).
To configure InfluxDB Enterprise to support LDAP, all users must be managed in the remote LDAP service. If LDAP is configured and enabled, users **must** authenticate through LDAP, including users who may have existed before enabling LDAP.
{{% /note %}}
## Configure LDAP for an InfluxDB Enterprise cluster

View File

@ -10,3 +10,5 @@ menu:
---
{{< children hlevel="h2" type="list" >}}
{{< influxdbu title="Configuring InfluxDB Enterprise: Best Practices" summary="Learn about best practices and optimization techniques for InfluxDB Enterprise in this **free** InfluxDB University course." action="Take the course" link="https://university.influxdata.com/courses/configuring-influxdb-enterprise-best-practices-tutorial/" >}}

View File

@ -9,79 +9,11 @@ menu:
parent: Concepts
---
## data node
A node that runs the data service.
For high availability, installations must have at least two data nodes.
The number of data nodes in your cluster must be the same as your highest
replication factor.
Any replication factor greater than two gives you additional fault tolerance and
query capacity within the cluster.
Data node sizes will depend on your needs.
The Amazon EC2 m4.large or m4.xlarge are good starting points.
Related entries: [data service](#data-service), [replication factor](#replication-factor)
## data service
Stores all time series data and handles all writes and queries.
Related entries: [data node](#data-node)
## meta node
A node that runs the meta service.
For high availability, installations must have three meta nodes.
Meta nodes can be very modestly sized instances like an EC2 t2.micro or even a
nano.
For additional fault tolerance installations may use five meta nodes; the
number of meta nodes must be an odd number.
Related entries: [meta service](#meta-service)
## meta service
The consistent data store that keeps state about the cluster, including which
servers, databases, users, continuous queries, retention policies, subscriptions,
and blocks of time exist.
Related entries: [meta node](#meta-node)
## replication factor
The attribute of the retention policy that determines how many copies of the
data are stored in the cluster.
InfluxDB replicates data across `N` data nodes, where `N` is the replication
factor.
To maintain data availability for queries, the replication factor should be less
than or equal to the number of data nodes in the cluster:
* Data is fully available when the replication factor is greater than the
number of unavailable data nodes.
* Data may be unavailable when the replication factor is less than the number of
unavailable data nodes.
Any replication factor greater than two gives you additional fault tolerance and
query capacity within the cluster.
## web console
Legacy user interface for the InfluxDB Enterprise.
This has been deprecated and the suggestion is to use [Chronograf](/{{< latest "chronograf" >}}/introduction/).
If you are transitioning from the Enterprise Web Console to Chronograf, see how to [transition from the InfluxDB Web Admin Interface](/chronograf/v1.7/guides/transition-web-admin-interface/).
<!-- --- -->
## aggregation
An InfluxQL function that returns an aggregated value across a set of points.
For a complete list of the available and upcoming aggregations, see [InfluxQL functions](/enterprise_influxdb/v1.9/query_language/functions/#aggregations).
For a complete list of the available and upcoming aggregations,
see [InfluxQL functions](/enterprise_influxdb/v1.9/query_language/functions/#aggregations).
Related entries: [function](#function), [selector](#selector), [transformation](#transformation)
@ -107,6 +39,27 @@ See [Continuous Queries](/enterprise_influxdb/v1.9/query_language/continuous_que
Related entries: [function](#function)
## data node
A node that runs the data service.
For high availability, installations must have at least two data nodes.
The number of data nodes in your cluster must be the same as your highest
replication factor.
Any replication factor greater than two gives you additional fault tolerance and
query capacity within the cluster.
Data node sizes will depend on your needs.
The Amazon EC2 m4.large or m4.xlarge are good starting points.
Related entries: [data service](#data-service), [replication factor](#replication-factor)
## data service
Stores all time series data and handles all writes and queries.
Related entries: [data node](#data-node)
## database
A logical container for users, retention policies, continuous queries, and time series data.
@ -191,6 +144,26 @@ Measurements are strings.
Related entries: [field](#field), [series](#series)
## meta node
A node that runs the meta service.
For high availability, installations must have three meta nodes.
Meta nodes can be very modestly sized instances like an EC2 t2.micro or even a
nano.
For additional fault tolerance installations may use five meta nodes; the
number of meta nodes must be an odd number.
Related entries: [meta service](#meta-service)
## meta service
The consistent data store that keeps state about the cluster, including which
servers, databases, users, continuous queries, retention policies, subscriptions,
and blocks of time exist.
Related entries: [meta node](#meta-node)
## metastore
Contains internal information about the status of the system.
@ -198,9 +171,6 @@ The metastore contains the user information, databases, retention policies, shar
Related entries: [database](#database), [retention policy](#retention-policy-rp), [user](#user)
<!--
## permission
-->
## node
An independent `influxd` process.
@ -211,6 +181,15 @@ Related entries: [server](#server)
The local server's nanosecond timestamp.
<!-- ## passive node (experimental)
Passive nodes act as load balancers--they accept write calls, perform shard lookup and RPC calls (on active data nodes), and distribute writes to active data nodes. They do not own shards or accept writes.
**Note:** This is an experimental feature. -->
<!--
## permission
-->
## point
In InfluxDB, a point represents a single data record, similar to a row in a SQL database table. Each point:
@ -238,12 +217,23 @@ Related entries: [point](#point), [schema](#schema), [values per second](#values
An operation that retrieves data from InfluxDB.
See [Data Exploration](/enterprise_influxdb/v1.9/query_language/explore-data/), [Schema Exploration](/enterprise_influxdb/v1.9/query_language/explore-schema/), [Database Management](/enterprise_influxdb/v1.9/query_language/manage-database/).
## replication factor
## replication factor (RF)
The attribute of the retention policy that determines how many copies of data to concurrently store (or retain) in the cluster. Replicating copies ensures that data is available when a data node (or more) is unavailable.
The attribute of the retention policy that determines how many copies of the
data are stored in the cluster. Replicating copies ensures that data is accessible when one or more data nodes are unavailable.
InfluxDB replicates data across `N` data nodes, where `N` is the replication
factor.
For three or fewer data nodes, the default replication factor equals the number of data nodes.
For more than three nodes, the default replication factor is 3. To change the default replication factor, specify the replication factor `n` in the retention policy.
To maintain data availability for queries, the replication factor should be less
than or equal to the number of data nodes in the cluster:
* Data is fully available when the replication factor is greater than the
number of unavailable data nodes.
* Data may be unavailable when the replication factor is less than the number of
unavailable data nodes.
Any replication factor greater than two gives you additional fault tolerance and
query capacity within the cluster.
Related entries: [duration](#duration), [node](#node),
[retention policy](#retention-policy-rp)
@ -457,11 +447,10 @@ Points in the WAL can be queried, and they persist through a system reboot. On p
Related entries: [tsm](#tsm-time-structured-merge-tree)
<!--
## web console
Legacy user interface for the InfluxDB Enterprise.
This interface has been deprecated. We recommend using [Chronograf](/{{< latest "chronograf" >}}/introduction/).
## shard
## shard group
-->
If you are transitioning from the Enterprise Web Console to Chronograf, see how to [transition from the InfluxDB Web Admin Interface](/chronograf/v1.7/guides/transition-web-admin-interface/).

View File

@ -133,3 +133,20 @@ InfluxDB Enterprise clusters support backup and restore functionality starting w
version 0.7.1.
See [Backup and restore](/enterprise_influxdb/v1.9/administration/backup-and-restore/) for
more information.
<!-- ## Passive node setup (experimental)
Passive nodes act as load balancers--they accept write calls, perform shard lookup and RPC calls (on active data nodes), and distribute writes to active data nodes. They do not own shards or accept writes.
Use this feature when you have a replication factor (RF) of 2 or more and your CPU usage is consistently above 80 percent. Using the passive feature lets you scale a cluster when you can no longer vertically scale. Especially useful if you experience a large amount of hinted handoff growth. The passive node writes the hinted handoff queue to its own disk, and then communicates periodically with the appropriate node until it can send the queue contents there.
Best practices when using an active-passive node setup:
- Use when you have a large cluster setup, generally 8 or more nodes.
- Keep the ratio of active to passive nodes between 1:1 and 2:1.
- Passive nodes should receive all writes.
For more information, see how to [add a passive node to a cluster](/enterprise_influxdb/v1.9/tools/influxd-ctl/#add-a-passive-node-to-the-cluster).
{{% note %}}
**Note:** This feature is experimental and available only in InfluxDB Enterprise.
{{% /note %}} -->

View File

@ -33,3 +33,5 @@ from(bucket: "telegraf/autogen")
```
{{< children >}}
{{< influxdbu title="Intro to Basic Flux Elements" summary="Learn the basics about Flux, InfluxDBs functional scripting language in this **free** InfluxDB University course." action="Take the course" link="https://university.influxdata.com/courses/intro-to-basic-flux-elements-tutorial/" >}}

View File

@ -110,6 +110,8 @@ The Schema pane allows you to explore your data.
The Script pane is where you write your Flux script.
The Functions pane provides a list of functions available in your Flux queries.
{{< influxdbu "flux-103" >}}
<div class="page-nav-btns">
<a class="btn next" href="/enterprise_influxdb/v1.9/flux/get-started/query-influxdb/">Query InfluxDB with Flux</a>
</div>

View File

@ -35,3 +35,5 @@ data = from(bucket: "db/rp")
---
{{< children pages="all" readmore="true" hr="true" >}}
{{< influxdbu title="Intro to Basic Flux Elements" summary="Learn the basics about Flux, InfluxDBs functional scripting language in this **free** InfluxDB University course." action="Take the course" link="https://university.influxdata.com/courses/intro-to-basic-flux-elements-tutorial/" >}}

View File

@ -12,3 +12,5 @@ menu:
---
{{< children >}}
{{< influxdbu title="Intro to InfluxDB Enterprise" summary="Learn about the features and benefits of using InfluxDB Enterprise in this **free** InfluxDB University course." action="Take the course" link="https://university.influxdata.com/courses/intro-to-influxdb-enterprise-tutorial/" >}}

View File

@ -20,3 +20,5 @@ After you successfully [install and set up](/enterprise_influxdb/v1.9/introducti
- Find [Enterprise guides](/enterprise_influxdb/v1.9/guides/) on a variety of topics, such as how to downsample and retain data, rebalance InfluxDB Enterprise clusters, use fine-grained authorization, and more!
- Explore the [InfluxQL](/enterprise_influxdb/v1.9/query_language/) and [Flux](/enterprise_influxdb/v1.9/flux/) languages.
- Learn about [InfluxDB line protocol](/enterprise_influxdb/v1.9/write_protocols/) and other [supported protocols](/enterprise_influxdb/v1.9/supported_protocols/).
{{< influxdbu "influxdb-101" >}}

View File

@ -18,3 +18,5 @@ Complete the following steps to install an InfluxDB Enterprise cluster in your o
1. [Install InfluxDB Enterprise meta nodes](/enterprise_influxdb/v1.9/introduction/installation/installation/meta_node_installation/)
2. [Install InfluxDB data nodes](/enterprise_influxdb/v1.9/introduction/installation/installation/data_node_installation/)
3. [Install Chronograf](/enterprise_influxdb/v1.9/introduction/installation/installation/chrono_install/)
{{< influxdbu title="Installing InfluxDB Enterprise" summary="Learn about InfluxDB architecture and how to install InfluxDB Enterprise with step-by-step instructions." action="Take the course" link="https://university.influxdata.com/courses/installing-influxdb-enterprise-tutorial/" >}}

View File

@ -30,7 +30,7 @@ The following are the most frequently overlooked requirements when installing a
- [Ensure connectivity between machines](#ensure-connectivity-between-machines)
- [Synchronize time between hosts](#synchronize-time-between-hosts)
- [Use SSDs](#use-ssds)
- [Do not use NFS](#do-not-use-nfs-mounts)
- [Do not use NFS or NFS-based services](#do-not-use-nfs-or-nfs-based-services)
- [Disable swap](#disable-swap)
- [Use three and only three meta nodes](#use-three-and-only-three-meta-nodes)
- [Meta and data nodes are fully independent](#meta-and-data-nodes-are-fully-independent)
@ -56,10 +56,13 @@ SANs must guarantee at least 1000 IOPS is always available to InfluxDB Enterpris
nodes or they may not be sufficient.
SSDs are strongly recommended, and we have had no reports of IOPS contention from any customers running on SSDs.
#### Do not use NFS
#### Do not use NFS or NFS-based services
For disk storage, use block devices only.
InfluxDB Enterprise does not support NFS (Network File System)-mounted devices.
InfluxDB Enterprise does **not** support NFS (Network File System)-mounted devices
or services such as [AWS EFS](https://aws.amazon.com/efs/),
[Google Filestore](https://cloud.google.com/filestore), or
[Azure files](https://azure.microsoft.com/en-us/services/storage/files/).
#### Disable swap

View File

@ -67,7 +67,6 @@ List of host names that should **not** go through any proxy. If set to an asteri
NO_PROXY=123.45.67.89,123.45.67.90
```
## `influx` Arguments
There are several arguments you can pass into `influx` when starting.
List them with `$ influx --help`.
@ -86,11 +85,11 @@ The database to which `influx` connects.
`-execute 'command'`
Execute an [InfluxQL](/enterprise_influxdb/v1.9/query_language/explore-data/) command and quit.
See [-execute](#execute-an-influxql-command-and-quit-with-execute).
See [-execute](#execute-an-influxql-command-and-quit-with--execute).
`-format 'json|csv|column'`
Specifies the format of the server responses.
See [-format](#specify-the-format-of-the-server-responses-with-format).
See [-format](#specify-the-format-of-the-server-responses-with--format).
`-host 'host name'`
The host to which `influx` connects.
@ -98,7 +97,7 @@ By default, InfluxDB runs on localhost.
`-import`
Import new data from a file or import a previously [exported](https://github.com/influxdb/influxdb/blob/1.8/importer/README.md) database from a file.
See [-import](#import-data-from-a-file-with-import).
See [-import](#import-data-from-a-file-with--import).
`-password 'password'`
The password `influx` uses to connect to the server.
@ -347,7 +346,7 @@ Quits the `influx` shell.
`format <format>`
Specifies the format of the server responses: `json`, `csv`, or `column`.
See the description of [-format](#specify-the-format-of-the-server-responses-with-format) for examples of each format.
See the description of [-format](#specify-the-format-of-the-server-responses-with--format) for examples of each format.
`history`
Displays your command history.

View File

@ -149,9 +149,9 @@ If authentication is enabled and the `influxd-ctl` command provides the incorrec
Error: authorization failed.
```
### Commands
### **Commands**
#### `add-data`
### `add-data`
Adds a data node to a cluster.
By default, `influxd-ctl` adds the specified data node to the local meta node's cluster.
@ -165,7 +165,14 @@ add-data <data-node-TCP-bind-address>
Resources: [Installation](/enterprise_influxdb/v1.9/installation/data_node_installation/)
##### Examples
##### Arguments
Optional arguments are in brackets.
<!-- ##### `[ -p ]`
Add a passive node to an Enterprise cluster. -->
### Examples
###### Add a data node to a cluster using the local meta node
@ -189,6 +196,13 @@ $ influxd-ctl -bind cluster-meta-node-01:8091 add-data cluster-data-node:8088
Added data node 3 at cluster-data-node:8088
```
<!-- ###### Add a passive node to a cluster
**Passive nodes** act as load balancers--they accept write calls, perform shard lookup and RPC calls (on active data nodes), and distribute writes to active data nodes. They do not own shards or accept writes. If you are using passive nodes, they should be the write endpoint for all data ingest. A cluster can have multiple passive nodes.
```bash
influxd-ctl add-data -p <passive-data-node-TCP-bind-address>
``` -->
### `add-meta`
Adds a meta node to a cluster.
@ -1083,6 +1097,19 @@ cluster-node-01:8091 1.9.x-c1.9.x {}
cluster-node-02:8091 1.9.x-c1.9.x {}
cluster-node-03:8091 1.9.x-c1.9.x {}
```
<!-- ##### Show active and passive data nodes in a cluster
In this example, the `show` command output displays that the cluster includes a passive data node.
```bash
Data Nodes
==========
ID TCP Address Version Labels Passive
4 cluster-node_0_1:8088 1.9.6-c1.9.6 {} false
5 cluster-node_1_1:8088 1.9.6-c1.9.6 {} true
6 cluster-node_2_1:8088 1.9.6-c1.9.6 {} false
```
-->
### `show-shards`

View File

@ -125,25 +125,25 @@ duration(v: int(v: 24h) / 2)
```
### Add a duration to a time value
1. Import the [`experimental` package](/flux/v0.x/stdlib/experimental/).
2. Use [`experimental.addDuration()`](/flux/v0.x/stdlib/experimental/addduration/)
1. Import the [`date` package](/flux/v0.x/stdlib/date/).
2. Use [`date.add()`](/flux/v0.x/stdlib/date/add/)
to add a duration to a time value.
```js
import "experimental"
import "date"
experimental.addDuration(d: 1w, to: 2021-01-01T00:00:00Z)
date.add(d: 1w, to: 2021-01-01T00:00:00Z)
// Returns 2021-01-08T00:00:00.000000000Z
```
### Subtract a duration from a time value
1. Import the [`experimental` package](/flux/v0.x/stdlib/experimental/).
2. Use [`experimental.subDuration()`](/flux/v0.x/stdlib/experimental/subduration/)
1. Import the [`date` package](/flux/v0.x/stdlib/date/).
2. Use [`date.sub()`](/flux/v0.x/stdlib/date/sub/)
to subtract a duration from a time value.
```js
import "experimental"
import "date"
experimental.subDuration(d: 1w, from: 2021-01-01T00:00:00Z)
date.sub(d: 1w, from: 2021-01-01T00:00:00Z)
// Returns 2020-12-25T00:00:00.000000000Z
```

View File

@ -220,27 +220,27 @@ date.quarter(t: t0)
### Add a duration to a time value
To add a [duration](/flux/v0.x/data-types/basic/duration/) to a time value:
1. Import the [`experimental` package](/flux/v0.x/stdlib/experimental/).
2. Use [`experimental.addDuration()`](/flux/v0.x/stdlib/experimental/addduration/)
1. Import the [`date` package](/flux/v0.x/stdlib/date/).
2. Use [`date.add()`](/flux/v0.x/stdlib/date/add/)
to add a duration to a time value.
```js
import "experimental"
import "date"
experimental.addDuration(d: 1w, to: 2021-01-01T00:00:00Z)
date.add(d: 1w, to: 2021-01-01T00:00:00Z)
// Returns 2021-01-08T00:00:00.000000000Z
```
### Subtract a duration from a time value
To subtract a [duration](/flux/v0.x/data-types/basic/duration/) from a time value:
1. Import the [`experimental` package](/flux/v0.x/stdlib/experimental/).
2. Use [`experimental.subDuration()`](/flux/v0.x/stdlib/experimental/subduration/)
to subtract a duration from a time value.
1. Import the [`date` package](/flux/v0.x/stdlib/date/).
2. Use [`date.sub()`](/flux/v0.x/stdlib/date/sub/)
to subtract a duration from a time value.
```js
import "experimental"
import "date"
experimental.subDuration(d: 1w, from: 2021-01-01T00:00:00Z)
date.sub(d: 1w, from: 2021-01-01T00:00:00Z)
// Returns 2020-12-25T00:00:00.000000000Z
```

View File

@ -60,8 +60,7 @@ An **empty group key** groups all data in a stream of tables into a single table
_For an example of how group keys work, see the [Table grouping example](#table-grouping-example) below._
{{% note %}}
#### Data sources determine data structure
## Data sources determine data structure
The Flux data model is separate from the queried data source model.
Queried sources return data structured into columnar tables.
The table structure and columns included depends on the data source.
@ -70,7 +69,13 @@ For example, InfluxDB returns data grouped by [series](/{{< latest "influxdb" >}
so each table in the returned stream of tables represents a unique series.
However, [SQL data sources](/flux/v0.x/stdlib/sql/from/) return a stream of tables
with a single table and an empty group key.
{{% /note %}}
### Column labels beginning with underscores
Some data sources return column labels prefixed with an underscore (`_`).
This is a Flux convention used to identify important or reserved column names.
While the underscore doesn't change the functionality of the column, many
functions in the [Flux standard library](/flux/v0.x/stdlib/) expect or require
these specific column names.
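For example, the following sketch (the `temperature` column and the piped-forward `data` stream are hypothetical) renames a column to `_value` so that standard library functions that expect that column, such as `mean()`, work without extra configuration:

```js
// Hypothetical input: tables with a "temperature" column from a non-InfluxDB source.
// Renaming the column to "_value" lets functions that operate on "_value" find it.
data
    |> rename(columns: {temperature: "_value"})
    |> mean()
```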
## Operate on tables
At its core, Flux operates on tables.

View File

@ -10,6 +10,107 @@ aliases:
- /influxdb/cloud/reference/release-notes/flux/
---
## v0.171.0 [2022-06-14]
### Breaking changes
- Remove `testing.loadStorage()`.
### Features
- Add `FromStr` to allow the Flux LSP (language server protocol) CLI to run with
optional Flux features.
- Add method to parallelize aggregate transformations.
- Report unused symbols.
- Add `From` implementations for `Node/NodeMut`.
### Bug fixes
- Pass a seed to the tables generator.
- Ensure buffers are retained when copying a buffered table.
- Return an error when using a label variable without the Label constraint.
---
## v0.170.1 [2022-06-06]
### Bug fixes
- Require an earlier minimum version of `lsp-types`.
---
## v0.170.0 [2022-06-02]
### Features
- Add a `pretty.rs`-based MonoType formatter.
### Bug fixes
- Update vectorized `map()` to properly handle shadowed columns.
---
## v0.169.0 [2022-05-31]
### Features
- Add a `_status` tag to PagerDuty records.
- Refactor the operator profile to be in the query statistics.
### Bug fixes
- Ensure that constraints are checked and propagated fully.
- Fix math for integral with a single value.
- Add `json` tags for the transport profiles in statistics.
- Initialize `Metadata` in Flux statistics.
- Return a more helpful error message when an HTTP response body exceeds 100MB.
- Correct several issues found during the implementation of polymorphic labels.
---
## v0.168.0 [2022-05-23]
### Features
- Enable [`movingAverage()`](/flux/v0.x/stdlib/universe/movingaverage/) and
[`cumulativeSum()`](/flux/v0.x/stdlib/universe/cumulativesum/) optimizations
by default.
- Vectorize logical operations in [`map()`](/flux/v0.x/stdlib/universe/map/).
- Add a planner rule that expands logical join nodes.
- Added timezone support to [`hourSelection()`](/flux/v0.x/stdlib/universe/hourselection/).
### Bug fixes
- Attach type when constructing logical expressions.
- Fix panic with half-diamond logical plan.
---
## v0.167.0 [2022-05-16]
### Features
- Allow default types to be specified for default arguments.
- Add [`date.scale()`](/flux/v0.x/stdlib/date/scale/) to allow for dynamic duration changes.
- Expose aggregate window spec fields for use by the query planner.
- Add [`experimental.preview()`](/flux/v0.x/stdlib/experimental/preview/).
### Bug fixes
- Update `date.add()` and `date.sub()` to work correctly with timezones enabled.
- Fix failing continuous integration tests.
- Update `hourSelection()` to support overnight time ranges.
- Fix logic error in the aggregate window planner rule to preserve the rule if `table.fill` is present.
- Use `MultiplicativeOperator` in `MultiplicativeExpression`.
---
## v0.166.0 [2022-05-09]
### Features
- Add InfluxData semantic commit and pull request title validator.
- Add an `Expr` node to the visitor API.
- Add label polymorphism.
- Vectorize remaining arithmetic operators.
### Bug fixes
- Remove `JoinOpSpec.TableNames` in favor of `JoinOpSpec.params` to stay
consistent inside `tableFind()`.
- Fix `SortLimit` for empty input group.
---
## v0.165.0 [2022-04-25]
### Features

View File

@ -269,37 +269,43 @@ The operator precedence is encoded directly into the grammar as the following.
```js
Expression = ConditionalExpression .
ConditionalExpression = LogicalExpression
| "if" Expression "then" Expression "else" Expression .
| "if" Expression "then" Expression "else" Expression .
LogicalExpression = UnaryLogicalExpression
| LogicalExpression LogicalOperator UnaryLogicalExpression .
| LogicalExpression LogicalOperator UnaryLogicalExpression .
LogicalOperator = "and" | "or" .
UnaryLogicalExpression = ComparisonExpression
| UnaryLogicalOperator UnaryLogicalExpression .
| UnaryLogicalOperator UnaryLogicalExpression .
UnaryLogicalOperator = "not" | "exists" .
ComparisonExpression = AdditiveExpression
| ComparisonExpression ComparisonOperator AdditiveExpression .
ComparisonExpression = MultiplicativeExpression
| ComparisonExpression ComparisonOperator MultiplicativeExpression .
ComparisonOperator = "==" | "!=" | "<" | "<=" | ">" | ">=" | "=~" | "!~" .
AdditiveExpression = MultiplicativeExpression
| AdditiveExpression AdditiveOperator MultiplicativeExpression .
| AdditiveExpression AdditiveOperator MultiplicativeExpression .
AdditiveOperator = "+" | "-" .
MultiplicativeExpression = PipeExpression
| MultiplicativeExpression MultiplicativeOperator PipeExpression .
MultiplicativeOperator = "*" | "/" | "%" | "^" .
MultiplicativeExpression = ExponentExpression
| ExponentExpression ExponentOperator MultiplicativeExpression .
| ExponentExpression MultiplicativeOperator MultiplicativeExpression .
MultiplicativeOperator = "*" | "/" | "%" .
ExponentExpression = PipeExpression
| ExponentExpression ExponentOperator PipeExpression .
ExponentOperator = "^" .
PipeExpression = PostfixExpression
| PipeExpression PipeOperator UnaryExpression .
| PipeExpression PipeOperator UnaryExpression .
PipeOperator = "|>" .
UnaryExpression = PostfixExpression
| PrefixOperator UnaryExpression .
| PrefixOperator UnaryExpression .
PrefixOperator = "+" | "-" .
PostfixExpression = PrimaryExpression
| PostfixExpression PostfixOperator .
| PostfixExpression PostfixOperator .
PostfixOperator = MemberExpression
| CallExpression
| IndexExpression .
| CallExpression
| IndexExpression .
```
{{% warn %}}
Dividing by 0 or using the mod operator with a divisor of 0 will result in an error.
Floating point divide by zero produces positive or negative infinity according
to the [IEEE-754](https://en.wikipedia.org/wiki/IEEE_754) floating point specification.
{{% /warn %}}
_Also see [Flux Operators](/flux/v0.x/spec/operators)._
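As an illustrative example consistent with the grammar above (not part of the specification itself), exponentiation binds more tightly than multiplication, which binds more tightly than addition:

```js
2.0 + 3.0 * 2.0 ^ 2.0
// Parsed as 2.0 + (3.0 * (2.0 ^ 2.0))
// Returns 14.0
```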

View File

@ -1,24 +1,26 @@
---
title: date.addDuration() function
title: date.add() function
description: >
`date.addDuration()` adds a duration to a time value and returns the resulting time.
`date.add()` adds a duration to a time value and returns the resulting time.
menu:
flux_0_x_ref:
name: date.addDuration
name: date.add
parent: date
weight: 302
flux/v0.x/tags: [date/time]
aliases:
- /flux/v0.x/stdlib/date/addduration/
related:
- /flux/v0.x/stdlib/date/subduration/
introduced: 0.162.0
---
`date.addDuration()` adds a duration to a time value and returns the resulting time.
`date.add()` adds a duration to a time value and returns the resulting time.
```js
import "date"
date.addDuration(d: 12h, to: now())
date.add(d: 12h, to: now())
```
## Parameters
@ -37,7 +39,7 @@ Durations are relative to [`now()`](/flux/v0.x/stdlib/universe/now/).
```js
import "date"
date.addDuration(d: 6h, to: 2019-09-16T12:00:00Z)
date.add(d: 6h, to: 2019-09-16T12:00:00Z)
// Returns 2019-09-16T18:00:00.000000000Z
```
@ -48,7 +50,7 @@ import "date"
option now = () => 2022-01-01T12:00:00Z
date.addDuration(d: 6h, to: 3h)
date.add(d: 6h, to: 3h)
// Returns 2022-01-01T21:00:00.000000000Z
```

View File

@ -16,7 +16,8 @@ introduced: 0.37.0
---
The `date.month()` function returns the month of a specified time.
Results range from `[1-12]`.
Results range from `[1-12]` and correspond to `date` package
[month constants](/flux/v0.x/stdlib/date/#months-of-the-year).
```js
import "date"

View File

@ -0,0 +1,59 @@
---
title: date.scale() function
description: >
`date.scale()` multiplies a duration by a specified value.
menu:
flux_0_x_ref:
name: date.scale
parent: date
weight: 301
introduced: 0.167.0
flux/v0.x/tags: [date/time]
---
`date.scale()` multiplies a duration by a specified value.
This function lets you dynamically scale a duration value.
```js
import "date"
date.scale(d: 1h, n: 12)
// Returns 12h
```
## Parameters
### d {data-type="duration"}
({{< req >}}) Duration to scale.
### n {data-type="int"}
({{< req >}}) Amount to scale the duration (`d`) by.
## Examples
### Add n hours to a time
```js
import "date"
n = 5
d = date.scale(d: 1h, n: n)
date.add(d: d, to: 2022-05-10T00:00:00Z)
// Returns 2022-05-10T05:00:00.000000000Z
```
### Add scaled mixed duration to a time
```js
import "date"
n = 5
d = date.scale(d: 1mo1h, n: 5)
date.add(d: d, to: 2022-01-01T00:00:00Z)
// Returns 2022-06-01T05:00:00.000000000Z
```

View File

@ -1,26 +1,28 @@
---
title: date.subDuration() function
title: date.sub() function
description: >
`date.subDuration()` subtracts a duration from a time value and returns the
`date.sub()` subtracts a duration from a time value and returns the
resulting time value.
menu:
flux_0_x_ref:
name: date.subDuration
name: date.sub
parent: date
weight: 302
flux/v0.x/tags: [date/time]
aliases:
- /flux/v0.x/stdlib/date/subduration/
related:
- /flux/v0.x/stdlib/date/addduration/
introduced: 0.162.0
---
`date.subDuration()` subtracts a duration from a time value and returns the
`date.sub()` subtracts a duration from a time value and returns the
resulting time value.
```js
import "date"
date.subDuration(d: 12h, from: now())
date.sub(d: 12h, from: now())
```
## Parameters
@ -39,7 +41,7 @@ Durations are relative to [`now()`](/flux/v0.x/stdlib/universe/now/).
```js
import "date"
date.subDuration(d: 6h, from: 2019-09-16T12:00:00Z)
date.sub(d: 6h, from: 2019-09-16T12:00:00Z)
// Returns 2019-09-16T06:00:00.000000000Z
```
@ -50,7 +52,7 @@ import "date"
option now = () => 2022-01-01T12:00:00Z
date.subDuration(d: 6h, from: -3h)
date.sub(d: 6h, from: -3h)
// Returns 2022-01-01T03:00:00.000000000Z
```

View File

@ -16,7 +16,8 @@ introduced: 0.37.0
---
The `date.weekDay()` function returns the day of the week for a specified time.
Results range from `[0-6]`.
Results range from `[0-6]` and correspond to `date` package
[weekday constants](/flux/v0.x/stdlib/date/#days-of-the-week).
```js
import "date"

View File

@ -19,7 +19,7 @@ deprecated: 0.162.0
---
{{% warn %}}
This function was promoted to the [`date` package](/flux/v0.x/stdlib/date/addduration/)
This function was promoted to the [`date` package](/flux/v0.x/stdlib/date/add/)
in **Flux v0.162.0**. This experimental version has been deprecated.
{{% /warn %}}

View File

@ -0,0 +1,44 @@
---
title: experimental.preview() function
description: >
`experimental.preview()` limits the number of rows and tables in the stream.
menu:
flux_0_x_ref:
name: experimental.preview
parent: experimental
weight: 302
flux/v0.x/tags: [transformations]
introduced: 0.167.0
---
`experimental.preview()` limits the number of rows and tables in the stream.
```js
import "experimental"
data
    |> experimental.preview()
```
## Parameters
### nrows {data-type="int"}
Maximum number of rows per table to return. Default is `5`.
### ntables {data-type="int"}
Maximum number of tables to return. Default is `5`.
### tables {data-type="stream of tables"}
Input data.
Default is piped-forward data (`<-`).
## Examples
### Preview data output
```js
import "experimental"
import "sampledata"
sampledata.int()
    |> experimental.preview(nrows: 3)
```

View File

@ -19,7 +19,7 @@ deprecated: 0.162.0
---
{{% warn %}}
This function was promoted to the [`date` package](/flux/v0.x/stdlib/date/subduration/)
This function was promoted to the [`date` package](/flux/v0.x/stdlib/date/sub/)
in **Flux v0.162.0**. This experimental version has been deprecated.
{{% /warn %}}

View File

@ -98,23 +98,24 @@ Default is `false`.
- [Query downsampled usage data for a different InfluxDB Cloud organization](#query-downsampled-usage-data-for-a-different-influxdb-cloud-organization)
- [Query number of bytes in requests to the /api/v2/write endpoint](#query-number-of-bytes-in-requests-to-the-apiv2write-endpoint)
- [Query number of bytes returned from the /api/v2/query endpoint](#query-number-of-bytes-returned-from-the-apiv2query-endpoint)
- [Query the query count for InfluxDB Cloud query endpoints](#query-the-query-count-for-influxdb-cloud-query-endpoints)
- [Compare usage metrics to organization usage limits](#compare-usage-metrics-to-organization-usage-limits)
##### Query downsampled usage data for your InfluxDB Cloud organization
### Query downsampled usage data for your InfluxDB Cloud organization
```js
import "experimental/usage"
usage.from(start: -30d, stop: now())
```
##### Query raw usage data for your InfluxDB Cloud organization
### Query raw usage data for your InfluxDB Cloud organization
```js
import "experimental/usage"
usage.from(start: -1h, stop: now(), raw: true)
```
##### Query downsampled usage data for a different InfluxDB Cloud organization
### Query downsampled usage data for a different InfluxDB Cloud organization
```js
import "experimental/usage"
import "influxdata/influxdb/secrets"
@ -130,7 +131,7 @@ usage.from(
)
```
##### Query number of bytes in requests to the /api/v2/write endpoint
### Query number of bytes in requests to the /api/v2/write endpoint
```js
import "experimental/usage"
@ -143,7 +144,7 @@ usage.from(start: -30d, stop: now())
|> group()
```
##### Query number of bytes returned from the /api/v2/query endpoint
### Query number of bytes returned from the /api/v2/query endpoint
```js
import "experimental/usage"
@ -156,10 +157,31 @@ usage.from(start: -30d, stop: now())
|> group()
```
##### Compare usage metrics to organization usage limits
### Query the query count for InfluxDB Cloud query endpoints
The following query returns the number of queries made to each of the following query endpoints:
- **/api/v2/query**: Flux queries
- **/query**: InfluxQL queries
```js
import "experimental/usage"
usage.from(start: -30d, stop: now())
|> filter(fn: (r) => r._measurement == "query_count")
|> sort(columns: ["_time"])
```
### Compare usage metrics to organization usage limits
The following query compares the amount of data written to and queried from your
InfluxDB Cloud organization to your organization's rate limits.
It appends a `limitReached` column to each row that indicates if your rate
limit was exceeded.
```js
import "experimental/usage"
limits = usage.limits()
checkLimit = (tables=<-, limit) => tables
    |> map(fn: (r) => ({r with _value: r._value / 1000, limit: int(v: limit) * 60 * 5}))
    |> map(fn: (r) => ({r with limitReached: r._value > r.limit}))
@ -171,6 +193,7 @@ read = usage.from(start: -30d, stop: now())
|> group(columns: ["_time"])
|> sum()
|> group()
|> checkLimit(limit: limits.rate.readKBs)
write = usage.from(start: -30d, stop: now())
|> filter(fn: (r) => r._measurement == "http_request")

View File

@ -31,8 +31,7 @@ usage.limits(
)
```
{{< expand-wrapper >}}
{{% expand "View example usage limits record" %}}
#### Example output record
```js
{
orgID: "123",
@ -65,8 +64,6 @@ usage.limits(
}
}
```
{{% /expand %}}
{{< /expand-wrapper >}}
## Parameters
@ -89,14 +86,14 @@ Default is `""`.
- [Output organization limits in a table](#output-organization-limits-in-a-table)
- [Output current cardinality with your cardinality limit](#output-current-cardinality-with-your-cardinality-limit)
##### Get rate limits for your InfluxDB Cloud organization
### Get rate limits for your InfluxDB Cloud organization
```js
import "experimental/usage"
usage.limits()
```
##### Get rate limits for a different InfluxDB Cloud organization
### Get rate limits for a different InfluxDB Cloud organization
```js
import "experimental/usage"
import "influxdata/influxdb/secrets"
@ -106,7 +103,7 @@ token = secrets.get(key: "INFLUX_TOKEN")
usage.limits(host: "https://cloud2.influxdata.com", orgID: "x000X0x0xx0X00x0", token: token)
```
##### Output organization limits in a table
### Output organization limits in a table
```js
import "array"
import "experimental/usage"
@ -133,7 +130,7 @@ array.from(
)
```
##### Output current cardinality with your cardinality limit
### Output current cardinality with your cardinality limit
```js
import "experimental/usage"
import "influxdata/influxdb"
@ -147,4 +144,4 @@ buckets()
|> map(fn: (r) => ({bucket: r.name, Cardinality: bucketCardinality(bucket: r.name)}))
|> sum(column: "Cardinality")
|> map(fn: (r) => ({r with "Cardinality Limit": limits.rate.cardinality}))
```
```

View File

@ -26,6 +26,13 @@ pagerduty.endpoint(
)
```
## Output data
For each input row, `pagerduty.endpoint()` sends an event to the PagerDuty API
and outputs a corresponding output row with the following additional columns:
- **sent**: Sent successfully <span style="opacity: .5">_(bool)_</span>
- **\_status**: HTTP response status code <span style="opacity: .5">_(string)_</span>
## Parameters
### url {data-type="string"}

View File

@ -3,18 +3,20 @@ title: testing.loadStorage() function
description: >
The `testing.loadStorage()` function loads annotated CSV test data as if it were queried from InfluxDB.
This function ensures tests behave correctly in both the Flux and InfluxDB test suites.
menu:
flux_0_x_ref:
name: testing.loadStorage
parent: testing
aliases:
- /influxdb/v2.0/reference/flux/stdlib/testing/loadstorage/
- /influxdb/cloud/reference/flux/stdlib/testing/loadstorage/
weight: 301
flux/v0.x/tags: [tests, inputs]
introduced: 0.20.0
removed: 0.171.0
---
{{% warn %}}
#### Removed in Flux 0.171.0
`testing.loadStorage()` was removed in Flux 0.171.0 and is no longer supported.
{{% /warn %}}
The `testing.loadStorage()` function loads [annotated CSV](/influxdb/cloud/reference/syntax/annotated-csv/)
test data as if it were queried from InfluxDB.
This function ensures tests behave correctly in both the Flux and InfluxDB test suites.

View File

@ -14,7 +14,7 @@ flux/v0.x/tags: [tests, types]
`types.isType()` tests if a value is a specified
[Flux basic type](/flux/v0.x/data-types/basic/) or
[regular expression type](/flux/v0.x/data-types/regexp/).
[regular expression type](/flux/v0.x/data-types/regexp/). Use this function to filter your data by type. It's often used to downsample or [aggregate data by type](#aggregate-or-select-data-based-on-type).
```js
import "types"

View File

@ -30,7 +30,8 @@ aggregateWindow(
column: "_value",
timeSrc: "_stop",
timeDst: "_time",
location: "UTC",
location: timezone.utc,
offset: 0s,
createEmpty: true,
)
```
@ -99,13 +100,19 @@ Defaults to `"_stop"`.
The "time destination" column to which time is copied for the aggregate record.
Defaults to `"_time"`.
### location {data-type="string"}
### location {data-type="record"}
Location used to determine timezone.
Default is the [`location` option](/flux/v0.x/stdlib/universe/#location).
_Flux uses the timezone database (commonly referred to as "tz" or "zoneinfo")
provided by the operating system._
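For example, the following sketch (the bucket and measurement names are hypothetical) windows data by calendar day evaluated in the `America/Chicago` timezone instead of UTC:

```js
import "timezone"

from(bucket: "example-bucket")
    |> range(start: -7d)
    |> filter(fn: (r) => r._measurement == "example-measurement")
    |> aggregateWindow(every: 1d, fn: mean, location: timezone.location(name: "America/Chicago"))
```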
### offset {data-type="duration"}
Duration to shift the window boundaries by. Default is `0s`.
`offset` can be negative, indicating that the offset goes backwards in time.
### createEmpty {data-type="bool"}
For windows without data, create a single-row table for each empty window (using
@ -128,6 +135,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi
- [Use an aggregate function with default parameters](#use-an-aggregate-function-with-default-parameters)
- [Specify parameters of the aggregate function](#specify-parameters-of-the-aggregate-function)
- [Window and aggregate by calendar month](#window-and-aggregate-by-calendar-month)
- [Window and aggregate by calendar week starting on Monday](#window-and-aggregate-by-calendar-week-starting-on-monday)
#### Use an aggregate function with default parameters
The following example uses the default parameters of the
@ -215,6 +223,7 @@ data
    |> aggregateWindow(every: 1mo, fn: mean)
```
{{< expand-wrapper >}}
{{% expand "View input and output" %}}
##### Input data
{{% flux/sample set="float" includeRange=true %}}
@ -228,3 +237,47 @@ data
| :------------------- | :------------------- | :------------------- | :-- | ----------------: |
| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:01:00Z | t2 | 9.426666666666668 |
{{% /expand %}}
{{< /expand-wrapper >}}
#### Window and aggregate by calendar week starting on Monday
Flux increments weeks from the Unix epoch, which was a Thursday.
Because of this, by default, all `1w` windows begin on Thursday.
Use the `offset` parameter to shift the start of weekly windows to the desired day.
| Week start | Offset |
| :--------- | :----: |
| Monday | -3d |
| Tuesday | -2d |
| Wednesday | -1d |
| Thursday | 0d |
| Friday | 1d |
| Saturday | 2d |
| Sunday | 3d |
```js
import "sampledata"
data = sampledata.float()
    |> range(start: sampledata.start, stop: sampledata.stop)
data
    |> aggregateWindow(every: 1w, offset: -3d, fn: mean)
```
{{< expand-wrapper >}}
{{% expand "View input and output" %}}
##### Input data
{{% flux/sample set="float" includeRange=true %}}
##### Output data
| _start | _stop | _time | tag | _value |
| :------------------- | :------------------- | :------------------- | :-- | -----: |
| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:01:00Z | t1 | 8.88 |
| _start | _stop | _time | tag | _value |
| :------------------- | :------------------- | :------------------- | :-- | ----------------: |
| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:01:00Z | t2 | 9.426666666666668 |
{{% /expand %}}
{{< /expand-wrapper >}}

View File

@ -31,9 +31,64 @@ bool(v: "true")
The value to convert.
## Examples
#### Convert a numeric column to a boolean column
```js
from(bucket: "sensor-data")
|> range(start: -1m)
|> filter(fn: (r) => r._measurement == "system")
|> map(fn: (r) => ({r with responsive: bool(v: r.responsive)}))
import "sampledata"
data = sampledata.numericBool()
|> rename(columns: {_value: "online"})
data
|> map(fn: (r) => ({r with online: bool(v: r.online)}))
```
{{< expand-wrapper >}}
{{% expand "View input and output" %}}
{{< flex >}}
{{% flex-content %}}
##### Input data
| _time | tag | online |
| :------------------- | :-- | -----: |
| 2021-01-01T00:00:00Z | t1 | 1 |
| 2021-01-01T00:00:10Z | t1 | 1 |
| 2021-01-01T00:00:20Z | t1 | 0 |
| 2021-01-01T00:00:30Z | t1 | 1 |
| 2021-01-01T00:00:40Z | t1 | 0 |
| 2021-01-01T00:00:50Z | t1 | 0 |
| _time | tag | online |
| :------------------- | :-- | -----: |
| 2021-01-01T00:00:00Z | t2 | 0 |
| 2021-01-01T00:00:10Z | t2 | 1 |
| 2021-01-01T00:00:20Z | t2 | 0 |
| 2021-01-01T00:00:30Z | t2 | 1 |
| 2021-01-01T00:00:40Z | t2 | 1 |
| 2021-01-01T00:00:50Z | t2 | 0 |
{{% /flex-content %}}
{{% flex-content %}}
##### Output data
| _time | tag | online |
| :------------------- | :-- | -----: |
| 2021-01-01T00:00:00Z | t1 | true |
| 2021-01-01T00:00:10Z | t1 | true |
| 2021-01-01T00:00:20Z | t1 | false |
| 2021-01-01T00:00:30Z | t1 | true |
| 2021-01-01T00:00:40Z | t1 | false |
| 2021-01-01T00:00:50Z | t1 | false |
| _time | tag | online |
| :------------------- | :-- | -----: |
| 2021-01-01T00:00:00Z | t2 | false |
| 2021-01-01T00:00:10Z | t2 | true |
| 2021-01-01T00:00:20Z | t2 | false |
| 2021-01-01T00:00:30Z | t2 | true |
| 2021-01-01T00:00:40Z | t2 | true |
| 2021-01-01T00:00:50Z | t2 | false |
{{% /flex-content %}}
{{< /flex >}}
{{% /expand %}}
{{< /expand-wrapper >}}

View File

@ -23,6 +23,7 @@ The `hourSelection()` function retains all rows with time values in a specified
hourSelection(
start: 9,
stop: 17,
location: {offset: 0h, zone: "UTC"},
timeColumn: "_time",
)
```
@ -39,6 +40,10 @@ Hours range from `[0-23]`.
The last hour of the hour range (inclusive).
Hours range from `[0-23]`.
### location {data-type="record"}
Location used to determine timezone.
Default is the [`location` option](/flux/v0.x/stdlib/universe/#location).
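For example, the following sketch (the piped-forward `data` stream is assumed) keeps records from business hours evaluated in the `Europe/Madrid` timezone:

```js
import "timezone"

data
    |> hourSelection(start: 9, stop: 17, location: timezone.location(name: "Europe/Madrid"))
```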
### timeColumn {data-type="string"}
The column that contains the time value.
Default is `"_time"`.

View File

@ -25,9 +25,8 @@ toBool()
```
{{% note %}}
To convert values in a column other than `_value`, define a custom function
patterned after the [function definition](#function-definition),
but replace `_value` with your desired column.
To convert values in a column other than `_value`, use `map()` and `bool()`
as shown in [this example](/flux/v0.x/stdlib/universe/bool/#convert-a-numeric-column-to-a-boolean-column).
{{% /note %}}
##### Supported data types

View File

@ -25,9 +25,8 @@ toFloat()
```
{{% note %}}
To convert values in a column other than `_value`, define a custom function
patterned after the [function definition](#function-definition),
but replace `_value` with your desired column.
To convert values in a column other than `_value`, use `map()` and `float()`
as shown in [this example](/flux/v0.x/stdlib/universe/float/#convert-all-values-in-a-column-to-float-values).
{{% /note %}}
##### Supported data types

View File

@ -45,9 +45,8 @@ toInt()
| uint | Integer equivalent of the unsigned integer |
{{% note %}}
To convert values in a column other than `_value`, define a custom function
patterned after the [function definition](#function-definition),
but replace `_value` with your desired column.
To convert values in a column other than `_value`, use `map()` and `int()`
as shown in [this example](/flux/v0.x/stdlib/universe/int/#convert-all-values-in-a-column-to-integer-values).
{{% /note %}}
## Parameters

View File

@ -22,9 +22,8 @@ toString()
```
{{% note %}}
To convert values in a column other than `_value`, define a custom function
patterned after the [function definition](#function-definition),
but replace `_value` with your desired column.
To convert values in a column other than `_value`, use `map()` and `string()`
as shown in [this example](/flux/v0.x/stdlib/universe/string/#convert-all-values-in-a-column-to-string-values).
{{% /note %}}
##### Supported data types

View File

@ -25,9 +25,8 @@ toTime()
```
{{% note %}}
To convert values in a column other than `_value`, define a custom function
patterned after the [function definition](#function-definition),
but replace `_value` with your desired column.
To convert values in a column other than `_value`, use `map()` and `time()`
as shown in [this example](/flux/v0.x/stdlib/universe/time/#convert-all-values-in-a-column-to-time-values).
{{% /note %}}
##### Supported data types

View File

@ -25,9 +25,8 @@ toUInt()
```
{{% note %}}
To convert values in a column other than `_value`, define a custom function
patterned after the [function definition](#function-definition),
but replace `_value` with your desired column.
To convert values in a column other than `_value`, use `map()` and `uint()`
as shown in [this example](/flux/v0.x/stdlib/universe/uint/#convert-all-values-in-a-column-to-uinteger-values).
{{% /note %}}
##### Supported data types

View File

@ -49,9 +49,13 @@ For details, see [Export a dashboard](/influxdb/cloud/visualize-data/dashboards/
3. Click the name of a Telegraf configuration.
4. Click **Download Config** to save.
#### Request a data backup
#### Export data
To request a backup of data in your {{< cloud-name "short" >}} instance, contact [InfluxData Support](mailto:support@influxdata.com).
To export all your data, query it out in time-based batches and store it
in an external system or an InfluxDB OSS instance.
For information about automatically exporting and migrating data from InfluxDB
Cloud to InfluxDB OSS, see: [Migrate data from InfluxDB Cloud to InfluxDB OSS](/influxdb/cloud/migrate-data/migrate-cloud-to-oss/).
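As a minimal sketch of one such batch (the bucket names, host, organization, and token below are hypothetical), the Flux `to()` function can write a time-bounded slice of data to an InfluxDB OSS instance; repeat the query with successive time ranges to export everything:

```js
from(bucket: "example-cloud-bucket")
    |> range(start: 2022-06-01T00:00:00Z, stop: 2022-06-02T00:00:00Z)
    |> to(
        bucket: "example-oss-bucket",
        host: "http://localhost:8086",
        org: "example-org",
        token: "EXAMPLE_OSS_TOKEN",
    )
```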
### Cancel service

View File

@ -25,3 +25,5 @@ that work with InfluxDB 1.x client libraries and third-party integrations like
[Grafana](https://grafana.com) and others.
<a class="btn" href="/influxdb/cloud/api/v1-compatibility/">View full v1 compatibility API documentation</a>
{{< influxdbu title="Building IoT Appls with InfluxDB" summary="Learn the basics of how to build an IoT application with InfluxDB with this **free** InfluxDB University course." action="Take the course" link="https://university.influxdata.com/courses/building-iot-apps-with-influxdb-tutorial/" >}}

View File

@ -13,3 +13,5 @@ menu:
url: https://github.com/tobiasschuerg/InfluxDB-Client-for-Arduino
weight: 201
---
{{< duplicate-oss >}}

View File

@ -12,3 +12,5 @@ menu:
url: https://github.com/influxdata/influxdb-client-csharp
weight: 201
---
{{< duplicate-oss >}}

View File

@ -12,3 +12,5 @@ menu:
url: https://github.com/influxdata/influxdb-client-dart
weight: 201
---
{{< duplicate-oss >}}

View File

@ -12,3 +12,5 @@ menu:
url: https://github.com/influxdata/influxdb-client-java
weight: 201
---
{{< duplicate-oss >}}

View File

@ -12,3 +12,5 @@ menu:
url: https://github.com/influxdata/influxdb-client-java/tree/master/client-kotlin
weight: 201
---
{{< duplicate-oss >}}

View File

@ -12,3 +12,5 @@ menu:
url: https://github.com/influxdata/influxdb-client-php
weight: 201
---
{{< duplicate-oss >}}

View File

@ -12,3 +12,5 @@ menu:
url: https://github.com/influxdata/influxdb-client-r
weight: 201
---
{{< duplicate-oss >}}

View File

@ -12,3 +12,5 @@ menu:
url: https://github.com/influxdata/influxdb-client-ruby
weight: 201
---
{{< duplicate-oss >}}

View File

@ -12,3 +12,5 @@ menu:
url: https://github.com/influxdata/influxdb-client-java/tree/master/client-scala
weight: 201
---
{{< duplicate-oss >}}

View File

@ -12,3 +12,5 @@ menu:
url: https://github.com/influxdata/influxdb-client-swift
weight: 201
---
{{< duplicate-oss >}}

Some files were not shown because too many files have changed in this diff