Merge branch 'master' into alpha-10-task-description

pull/574/head
Nora 2019-10-30 10:44:46 -07:00
commit acbd098542
534 changed files with 28432 additions and 1737 deletions


@ -4,7 +4,7 @@ jobs:
docker:
- image: circleci/node:latest
environment:
HUGO_VERSION: "0.55.1"
HUGO_VERSION: "0.56.3"
S3DEPLOY_VERSION: "2.3.2"
steps:
- checkout
@ -23,7 +23,10 @@ jobs:
command: ./deploy/ci-install-s3deploy.sh
- run:
name: Install NPM dependencies
command: sudo npm i -g postcss-cli autoprefixer
command: sudo npm i -g postcss-cli autoprefixer redoc-cli
- run:
name: Generate API documentation
command: cd api-docs && bash generate-api-docs.sh
- save_cache:
key: install-v1-{{ checksum ".circleci/config.yml" }}
paths:

17
.github/ISSUE_TEMPLATE.md vendored Normal file

@ -0,0 +1,17 @@
_Describe the issue here._
##### Relevant URLs
- _Provide relevant URLs_
##### What products and version are you using?
<!--
For InfluxDB 2.0 documentation issues (typos, missing/inaccurate information,
etc.), create an issue in this repository. For project issues (bugs, unexpected
behavior, etc.), create an issue in the appropriate project repository.
For example, report:
- InfluxDB issues at https://github.com/influxdata/influxdb
- Telegraf issues at https://github.com/influxdata/telegraf
- Flux issues at https://github.com/influxdata/flux
-->

8
.github/PULL_REQUEST_TEMPLATE.md vendored Normal file

@ -0,0 +1,8 @@
Closes #
_Describe your proposed changes here._
- [ ] Signed the [InfluxData CLA](https://www.influxdata.com/legal/cla/)
([if necessary](https://github.com/influxdata/docs-v2/blob/master/CONTRIBUTING.md#sign-the-influxdata-cla))
- [ ] Tests pass (no build errors)
- [ ] Rebased/mergeable

11
.github/SECURITY.md vendored Normal file

@ -0,0 +1,11 @@
# Security Policy
## Reporting a Vulnerability
InfluxData takes security and our users' trust very seriously.
If you believe you have found a security issue in any of our open source projects,
please responsibly disclose it by contacting security@influxdata.com.
More details about security vulnerability reporting, including our GPG key,
can be found at https://www.influxdata.com/how-to-report-security-vulnerabilities/.

1
.gitignore vendored

@ -5,3 +5,4 @@ public
node_modules
*.log
/resources
/content/**/api.html


@ -3,9 +3,12 @@
## Sign the InfluxData CLA
The InfluxData Contributor License Agreement (CLA) is part of the legal framework
for the open-source ecosystem that protects both you and InfluxData.
In order to contribute to any InfluxData project, you must first sign the CLA.
To make substantial contributions to InfluxData documentation, first sign the InfluxData CLA.
What constitutes a "substantial" change is at the discretion of InfluxData documentation maintainers.
[Sign the InfluxData (CLA)](https://www.influxdata.com/legal/cla/)
[Sign the InfluxData CLA](https://www.influxdata.com/legal/cla/)
_**Note:** Typo and broken link fixes are greatly appreciated and do not require signing the CLA._
## Make suggested updates
@ -59,13 +62,16 @@ menu:
v2_0:
name: # Article name that only appears in the left nav
parent: # Specifies a parent group and nests navigation items
weight: # Determines sort order in both the nav tree and in article lists.
weight: # Determines sort order in both the nav tree and in article lists
draft: # If true, will not render page on build
enterprise_all: # If true, specifies the doc as a whole is specific to InfluxDB Enterprise
enterprise_some: # If true, specifies the doc includes some content specific to InfluxDB Enterprise
cloud_all: # If true, specifies the doc as a whole is specific to InfluxDB Cloud
cloud_some: # If true, specifies the doc includes some content specific to InfluxDB Cloud
v2.x/tags: # Tags specific to each version (replace ".x" with the appropriate minor version)
related: # Creates links to specific internal and external content at the bottom of the page
- /path/to/related/article
- https://external-link.com, This is an external link
```
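For illustration, a filled-in frontmatter using the fields above might look like the following (the article title, menu names, and tag values are hypothetical):

```yaml
title: Write data with the influx CLI   # hypothetical article
menu:
  v2_0:
    name: Write data
    parent: Developer tools
weight: 204
v2.0/tags: [write, cli]
related:
  - /v2.0/write-data/quick-start
  - https://influxdata.com, This is an external link
```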
#### Title usage
@ -199,6 +205,17 @@ Insert Cloud-specific markdown content here.
{{% /cloud %}}
```
#### InfluxDB Cloud content block
The `cloud-msg` shortcode creates a highlighted block of text specific to
InfluxDB Cloud, meant to stand out from the rest of the article content.
Its format is similar to note and warning blocks.
```md
{{% cloud-msg %}}
Insert Cloud-specific markdown content here.
{{% /cloud-msg %}}
```
#### InfluxDB Cloud name
The name used to refer to InfluxData's cloud offering is subject to change.
To facilitate easy updates in the future, use the `cloud-name` shortcode when
@ -310,6 +327,20 @@ WHERE time > now() - 15m
{{< /code-tabs-wrapper >}}
~~~
### Related content
Use the `related` frontmatter to include links to specific articles at the bottom of an article.
- If the page exists inside of this documentation, just include the path to the page.
It will automatically detect the title of the page.
- If the page exists outside of this documentation, include the full URL and a title for the link.
The link and title must be in that order and must be separated by a comma and a space.
```yaml
related:
- /v2.0/write-data/quick-start
- https://influxdata.com, This is an external link
```
### High-resolution images
In many cases, screenshots included in the docs are taken from high-resolution (retina) screens.
Because of this, the actual pixel dimensions are 2x larger than needed, and the image renders at twice its intended size.
@ -389,15 +420,20 @@ Below is a list of available icons (some are aliases):
- dashboard
- dashboards
- data-explorer
- delete
- download
- duplicate
- edit
- expand
- export
- eye
- eye-closed
- eye-open
- feedback
- fullscreen
- gear
- graph
- hide
- influx
- influx-icon
- nav-admin
@ -422,7 +458,11 @@ Below is a list of available icons (some are aliases):
- search
- settings
- tasks
- toggle
- trash
- trashcan
- triangle
- view
- wrench
- x
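As a usage sketch, one of the icon names above is passed as the shortcode argument (the `icon` shortcode name and argument form are assumed here; they are not shown in this diff):

```md
{{< icon "trash" >}}
```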
@ -436,12 +476,15 @@ Provide a visual example of the navigation item using the `nav-icon` shortcode
The following case-insensitive values are supported:
- admin
- data explorer, data-explorer
- admin, influx
- data-explorer, data explorer
- dashboards
- tasks
- organizations, orgs
- configuration, config
- monitor, alerts, bell
- cloud, usage
- disks, load data, load-data
- settings
- feedback
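For example, to render the navigation item for the Tasks icon, pass one of the values above as the shortcode argument (the exact argument form is assumed, not shown in this diff):

```md
{{< nav-icon "tasks" >}}
```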
### InfluxDB UI notification messages
In some cases, documentation references a notification message that appears in
@ -491,6 +534,13 @@ menu:
### Image naming conventions
Save images using the following naming format: `version-context-description.png`. For example, `2-0-visualizations-line-graph.png` or `2-0-tasks-add-new.png`. Specify a version other than 2.0 only if the image is specific to that version.
## InfluxDB API documentation
InfluxData uses [Redoc](https://github.com/Redocly/redoc) to generate the full
InfluxDB API documentation when documentation is deployed.
Redoc generates HTML documentation using the InfluxDB `swagger.yml`.
For more information about generating InfluxDB API documentation, see the
[API Documentation README](https://github.com/influxdata/docs-v2/tree/master/api-docs#readme).
## New Versions of InfluxDB
Version bumps occur regularly in the documentation.
Each minor version has its own directory with unique content.
@ -536,7 +586,10 @@ _This example assumes v2.0 is the most recent version and v2.1 is the new versio
latest_version: v2.1
```
7. Commit the changes and push the new branch to GitHub.
7. Copy the InfluxDB `swagger.yml` specific to the new version into the
`/api-docs/v<version-number>/` directory.
8. Commit the changes and push the new branch to GitHub.
These changes lay the foundation for the new version.
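Step 7 above can be sketched as follows; the v2.1 version number and file locations are hypothetical, and a placeholder file stands in for the `swagger.yml` exported from the influxdb project:

```shell
# From the root of a scratch copy of the docs repo layout
mkdir -p docs-v2/api-docs/v2.1
# Stand-in for the swagger.yml exported from the influxdb project
echo "openapi: 3.0.0" > docs-v2/swagger.yml
# Step 7: copy the version-specific spec into place
cp docs-v2/swagger.yml docs-v2/api-docs/v2.1/swagger.yml
```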

21
LICENSE Normal file

@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2019 InfluxData, Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@ -1,38 +1,50 @@
<p align="center">
<img src="/static/img/influx-logo-cubo-dark.png" width="200">
</p>
# InfluxDB 2.0 Documentation
This repository contains the InfluxDB 2.x documentation published at [docs.influxdata.com](https://docs.influxdata.com).
## Contributing
We welcome and encourage community contributions. For information about contributing to the InfluxData documentation, see [Contribution guidelines](CONTRIBUTING.md).
We welcome and encourage community contributions.
For information about contributing to the InfluxData documentation, see [Contribution guidelines](CONTRIBUTING.md).
## Run the docs locally
The InfluxData documentation uses [Hugo](https://gohugo.io/), a static site
generator built in Go.
## Reporting a Vulnerability
InfluxData takes security and our users' trust very seriously.
If you believe you have found a security issue in any of our open source projects,
please responsibly disclose it by contacting security@influxdata.com.
More details about security vulnerability reporting,
including our GPG key, can be found at https://www.influxdata.com/how-to-report-security-vulnerabilities/.
### Clone this repository
[Clone this repository](https://help.github.com/articles/cloning-a-repository/)
to your local machine.
## Running the docs locally
### Install Hugo
See the Hugo documentation for information about how to
[download and install Hugo](https://gohugo.io/getting-started/installing/).
1. [**Clone this repository**](https://help.github.com/articles/cloning-a-repository/) to your local machine.
### Install NodeJS & Asset Pipeline Tools
This project uses tools written in NodeJS to build and process stylesheets and javascript.
In order for assets to build correctly, [install NodeJS](https://nodejs.org/en/download/)
and run the following command to install the necessary tools:
2. **Install Hugo**
```sh
npm i -g postcss-cli autoprefixer
```
The InfluxData documentation uses [Hugo](https://gohugo.io/), a static site generator built in Go.
See the Hugo documentation for information about how to [download and install Hugo](https://gohugo.io/getting-started/installing/).
### Start the hugo server
Hugo provides a local development server that generates the HTML pages, builds
the static assets, and serves them at `localhost:1313`.
3. **Install NodeJS & Asset Pipeline Tools**
Start the hugo server with:
This project uses tools written in NodeJS to build and process stylesheets and javascript.
In order for assets to build correctly, [install NodeJS](https://nodejs.org/en/download/)
and run the following command to install the necessary tools:
```bash
hugo server
```
```
npm i -g postcss-cli autoprefixer
```
View the docs at [localhost:1313](http://localhost:1313).
4. **Start the Hugo server**
Hugo provides a local development server that generates the HTML pages, builds
the static assets, and serves them at `localhost:1313`.
Start the Hugo server from the repository:
```
$ cd docs-v2/
$ hugo server
```
View the docs at [localhost:1313](http://localhost:1313).

37
api-docs/README.md Normal file

@ -0,0 +1,37 @@
## Generate InfluxDB API docs
InfluxDB uses [Redoc](https://github.com/Redocly/redoc/) and
[redoc-cli](https://github.com/Redocly/redoc/blob/master/cli/README.md) to generate
API documentation from the InfluxDB `swagger.yml`.
To minimize repo size, the generated API documentation HTML is gitignored and
therefore not committed directly to the docs repo.
The InfluxDB docs deployment process uses swagger files in the `api-docs` directory
to generate version-specific API documentation.
### Versioned swagger files
Structure versioned swagger files using the following pattern:
```
api-docs/
├── v2.0/
│ └── swagger.yml
├── v2.1/
│ └── swagger.yml
├── v2.2/
│ └── swagger.yml
└── etc...
```
### Generate API docs locally
Because the API documentation HTML is gitignored, you must manually generate it
to view the API docs locally.
From the root of the docs repo, run:
```sh
# Install redoc-cli
npm install -g redoc-cli
# Generate the API docs
cd api-docs && bash generate-api-docs.sh
```


@ -0,0 +1,42 @@
#!/bin/bash -e
# Get list of versions from directory names
versions="$(ls -d -- */)"
for version in $versions
do
# Trim the trailing slash off the directory name
version="${version%/}"
# Build the menu key from the version (e.g. "v2.0" -> "v2_0_ref")
menu="${version//./_}_ref"
# Generate the frontmatter
frontmatter="---
title: InfluxDB $version API documentation
description: >
The InfluxDB API provides a programmatic interface for interactions with InfluxDB $version.
layout: api
menu:
$menu:
parent: InfluxDB v2 API
name: View full API docs
weight: 102
---
"
# Use Redoc to generate the API html
redoc-cli bundle -t template.hbs \
--title="InfluxDB $version API documentation" \
--options.sortPropsAlphabetically \
--options.menuToggle \
--options.hideHostname \
--templateOptions.version="$version" \
$version/swagger.yml
# Create temp file with frontmatter and Redoc html
echo "$frontmatter" >> $version.tmp
cat redoc-static.html >> $version.tmp
# Remove the Redoc file and move the tmp file to its proper place
rm -f redoc-static.html
mv $version.tmp ../content/$version/api.html
done

52
api-docs/template.hbs Normal file

@ -0,0 +1,52 @@
<!DOCTYPE html>
<html>
<head>
<meta charset="utf8" />
<title>{{title}}</title>
<meta name="description" content="The InfluxDB API provides a programmatic interface for interactions with InfluxDB {{templateOptions.version}}.">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="shortcut icon" href="/img/favicon.png" type="image/png" sizes="32x32">
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-45024174-12"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-45024174-12');
</script>
<meta name="google-site-verification" content="_V6CNhaIIgVsTO9max_ECw7DUfPL-ZGE7G03MQgEGMU" />
<style>
body {
padding: 0;
margin: 0;
}
</style>
{{#unless disableGoogleFont}}<link href="https://fonts.googleapis.com/css?family=Roboto+Mono:500,500i,700,700i|Roboto:400,400i,700,700i|Rubik:400,400i,500,500i,700,700i" rel="stylesheet">{{/unless}}
{{{redocHead}}}
<link rel="stylesheet" type="text/css" href="/api.css">
</head>
<body>
<div id="loading">
<div class="spinner"></div>
</div>
<div id="influx-header">
<a href="/{{templateOptions.version}}">InfluxDB {{templateOptions.version}} Docs</a>
</div>
{{{redocHTML}}}
<script type="text/javascript">
function removeFadeOut( el, speed ) {
var seconds = speed/1000;
el.style.transition = "opacity "+seconds+"s ease";
el.style.opacity = 0;
setTimeout(function() {
el.parentNode.removeChild(el);
}, speed);
}
removeFadeOut(document.getElementById('loading'), 500);
</script>
</body>
</html>

9851
api-docs/v2.0/swagger.yml Normal file

File diff suppressed because it is too large.


@ -35,9 +35,9 @@ $('.article a[href^="#"]:not(' + elementWhiteList + ')').click(function (e) {
///////////////////////////// Left Nav Interactions /////////////////////////////
$(".children-toggle").click(function(e) {
e.preventDefault()
$(this).toggleClass('open');
$(this).siblings('.children').toggleClass('open');
e.preventDefault()
$(this).toggleClass('open');
$(this).siblings('.children').toggleClass('open');
})
//////////////////////////// Mobile Contents Toggle ////////////////////////////
@ -52,28 +52,28 @@ $('#contents-toggle-btn').click(function(e) {
function tabbedContent(container, tab, content) {
// Add the active class to the first tab in each tab group,
// in case it wasn't already set in the markup.
$(container).each(function () {
$(tab, this).removeClass('is-active');
$(tab + ':first', this).addClass('is-active');
});
// Add the active class to the first tab in each tab group,
// in case it wasn't already set in the markup.
$(container).each(function () {
$(tab, this).removeClass('is-active');
$(tab + ':first', this).addClass('is-active');
});
$(tab).on('click', function(e) {
e.preventDefault();
$(tab).on('click', function(e) {
e.preventDefault();
// Make sure the tab being clicked is marked as active, and make the rest inactive.
$(this).addClass('is-active').siblings().removeClass('is-active');
// Make sure the tab being clicked is marked as active, and make the rest inactive.
$(this).addClass('is-active').siblings().removeClass('is-active');
// Render the correct tab content based on the position of the tab being clicked.
const activeIndex = $(tab).index(this);
$(content).each(function(i) {
if (i === activeIndex) {
$(this).show();
$(this).siblings(content).hide();
}
});
});
// Render the correct tab content based on the position of the tab being clicked.
const activeIndex = $(tab).index(this);
$(content).each(function(i) {
if (i === activeIndex) {
$(this).show();
$(this).siblings(content).hide();
}
});
});
}
tabbedContent('.code-tabs-wrapper', '.code-tabs p a', '.code-tab-content');
@ -82,8 +82,8 @@ tabbedContent('.tabs-wrapper', '.tabs p a', '.tab-content');
/////////////////////////////// Truncate Content ///////////////////////////////
$(".truncate-toggle").click(function(e) {
e.preventDefault()
$(this).closest('.truncate').toggleClass('closed');
e.preventDefault()
$(this).closest('.truncate').toggleClass('closed');
})
//////////////////// Replace Missing Images with Placeholder ///////////////////
@ -92,3 +92,11 @@ $(".article--content img").on("error", function() {
$(this).attr("src", "/img/coming-soon.svg");
$(this).attr("style", "max-width:500px;");
});
////////////////////////// Inject tooltips on load //////////////////////////////
$('.tooltip').each( function(){
$toolTipText = $('<div/>').addClass('tooltip-text').text($(this).attr('data-tooltip-text'));
$toolTipElement = $('<div/>').addClass('tooltip-container').append($toolTipText);
$(this).prepend($toolTipElement);
});


@ -0,0 +1,50 @@
// Count tag elements
function countTag(tag) {
return $(".visible[data-tags*='" + tag + "']").length
}
function getFilterCounts() {
$('#plugin-filters label').each(function() {
var tagName = $('input', this).attr('name').replace(/[\W]+/, "-");
var tagCount = countTag(tagName);
$(this).attr('data-count', '(' + tagCount + ')');
if (tagCount <= 0) {
$(this).fadeTo(200, 0.25);
} else {
$(this).fadeTo(400, 1.0);
}
})
}
// Get initial filter count on page load
getFilterCounts()
$("#plugin-filters input").click(function() {
// List of tags to hide
var tagArray = $("#plugin-filters input:checkbox:checked").map(function(){
return $(this).attr('name').replace(/[\W]+/, "-");
}).get();
// List of tags to restore
var restoreArray = $("#plugin-filters input:checkbox:not(:checked)").map(function(){
return $(this).attr('name').replace(/[\W]+/, "-");
}).get();
// Actions for filter select
if ( $(this).is(':checked') ) {
$.each( tagArray, function( index, value ) {
$(".plugin-card.visible:not([data-tags~='" + value + "'])").removeClass('visible').fadeOut()
})
} else {
$.each( restoreArray, function( index, value ) {
$(".plugin-card:not(.visible)[data-tags~='" + value + "']").addClass('visible').fadeIn()
})
$.each( tagArray, function( index, value ) {
$(".plugin-card.visible:not([data-tags~='" + value + "'])").removeClass('visible').hide()
})
}
// Refresh filter count
getFilterCounts()
});


@ -0,0 +1,266 @@
@import "tools/color-palette";
@import "tools/icomoon";
// Fonts
$rubik: 'Rubik', sans-serif;
$roboto: 'Roboto', sans-serif;
$roboto-mono: 'Roboto Mono', monospace;
// Font weights
$medium: 500;
$bold: 700;
//////////////////////////////////// LOADER ////////////////////////////////////
#loading {
position: fixed;
width: 100vw;
height: 100vh;
z-index: 1000;
background-color: $g20-white;
opacity: 1;
transition: opacity .5s;
}
@keyframes spinner {
to {transform: rotate(360deg);}
}
.spinner:before {
content: '';
box-sizing: border-box;
position: absolute;
top: 50%;
left: 50%;
width: 50px;
height: 50px;
margin-top: -25px;
margin-left: -25px;
border-radius: 50%;
border: 3px solid $g16-pearl;
border-top-color: $cp-comet;
animation: spinner .6s linear infinite;
}
//////////////////////////////// InfluxDB Header ///////////////////////////////
#influx-header {
font-family: $rubik;
padding: 15px 20px;
display: block;
background-color: $wp-violentdark;
a {
color: $g20-white;
text-decoration: none;
transition: color .2s;
&:hover {
color: $b-pool;
}
&:before {
content: '\e918';
font-family: 'icomoon';
margin-right: .65rem;
}
}
}
////////////////////////////////////////////////////////////////////////////////
.cjtbAK {
h1,h2,h3,h4,h5,h6,
p,li,th,td {
font-family: $rubik !important;
}
}
#redoc {
h1,h2,h3,h4,h5,h6 {
font-weight: $medium !important;
}
}
// Section title padding
.dluJDj {
padding: 20px 0;
}
// Page h1
.dTJWQH {
color: $g7-graphite;
font-size: 2rem;
}
// Download button
.jIdpVJ {
background: $b-dodger;
color: $g20-white;
border: none;
border-radius: 3px;
font-family: $rubik;
font-size: .85rem;
font-weight: $medium;
transition: background-color .2s;
&:hover {
background-color: $b-pool;
}
}
// Tag h1s
.WxWXp {
color: $g7-graphite;
font-size: 1.75rem;
}
// Summary h2s and table headers
.ioYTqA, .bxcHYI, .hoUoen {
color: $g7-graphite;
}
// h3s
.espozG {
color: $g8-storm;
}
// Links
.bnFPhO a { color: $b-dodger;
&:visited {color: $b-dodger;}
}
.redoc-json {
font-family: $roboto-mono !important;
}
// Inline Code
.flfxUM code,
.gDsWLk code,
.kTVySD {
font-family: $roboto-mono !important;
color: $cp-marguerite;
background: $cp-titan;
border-color: $cp-titan;
}
// Required tags
.jsTAxL {
color: $o-curacao;
}
///////////////////////////// RESPONSE COLOR BLOCKS ////////////////////////////
// Green
.hLVzSF {
background-color: rgba($gr-wasabi, .5);
color: $gr-emerald;
}
// Red
.byLrBg {
background-color: rgba($o-marmelade, .35);
color: $o-curacao;
}
/////////////////////////////////// LEFT NAV ///////////////////////////////////
// Left nav background
.gZdDsM {
background-color: $g19-ghost;
}
.gpbcFk:hover, .sc-eTuwsz.active {
background-color: rgb(237, 237, 237);
}
// List item text
.SmuWE, .gcUzvG, .bbViyS, .sc-hrWEMg label {
font-family: $rubik !important;
}
.fyUykq {
font-weight: $medium;
}
// Request method tags
.cFwMcp {
&.post { background-color: $b-curious; }
&.get { background-color: $gr-canopy; }
&.put { background-color: $cp-comet; }
&.patch { background-color: $ch-keylime; }
&.delete { background-color: $o-curacao; }
}
// Active nav section
.gcUzvG, .iNzLCk:hover {
color: $m-magenta;
}
/////////////////////////////// RIGHT CODE COLUMN //////////////////////////////
// Right column backgrounds
.dtUibw, .fLUKgj {
background-color: $wp-jagger;
h3,h4,h5,h6 {
font-family: $rubik !important;
font-weight: $medium !important;
}
}
// Code backgrounds
.irpqyy > .react-tabs__tab-panel {
background-color: $wp-telopea;
}
.dHLKeu, .fVaxnA {
padding-left: 10px;
background-color: $wp-telopea;
}
// Response code tabs
.irpqyy > ul > li {
background-color: $wp-telopea;
border-radius: 3px;
&.react-tabs__tab--selected{ color: $cp-blueviolet; }
&.tab-error { color: $o-fire; }
&.tab-success { color: $gr-viridian; }
}
// Request methods
.bNYCAJ,
.jBjYbV,
.hOczRB,
.fRsrDc,
.hPskZd {
font-family: $rubik;
font-weight: $medium;
letter-spacing: .04em;
border-radius: 3px;
}
.bNYCAJ { background-color: $b-curious; } /* Post */
.jBjYbV { background-color: $gr-canopy; } /* Get */
.hOczRB { background-color: $cp-comet; } /* Put */
.fRsrDc { background-color: $ch-chartreuse; color: $ch-olive; } /* Patch */
.hPskZd { background-color: $o-curacao; } /* Delete */
// Content type block
.gzAoUb {
background-color: rgba($wp-jagger, .4);
font-family: $rubik;
}
.iENVAs { font-family: $roboto-mono; }
.dpMbau { font-family: $rubik; }
// Code controls
.fCJmC {
font-family: $rubik;
span { border-radius: 3px; }
}
// Code blocks
.kZHJcC { font-family: $roboto-mono; }
.jCgylq {
.token.string {
color: $gr-honeydew;
& + a { color: $b-malibu; }
}
.token.boolean { color: #f955b0; }
}


@ -18,7 +18,8 @@
}
}
h2,h3,h4,h5,h6 {
& + .highlight pre { margin-top: .5rem; }
& + .highlight pre { margin-top: .5rem }
& + pre { margin-top: .5rem }
& + .code-tabs-wrapper { margin-top: 0; }
}
h1 {
@ -61,7 +62,7 @@
p,li {
color: $article-text;
line-height: 1.6rem;
line-height: 1.7rem;
}
p {
@ -106,10 +107,12 @@
"article/lists",
"article/note",
"article/pagination-btns",
"article/related",
"article/scrollbars",
"article/tabbed-content",
"article/tables",
"article/tags",
"article/telegraf-plugins",
"article/truncate",
"article/warn";


@ -26,22 +26,22 @@
&.ui-toggle {
display: inline-block;
position: relative;
width: 34px;
height: 22px;
background: #1C1C21;
border: 2px solid #383846;
width: 28px;
height: 16px;
background: $b-pool;
border-radius: .7rem;
vertical-align: text-bottom;
vertical-align: text-top;
margin-top: 2px;
.circle {
display: inline-block;
position: absolute;
border-radius: 50%;
height: 12px;
width: 12px;
background: #22ADF6;
top: 3px;
right: 3px;
height: 8px;
width: 8px;
background: $g20-white;
top: 4px;
right: 4px;
}
}
}


@ -1,9 +1,10 @@
.cards {
display: flex;
justify-content: space-between;
flex-direction: column;
flex-direction: row;
position: relative;
overflow: hidden;
border-radius: $radius 0 0 $radius;
min-height: 700px;
background: linear-gradient(55deg, $landing-lg-gradient-left, $landing-lg-gradient-right );
a {
@ -21,62 +22,110 @@
}
}
.main {
width: 66%;
padding: 5rem 2vw 5rem 4.5vw;
display: flex;
justify-content: center;
flex-direction: column;
text-align: center;
z-index: 1;
}
.group {
display: flex;
flex-wrap: wrap;
width: 34%;
justify-content: flex-end;
}
.card {
text-align: center;
z-index: 1;
&.full {
width: 100%;
padding: 5rem 2rem;
}
&.sm {
display: flex;
flex-direction: column;
justify-content: center;
text-align: left;
width: 90%;
position: relative;
margin-bottom: 1px;
padding: 2rem 3.5vw 2rem 3vw;
min-height: 140px;
background: $landing-sm-bg;
transition: background-color .4s, width .2s;
&:last-child{ margin-bottom: 0; }
&.quarter {
flex-grow: 2;
margin: 1px;
padding: 1.5rem;
background: rgba($landing-sm-gradient-overlay, .65);
transition: background-color .4s;
&:hover {
background: rgba($landing-sm-gradient-overlay, .9);
background: $landing-sm-bg-hover;
width: 100%;
h3 {
transform: translateY(-1.2rem);
font-weight: $medium;
font-size: 1.2rem;
}
p {
opacity: 1;
transition-delay: 100ms;
}
}
h3 {
font-size: 1.1rem;
transition: all .2s;
}
p {
position: absolute;
width: 80%;
color: $g20-white;
font-size: .95rem;
line-height: 1.25rem;
opacity: 0;
transition: opacity .2s;
}
}
h1,h2,h3,h4 {
font-weight: 300;
text-align: center;
color: $g20-white;
}
h1 {
margin: 0 0 1.25rem;
font-size: 2.25rem;
font-size: 2.5rem;
z-index: 1;
}
h3 { font-size: 1.25rem;}
&#get-started {
text-align: center;
.btn {
display: inline-block;
padding: .85rem 1.5rem;
color: $g20-white;
font-weight: bold;
background: rgba($g20-white, .25);
border: 2px solid rgba($g20-white, .5);
border-radius: $radius;
padding: 1.25rem;
margin: 0 20% .35rem;
color: $landing-btn-text;
font-size: 1.1rem;
font-weight: $medium;
background: $landing-btn-bg;
transition: background-color .2s, color .2s;
border-radius: $radius;
&.oss:after {
content: 'alpha';
display: inline-block;
vertical-align: top;
font-style: italic;
font-size: .75em;
margin-left: .45rem;
padding: .1rem .3rem .12rem;
border-radius: $radius;
border: 1px solid rgba($landing-btn-text, .5);
transition: border-color .2s;
}
&:hover {
background: $g20-white;
color: $b-pool;
background: $landing-btn-bg-hover;
color: $landing-btn-text-hover;
&:after { border-color: rgba($landing-btn-text-hover, .5) }
}
}
}
@ -97,17 +146,59 @@
}
}
@media (max-width: 1150px) {
.cards {
flex-direction: column;
.main { width: 100%; }
.group {
width: 100%;
.card.sm {
margin-right: 1px;
padding: 2rem;
flex-grow: 2;
width: 49%;
text-align: center;
background: $landing-sm-bg-alt;
h3 {
margin: 0 0 .5rem;
font-size: 1.1rem;
font-weight: $medium;
}
p {
opacity: .6;
position: relative;
width: auto;
margin: 0;
}
&:hover {
background: $landing-sm-bg-hover;
h3 { transform: none; }
p { opacity: 1; }
}
}
}
}
}
@include media(small) {
.cards {
.group { flex-direction: column; }
.card{
&.full { padding: 2.5rem;}
&.quarter {
.group {
flex-direction: column;
.card.sm {
width: 100%;
max-width: 100%;
padding: 1.25rem;
}
}
.card{
h1 { font-size: 2rem; }
&.main {
padding: 2.5rem;
&#get-started .btn {
font-size: 1rem;
margin: 0 0 .35rem;
}
}
}
}
}


@ -8,7 +8,7 @@ code,pre {
p,li,table,h2,h3,h4,h5,h6 {
code {
padding: .15rem .45rem .25rem;
padding: .1rem .4rem .2rem;
border-radius: $radius;
color: $article-code;
white-space: nowrap;
@ -54,7 +54,9 @@ pre {
overflow-y: hidden;
code {
padding: 0;
line-height: 1.4rem;
font-size: .95rem;
line-height: 1.5rem;
white-space: pre;
}
}


@ -0,0 +1,15 @@
.related {
border-top: 1px solid $article-hr;
padding-top: 1.5rem;
h4 { font-size: 1.15rem; }
ul {
list-style: none;
padding: 0;
margin-top: 0;
}
li {
margin: .5rem 0;
line-height: 1.25rem;
}
}


@ -2,8 +2,8 @@
.tags {
border-top: 1px solid $article-hr;
padding-top: 1.5rem;
margin-top: 2rem;
padding-top: 1.75rem;
margin: 2rem 0 1rem;
.tag {
background: $body-bg;
@ -15,3 +15,9 @@
font-size: .8rem;
}
}
.related + .tags {
border: none;
padding-top: 0;
margin: 1.5rem 0 1rem;
}


@ -0,0 +1,217 @@
/////////////////////// Styles for Telegraf plugin cards ///////////////////////
.plugin-card {
position: relative;
padding: 1rem 1.5rem;
margin-bottom: .5rem;
justify-content: center;
align-items: center;
background: rgba($body-bg, .4);
border-radius: $radius;
h3 {
padding: 0;
margin-top: .25rem;
}
&.new h3:after {
content: "New";
margin-left: .3rem;
padding: .25rem .5rem;
font-style: italic;
color: $nav-active;
font-size: 1.2rem;
}
p {
&.meta {
margin: .75rem 0;
font-weight: $medium;
line-height: 1.75rem;
.deprecated {
margin-left: .5rem;
font-style: italic;
color: $article-code-accent7;
}
}
}
& .info {
& > p:last-child { margin-bottom: .5rem; }
& > ul:last-child { margin-bottom: .5rem; }
& > ol:last-child { margin-bottom: .5rem; }
}
.github-link {
position: absolute;
top: 0;
right: 0.5rem;
opacity: 0;
transition: opacity .2s, background .2s, color 2s;
.icon-github {
font-size: 1.2rem;
margin: 0 .25rem 0 0;
}
}
&:hover {
.github-link { opacity: 1; }
}
// Special use-case for using block quotes in the yaml provided by the data file
blockquote {
border-color: $article-note-base;
background: rgba($article-note-base, .12);
h3,h4,h5,h6 { color: $article-note-heading; }
p, li {
color: $article-note-text;
font-size: 1rem;
font-style: normal;
}
strong { color: inherit; }
a {
color: $article-note-link;
code:after {
border-color: transparent rgba($article-note-code, .35) transparent transparent;
}
&:hover {
color: $article-note-link-hover;
code:after {
border-color: transparent $article-note-link-hover transparent transparent;
}
}
}
ol li:before { color: $article-note-text; }
code, pre{
color: $article-note-code;
background: $article-note-code-bg;
}
}
}
//////////////////////////////// Plugin Filters ////////////////////////////////
#plugin-filters {
display: flex;
flex-flow: row wrap;
align-items: flex-start;
.filter-category {
flex: 1 1 200px;
margin: 0 1.25rem 1.25rem 0;
max-width: 33%;
&.two-columns {
flex: 1 2 400px;
max-width: 66%;
.filter-list {
columns: 2;
}
}
}
h5 {
border-bottom: 1px solid rgba($article-text, .25);
padding-bottom: .65rem;
}
.filter-list {
padding: 0;
margin: .5rem 0 0;
list-style: none;
li {
margin: 0;
line-height: 1.35rem;
}
}
label {
display: block;
padding: .25rem 0;
color: $article-text;
position: relative;
&:after {
content: attr(data-count);
margin-left: .25rem;
font-size: .85rem;
opacity: .5;
}
}
.checkbox {
display: inline-block;
height: 1.15em;
width: 1.15em;
background: rgba($article-text, .05);
margin-right: .3rem;
vertical-align: text-top;
border-radius: $radius;
cursor: pointer;
border: 1.5px solid rgba($article-text, .2);
user-select: none;
}
input[type='checkbox'] {
margin-right: -1.1rem;
padding: 0;
vertical-align: top;
opacity: 0;
cursor: pointer;
& + .checkbox:after {
content: "";
display: block;
position: absolute;
height: .5rem;
width: .5rem;
border-radius: 50%;
background: $article-link;
top: .65rem;
left: .35rem;
opacity: 0;
transform: scale(2);
transition: all .2s;
}
&:checked + .checkbox:after {
opacity: 1;
transform: scale(1);
}
}
}
////////////////////////////////////////////////////////////////////////////////
///////////////////////////////// MEDIA QUERIES ////////////////////////////////
////////////////////////////////////////////////////////////////////////////////
@media(max-width: 1100px) {
#plugin-filters {
.filter-category {
max-width: 50%;
&.two-columns, &.three-columns {
max-width: 100%;
}
}
}
}
@include media(small) {
#plugin-filters {
.filter-category {
max-width: 100%;
}
}
.plugin-card {
.github-link {
opacity: 1;
padding: .25rem .35rem .35rem;
line-height: 0;
.icon-github { margin: 0; }
.hide { display: none; }
}
}
}

View File

@ -0,0 +1,4 @@
// InfluxData API Docs style overrides
// These override styles generated by ReDoc
@import "layouts/api-overrides";

View File

@ -1,9 +1,10 @@
// InfluxData Docs Default Theme (Light)
// Import Tools
@import "tools/icomoon";
@import "tools/media-queries.scss";
@import "tools/mixins.scss";
@import "tools/icomoon",
"tools/media-queries.scss",
"tools/mixins.scss",
"tools/tooltips";
// Import default light theme
@import "themes/theme-light.scss";

View File

@ -86,7 +86,7 @@ $article-note-table-row-alt: #3B2862;
$article-note-table-scrollbar: $np-deepnight;
$article-note-shadow: $np-deepnight;
$article-note-code: $cp-comet;
$article-note-code-bg: $wp-telopea;
$article-note-code-bg: $wp-jaguar;
$article-note-code-accent1: #567375;
$article-note-code-accent2: $b-pool;
$article-note-code-accent3: $gr-viridian;
@ -168,5 +168,17 @@ $error-page-btn-hover-text: $b-dodger;
// Landing Page colors
$landing-lg-gradient-left: $wp-violentdark;
$landing-lg-gradient-right: $cp-minsk;
$landing-sm-gradient-overlay: $b-dodger;
$landing-sm-bg: $cp-victoria;
$landing-sm-bg-alt: $cp-victoria;
$landing-sm-bg-hover: $b-dodger;
$landing-btn-text: $g20-white;
$landing-btn-bg: $b-dodger;
$landing-btn-text-hover: $b-dodger;
$landing-btn-bg-hover: $g20-white;
$landing-artwork-color: $cp-minsk;
// Tooltip colors
$tooltip-color: $ch-chartreuse;
$tooltip-color-alt: $ch-canary;
$tooltip-bg: $g20-white;
$tooltip-text: $cp-minsk;

View File

@ -46,7 +46,7 @@ $nav-active: $m-magenta !default;
// Article Content
$article-bg: $g20-white !default;
$article-heading: $cp-purple !default;
$article-heading: $cp-marguerite !default;
$article-heading-alt: $g7-graphite !default;
$article-text: $g8-storm !default;
$article-bold: $g8-storm !default;
@ -167,7 +167,19 @@ $error-page-btn-hover: $b-pool !default;
$error-page-btn-hover-text: $g20-white !default;
// Landing Page colors
$landing-lg-gradient-left: $cp-marguerite !default;
$landing-lg-gradient-right: $b-pool !default;
$landing-sm-gradient-overlay: $cp-blueviolet !default;
$landing-lg-gradient-left: $cp-jakarta !default;
$landing-lg-gradient-right: $wp-heart !default;
$landing-sm-bg: $wp-seance !default;
$landing-sm-bg-alt: $wp-jagger !default;
$landing-sm-bg-hover: $b-dodger !default;
$landing-btn-text: $g20-white !default;
$landing-btn-bg: $b-dodger !default;
$landing-btn-text-hover: $b-dodger !default;
$landing-btn-bg-hover: $g20-white !default;
$landing-artwork-color: rgba($g20-white, .15) !default;
// Tooltip colors
$tooltip-color: $m-magenta !default;
$tooltip-color-alt: $wp-trance !default;
$tooltip-bg: $m-lavander !default;
$tooltip-text: $g20-white !default;

View File

@ -24,6 +24,7 @@ $g19-ghost: #FAFAFC;
$g20-white: #FFFFFF; // Brand color
// Warm Purples - Magentas
$wp-jaguar: #1d0135;
$wp-telopea: #23043E;
$wp-violentdark: #2d0749;
$wp-violet: #32094E;

View File

@ -1,10 +1,10 @@
@font-face {
font-family: 'icomoon';
src: url('fonts/icomoon.eot?972u0y');
src: url('fonts/icomoon.eot?972u0y#iefix') format('embedded-opentype'),
url('fonts/icomoon.ttf?972u0y') format('truetype'),
url('fonts/icomoon.woff?972u0y') format('woff'),
url('fonts/icomoon.svg?972u0y#icomoon') format('svg');
src: url('fonts/icomoon.eot?9r9zke');
src: url('fonts/icomoon.eot?9r9zke#iefix') format('embedded-opentype'),
url('fonts/icomoon.ttf?9r9zke') format('truetype'),
url('fonts/icomoon.woff?9r9zke') format('woff'),
url('fonts/icomoon.svg?9r9zke#icomoon') format('svg');
font-weight: normal;
font-style: normal;
}
@ -24,9 +24,24 @@
-moz-osx-font-smoothing: grayscale;
}
.icon-ui-disks-nav:before {
content: "\e93c";
}
.icon-ui-wrench-nav:before {
content: "\e93d";
}
.icon-ui-eye-closed:before {
content: "\e956";
}
.icon-ui-eye-open:before {
content: "\e957";
}
.icon-ui-chat:before {
content: "\e93a";
}
.icon-ui-bell:before {
content: "\e93b";
}
.icon-ui-cloud:before {
content: "\e93f";
}
@ -216,6 +231,9 @@
.icon-loop2:before {
content: "\ea2e";
}
.icon-github:before {
content: "\eab0";
}
.icon-tux:before {
content: "\eabd";
}

View File

@ -0,0 +1,91 @@
@import "themes/theme-light.scss";
// Font weights
$medium: 500;
$bold: 700;
// Border radius
$radius: 3px;
////////////////////////////////// Tool Tips //////////////////////////////////
.tooltip {
position: relative;
display: inline;
font-weight: $medium;
color: $tooltip-color;
&:hover {
.tooltip-container { visibility: visible; }
.tooltip-text {
opacity: 1;
transform: translate(-50%,-2.5rem);
}
}
.tooltip-container {
position: absolute;
top: 0;
left: 50%;
transform: translateX(-50%);
overflow: visible;
visibility: hidden;
}
.tooltip-text {
font-weight: $medium;
position: absolute;
border-radius: $radius;
padding: .15rem .75rem;
font-size: 0.9rem;
line-height: 1.75rem;
left: 50%;
transform: translate(-50%,-1.75rem);
transition: all 0.2s ease;
white-space: nowrap;
opacity: 0;
color: $tooltip-text;
background-color: $tooltip-bg;
&:after {
content: '';
position: absolute;
left: 50%;
bottom: -14px;
transform: translateX(-50%);
border-top: 8px solid $tooltip-bg;
border-right: 8px solid transparent;
border-bottom: 8px solid transparent;
border-left: 8px solid transparent;
}
}
}
th .tooltip {
color: $tooltip-color-alt;
&:hover {
.tooltip-container { visibility: visible; }
.tooltip-text {
opacity: 1;
transform: translate(-50%,1.75rem);
}
}
.tooltip-text {
transform: translate(-50%,1rem);
&:after {
content: '';
position: absolute;
height: 0;
left: 50%;
top: -14px;
transform: translateX(-50%);
border-top: 8px solid transparent;
border-right: 8px solid transparent;
border-bottom: 8px solid $tooltip-bg;
border-left: 8px solid transparent;
}
}
}

View File

@ -10,8 +10,8 @@ menu:
#### Welcome
Welcome to the InfluxDB v2.0 documentation!
InfluxDB is an open source time series database designed to handle high write and query loads.
InfluxDB is an open source time series database designed to handle high write and query workloads.
This documentation is meant to help you learn how to use and leverage InfluxDB to meet your needs.
Common use cases include infrastructure monitoring, IoT data collection, events handling and more.
Common use cases include infrastructure monitoring, IoT data collection, events handling, and more.
If your use case involves time series data, InfluxDB is purpose-built to handle it.

View File

@ -1,12 +0,0 @@
---
title: About InfluxDB Cloud 2.0
description: Important information about InfluxDB Cloud 2.0 including release notes and known issues.
weight: 10
menu:
v2_0_cloud:
name: About InfluxDB Cloud
---
Important information about InfluxDB Cloud 2.0 including known issues and release notes.
{{< children >}}

View File

@ -1,15 +0,0 @@
---
title: Known issues in InfluxDB Cloud
description: Information related to known issues in InfluxDB Cloud 2.
weight: 102
menu:
v2_0_cloud:
name: Known issues
parent: About InfluxDB Cloud
---
The following issues currently exist in {{< cloud-name >}}:
- IDPE 2868: Users can delete a token with an active Telegraf configuration pointed to it.
- [TELEGRAF-5600](https://github.com/influxdata/telegraf/issues/5600): Improve error message in Telegraf when the bucket it's reporting to is not found.
- [INFLUXDB-12687](https://github.com/influxdata/influxdb/issues/12687): Create organization button should only be displayed for users with permissions to create an organization.

View File

@ -0,0 +1,12 @@
---
title: Manage your InfluxDB Cloud 2.0 Account
description: >
View and manage information related to your InfluxDB Cloud 2.0 account such as
pricing plans, data usage, account cancelation, etc.
weight: 3
menu:
v2_0_cloud:
name: Account management
---
{{< children >}}

View File

@ -0,0 +1,99 @@
---
title: Add payment method and view billing
list_title: Add payment and view billing
description: >
Add your InfluxDB Cloud payment method and view billing information.
weight: 103
menu:
v2_0_cloud:
parent: Account management
name: Add payment and view billing
---
- Hover over the **Usage** icon in the left navigation bar and select **Billing**.
{{< nav-icon "cloud" >}}
Complete the following procedures as needed:
- [Add or update your {{< cloud-name >}} payment method](#add-or-update-your-influxdb-cloud-2-0-payment-method)
- [Add or update your contact information](#add-or-update-your-contact-information)
- [Send notifications when usage exceeds an amount](#send-notifications-when-usage-exceeds-an-amount)
View information about:
- [Pay As You Go billing](#view-pay-as-you-go-billing-information)
- [Free plan](#view-free-plan-information)
- [Exceeded rate limits](#exceeded-rate-limits)
- [Billing cycle](#billing-cycle)
- [Declined or late payments](#declined-or-late-payments)
### Add or update your InfluxDB Cloud 2.0 payment method
1. On the Billing page:
- To update, click the **Change Payment** button.
- In the **Payment Method** section:
- Enter your cardholder name and number
- Select your expiration month and year
- Enter your CVV code and select your card type
- Enter your card billing address
2. Click **Add Card**.
### Add or update your contact information
1. On the Billing page:
- To update, click the **Edit Information** button.
- In the **Contact Information** section, enter your name, company, and address.
2. Click **Save Contact Info**.
### Send notifications when usage exceeds an amount
1. On the Billing page, click **Notification Settings**.
2. Select the **Send email notification** toggle, and then enter the email address to notify.
3. Enter the dollar amount to trigger a notification email. By default, an email is triggered when the amount exceeds $10. (Whole dollar amounts only. For example, $10.50 is not a supported amount.)
### View Pay As You Go billing information
- On the Billing page, view your billing information, including:
- Account balance
- Last billing update (updated hourly)
- Past invoices
- Payment method
- Contact information
- Notification settings
### View Free plan information
- On the Billing page, view the total limits available for the Free plan.
### Exceeded rate limits
If you exceed your plan's [rate limits](/v2.0/cloud/pricing-plans/), {{< cloud-name >}} provides a notification in the {{< cloud-name "short" >}} user interface (UI) and adds a rate limit event to your **Usage** page for review.
All rate-limited requests are rejected, including both read and write requests.
_Rate-limited requests are **not** queued._
_To remove rate limits, [upgrade to a Pay As You Go Plan](/v2.0/cloud/account-management/upgrade-to-payg/)._
#### Rate-limited HTTP response code
When a request exceeds your plan's rate limit, the InfluxDB API returns the following response:
```
HTTP 429 "Too Many Requests"
Retry-After: xxx (seconds to wait before retrying the request)
```
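A client that receives this response can honor the `Retry-After` header before retrying. The following shell sketch is illustrative only; the host, organization, bucket, and `INFLUX_TOKEN` values are placeholders you would replace with your own:

```shell
# Extract the Retry-After value (in seconds) from raw HTTP response headers.
retry_after_seconds() {
  printf '%s\n' "$1" | awk -F': ' 'tolower($1) == "retry-after" { print $2 }' | tr -d '\r'
}

# Sketch of a rate-limit-aware write (host, org, bucket, and token are placeholders).
write_with_backoff() {
  headers=$(curl -s -D - -o /dev/null -X POST \
    "https://us-west-2-1.aws.cloud2.influxdata.com/api/v2/write?org=my-org&bucket=my-bucket" \
    --header "Authorization: Token $INFLUX_TOKEN" \
    --data-raw "$1")
  wait_s=$(retry_after_seconds "$headers")
  if [ -n "$wait_s" ]; then
    sleep "$wait_s"  # rate limited: wait the advertised interval, then retry
    # ...retry the request here...
  fi
}
```

For example, `write_with_backoff "mem,host=host1 used_percent=23.4"` would attempt one line-protocol write and pause for the advertised interval if the response is rate limited.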
### Billing cycle
Billing occurs on the first day of the month for the previous month. For example, if you start the Pay As You Go plan on September 15, you're billed on October 1 for your usage from September 15-30.
### Declined or late payments
| Timeline | Action |
|:----------------------------|:------------------------------------------------------------------------------------------------------------------------|
| **Initial declined payment** | We'll retry the charge every 72 hours. During this period, update your payment method to successfully process your payment. |
| **One week later** | Account disabled except data writes. Update your payment method to successfully process your payment and enable your account. |
| **10-14 days later** | Account completely disabled. During this period, you must contact us at support@influxdata.com to process your payment and enable your account. |
| **21 days later** | Account suspended. Contact support@influxdata.com to settle your final bill and retrieve a copy of your data or access to InfluxDB Cloud dashboards, tasks, Telegraf configurations, and so on.|

View File

@ -0,0 +1,49 @@
---
title: View InfluxDB Cloud data usage
list_title: View data usage
description: >
View your InfluxDB Cloud 2.0 data usage and rate limit notifications.
weight: 103
menu:
v2_0_cloud:
parent: Account management
name: View data usage
---
To view your {{< cloud-name >}} data usage, hover over the **Usage** icon in the
left navigation bar and select **Usage**.
{{< nav-icon "usage" >}}
The usage page provides data usage information for the time frame specified in the
drop-down at the top of the Usage page.
- **Writes:** Total data in MB written to your {{< cloud-name "short" >}} instance.
- **Reads:** Total data in MB sent as responses to queries from your {{< cloud-name "short" >}} instance.
- **Query Duration:** Total time spent processing queries in seconds.
- **Storage Usage:** Total disk usage in gigabytes.
- **API Request Count:** The total number of query and write API requests received
during the specified time frame.
- **Usage over the specified time period:** A line graph that visualizes usage over the specified time period.
- **Rate Limits over the specified time period:** A list of rate limit events over
the specified time period.
{{< img-hd src="/img/2-0-cloud-usage.png" />}}
## Exceeded rate limits
If you exceed your plan's [rate limits](/v2.0/cloud/pricing-plans/), {{< cloud-name >}}
will provide a notification in the {{< cloud-name "short" >}} user interface (UI)
and add a rate limit event to your **Usage** page for review.
All rate-limited requests are rejected, including both read and write requests.
_Rate-limited requests are **not** queued._
_To remove rate limits, [upgrade to a Pay As You Go Plan](/v2.0/cloud/account-management/upgrade-to-payg/)._
### Rate-limited HTTP response code
When a request exceeds your plan's rate limit, the InfluxDB API returns the following response:
```
HTTP 429 "Too Many Requests"
Retry-After: xxx (seconds to wait before retrying the request)
```

View File

@ -0,0 +1,63 @@
---
title: Cancel your InfluxDB Cloud subscription
description: >
Cancel your InfluxDB Cloud 2.0 account at any time by stopping all read and write
requests, backing up data, and contacting InfluxData Support.
weight: 104
menu:
v2_0_cloud:
parent: Account management
name: Cancel InfluxDB Cloud
---
To cancel your {{< cloud-name >}} subscription, complete the following steps:
1. [Stop reading and writing data](#stop-reading-and-writing-data).
2. [Export data and other artifacts](#export-data-and-other-artifacts).
3. [Cancel service](#cancel-service).
### Stop reading and writing data
To stop being charged for {{< cloud-name "short" >}}, pause all writes and queries.
### Export data and other artifacts
To export data and artifacts, follow the steps below.
{{% note %}}
Exported data and artifacts can be used in an InfluxDB OSS instance.
{{% /note %}}
#### Export tasks
For details, see [Export a task](/v2.0/process-data/manage-tasks/export-task/).
#### Export dashboards
For details, see [Export a dashboard](/v2.0/visualize-data/dashboards/export-dashboard/).
#### Telegraf configurations
**To save a Telegraf configuration:**
1. Click the **Settings** icon in the navigation bar.
{{< nav-icon "settings" >}}
2. Select the **Telegraf** tab. A list of existing Telegraf configurations appears.
3. Click on the name of a Telegraf configuration.
4. Click **Download Config** to save.
#### Data backups
To request a backup of data in your {{< cloud-name "short" >}} instance, contact [InfluxData Support](mailto:support@influxdata.com).
### Cancel service
1. Hover over the **Usage** icon in the left navigation bar and select **Billing**.
{{< nav-icon "usage" >}}
2. Click **Cancel Service**.
3. Select **I understand and agree to these conditions**, and then click **I understand, Cancel Service.**
4. Click **Confirm and Cancel Service**. Your payment method is charged your final balance immediately upon cancellation of service.

View File

@ -0,0 +1,30 @@
---
title: Upgrade to a Pay As You Go Plan
description: >
Upgrade to a Pay As You Go Plan to remove rate limits from your InfluxDB Cloud 2.0 account.
weight: 102
menu:
v2_0_cloud:
parent: Account management
name: Upgrade to Pay As You Go
---
To upgrade to a Pay As You Go Plan:
1. Hover over the **Usage** icon in the left navigation bar and select **Billing**.
{{< nav-icon "usage" >}}
2. Click **Upgrade to Pay As You Go**.
3. Review the terms and pricing associated with the Pay As You Go Plan.
4. Click **Sounds Good To Me**.
5. Enter your contact information.
Traditionally, this would be "shipping" information, but InfluxData does not ship anything.
This information should be the primary location where the service is consumed.
All service updates, security notifications, and other important information are
sent using the information you provide.
The address is used to determine any applicable sales tax.
6. Enter your payment information and click **Add Card**.
7. Review the plan details, contact information, and credit card information.
8. Click **Confirm & Order**.

View File

@ -1,93 +1,142 @@
---
title: Get started with InfluxDB Cloud 2.0 Beta
title: Get started with InfluxDB Cloud 2.0
description: >
Sign up for and get started with InfluxDB Cloud 2.0 Beta.
Sign up now, sign in, and get started exploring and using the InfluxDB Cloud 2.0 time series platform.
weight: 1
menu:
v2_0_cloud:
name: Get started with InfluxDB Cloud
---
{{< cloud-name >}} is a fully managed and hosted version of the InfluxDB 2.0.
To get started, complete the tasks below.
{{% cloud-msg %}}
InfluxDB v2.0 alpha documentation applies to {{< cloud-name "short" >}} unless otherwise specified.
{{% /cloud-msg %}}
{{< cloud-name >}} is a fully managed, hosted, multi-tenanted version of the
InfluxDB 2.0 time series data platform.
The core of {{< cloud-name "short" >}} is built on the foundation of the open source
version of InfluxDB 2.0, which is much more than a database.
It is a time series data platform that collects, stores, processes, and visualizes metrics and events.
_See the differences between {{< cloud-name "short">}} and InfluxDB OSS
[below](#differences-between-influxdb-cloud-and-influxdb-oss)._
## Start for free
Start using {{< cloud-name >}} at no cost with the [Free Plan](/v2.0/cloud/pricing-plans/#free-plan).
Use it as much and as long as you like within the plan's rate limits.
Limits are designed to let you comfortably monitor 5-10 sensors, stacks, or servers.
Once you're ready to grow, [upgrade to the Pay As You Go Plan](/v2.0/cloud/account-management/upgrade-to-payg/).
## Sign up
1. Go to [InfluxDB Cloud 2.0]({{< cloud-link >}}), enter your email and password,
and then click **Sign Up**.
1. Go to [InfluxDB Cloud 2.0]({{< cloud-link >}}), enter your email address and password,
and click **Sign Up**.
2. InfluxDB Cloud requires email verification to complete the sign up process.
Verify your email address by opening the email sent to the address you provided
and clicking **Verify Your Email**.
3. Select a region for your {{< cloud-name >}} instance.
Currently, {{< cloud-name >}} AWS - US West (Oregon) is the only region available.
_To suggest regions to add, click **Let us know** under Regions._
4. Review the terms of the agreement, and then select
**I have viewed and agree to InfluxDB Cloud 2.0 Services Subscription Agreement
and InfluxData Global Data Processing Agreement.**.
2. Open email from cloudbeta@influxdata.com (subject: Please verify your email for InfluxDB Cloud),
and then click **Verify Your Email**. The Welcome to InfluxDB Cloud 2.0 page is displayed.
For details on the agreements, see the [InfluxDB Cloud 2.0: Services Subscription Agreement](https://www.influxdata.com/legal/terms-of-use/)
and the [InfluxData Global Data Processing Agreement](https://www.influxdata.com/legal/influxdata-global-data-processing-agreement/).
3. Currently, {{< cloud-name >}} us-west-2 region is the only region available.
To suggest regions to add, click the **Let us know** link under Regions.
5. Click **Continue**. {{< cloud-name >}} opens with a default organization
and bucket (both created from your email address).
4. Review the terms of the beta agreement, and then select
**I viewed and agree to InfluxDB Cloud 2.0 Beta Agreement**.
_To update organization and bucket names, see [Update an organization](/v2.0/organizations/update-org/)
and [Update a bucket](/v2.0/organizations/buckets/update-bucket/#update-a-bucket-s-name-in-the-influxdb-ui)._
5. Click **Continue**. InfluxDB Cloud 2.0 opens with a default organization
(created from your email) and bucket (created from your email local-part).
{{% cloud-msg %}}
All InfluxDB 2.0 documentation applies to {{< cloud-name "short" >}} unless otherwise specified.
References to the InfluxDB user interface (UI) or localhost:9999 refer to your
{{< cloud-name >}} UI.
{{% /cloud-msg %}}
## Log in
Log in to [InfluxDB Cloud 2.0](https://us-west-2-1.aws.cloud2.influxdata.com) using the credentials created above.
## Sign in
Sign in to [InfluxDB Cloud 2.0](https://cloud2.influxdata.com) using your email address and password.
<a class="btn" href="https://cloud2.influxdata.com">Sign in to InfluxDB Cloud 2.0 now</a>
## Collect and write data
Collect and write data to InfluxDB using Telegraf, the InfluxDB v2 API, `influx`
command line interface (CLI), the InfluxDB user interface (UI), or client libraries.
Collect and write data to InfluxDB using the Telegraf plugins, the InfluxDB v2 API, the `influx`
command line interface (CLI), the InfluxDB UI (the user interface for InfluxDB 2.0), or the InfluxDB v2 API client libraries.
### Use Telegraf
Use Telegraf to quickly write data to {{< cloud-name >}}.
Create new Telegraf configurations automatically in the UI or manually update an
Create new Telegraf configurations automatically in the InfluxDB UI, or manually update an
existing Telegraf configuration to send data to your {{< cloud-name "short" >}} instance.
For details, see [Automatically configure Telegraf](/v2.0/write-data/use-telegraf/auto-config/#create-a-telegraf-configuration)
and [Manually update Telegraf configurations](/v2.0/write-data/use-telegraf/manual-config/).
### API, CLI, and client libraries
For information about using the InfluxDB API, CLI, and client libraries to write data,
For information about using the InfluxDB v2 API, `influx` CLI, and client libraries to write data,
see [Write data to InfluxDB](/v2.0/write-data/).
{{% note %}}
#### InfluxDB Cloud instance endpoint
When using Telegraf, the API, CLI, or client libraries to interact with your {{< cloud-name "short" >}}
When using Telegraf, the InfluxDB v2 API, the `influx` CLI, or the client libraries to interact with your {{< cloud-name "short" >}}
instance, extract the "host" or "endpoint" of your instance from your {{< cloud-name "short" >}} UI URL.
For example:
```
https://us-west-2-1.aws.cloud2.influxdata.com
```
{{% /note %}}
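As an illustration, the extracted host can serve as the base URL for InfluxDB v2 API calls. In this sketch, the organization name, bucket name, and `INFLUX_TOKEN` are placeholder values you would replace with your own:

```shell
# Base URL taken from the InfluxDB Cloud UI URL (see the note above).
INFLUX_HOST="https://us-west-2-1.aws.cloud2.influxdata.com"

# Write one line-protocol point to the v2 API write endpoint.
# "my-org", "my-bucket", and INFLUX_TOKEN are placeholders.
write_point() {
  curl -X POST "$INFLUX_HOST/api/v2/write?org=my-org&bucket=my-bucket&precision=s" \
    --header "Authorization: Token $INFLUX_TOKEN" \
    --data-raw "$1"
}

# Example call (not executed here):
# write_point "mem,host=host1 used_percent=23.43 $(date +%s)"
```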
## Query and visualize data
Once you've set up {{< cloud-name "short" >}} to collect data, you can do the following:
- Query data using Flux, the UI, and the `influx` command line interface. See [Query data](/v2.0/query-data/).
- Build custom dashboards to visualize your data. See [Visualize data](/v2.0/visualize-data/).
- Query data using Flux, the UI, and the `influx` command line interface.
See [Query data](/v2.0/query-data/).
- Build custom dashboards to visualize your data.
See [Visualize data](/v2.0/visualize-data/).
## Process data
Use InfluxDB tasks to process and downsample data. See [Process data](/v2.0/process-data/).
## View data usage
Once you've set up {{< cloud-name "short" >}} to collect data, view your data usage, including:
- **Writes:** Total kilobytes ingested.
- **Reads:** Total kilobytes sent out for responses to queries.
- **Total Query Duration:** Sum of time spent processing queries in seconds.
- **Storage:** Average disk usage in gigabytes.
Once you're up and running with {{< cloud-name "short" >}}, [monitor your data usage in
your {{< cloud-name "short" >}} UI](/v2.0/cloud/account-management/data-usage/).
You'll see sparkline data over the past 4 hours and a single value that shows usage in the last 5 minutes.
To view your data, click **Usage** in the left navigation menu.
## Differences between InfluxDB Cloud and InfluxDB OSS
{{< cloud-name >}} is API-compatible and functionally compatible with InfluxDB OSS 2.0.
The primary differences between InfluxDB OSS 2.0 and InfluxDB Cloud 2.0 are:
{{< img-hd src="/img/2-0-cloud-usage.png" />}}
- [InfluxDB scrapers](/v2.0/write-data/scrape-data/) that collect data from specified
targets are not available in {{< cloud-name "short" >}}.
- {{< cloud-name "short" >}} instances are currently limited to a single organization with a single user.
- Retrieving data from a file-based CSV source using the `file` parameter of the
[`csv.from()`](/v2.0/reference/flux/functions/csv/from) function is not supported;
however, you can use raw CSV data with the `csv` parameter.
- Multi-organization accounts and multi-user organizations are currently not
available in {{< cloud-name >}}.
## Review rate limits
To optimize InfluxDB Cloud 2.0 services, [rate limits](/v2.0/cloud/rate-limits/) are in place for Free tier users.
During beta, you can check out our Paid tier for free.
### New features in InfluxDB Cloud 2.0
To upgrade to Paid tier for free, discuss use cases, or increase rate limits,
reach out to <a href="mailto:cloudbeta@influxdata.com?subject={{< cloud-name >}} Feedback">cloudbeta@influxdata.com</a>.
{{% note %}}
#### Known issues and disabled features
_See [Known issues](/v2.0/cloud/about/known-issues/) for information regarding all known issues in InfluxDB Cloud._
{{% /note %}}
- **Free Plan (rate-limited)**: Skip downloading and installing InfluxDB 2.0 and
jump right in to exploring InfluxDB 2.0 technology.
The Free Plan is designed for getting started with InfluxDB and for small hobby projects.
- **Flux support**: [Flux](/v2.0/query-data/get-started/) is a standalone data
scripting and query language that increases productivity and code reuse.
It is the primary language for working with data within InfluxDB 2.0.
Flux can be used with other data sources as well.
This allows users to work with data where it resides.
- **Unified API**: Everything in InfluxDB (ingest, query, storage, and visualization)
is now accessible using a unified [InfluxDB v2 API](/v2.0/reference/api/) that
enables seamless movement between open source and cloud.
- **Integrated visualization and dashboards**: Based on the pioneering Chronograf project,
the new user interface (InfluxDB UI) offers quick and effortless onboarding,
richer user experiences, and significantly quicker results.
- **Usage-based pricing**: The [Pay As You Go Plan](/v2.0/cloud/pricing-plans/#pay-as-you-go-plan)
offers more flexibility and ensures that you only pay for what you use. To estimate your projected usage costs, use the [InfluxDB Cloud 2.0 pricing calculator](/v2.0/cloud/pricing-calculator/).

View File

@ -0,0 +1,49 @@
---
title: InfluxDB Cloud 2.0 pricing calculator
description: >
Use the InfluxDB Cloud 2.0 pricing calculator to estimate costs by adjusting the number of devices,
plugins, metrics, and writes for the Pay As You Go Plan.
weight: 2
menu:
v2_0_cloud:
name: Pricing calculator
---
Use the {{< cloud-name >}} pricing calculator to estimate costs for the Pay As You Go plan by adjusting your number of devices,
plugins, users, dashboards, writes, and retention. Default configurations include:
| Configuration | Hobby | Standard | Professional | Enterprise |
|:-----------------------------------|-------:|---------:|-------------:|-----------:|
| **Devices** | 8 | 200 | 500 | 1000 |
| **Plugins per device** | 1 | 4 | 4 | 5 |
| **Users** | 1 | 2 | 10 | 20 |
| **Concurrent dashboards per user** | 2 | 2 | 2 | 2 |
| **Writes per minute** | 6 | 4 | 3 | 3 |
| **Average retention in days** | 7 | 30 | 30 | 30 |
Guidelines used to estimate costs for default configurations:
- Average metrics per plugin = 25
- Average KB per value = 0.01
- Number of cells per dashboard = 10
- Average response KB per cell = 0.5
- Average query duration = 75ms
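To see how these guidelines combine, the sketch below estimates monthly write volume for the Standard defaults above. This is back-of-the-envelope arithmetic only, not official pricing; every figure comes from the tables above:

```shell
# Estimate monthly write volume for the "Standard" default configuration:
# 200 devices x 4 plugins x 25 metrics per plugin, written 4 times per minute,
# at 0.01 KB per value, over a 30-day month.
devices=200; plugins=4; metrics=25; writes_per_min=4
kb_per_value=0.01; days=30

monthly_mb=$(awk -v d="$devices" -v p="$plugins" -v m="$metrics" \
                 -v w="$writes_per_min" -v kb="$kb_per_value" -v days="$days" '
  BEGIN {
    values_per_min = d * p * m * w               # 80,000 values per minute
    kb_per_month   = values_per_min * kb * 60 * 24 * days
    printf "%.0f", kb_per_month / 1000           # KB -> MB
  }')
echo "Estimated writes: ${monthly_mb} MB/month"
```

With the Standard defaults, this works out to roughly 34,560 MB of writes per month.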
**To estimate costs**
1. Do one of the following:
- **Free plan**: Hover over the **Usage** icon in the left navigation bar and select **Billing**.
{{< nav-icon "cloud" >}}
Then click the **Pricing Calculator** link at the bottom of the page.
- **Pay As You Go plan**: Open the [pricing calculator](https://cloud2.influxdata.com/pricing) directly.
2. Choose your region.
3. Select your configuration:
- **Hobby**. For a single user monitoring a few machines or sensors.
- **Standard**. For a single team requiring real-time visibility and monitoring a single set of use cases.
- **Professional**. For teams monitoring multiple disparate systems or use cases.
- **Enterprise**. For teams monitoring multiple domains and use cases accessing a variety of dashboards.
4. Adjust the default configuration values to match your number of devices, plugins, metrics, and so on. The **Projected Usage** costs are automatically updated as you adjust your configuration.
5. Click **Get started with InfluxDB Cloud** to [get started](https://v2.docs.influxdata.com/v2.0/cloud/get-started/).

View File

@ -0,0 +1,65 @@
---
title: InfluxDB Cloud 2.0 pricing plans
description: >
InfluxDB Cloud 2.0 provides two pricing plans to fit your needs: the rate-limited
Free Plan and the Pay As You Go Plan.
aliases:
- /v2.0/cloud/rate-limits/
weight: 2
menu:
v2_0_cloud:
name: Pricing plans
---
InfluxDB Cloud 2.0 offers two pricing plans:
- [Free Plan](#free-plan)
- [Pay As You Go Plan](#pay-as-you-go-plan)
To estimate your projected usage costs, use the [InfluxDB Cloud 2.0 pricing calculator](/v2.0/cloud/pricing-calculator/).
## Free Plan
All new {{< cloud-name >}} accounts start with a rate-limited Free Plan.
Use this plan as much and as long as you want within the Free Plan rate limits:
#### Free Plan rate limits
- **Writes:** 3MB every 5 minutes
- **Query:** 30MB every 5 minutes
- **Storage:** 72-hour data retention
- **Series cardinality:** 10,000
- **Create:**
- Up to 5 dashboards
- Up to 5 tasks
- Up to 2 buckets
- Up to 2 checks
- Up to 2 notification rules
- Unlimited Slack notification endpoints
_To remove rate limits, [upgrade to a Pay As You Go Plan](/v2.0/cloud/account-management/upgrade-to-payg/)._
## Pay As You Go Plan
The Pay As You Go Plan offers more flexibility and ensures you only pay for what you [use](/v2.0/cloud/account-management/data-usage/).
#### Pay As You Go Plan rate limits
To protect against any intentional or unintentional harm, Pay As You Go Plans include soft rate limits:
- **Writes:** 300MB every 5 minutes
- **Ingest batch size:** 50MB
- **Queries:** 3000MB every 5 minutes
- **Storage:** Unlimited retention
- **Series cardinality:** 1,000,000
- **Create:**
- Unlimited dashboards
- Unlimited tasks
- Unlimited buckets
- Unlimited users
- Unlimited checks
- Unlimited notification rules
- Unlimited PagerDuty, Slack, and HTTP notification endpoints
_To request higher rate limits, contact [InfluxData Support](mailto:support@influxdata.com)._

View File

@ -1,36 +0,0 @@
---
title: InfluxDB Cloud 2.0 rate limits
description: Rate limits for Free tier users optimize InfluxDB Cloud 2.0 services.
weight: 2
menu:
v2_0_cloud:
name: Rate limits
---
To optimize InfluxDB Cloud 2.0 services, the following rate limits are in place for Free tier users.
To increase your rate limits, contact <a href="mailto:cloudbeta@influxdata.com?subject={{ $cloudName }} Feedback">cloudbeta@influxdata.com</a>.
- `write` endpoint:
- 5 concurrent API calls
- 3000 KB (10 KB/s) of data written in a 5 minute window
- `query` endpoint:
- 20 concurrent API calls
- 3000 MB (10 MB/s) of data returned in a 5 minute window
- 5 dashboards
- 5 tasks
- 2 buckets
- 72 hour retention period
## View data usage
To view data usage, click **Usage** in the left navigation bar.
{{< nav-icon "usage" >}}
## HTTP response codes
When a request exceeds the rate limit for the endpoint, the InfluxDB API returns:
- HTTP 429 “Too Many Requests”
- Retry-After: xxx (seconds to wait before retrying the request)

View File

@ -10,7 +10,11 @@ enterprise_all: true
#cloud_all: true
cloud_some: true
draft: true
"v2.0/tags": [influxdb]
"v2.0/tags": [influxdb, functions]
related:
- /v2.0/write-data/
- /v2.0/write-data/quick-start
- https://influxdata.com, This is an external link
---
This is a paragraph. Lorem ipsum dolor ({{< icon "trash" >}}) sit amet, consectetur adipiscing elit. Nunc rutrum, metus id scelerisque euismod, erat ante suscipit nibh, ac congue enim risus id est. Etiam tristique nisi et tristique auctor. Morbi eu bibendum erat. Sed ullamcorper, dui id lobortis efficitur, mauris odio pharetra neque, vel tempor odio dolor blandit justo.

View File

@ -27,7 +27,7 @@ This article describes how to get started with InfluxDB OSS. To get started with
### Download and install InfluxDB v2.0 alpha
Download InfluxDB v2.0 alpha for macOS.
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb_2.0.0-alpha.10_darwin_amd64.tar.gz" download>InfluxDB v2.0 alpha (macOS)</a>
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb_2.0.0-alpha.18_darwin_amd64.tar.gz" download>InfluxDB v2.0 alpha (macOS)</a>
### Unpackage the InfluxDB binaries
Unpackage the downloaded archive.
@ -36,7 +36,7 @@ _**Note:** The following commands are examples. Adjust the file paths to your ow
```sh
# Unpackage contents to the current working directory
gunzip -c ~/Downloads/influxdb_2.0.0-alpha.8_darwin_amd64.tar.gz | tar xopf -
gunzip -c ~/Downloads/influxdb_2.0.0-alpha.18_darwin_amd64.tar.gz | tar xopf -
```
If you choose, you can place `influx` and `influxd` in your `$PATH`.
@ -44,7 +44,7 @@ You can also prefix the executables with `./` to run then in place.
```sh
# (Optional) Copy the influx and influxd binary to your $PATH
sudo cp influxdb_2.0.0-alpha.8_darwin_amd64/{influx,influxd} /usr/local/bin/
sudo cp influxdb_2.0.0-alpha.18_darwin_amd64/{influx,influxd} /usr/local/bin/
```
{{% note %}}
@ -90,8 +90,8 @@ influxd --reporting-disabled
### Download and install InfluxDB v2.0 alpha
Download the InfluxDB v2.0 alpha package appropriate for your chipset.
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb_2.0.0-alpha.10_linux_amd64.tar.gz" download >InfluxDB v2.0 alpha (amd64)</a>
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb_2.0.0-alpha.10_linux_arm64.tar.gz" download >InfluxDB v2.0 alpha (arm)</a>
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb_2.0.0-alpha.18_linux_amd64.tar.gz" download >InfluxDB v2.0 alpha (amd64)</a>
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb_2.0.0-alpha.18_linux_arm64.tar.gz" download >InfluxDB v2.0 alpha (arm)</a>
### Place the executables in your $PATH
Unpackage the downloaded archive and place the `influx` and `influxd` executables in your system `$PATH`.
@ -100,10 +100,10 @@ _**Note:** The following commands are examples. Adjust the file names, paths, an
```sh
# Unpackage contents to the current working directory
tar xvzf path/to/influxdb_2.0.0-alpha.10_linux_amd64.tar.gz
tar xvzf path/to/influxdb_2.0.0-alpha.18_linux_amd64.tar.gz
# Copy the influx and influxd binary to your $PATH
sudo cp influxdb_2.0.0-alpha.10_linux_amd64/{influx,influxd} /usr/local/bin/
sudo cp influxdb_2.0.0-alpha.18_linux_amd64/{influx,influxd} /usr/local/bin/
```
{{% note %}}

View File

@ -0,0 +1,38 @@
---
title: Monitor data and send alerts
seotitle: Monitor data and send alerts
description: >
Monitor your time series data and send alerts by creating checks, notification
rules, and notification endpoints.
menu:
v2_0:
name: Monitor & alert
weight: 6
v2.0/tags: [monitor, alert, checks, notification, endpoints]
---
Monitor your time series data and send alerts by creating checks, notification
rules, and notification endpoints.
## The monitoring workflow
1. A [check](/v2.0/reference/glossary/#check) in InfluxDB queries data and assigns a status with a `_level` based on specific conditions.
2. InfluxDB stores the output of a check in the `statuses` measurement in the `_monitoring` system bucket.
3. [Notification rules](/v2.0/reference/glossary/#notification-rule) check data in the `statuses`
measurement and, based on conditions set in the notification rule, send a message
to a [notification endpoint](/v2.0/reference/glossary/#notification-endpoint).
4. InfluxDB stores notifications in the `notifications` measurement in the `_monitoring` system bucket.
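Because statuses are stored in an ordinary bucket, you can inspect them directly with Flux. A minimal sketch (the time range is an example, not a requirement):

```js
// List statuses recorded by checks in the last hour.
// "_monitoring" is the system bucket described in the workflow above.
from(bucket: "_monitoring")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "statuses")
```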
## Monitor your data
To get started, do the following:
1. [Create checks](/v2.0/monitor-alert/checks/create/) to monitor data and assign a status.
2. [Add notification endpoints](/v2.0/monitor-alert/notification-endpoints/create/)
to send notifications to third parties.
3. [Create notification rules](/v2.0/monitor-alert/notification-rules/create) to check
statuses and send notifications to your notifications endpoints.
## Manage your monitoring and alerting pipeline
{{< children >}}

View File

@ -0,0 +1,19 @@
---
title: Manage checks
seotitle: Manage monitoring checks in InfluxDB
description: >
Checks in InfluxDB query data and apply a status or level to each data point based on specified conditions.
menu:
v2_0:
parent: Monitor & alert
weight: 101
v2.0/tags: [monitor, checks, notifications, alert]
related:
- /v2.0/monitor-alert/notification-rules/
- /v2.0/monitor-alert/notification-endpoints/
---
Checks in InfluxDB query data and apply a status or level to each data point based on specified conditions.
Learn how to create and manage checks:
{{< children >}}

View File

@ -0,0 +1,155 @@
---
title: Create checks
seotitle: Create monitoring checks in InfluxDB
description: >
Create a check in the InfluxDB UI.
menu:
v2_0:
parent: Manage checks
weight: 201
related:
- /v2.0/monitor-alert/notification-rules/
- /v2.0/monitor-alert/notification-endpoints/
---
Create a check in the InfluxDB user interface (UI).
Checks query data and apply a status to each point based on specified conditions.
## Check types
There are two types of checks: a threshold check and a deadman check.
#### Threshold check
A threshold check assigns a status based on a value being above, below,
inside, or outside of defined thresholds.
[Create a threshold check](#create-a-threshold-check).
#### Deadman check
A deadman check assigns a status to data when a series or group doesn't report
in a specified amount of time.
[Create a deadman check](#create-a-deadman-check).
## Parts of a check
A check consists of two parts: a query and a check configuration.
#### Check query
- Specifies the dataset to monitor.
- May include tags to narrow results.
#### Check configuration
- Defines check properties, including the check interval and status message.
- Evaluates specified conditions and applies a status (if applicable) to each data point:
- `crit`
- `warn`
- `info`
- `ok`
- Stores status in the `_level` column.
## Create a check in the InfluxDB UI
1. Click **Monitoring & Alerting** in the sidebar in the InfluxDB UI.
{{< nav-icon "alerts" >}}
2. In the top right corner of the **Checks** column, click **{{< icon "plus" >}} Create**
and select the [type of check](#check-types) to create.
3. Click **Name this check** in the top left corner and provide a unique name for the check.
#### Configure the check query
1. Select the **bucket**, **measurement**, **field** and **tag sets** to query.
2. If creating a threshold check, select an **aggregate function**.
Aggregate functions aggregate data between the specified check intervals and
return a single value for the check to process.
In the **Aggregate functions** column, select an interval from the interval drop-down list
(for example, "Every 5 minutes") and an aggregate function from the list of functions.
3. Click **Submit** to run the query and preview the results.
To see the raw query results, click the **{{< icon "toggle" >}} View Raw Data** toggle.
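Conceptually, the aggregate function applied by a threshold check behaves like `aggregateWindow()` in Flux. A rough sketch, assuming a hypothetical `cpu` measurement with a `usage_percent` field:

```js
// Aggregate raw points into one value per 5 minute check interval.
from(bucket: "example-bucket")
  |> range(start: -15m)
  |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_percent")
  |> aggregateWindow(every: 5m, fn: mean)
```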
#### Configure the check
1. Click **2. Check** near the top of the window.
2. In the **Properties** column, configure the following:
##### Schedule Every
Select the interval to run the check (for example, "Every 5 minutes").
This interval matches the aggregate function interval for the check query.
_Changing the interval here will update the aggregate function interval._
##### Offset
Delay the execution of a task to account for any late data.
Offset queries do not change the queried time range.
{{% note %}}Your offset must be shorter than your [check interval](#schedule-every).
{{% /note %}}
##### Tags
Add custom tags to the query output.
Each custom tag appends a new column to each row in the query output.
The column label is the tag key and the column value is the tag value.
Use custom tags to associate additional metadata with the check.
Common metadata tags across different checks let you easily group and organize checks.
You can also use custom tags in [notification rules](/v2.0/monitor-alert/notification-rules/create/).
3. In the **Status Message Template** column, enter the status message template for the check.
Use [Flux string interpolation](/v2.0/reference/flux/language/string-interpolation/)
to populate the message with data from the query.
{{% note %}}
#### Flux only interpolates string values
Flux currently interpolates only string values.
Use the [string() function](/v2.0/reference/flux/stdlib/built-in/transformations/type-conversions/string/)
to convert non-string values to strings.
```js
count = 12
"I currently have ${string(v: count)} cats."
```
{{% /note %}}
Check data is represented as an object, `r`.
Access specific column values using dot notation: `r.columnName`.
Use data from the following columns:
- columns included in the query output
- [custom tags](#tags) added to the query output
- `_check_id`
- `_check_name`
- `_level`
- `_source_measurement`
- `_type`
###### Example status message template
```
From ${r._check_name}:
${r._field} is ${r._level}.
Its value is ${string(v: r._value)}.
```
When a check generates a status, it stores the message in the `_message` column.
4. Define check conditions that assign statuses to points.
Condition options depend on your check type.
##### Configure a threshold check
1. In the **Thresholds** column, click the status name (CRIT, WARN, INFO, or OK)
to define conditions for that specific status.
2. From the **When value** drop-down list, select a threshold: is above, is below,
is inside of, is outside of.
3. Enter a value or values for the threshold.
You can also use the threshold sliders in the data visualization to define threshold values.
##### Configure a deadman check
1. In the **Deadman** column, enter a duration for the deadman check in the **for** field.
For example, `90s`, `5m`, `2h30m`, etc.
2. Use the **set status to** drop-down list to select a status to set on a dead series.
3. In the **And stop checking after** field, enter the time to stop monitoring the series.
For example, `30m`, `2h`, `3h15m`, etc.
5. Click the green **{{< icon "check" >}}** in the top right corner to save the check.
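The UI generates the deadman logic for you; under the hood it is similar to the `monitor.deadman()` Flux function. A rough, hypothetical sketch (bucket and measurement names are examples):

```js
import "influxdata/influxdb/monitor"
import "experimental"

// Flag series that have reported no data in the last 90 seconds as dead.
from(bucket: "example-bucket")
  |> range(start: -10m)
  |> filter(fn: (r) => r._measurement == "cpu")
  |> monitor.deadman(t: experimental.subDuration(from: now(), d: 90s))
```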
## Clone a check
Create a new check by cloning an existing check.
1. In the **Checks** column, hover over the check you want to clone.
2. Click the **{{< icon "clone" >}}** icon, then **Clone**.

View File

@ -0,0 +1,34 @@
---
title: Delete checks
seotitle: Delete monitoring checks in InfluxDB
description: >
Delete checks in the InfluxDB UI.
menu:
v2_0:
parent: Manage checks
weight: 204
related:
- /v2.0/monitor-alert/notification-rules/
- /v2.0/monitor-alert/notification-endpoints/
---
If you no longer need a check, use the InfluxDB user interface (UI) to delete it.
{{% warn %}}
Deleting a check cannot be undone.
{{% /warn %}}
1. Click **Monitoring & Alerting** in the sidebar.
{{< nav-icon "alerts" >}}
2. In the **Checks** column, hover over the check you want to delete, click the
**{{< icon "delete" >}}** icon, then **Delete**.
After a check is deleted, all statuses generated by the check remain in the `_monitoring`
bucket until the retention period for the bucket expires.
{{% note %}}
You can also [disable a check](/v2.0/monitor-alert/checks/update/#enable-or-disable-a-check)
without having to delete it.
{{% /note %}}

View File

@ -0,0 +1,62 @@
---
title: Update checks
seotitle: Update monitoring checks in InfluxDB
description: >
Update, rename, enable or disable checks in the InfluxDB UI.
menu:
v2_0:
parent: Manage checks
weight: 203
related:
- /v2.0/monitor-alert/notification-rules/
- /v2.0/monitor-alert/notification-endpoints/
---
Update checks in the InfluxDB user interface (UI).
Common updates include:
- [Update check queries and logic](#update-check-queries-and-logic)
- [Enable or disable a check](#enable-or-disable-a-check)
- [Rename a check](#rename-a-check)
- [Add or update a check description](#add-or-update-a-check-description)
- [Add a label to a check](#add-a-label-to-a-check)
To update checks, click **Monitoring & Alerting** in the InfluxDB UI sidebar.
{{< nav-icon "alerts" >}}
## Update check queries and logic
1. In the **Checks** column, click the name of the check you want to update.
The check builder appears.
2. To edit the check query, click **1. Query** at the top of the check builder window.
3. To edit the check logic, click **2. Check** at the top of the check builder window.
_For details about using the check builder, see [Create checks](/v2.0/monitor-alert/checks/create/)._
## Enable or disable a check
In the **Checks** column, click the {{< icon "toggle" >}} toggle next to a check
to enable or disable it.
## Rename a check
1. In the **Checks** column, hover over the name of the check you want to update.
2. Click the **{{< icon "edit" >}}** icon that appears next to the check name.
3. Enter a new name and click out of the name field or press enter to save.
_You can also rename a check in the [check builder](#update-check-queries-and-logic)._
## Add or update a check description
1. In the **Checks** column, hover over the check description you want to update.
2. Click the **{{< icon "edit" >}}** icon that appears next to the description.
3. Enter a new description and click out of the description field or press enter to save.
## Add a label to a check
1. In the **Checks** column, click **Add a label** next to the check you want to add a label to.
The **Add Labels** box opens.
2. To add an existing label, select the label from the list.
3. To create and add a new label:
- In the search field, enter the name of the new label. The **Create Label** box opens.
- In the **Description** field, enter an optional description for the label.
- Select a color for the label.
- Click **Create Label**.
4. To remove a label, hover over the label under a check and click **{{< icon "x" >}}**.

View File

@ -0,0 +1,43 @@
---
title: View checks
seotitle: View monitoring checks in InfluxDB
description: >
View check details and statuses and notifications generated by checks in the InfluxDB UI.
menu:
v2_0:
parent: Manage checks
weight: 202
related:
- /v2.0/monitor-alert/notification-rules/
- /v2.0/monitor-alert/notification-endpoints/
---
View check details and statuses and notifications generated by checks in the InfluxDB user interface (UI).
- [View a list of all checks](#view-a-list-of-all-checks)
- [View check details](#view-check-details)
- [View statuses generated by a check](#view-statuses-generated-by-a-check)
- [View notifications triggered by a check](#view-notifications-triggered-by-a-check)
To view checks, click **Monitoring & Alerting** in the InfluxDB UI sidebar.
{{< nav-icon "alerts" >}}
## View a list of all checks
The **Checks** column on the Monitoring & Alerting landing page displays all existing checks.
## View check details
In the **Checks** column, click the name of the check you want to view.
The check builder appears.
Here you can view the check query and logic.
## View statuses generated by a check
1. In the **Checks** column, hover over the check, click the **{{< icon "view" >}}**
icon, then **View History**.
The Statuses History page displays statuses generated by the selected check.
## View notifications triggered by a check
1. In the **Checks** column, hover over the check, click the **{{< icon "view" >}}**
icon, then **View History**.
2. In the top left corner, click **Notifications**.
The Notifications History page displays notifications initiated by the selected check.
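Behind the scenes, notifications are stored in the `notifications` measurement in the `_monitoring` system bucket, so you can also query them directly with Flux. For example:

```js
// List notifications sent in the last 24 hours.
from(bucket: "_monitoring")
  |> range(start: -24h)
  |> filter(fn: (r) => r._measurement == "notifications")
```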

View File

@ -0,0 +1,20 @@
---
title: Manage notification endpoints
list_title: Manage notification endpoints
description: >
Create, read, update, and delete endpoints in the InfluxDB UI.
v2.0/tags: [monitor, endpoints, notifications, alert]
menu:
v2_0:
parent: Monitor & alert
weight: 102
related:
- /v2.0/monitor-alert/checks/
- /v2.0/monitor-alert/notification-rules/
---
Notification endpoints store information to connect to a third party service.
If you're using the InfluxDB Cloud 2.0 Free Plan, create a Slack endpoint.
If you're using the Pay As You Go Plan, create a connection to an HTTP, Slack, or PagerDuty endpoint.
{{< children >}}

View File

@ -0,0 +1,45 @@
---
title: Create notification endpoints
description: >
Create notification endpoints to send alerts on your time series data.
menu:
v2_0:
name: Create endpoints
parent: Manage notification endpoints
weight: 201
related:
- /v2.0/monitor-alert/checks/
- /v2.0/monitor-alert/notification-rules/
---
To send notifications about changes in your data, start by creating a notification endpoint to a third party service. After creating notification endpoints, [create notification rules](/v2.0/monitor-alert/notification-rules/create) to send alerts to third party services on [check statuses](/v2.0/monitor-alert/checks/create).
## Create a notification endpoint in the UI
1. Select the **Monitoring and Alerting** icon from the sidebar.
{{< nav-icon "alerts" >}}
2. Next to **Notification Endpoints**, click **Create**.
3. From the **Destination** drop-down list, select a destination endpoint to send notifications.
The following endpoints are available for InfluxDB 2.0 OSS, the InfluxDB Cloud 2.0 Free Plan,
and the InfluxDB Cloud 2.0 Pay As You Go (PAYG) Plan:
| Endpoint | OSS | Free Plan _(Cloud)_ | PAYG Plan _(Cloud)_ |
|:-------- |:--------: |:-------------------: |:----------------------------:|
| **Slack** | **{{< icon "check" >}}** | **{{< icon "check" >}}** | **{{< icon "check" >}}** |
| **PagerDuty** | **{{< icon "check" >}}** | | **{{< icon "check" >}}** |
| **HTTP** | **{{< icon "check" >}}** | | **{{< icon "check" >}}** |
4. In the **Name** and **Description** fields, enter a name and description for the endpoint.
5. Enter information to connect to the endpoint:
- For HTTP, enter the **URL** to send the notification. Select the **auth method** to use: **None** for no authentication. To authenticate with a username and password, select **Basic** and then enter credentials in the **Username** and **Password** fields. To authenticate with a token, select **Bearer**, and then enter the authentication token in the **Token** field.
- For Slack, create an [Incoming WebHook](https://api.slack.com/incoming-webhooks#posting_with_webhooks) in Slack, and then enter your webhook URL in the **Slack Incoming WebHook URL** field.
- For PagerDuty:
- [Create a new service](https://support.pagerduty.com/docs/services-and-integrations#section-create-a-new-service), [add an integration for your service](https://support.pagerduty.com/docs/services-and-integrations#section-add-integrations-to-an-existing-service), and then enter the PagerDuty integration key for your new service in the **Routing Key** field.
- The **Client URL** provides a useful link in your PagerDuty notification. Enter any URL that you'd like to use to investigate issues. This URL is sent as the `client_url` property in the PagerDuty trigger event. By default, the **Client URL** is set to your Monitoring & Alerting History page, and the following is included in the PagerDuty trigger event: `"client_url": "https://twodotoh.a.influxcloud.net/orgs/<your-org-ID>/alert-history"`
6. Click **Create Notification Endpoint**.

View File

@ -0,0 +1,24 @@
---
title: Delete notification endpoints
description: >
Delete a notification endpoint in the InfluxDB UI.
menu:
v2_0:
name: Delete endpoints
parent: Manage notification endpoints
weight: 204
related:
- /v2.0/monitor-alert/checks/
- /v2.0/monitor-alert/notification-rules/
---
If you no longer need to send notifications to an endpoint, complete the steps below to delete the endpoint, and then [update notification rules](/v2.0/monitor-alert/notification-rules/update) with a new notification endpoint as needed.
## Delete a notification endpoint in the UI
1. Select the **Monitoring and Alerting** icon from the sidebar.
{{< nav-icon "alerts" >}}
2. Under **Notification Endpoints**, find the endpoint you want to delete.
3. Click the delete icon, then click **Delete** to confirm.

View File

@ -0,0 +1,65 @@
---
title: Update notification endpoints
description: >
Update notification endpoints in the InfluxDB UI.
menu:
v2_0:
name: Update endpoints
parent: Manage notification endpoints
weight: 203
related:
- /v2.0/monitor-alert/checks/
- /v2.0/monitor-alert/notification-rules/
---
To update the notification endpoint details, complete the procedures below as needed. To update the notification endpoint selected for a notification rule, see [update notification rules](/v2.0/monitor-alert/notification-rules/update/).
## Add a label to a notification endpoint
1. Select the **Monitoring and Alerting** icon from the sidebar.
{{< nav-icon "alerts" >}}
2. Under **Notification Endpoints**, click **Add a label** next to the endpoint you want to add a label to. The **Add Labels** box opens.
3. To add an existing label, select the label from the list.
4. To create and add a new label:
- In the search field, enter the name of the new label. The **Create Label** box opens.
- In the **Description** field, enter an optional description for the label.
- Select a color for the label.
- Click **Create Label**.
5. To remove a label, hover over the label under an endpoint and click **{{< icon "x" >}}**.
## Disable a notification endpoint
1. Select the **Monitoring and Alerting** icon from the sidebar.
{{< nav-icon "alerts" >}}
2. Under **Notification Endpoints**, find the endpoint you want to disable.
3. Click the blue toggle to disable the notification endpoint.
## Update the name or description of a notification endpoint
1. Select the **Monitoring and Alerting** icon from the sidebar.
{{< nav-icon "alerts" >}}
2. Under **Notification Endpoints**, hover over the name or description of the endpoint.
3. Click the pencil icon to edit the field.
4. Click outside of the field to save your changes.
## Change endpoint details
1. Select the **Monitoring and Alerting** icon from the sidebar.
{{< nav-icon "alerts" >}}
2. Under **Notification Endpoints**, click the endpoint to update.
3. Update details as needed, and then click **Edit a Notification Endpoint**. For details about each field, see [Create notification endpoints](/v2.0/monitor-alert/notification-endpoints/create/).

View File

@ -0,0 +1,43 @@
---
title: View notification endpoint history
seotitle: View notification endpoint details and history
description: >
View notification endpoint details and history in the InfluxDB UI.
menu:
v2_0:
name: View endpoint history
parent: Manage notification endpoints
weight: 202
related:
- /v2.0/monitor-alert/checks/
- /v2.0/monitor-alert/notification-rules/
---
View notification endpoint details and history in the InfluxDB user interface (UI).
- [View notification endpoints](#view-notification-endpoints)
- [View notification endpoint details](#view-notification-endpoint-details)
- [View notification endpoint history](#view-notification-endpoint-history), including statuses and notifications sent to the endpoint
## View notification endpoints
- Click **Monitoring & Alerting** in the InfluxDB UI sidebar.
{{< nav-icon "alerts" >}}
In the **Notification Endpoints** column, view existing notification endpoints.
## View notification endpoint details
1. Click **Monitoring & Alerting** in the InfluxDB UI sidebar.
2. In the **Notification Endpoints** column, click the name of the notification endpoint you want to view.
3. View the notification endpoint destination, name, and information to connect to the endpoint.
## View notification endpoint history
1. Click **Monitoring & Alerting** in the InfluxDB UI sidebar.
2. In the **Notification Endpoints** column, hover over the notification endpoint, click the **{{< icon "view" >}}** icon, then **View History**.
The Check Statuses History page displays:
- Statuses generated for the selected notification endpoint
- Notifications sent to the selected notification endpoint

View File

@ -0,0 +1,17 @@
---
title: Manage notification rules
description: >
Manage notification rules in InfluxDB.
weight: 103
v2.0/tags: [monitor, notifications, alert]
menu:
v2_0:
parent: Monitor & alert
related:
- /v2.0/monitor-alert/checks/
- /v2.0/monitor-alert/notification-endpoints/
---
The following articles provide information on managing your notification rules:
{{< children >}}

View File

@ -0,0 +1,42 @@
---
title: Create notification rules
description: >
Create notification rules to send alerts on your time series data.
weight: 201
menu:
v2_0:
parent: Manage notification rules
related:
- /v2.0/monitor-alert/checks/
- /v2.0/monitor-alert/notification-endpoints/
---
Once you've set up checks and notification endpoints, create notification rules to alert you.
_For details, see [Manage checks](/v2.0/monitor-alert/checks/) and
[Manage notification endpoints](/v2.0/monitor-alert/notification-endpoints/)._
## Create a new notification rule in the UI
1. Select the **Monitoring and Alerting** icon from the sidebar.
{{< nav-icon "alerts" >}}
2. Under **Notification Rules**, click **+Create**.
3. Complete the **About** section:
1. In the **Name** field, enter a name for the notification rule.
2. In the **Schedule Every** field, enter how frequently the rule should run.
    3. In the **Offset** field, enter an offset time. For example, if a task runs on the hour, a 10m offset delays the task to 10 minutes after the hour. Time ranges defined in the task are relative to the specified execution time.
4. In the **Conditions** section, build a condition using a combination of status and tag keys.
- Next to **When status is equal to**, select a status from the drop-down field.
- Next to **AND When**, enter one or more tag key-value pairs to filter by.
5. In the **Message** section, select an endpoint to notify.
6. Click **Create Notification Rule**.
## Clone an existing notification rule in the UI
1. Select the **Monitoring and Alerting** icon from the sidebar.
{{< nav-icon "alerts" >}}
2. Under **Notification Rules**, hover over the rule you want to clone.
3. Click the clone icon and select **Clone**. The cloned rule appears.

View File

@ -0,0 +1,21 @@
---
title: Delete notification rules
description: >
If you no longer need to receive an alert, delete the associated notification rule.
weight: 204
menu:
v2_0:
parent: Manage notification rules
related:
- /v2.0/monitor-alert/checks/
- /v2.0/monitor-alert/notification-endpoints/
---
## Delete a notification rule in the UI
1. Select the **Monitoring and Alerting** icon from the sidebar.
{{< nav-icon "alerts" >}}
2. Under **Notification Rules**, find the rule you want to delete.
3. Click the delete icon, then click **Delete** to confirm.

View File

@ -0,0 +1,51 @@
---
title: Update notification rules
description: >
  Update notification rules to change the notification message, schedule, or conditions.
weight: 203
menu:
v2_0:
parent: Manage notification rules
related:
- /v2.0/monitor-alert/checks/
- /v2.0/monitor-alert/notification-endpoints/
---
## Add a label to notification rules
1. Select the **Monitoring and Alerting** icon from the sidebar.
{{< nav-icon "alerts" >}}
2. Under **Notification Rules**, click **Add a label** next to the rule you want to add a label to. The **Add Labels** box opens.
3. To add an existing label, select the label from the list.
4. To create and add a new label:
- In the search field, enter the name of the new label. The **Create Label** box opens.
- In the **Description** field, enter an optional description for the label.
- Select a color for the label.
- Click **Create Label**.
5. To remove a label, hover over the label under a rule and click **{{< icon "x" >}}**.
## Disable notification rules
1. Select the **Monitoring and Alerting** icon from the sidebar.
{{< nav-icon "alerts" >}}
2. Under **Notification Rules**, find the rule you want to disable.
3. Click the blue toggle to disable the notification rule.
## Update the name or description for notification rules
1. Select the **Monitoring and Alerting** icon from the sidebar.
{{< nav-icon "alerts" >}}
2. Under **Notification Rules**, hover over the name or description of a rule.
3. Click the pencil icon to edit the field.
4. Click outside of the field to save your changes.

View File

@ -0,0 +1,42 @@
---
title: View notification rules
description: >
  View notification rule details, statuses, and notifications generated by notification rules in the InfluxDB UI.
weight: 202
menu:
v2_0:
parent: Manage notification rules
related:
- /v2.0/monitor-alert/checks/
- /v2.0/monitor-alert/notification-endpoints/
---
View notification rule details and statuses and notifications generated by notification rules in the InfluxDB user interface (UI).
- [View a list of all notification rules](#view-a-list-of-all-notification-rules)
- [View notification rule details](#view-notification-rule-details)
- [View statuses generated by a notification rule](#view-statuses-generated-by-a-notification-rule)
- [View notifications triggered by a notification rule](#view-notifications-triggered-by-a-notification-rule)
To view notification rules, click **Monitoring & Alerting** in the InfluxDB UI sidebar.
{{< nav-icon "alerts" >}}
## View a list of all notification rules
The **Notification Rules** column on the Monitoring & Alerting landing page displays all existing notification rules.
## View notification rule details
In the **Notification Rules** column, click the name of the notification rule you want to view.
Here you can view the rule's schedule, conditions, and message.
## View statuses generated by a notification rule
1. In the **Notification Rules** column, hover over the notification rule, click the **{{< icon "view" >}}**
icon, then **View History**.
   The Statuses History page displays statuses generated by the selected notification rule.
## View notifications triggered by a notification rule
1. In the **Notification Rules** column, hover over the notification rule, click the **{{< icon "view" >}}**
icon, then **View History**.
2. In the top left corner, click **Notifications**.
The Notifications History page displays notifications initiated by the selected notification rule.

View File

@ -14,16 +14,16 @@ to create a bucket.
## Create a bucket in the InfluxDB UI
1. Click the **Settings** tab in the navigation bar.
1. Click **Load Data** in the navigation bar.
{{< nav-icon "settings" >}}
{{< nav-icon "load data" >}}
2. Select the **Buckets** tab.
2. Select **Buckets**.
3. Click **{{< icon "plus" >}} Create Bucket** in the upper right.
4. Enter a **Name** for the bucket.
5. Select **How often to clear data?**:
Select **Never** to retain data forever.
Select **Periodically** to define a specific retention policy.
5. Select when to **Delete Data**:
- **Never** to retain data forever.
- **Older than** to choose a specific retention policy.
6. Click **Create** to create the bucket.
## Create a bucket using the influx CLI
@ -32,7 +32,7 @@ Use the [`influx bucket create` command](/v2.0/reference/cli/influx/bucket/creat
to create a new bucket. A bucket requires the following:
- A name
- The name or ID of the organization to which it belongs
- The name or ID of the organization the bucket belongs to
- A retention period in nanoseconds
```sh

View File

@ -14,13 +14,13 @@ to delete a bucket.
## Delete a bucket in the InfluxDB UI
1. Click the **Settings** tab in the navigation bar.
1. Click **Load Data** in the navigation bar.
{{< nav-icon "settings" >}}
{{< nav-icon "load data" >}}
2. Select the **Buckets** tab.
2. Select **Buckets**.
3. Hover over the bucket you would like to delete.
4. Click **Delete** and **Confirm** to delete the bucket.
4. Click **{{< icon "delete" >}} Delete Bucket** and **Confirm** to delete the bucket.
## Delete a bucket using the influx CLI

View File

@ -8,6 +8,7 @@ menu:
parent: Manage buckets
weight: 202
---
Use the `influx` command line interface (CLI) or the InfluxDB user interface (UI) to update a bucket.
Note that updating a bucket's name will affect any assets that reference the bucket by name, including the following:
@ -23,23 +24,22 @@ If you change a bucket name, be sure to update the bucket in the above places as
## Update a bucket's name in the InfluxDB UI
1. Click the **Settings** tab in the navigation bar.
1. Click **Load Data** in the navigation bar.
{{< nav-icon "settings" >}}
{{< nav-icon "load data" >}}
2. Select the **Buckets** tab.
3. Hover over the name of the bucket you want to rename in the list.
4. Click **Rename**.
5. Review the information in the window that appears and click **I understand, let's rename my bucket**.
6. Update the bucket's name and click **Change Bucket Name**.
2. Select **Buckets**.
3. Click **Rename** under the bucket you want to rename.
4. Review the information in the window that appears and click **I understand, let's rename my bucket**.
5. Update the bucket's name and click **Change Bucket Name**.
## Update a bucket's retention policy in the InfluxDB UI
1. Click the **Settings** tab in the navigation bar.
1. Click **Load Data** in the navigation bar.
{{< nav-icon "settings" >}}
{{< nav-icon "load data" >}}
2. Select the **Buckets** tab.
2. Select **Buckets**.
3. Click the name of the bucket you want to update from the list.
4. In the window that appears, edit the bucket's retention policy.
5. Click **Save Changes**.
@ -50,7 +50,7 @@ Use the [`influx bucket update` command](/v2.0/reference/cli/influx/bucket/updat
to update a bucket. Updating a bucket requires the following:
- The bucket ID _(provided in the output of `influx bucket find`)_
- The name or ID of the organization to which the bucket belongs
- The name or ID of the organization the bucket belongs to
##### Update the name of a bucket
```sh

View File

@ -11,18 +11,17 @@ weight: 202
## View buckets in the InfluxDB UI
1. Click the **Settings** tab in the navigation bar.
1. Click **Load Data** in the navigation bar.
{{< nav-icon "settings" >}}
{{< nav-icon "load data" >}}
2. Select the **Buckets** tab.
2. Select **Buckets**.
3. Click on a bucket to view details.
## View buckets using the influx CLI
Use the [`influx bucket find` command](/v2.0/reference/cli/influx/bucket/find)
to view a buckets in an organization. Viewing bucket requires the following:
to view buckets in an organization.
```sh
influx bucket find

View File

@ -7,7 +7,7 @@ description: >
menu:
v2_0:
name: Process data
weight: 5
weight: 4
v2.0/tags: [tasks]
---

View File

@ -32,7 +32,7 @@ A separate bucket where aggregated, downsampled data is stored.
To downsample data, it must be aggregated in some way.
The aggregation method you use depends on your specific use case,
but examples include mean, median, top, bottom, etc.
View [Flux's aggregate functions](/v2.0/reference/flux/functions/built-in/transformations/aggregates/)
View [Flux's aggregate functions](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/)
for more information and ideas.
## Create a destination bucket
@ -47,7 +47,7 @@ The example task script below is a very basic form of data downsampling that doe
1. Defines a task named "cq-mem-data-1w" that runs once a week.
2. Defines a `data` variable that represents all data from the last 2 weeks in the
`mem` measurement of the `system-data` bucket.
3. Uses the [`aggregateWindow()` function](/v2.0/reference/flux/functions/built-in/transformations/aggregates/aggregatewindow/)
3. Uses the [`aggregateWindow()` function](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/aggregatewindow/)
to window the data into 1 hour intervals and calculate the average of each interval.
4. Stores the aggregated data in the `system-data-downsampled` bucket under the
`my-org` organization.
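
Put together, a minimal sketch of the task described above might look like the following. The bucket, organization, and task names match the example, but treat this as illustrative rather than the exact script:

```js
// Task options: run once a week
option task = {name: "cq-mem-data-1w", every: 1w}

// Source data: the last 2 weeks of the mem measurement
data = from(bucket: "system-data")
  |> range(start: -2w)
  |> filter(fn: (r) => r._measurement == "mem")

// Window data into 1 hour intervals, average each window,
// and store the results in the downsampled bucket
data
  |> aggregateWindow(every: 1h, fn: mean)
  |> to(bucket: "system-data-downsampled", org: "my-org")
```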

View File

@ -54,8 +54,8 @@ in form fields when creating the task.
{{% /note %}}
## Define a data source
Define a data source using Flux's [`from()` function](/v2.0/reference/flux/functions/built-in/inputs/from/)
or any other [Flux input functions](/v2.0/reference/flux/functions/built-in/inputs/).
Define a data source using Flux's [`from()` function](/v2.0/reference/flux/stdlib/built-in/inputs/from/)
or any other [Flux input functions](/v2.0/reference/flux/stdlib/built-in/inputs/).
For convenience, consider creating a variable that includes the sourced data with
the required time range and any relevant filters.
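
For example, a sketch of such a variable (the bucket, measurement, and field names are illustrative):

```js
// Sourced data with a time range and filters applied
data = from(bucket: "example-bucket")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent")
```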
@ -88,7 +88,7 @@ specific use case.
The example below illustrates a task that downsamples data by calculating the average of set intervals.
It uses the `data` variable defined [above](#define-a-data-source) as the data source.
It then windows the data into 5 minute intervals and calculates the average of each
window using the [`aggregateWindow()` function](/v2.0/reference/flux/functions/built-in/transformations/aggregates/aggregatewindow/).
window using the [`aggregateWindow()` function](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/aggregatewindow/).
```js
data
@ -104,7 +104,7 @@ _See [Common tasks](/v2.0/process-data/common-tasks) for examples of tasks commo
In the vast majority of task use cases, once data is transformed, it needs to be sent and stored somewhere.
This could be a separate bucket with a different retention policy, another measurement, or even an alert endpoint _(Coming)_.
The example below uses Flux's [`to()` function](/v2.0/reference/flux/functions/built-in/outputs/to)
The example below uses Flux's [`to()` function](/v2.0/reference/flux/stdlib/built-in/outputs/to)
to send the transformed data to another bucket:
```js

View File

@ -8,6 +8,8 @@ menu:
name: Create a task
parent: Manage tasks
weight: 201
related:
- /v2.0/reference/cli/influx/task/create
---
InfluxDB provides multiple ways to create tasks both in the InfluxDB user interface (UI)
@ -36,9 +38,9 @@ The InfluxDB UI provides multiple ways to create a task:
3. Select the **Task** option.
4. Specify the task options. See [Task options](/v2.0/process-data/task-options)
for detailed information about each option.
5. Click **Save as Task**.
5. Select a token to use from the **Token** dropdown.
6. Click **Save as Task**.
{{< img-hd src="/img/2-0-data-explorer-save-as-task.png" title="Add a task from the Data Explorer"/>}}
### Create a task in the Task UI
1. Click on the **Tasks** icon in the left navigation menu.
@ -49,10 +51,9 @@ The InfluxDB UI provides multiple ways to create a task:
3. Select **New Task**.
4. In the left panel, specify the task options.
See [Task options](/v2.0/process-data/task-options) for detailed information about each option.
5. In the right panel, enter your task script.
6. Click **Save** in the upper right.
{{< img-hd src="/img/2-0-tasks-create-edit.png" title="Create a task" />}}
5. Select a token to use from the **Token** dropdown.
6. In the right panel, enter your task script.
7. Click **Save** in the upper right.
### Import a task
1. Click on the **Tasks** icon in the left navigation menu.

View File

@ -8,6 +8,8 @@ menu:
name: Delete a task
parent: Manage tasks
weight: 206
related:
- /v2.0/reference/cli/influx/task/delete
---
## Delete a task in the InfluxDB UI

View File

@ -8,6 +8,9 @@ menu:
name: Run a task
parent: Manage tasks
weight: 203
related:
- /v2.0/reference/cli/influx/task/run
- /v2.0/reference/cli/influx/task/retry
---
InfluxDB data processing tasks generally run in defined intervals or at a specific time,

View File

@ -7,6 +7,8 @@ menu:
name: View run history
parent: Manage tasks
weight: 203
related:
- /v2.0/reference/cli/influx/task/run/find
---
When an InfluxDB task runs, a "run" record is created in the task's history.

View File

@ -8,6 +8,8 @@ menu:
name: Update a task
parent: Manage tasks
weight: 204
related:
- /v2.0/reference/cli/influx/task/update
---
## Update a task in the InfluxDB UI
@ -15,13 +17,14 @@ To view your tasks, click the **Tasks** icon in the left navigation menu.
{{< nav-icon "tasks" >}}
Click on the name of a task to update it.
#### Update a task's Flux script
1. In the list of tasks, click the **Name** of the task you want to update.
2. In the left panel, modify the task options.
3. In the right panel, modify the task script.
4. Click **Save** in the upper right.
{{< img-hd src="/img/2-0-tasks-create-edit.png" alt="Update a task" />}}
#### Update the status of a task
In the list of tasks, click the {{< icon "toggle" >}} toggle to the left of the

View File

@ -8,6 +8,8 @@ menu:
name: View tasks
parent: Manage tasks
weight: 202
related:
- /v2.0/reference/cli/influx/task/find
---
## View tasks in the InfluxDB UI

View File

@ -13,10 +13,10 @@ v2.0/tags: [query]
There are multiple ways to execute queries with InfluxDB.
This guide covers the different options:
1. [Data Explorer](#data-explorer)
2. [Influx REPL](#influx-repl)
3. [Influx query command](#influx-query-command)
5. [InfluxDB API](#influxdb-api)
- [Data Explorer](#data-explorer)
- [Influx REPL](#influx-repl)
- [Influx query command](#influx-query-command)
- [InfluxDB API](#influxdb-api)
## Data Explorer
Queries can be built, executed, and visualized in InfluxDB UI's Data Explorer.
@ -60,35 +60,50 @@ In your request, set the following:
- Your organization via the `org` or `orgID` URL parameters.
- `Authorization` header to `Token ` + your authentication token.
- `accept` header to `application/csv`.
- `content-type` header to `application/vnd.flux`.
- `Accept` header to `application/csv`.
- `Content-type` header to `application/vnd.flux`.
- Your plain text query as the request's raw data.
This lets you POST the Flux query in plain text and receive the annotated CSV response.
InfluxDB returns the query results in [annotated CSV](/v2.0/reference/annotated-csv/).
{{% note %}}
#### Use gzip to compress the query response
To compress the query response, set the `Accept-Encoding` header to `gzip`.
This saves network bandwidth, but increases server-side load.
{{% /note %}}
Below is an example `curl` command that queries InfluxDB:
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[Multi-line](#)
[Single-line](#)
[Without compression](#)
[With compression](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```bash
curl http://localhost:9999/api/v2/query?org=my-org -XPOST -sS \
-H 'Authorization: Token YOURAUTHTOKEN' \
-H 'accept:application/csv' \
-H 'content-type:application/vnd.flux' \
-d 'from(bucket:“test”)
|> range(start:-1000h)
|> group(columns:[“_measurement”], mode:“by”)
|> sum()'
-H 'Authorization: Token YOURAUTHTOKEN' \
-H 'Accept: application/csv' \
-H 'Content-type: application/vnd.flux' \
-d 'from(bucket:"example-bucket")
|> range(start:-1000h)
|> group(columns:["_measurement"], mode:"by")
|> sum()'
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```bash
curl http://localhost:9999/api/v2/query?org=my-org -XPOST -sS -H 'Authorization: Token TOKENSTRINGHERE' -H 'accept:application/csv' -H 'content-type:application/vnd.flux' -d 'from(bucket:“test”) |> range(start:-1000h) |> group(columns:[“_measurement”], mode:“by”) |> sum()'
curl http://localhost:9999/api/v2/query?org=my-org -XPOST -sS \
-H 'Authorization: Token YOURAUTHTOKEN' \
-H 'Accept: application/csv' \
-H 'Content-type: application/vnd.flux' \
-H 'Accept-Encoding: gzip' \
-d 'from(bucket:"example-bucket")
|> range(start:-1000h)
|> group(columns:["_measurement"], mode:"by")
|> sum()'
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}

View File

@ -9,13 +9,16 @@ menu:
v2_0:
name: Get started with Flux
parent: Query data
related:
- /v2.0/reference/flux/
- /v2.0/reference/flux/stdlib/
---
Flux is InfluxData's functional data scripting language designed for querying,
analyzing, and acting on data.
This multi-part getting started guide walks through important concepts related to Flux,
how to query time series data from InfluxDB using Flux, and introduces Flux syntax and functions.
This multi-part getting started guide walks through important concepts related to Flux.
It covers querying time series data from InfluxDB using Flux, and introduces Flux syntax and functions.
## Flux design principles
Flux is designed to be usable, readable, flexible, composable, testable, contributable, and shareable.
@ -23,7 +26,7 @@ Its syntax is largely inspired by [2018's most popular scripting language](https
Javascript, and takes a functional approach to data exploration and processing.
The following example illustrates querying data stored from the last five minutes,
filtering by the `cpu` measurement and the `cpu=cpu-usage` tag, windowing the data in 1 minute intervals,
filtering by the `cpu` measurement and the `cpu=cpu-total` tag, windowing the data in 1 minute intervals,
and calculating the average of each window:
```js
@ -44,6 +47,7 @@ Flux uses pipe-forward operators (`|>`) extensively to chain operations together
After each function or operation, Flux returns a table or collection of tables containing data.
The pipe-forward operator pipes those tables into the next function or operation where
they are further processed or manipulated.
This makes it easy to chain together functions to build sophisticated queries.
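
For example, the following sketch pipes queried data through a filter and into an aggregate (the bucket and measurement names are illustrative):

```js
from(bucket: "example-bucket")
  |> range(start: -5m)
  |> filter(fn: (r) => r._measurement == "cpu")
  |> mean()
```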
### Tables
Flux structures all data in tables.

View File

@ -7,6 +7,11 @@ menu:
name: Query InfluxDB
parent: Get started with Flux
weight: 201
related:
- /v2.0/query-data/guides/
- /v2.0/reference/flux/stdlib/built-in/inputs/from
- /v2.0/reference/flux/stdlib/built-in/transformations/range
- /v2.0/reference/flux/stdlib/built-in/transformations/filter
---
This guide walks through the basics of using Flux to query data from InfluxDB.
@ -18,8 +23,8 @@ Every Flux query needs the following:
## 1. Define your data source
Flux's [`from()`](/v2.0/reference/flux/functions/built-in/inputs/from) function defines an InfluxDB data source.
It requires a [`bucket`](/v2.0/reference/flux/functions/built-in/inputs/from#bucket) parameter.
Flux's [`from()`](/v2.0/reference/flux/stdlib/built-in/inputs/from) function defines an InfluxDB data source.
It requires a [`bucket`](/v2.0/reference/flux/stdlib/built-in/inputs/from#bucket) parameter.
The following examples use `example-bucket` as the bucket name.
```js
@ -31,9 +36,9 @@ Flux requires a time range when querying time series data.
"Unbounded" queries are very resource-intensive and as a protective measure,
Flux will not query the database without a specified range.
Use the pipe-forward operator (`|>`) to pipe data from your data source into the [`range()`](/v2.0/reference/flux/functions/built-in/transformations/range)
Use the pipe-forward operator (`|>`) to pipe data from your data source into the [`range()`](/v2.0/reference/flux/stdlib/built-in/transformations/range)
function, which specifies a time range for your query.
It accepts two properties: `start` and `stop`.
It accepts two parameters: `start` and `stop`.
Ranges can be **relative** using negative [durations](/v2.0/reference/flux/language/lexical-elements#duration-literals)
or **absolute** using [timestamps](/v2.0/reference/flux/language/lexical-elements#date-and-time-literals).
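
For example (the bucket name and timestamps are illustrative):

```js
// Relative time range: the last hour
from(bucket: "example-bucket")
  |> range(start: -1h)

// Absolute time range
from(bucket: "example-bucket")
  |> range(start: 2019-08-28T22:00:00Z, stop: 2019-08-28T23:00:00Z)
```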
@ -101,7 +106,7 @@ from(bucket:"example-bucket")
```
## 4. Yield your queried data
Use Flux's `yield()` function to output the filtered tables as the result of the query.
Flux's `yield()` function outputs the filtered tables as the result of the query.
```js
from(bucket:"example-bucket")
@ -114,16 +119,17 @@ from(bucket:"example-bucket")
|> yield()
```
{{% note %}}
Flux automatically assume a `yield()` function at
Flux automatically assumes a `yield()` function at
the end of each script in order to output and visualize the data.
`yield()` is only necessary when including multiple queries in the same Flux query.
Explicitly calling `yield()` is only necessary when including multiple queries in the same Flux script.
Each set of returned data needs to be named using the `yield()` function.
{{% /note %}}
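
For example, the following sketch returns two named result sets from the same source data (the bucket, measurement, and result names are illustrative):

```js
data = from(bucket: "example-bucket")
  |> range(start: -5m)
  |> filter(fn: (r) => r._measurement == "mem")

data |> mean() |> yield(name: "mean")
data |> count() |> yield(name: "count")
```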
## Congratulations!
You have now queried data from InfluxDB using Flux.
This is a barebones query that can be transformed in other ways.
The query shown here is a barebones example.
Flux queries can be extended in many ways to form powerful scripts.
<div class="page-nav-btns">
<a class="btn prev" href="/v2.0/query-data/get-started/">Get started with Flux</a>

View File

@ -7,6 +7,8 @@ menu:
name: Syntax basics
parent: Get started with Flux
weight: 203
related:
- /v2.0/reference/flux/language/types/
---
@ -184,10 +186,8 @@ topN = (tables=<-, n) => tables |> sort(desc: true) |> limit(n: n)
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
_More information about creating custom functions is available in the [Custom functions](/v2.0/query-data/guides/custom-functions) documentation._
Using the `cpuUsageUser` data stream variable defined above, find the top five data
points with the custom `topN` function and yield the results.
Using this new custom function `topN` and the `cpuUsageUser` data stream variable defined above,
we can find the top five data points and yield the results.
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
@ -213,6 +213,8 @@ cpuUsageUser |> topN(n:5) |> yield()
This query will return the five data points with the highest user CPU usage over the last hour.
_More information about creating custom functions is available in the [Custom functions](/v2.0/query-data/guides/custom-functions) documentation._
<div class="page-nav-btns">
<a class="btn prev" href="/v2.0/query-data/get-started/transform-data/">Transform your data</a>
</div>

View File

@ -7,15 +7,19 @@ menu:
name: Transform data
parent: Get started with Flux
weight: 202
related:
- /v2.0/reference/flux/stdlib/built-in/transformations/aggregates/aggregatewindow
- /v2.0/reference/flux/stdlib/built-in/transformations/window
---
When [querying data from InfluxDB](/v2.0/query-data/get-started/query-influxdb),
you often need to transform that data in some way.
Common examples are aggregating data into averages, downsampling data, etc.
This guide demonstrates using [Flux functions](/v2.0/reference/flux/functions) to transform your data.
This guide demonstrates using [Flux functions](/v2.0/reference/flux/stdlib) to transform your data.
It walks through creating a Flux script that partitions data into windows of time,
averages the `_value`s in each window, and outputs the averages as a new table.
(Remember, Flux structures all data in [tables](/v2.0/query-data/get-started/#tables).)
It's important to understand how the "shape" of your data changes through each of these operations.
@ -36,13 +40,13 @@ from(bucket:"example-bucket")
## Flux functions
Flux provides a number of functions that perform specific operations, transformations, and tasks.
You can also [create custom functions](/v2.0/query-data/guides/custom-functions) in your Flux queries.
_Functions are covered in detail in the [Flux functions](/v2.0/reference/flux/functions) documentation._
_Functions are covered in detail in the [Flux functions](/v2.0/reference/flux/stdlib) documentation._
A common type of function used when transforming data queried from InfluxDB is an aggregate function.
Aggregate functions take a set of `_value`s in a table, aggregate them, and transform
them into a new value.
This example uses the [`mean()` function](/v2.0/reference/flux/functions/built-in/transformations/aggregates/mean)
This example uses the [`mean()` function](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/mean)
to average values within each time window.
{{% note %}}
@ -52,7 +56,7 @@ It's just good to understand the steps in the process.
{{% /note %}}
## Window your data
Flux's [`window()` function](/v2.0/reference/flux/functions/built-in/transformations/window) partitions records based on a time value.
Flux's [`window()` function](/v2.0/reference/flux/stdlib/built-in/transformations/window) partitions records based on a time value.
Use the `every` parameter to define a duration of each window.
For this example, window data in five minute intervals (`5m`).
@ -75,7 +79,7 @@ When visualized, each table is assigned a unique color.
## Aggregate windowed data
Flux aggregate functions take the `_value`s in each table and aggregate them in some way.
Use the [`mean()` function](/v2.0/reference/flux/functions/built-in/transformations/aggregates/mean) to average the `_value`s of each table.
Use the [`mean()` function](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/mean) to average the `_value`s of each table.
```js
from(bucket:"example-bucket")
@ -101,7 +105,7 @@ Aggregate functions don't infer what time should be used for the aggregate value
Therefore the `_time` column is dropped.
A `_time` column is required in the [next operation](#unwindow-aggregate-tables).
To add one, use the [`duplicate()` function](/v2.0/reference/flux/functions/built-in/transformations/duplicate)
To add one, use the [`duplicate()` function](/v2.0/reference/flux/stdlib/built-in/transformations/duplicate)
to duplicate the `_stop` column as the `_time` column for each windowed table.
```js
@ -146,7 +150,7 @@ process helps to understand how data changes "shape" as it is passed through eac
Flux provides (and allows you to create) "helper" functions that abstract many of these steps.
The same operation performed in this guide can be accomplished using the
[`aggregateWindow()` function](/v2.0/reference/flux/functions/built-in/transformations/aggregates/aggregatewindow).
[`aggregateWindow()` function](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/aggregatewindow).
```js
from(bucket:"example-bucket")

View File

@ -27,9 +27,9 @@ Conditional expressions are most useful in the following contexts:
- When defining variables.
- When using functions that operate on a single row at a time (
[`filter()`](/v2.0/reference/flux/functions/built-in/transformations/filter/),
[`map()`](/v2.0/reference/flux/functions/built-in/transformations/map/),
[`reduce()`](/v2.0/reference/flux/functions/built-in/transformations/aggregates/reduce) ).
[`filter()`](/v2.0/reference/flux/stdlib/built-in/transformations/filter/),
[`map()`](/v2.0/reference/flux/stdlib/built-in/transformations/map/),
[`reduce()`](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/reduce) ).
## Examples
@ -72,7 +72,7 @@ from(bucket: "example-bucket")
### Conditionally transform column values with map()
The following example uses the [`map()` function](/v2.0/reference/flux/functions/built-in/transformations/map/)
The following example uses the [`map()` function](/v2.0/reference/flux/stdlib/built-in/transformations/map/)
to conditionally transform column values.
It sets the `level` column to a specific string based on `_value` column.
@ -87,8 +87,7 @@ from(bucket: "example-bucket")
|> range(start: -5m)
|> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent" )
|> map(fn: (r) => ({
_time: r._time,
_value: r._value,
r with
level:
if r._value >= 95.0000001 and r._value <= 100.0 then "critical"
else if r._value >= 85.0000001 and r._value <= 95.0 then "warning"
@ -104,10 +103,8 @@ from(bucket: "example-bucket")
|> range(start: -5m)
|> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent" )
|> map(fn: (r) => ({
// Retain the _time column in the mapped row
_time: r._time,
// Retain the _value column in the mapped row
_value: r._value,
// Retain all existing columns in the mapped row
r with
// Set the level column value based on the _value column
level:
if r._value >= 95.0000001 and r._value <= 100.0 then "critical"
@ -122,8 +119,8 @@ from(bucket: "example-bucket")
{{< /code-tabs-wrapper >}}
### Conditionally increment a count with reduce()
The following example uses the [`aggregateWindow()`](/v2.0/reference/flux/functions/built-in/transformations/aggregates/aggregatewindow/)
and [`reduce()`](/v2.0/reference/flux/functions/built-in/transformations/aggregates/reduce/)
The following example uses the [`aggregateWindow()`](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/aggregatewindow/)
and [`reduce()`](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/reduce/)
functions to count the number of records in every five minute window that exceed a defined threshold.
{{< code-tabs-wrapper >}}

View File

@ -70,14 +70,14 @@ functionName = (tables=<-) => tables |> functionOperations
###### Multiply row values by x
The example below defines a `multByX` function that multiplies the `_value` column
of each row in the input table by the `x` parameter.
It uses the [`map()` function](/v2.0/reference/flux/functions/built-in/transformations/map)
It uses the [`map()` function](/v2.0/reference/flux/stdlib/built-in/transformations/map)
to modify each `_value`.
```js
// Function definition
multByX = (tables=<-, x) =>
tables
|> map(fn: (r) => r._value * x)
|> map(fn: (r) => ({ r with _value: r._value * x}))
// Function usage
from(bucket: "example-bucket")
@ -104,9 +104,9 @@ Defaults are overridden by explicitly defining the parameter in the function cal
###### Get the winner or the "winner"
The example below defines a `getWinner` function that returns the record with the highest
or lowest `_value` (winner versus "winner") depending on the `noSarcasm` parameter which defaults to `true`.
It uses the [`sort()` function](/v2.0/reference/flux/functions/built-in/transformations/sort)
It uses the [`sort()` function](/v2.0/reference/flux/stdlib/built-in/transformations/sort)
to sort records in either descending or ascending order.
It then uses the [`limit()` function](/v2.0/reference/flux/functions/built-in/transformations/limit)
It then uses the [`limit()` function](/v2.0/reference/flux/stdlib/built-in/transformations/limit)
to return the first record from the sorted table.
```js

View File

@ -10,9 +10,9 @@ weight: 301
---
To aggregate your data, use the Flux
[built-in aggregate functions](/v2.0/reference/flux/functions/built-in/transformations/aggregates/)
[built-in aggregate functions](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/)
or create custom aggregate functions using the
[`reduce()`function](/v2.0/reference/flux/functions/built-in/transformations/aggregates/reduce/).
[`reduce()`function](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/reduce/).
## Aggregate function characteristics
Aggregate functions all have the same basic characteristics:
@ -22,7 +22,7 @@ Aggregate functions all have the same basic characteristics:
## How reduce() works
The `reduce()` function operates on one row at a time using the function defined in
the [`fn` parameter](/v2.0/reference/flux/functions/built-in/transformations/aggregates/reduce/#fn).
the [`fn` parameter](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/reduce/#fn).
The `fn` function maps keys to specific values using two [objects](/v2.0/query-data/get-started/syntax-basics/#objects)
specified by the following parameters:
@ -32,7 +32,7 @@ specified by the following parameters:
| `accumulator` | An object that contains values used in each row's aggregate calculation. |
{{% note %}}
The `reduce()` function's [`identity` parameter](/v2.0/reference/flux/functions/built-in/transformations/aggregates/reduce/#identity)
The `reduce()` function's [`identity` parameter](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/reduce/#identity)
defines the initial `accumulator` object.
{{% /note %}}
@ -49,6 +49,11 @@ in an input table.
)
```
{{% note %}}
To preserve existing columns, [use the `with` operator](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/reduce/#preserve-columns)
when mapping values in the `r` object.
{{% /note %}}
To illustrate how this function works, take this simplified table for example:
```txt
@ -145,7 +150,7 @@ and the `reduce()` function to aggregate rows in each input table.
### Create a custom average function
This example illustrates how to create a function that averages values in a table.
_This is meant for demonstration purposes only.
The built-in [`mean()` function](/v2.0/reference/flux/functions/built-in/transformations/aggregates/mean/)
The built-in [`mean()` function](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/mean/)
does the same thing and is much more performant._
{{< code-tabs-wrapper >}}

View File

@ -0,0 +1,69 @@
---
title: Check if a value exists
seotitle: Use Flux to check if a value exists
description: >
Use the Flux `exists` operator to check if an object contains a key or if that
key's value is `null`.
v2.0/tags: [exists]
menu:
v2_0:
name: Check if a value exists
parent: How-to guides
weight: 209
---
Use the Flux `exists` operator to check if an object contains a key or if that
key's value is `null`.
```js
p = {firstName: "John", lastName: "Doe", age: 42}
exists p.firstName
// Returns true
exists p.height
// Returns false
```
Use `exists` with row functions (
[`filter()`](/v2.0/reference/flux/stdlib/built-in/transformations/filter/),
[`map()`](/v2.0/reference/flux/stdlib/built-in/transformations/map/),
[`reduce()`](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/reduce/))
to check if a row includes a column or if the value for that column is `null`.
#### Filter out null values
```js
from(bucket: "example-bucket")
|> range(start: -5m)
|> filter(fn: (r) => exists r._value)
```
#### Map values based on existence
```js
from(bucket: "default")
|> range(start: -30s)
|> map(fn: (r) => ({
r with
human_readable:
if exists r._value then "${r._field} is ${string(v:r._value)}."
else "${r._field} has no value."
}))
```
#### Ignore null values in a custom aggregate function
```js
customSumProduct = (tables=<-) =>
tables
|> reduce(
identity: {sum: 0.0, product: 1.0},
fn: (r, accumulator) => ({
r with
sum:
if exists r._value then r._value + accumulator.sum
else accumulator.sum,
product:
if exists r._value then r._value * accumulator.product
else accumulator.product
})
)
```

View File

@ -28,7 +28,7 @@ Understanding how modifying group keys shapes output data is key to successfully
grouping and transforming data into your desired output.
## group() Function
Flux's [`group()` function](/v2.0/reference/flux/functions/built-in/transformations/group) defines the
Flux's [`group()` function](/v2.0/reference/flux/stdlib/built-in/transformations/group) defines the
group key for output tables, i.e. grouping records based on values for specific columns.
###### group() example

View File

@ -6,7 +6,7 @@ menu:
v2_0:
name: Create histograms
parent: How-to guides
weight: 207
weight: 208
---
@ -14,7 +14,7 @@ Histograms provide valuable insight into the distribution of your data.
This guide walks through using Flux's `histogram()` function to transform your data into a **cumulative histogram**.
## histogram() function
The [`histogram()` function](/v2.0/reference/flux/functions/built-in/transformations/histogram) approximates the
The [`histogram()` function](/v2.0/reference/flux/stdlib/built-in/transformations/histogram) approximates the
cumulative distribution of a dataset by counting data frequencies for a list of "bins."
A **bin** is simply a range in which a data point falls.
All data points that are less than or equal to the bound are counted in the bin.
@ -41,7 +41,7 @@ Flux provides two helper functions for generating histogram bins.
Each generates an array of floats designed to be used in the `histogram()` function's `bins` parameter.
### linearBins()
The [`linearBins()` function](/v2.0/reference/flux/functions/built-in/misc/linearbins) generates a list of linearly separated floats.
The [`linearBins()` function](/v2.0/reference/flux/stdlib/built-in/misc/linearbins) generates a list of linearly separated floats.
```js
linearBins(start: 0.0, width: 10.0, count: 10)
@ -50,17 +50,36 @@ linearBins(start: 0.0, width: 10.0, count: 10)
```
### logarithmicBins()
The [`logarithmicBins()` function](/v2.0/reference/flux/functions/built-in/misc/logarithmicbins) generates a list of exponentially separated floats.
The [`logarithmicBins()` function](/v2.0/reference/flux/stdlib/built-in/misc/logarithmicbins) generates a list of exponentially separated floats.
```js
logarithmicBins(start: 1.0, factor: 2.0, count: 10, infinty: true)
logarithmicBins(start: 1.0, factor: 2.0, count: 10, infinity: true)
// Generated list: [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, +Inf]
```
## Histogram visualization
The [Histogram visualization type](/v2.0/visualize-data/visualization-types/histogram/)
automatically converts query results into a binned and segmented histogram.
{{< img-hd src="/img/2-0-visualizations-histogram-example.png" alt="Histogram visualization" />}}
Use the [Histogram visualization controls](/v2.0/visualize-data/visualization-types/histogram/#histogram-controls)
to specify the number of bins and define groups in bins.
### Histogram visualization data structure
Because the Histogram visualization uses visualization controls to create bins and groups,
**do not** structure query results as histogram data.
{{% note %}}
Output of the [`histogram()` function](#histogram-function) is **not** compatible
with the Histogram visualization type.
View the example [below](#visualize-errors-by-severity).
{{% /note %}}
## Examples
### Generating a histogram with linear bins
### Generate a histogram with linear bins
```js
from(bucket:"example-bucket")
|> range(start: -5m)
@ -105,7 +124,7 @@ Table: keys: [_start, _stop, _field, _measurement, host]
2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 75 30
```
### Generating a histogram with logarithmic bins
### Generate a histogram with logarithmic bins
```js
from(bucket:"example-bucket")
|> range(start: -5m)
@ -139,3 +158,22 @@ Table: keys: [_start, _stop, _field, _measurement, host]
2018-11-07T22:23:36.860664000Z 2018-11-07T22:28:36.860664000Z used_percent mem Scotts-MacBook-Pro.local 128 30
2018-11-07T22:23:36.860664000Z 2018-11-07T22:28:36.860664000Z used_percent mem Scotts-MacBook-Pro.local 256 30
```
### Visualize errors by severity
Use the [Telegraf Syslog plugin](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/syslog)
to collect error information from your system.
Query the `severity_code` field in the `syslog` measurement:
```js
from(bucket: "example-bucket")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) =>
r._measurement == "syslog" and
r._field == "severity_code"
)
```
In the Histogram visualization options, select `_time` as the **X Column**
and `severity` as the **Group By** option:
{{< img-hd src="/img/2-0-visualizations-histogram-errors.png" alt="Logs by severity histogram" />}}

View File

@ -10,7 +10,7 @@ menu:
weight: 205
---
The [`join()` function](/v2.0/reference/flux/functions/built-in/transformations/join) merges two or more
The [`join()` function](/v2.0/reference/flux/stdlib/built-in/transformations/join) merges two or more
input streams, whose values are equal on a set of common columns, into a single output stream.
Flux allows you to join on any columns common between two data streams and opens the door
for operations such as cross-measurement joins and math across measurements.
@ -205,7 +205,7 @@ These represent the columns with values unique to the two input tables.
## Calculate and create a new table
With the two streams of data joined into a single table, use the
[`map()` function](/v2.0/reference/flux/functions/built-in/transformations/map)
[`map()` function](/v2.0/reference/flux/stdlib/built-in/transformations/map)
to build a new table by mapping the existing `_time` column to a new `_time`
column and dividing `_value_mem` by `_value_proc` and mapping it to a
new `_value` column.
@ -213,9 +213,10 @@ new `_value` column.
```js
join(tables: {mem:memUsed, proc:procTotal}, on: ["_time", "_stop", "_start", "host"])
|> map(fn: (r) => ({
_time: r._time,
_value: r._value_mem / r._value_proc
}))
_time: r._time,
_value: r._value_mem / r._value_proc
})
)
```
{{% truncate %}}

View File

@ -0,0 +1,108 @@
---
title: Manipulate timestamps with Flux
description: >
Use Flux to process and manipulate timestamps.
menu:
v2_0:
name: Manipulate timestamps
parent: How-to guides
weight: 209
---
Every point stored in InfluxDB has an associated timestamp.
Use Flux to process and manipulate timestamps to suit your needs.
- [Convert timestamp format](#convert-timestamp-format)
- [Time-related Flux functions](#time-related-flux-functions)
## Convert timestamp format
### Convert nanosecond epoch timestamp to RFC3339
Use the [`time()` function](/v2.0/reference/flux/stdlib/built-in/transformations/type-conversions/time/)
to convert a **nanosecond** epoch timestamp to an RFC3339 timestamp.
```js
time(v: 1568808000000000000)
// Returns 2019-09-18T12:00:00.000000000Z
```
### Convert RFC3339 to nanosecond epoch timestamp
Use the [`uint()` function](/v2.0/reference/flux/stdlib/built-in/transformations/type-conversions/uint/)
to convert an RFC3339 timestamp to a nanosecond epoch timestamp.
```js
uint(v: 2019-09-18T12:00:00.000000000Z)
// Returns 1568808000000000000
```
### Calculate the duration between two timestamps
Flux doesn't support mathematical operations using [time type](/v2.0/reference/flux/language/types/#time-types) values.
To calculate the duration between two timestamps:
1. Use the `uint()` function to convert each timestamp to a nanosecond epoch timestamp.
2. Subtract one nanosecond epoch timestamp from the other.
3. Use the `duration()` function to convert the result into a duration.
```js
time1 = uint(v: 2019-09-17T21:12:05Z)
time2 = uint(v: 2019-09-18T22:16:35Z)
duration(v: time2 - time1)
// Returns 25h4m30s
```
{{% note %}}
Flux doesn't support duration column types.
To store a duration in a column, use the [`string()` function](/v2.0/reference/flux/stdlib/built-in/transformations/type-conversions/string/)
to convert the duration to a string.
{{% /note %}}
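For example, a sketch that builds on `time1` and `time2` defined above:

```js
string(v: duration(v: time2 - time1))
// Returns "25h4m30s"
```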
## Time-related Flux functions
### Retrieve the current time
Use the [`now()` function](/v2.0/reference/flux/stdlib/built-in/misc/now/) to
return the current UTC time in RFC3339 format.
```js
now()
```
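`now()` can also be combined with the conversion pattern above, for example (a sketch) to calculate the time elapsed since a fixed timestamp:

```js
duration(v: uint(v: now()) - uint(v: 2019-09-18T12:00:00Z))
```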
### Add a duration to a timestamp
The [`experimental.addDuration()` function](/v2.0/reference/flux/stdlib/experimental/addduration/)
adds a duration to a specified time and returns the resulting time.
{{% warn %}}
By using `experimental.addDuration()`, you accept the
[risks of experimental functions](/v2.0/reference/flux/stdlib/experimental/#use-experimental-functions-at-your-own-risk).
{{% /warn %}}
```js
import "experimental"
experimental.addDuration(
d: 6h,
to: 2019-09-16T12:00:00Z,
)
// Returns 2019-09-16T18:00:00.000000000Z
```
### Subtract a duration from a timestamp
The [`experimental.subDuration()` function](/v2.0/reference/flux/stdlib/experimental/subduration/)
subtracts a duration from a specified time and returns the resulting time.
{{% warn %}}
By using `experimental.subDuration()`, you accept the
[risks of experimental functions](/v2.0/reference/flux/stdlib/experimental/#use-experimental-functions-at-your-own-risk).
{{% /warn %}}
```js
import "experimental"
experimental.subDuration(
d: 6h,
from: 2019-09-16T12:00:00Z,
)
// Returns 2019-09-16T06:00:00.000000000Z
```

View File

@ -40,7 +40,7 @@ Otherwise, you will get an error similar to:
Error: type error: float != int
```
To convert operands to the same type, use [type-conversion functions](/v2.0/reference/flux/functions/built-in/transformations/type-conversions/)
To convert operands to the same type, use [type-conversion functions](/v2.0/reference/flux/stdlib/built-in/transformations/type-conversions/)
or manually format operands.
The operand data type determines the output data type.
For example:
@ -82,21 +82,21 @@ percent(sample: 20.0, total: 80.0)
To transform multiple values in an input stream, your function needs to:
- [Handle piped-forward data](/v2.0/query-data/guides/custom-functions/#functions-that-manipulate-piped-forward-data).
- Use the [`map()` function](/v2.0/reference/flux/functions/built-in/transformations/map) to iterate over each row.
- Use the [`map()` function](/v2.0/reference/flux/stdlib/built-in/transformations/map) to iterate over each row.
The example `multiplyByX()` function below includes:
- A `tables` parameter that represents the input data stream (`<-`).
- An `x` parameter which is the number by which values in the `_value` column are multiplied.
- A `map()` function that iterates over each row in the input stream.
It uses the `_time` value of the input stream to define the `_time` value in the output stream.
It uses the `with` operator to preserve existing columns in each row.
  It also multiplies the `_value` column by `x`.
```js
multiplyByX = (x, tables=<-) =>
tables
|> map(fn: (r) => ({
_time: r._time,
r with
_value: r._value * x
})
)
@ -115,17 +115,17 @@ The `map()` function iterates over each row in the piped-forward data and define
a new `_value` by dividing the original `_value` by 1073741824.
```js
from(bucket: "default")
from(bucket: "example-bucket")
|> range(start: -10m)
|> filter(fn: (r) =>
r._measurement == "mem" and
r._field == "active"
)
|> map(fn: (r) => ({
_time: r._time,
_value: r._value / 1073741824
})
)
r with
_value: r._value / 1073741824
})
)
```
You could turn that same calculation into a function:
@ -134,7 +134,7 @@ You could turn that same calculation into a function:
bytesToGB = (tables=<-) =>
tables
|> map(fn: (r) => ({
_time: r._time,
r with
_value: r._value / 1073741824
})
)
@ -146,14 +146,14 @@ data
#### Include partial gigabytes
Because the original metric (bytes) is an integer, the output of the operation is an integer and does not include partial GBs.
To calculate partial GBs, convert the `_value` column and its values to floats using the
[`float()` function](/v2.0/reference/flux/functions/built-in/transformations/type-conversions/float)
[`float()` function](/v2.0/reference/flux/stdlib/built-in/transformations/type-conversions/float)
and format the denominator in the division operation as a float.
```js
bytesToGB = (tables=<-) =>
tables
|> map(fn: (r) => ({
_time: r._time,
r with
_value: float(v: r._value) / 1073741824.0
})
)
@ -195,7 +195,7 @@ usageToFloat = (tables=<-) =>
// Define the data source and filter user and system CPU usage
// from 'cpu-total' in the 'cpu' measurement
from(bucket: "default")
from(bucket: "example-bucket")
|> range(start: -1h)
|> filter(fn: (r) =>
r._measurement == "cpu" and
@ -213,7 +213,8 @@ from(bucket: "default")
// Map over each row and calculate the percentage of
// CPU used by the user vs the system
|> map(fn: (r) => ({
_time: r._time,
// Preserve existing columns in each row
r with
usage_user: r.usage_user / (r.usage_user + r.usage_system) * 100.0,
usage_system: r.usage_system / (r.usage_user + r.usage_system) * 100.0
})
@ -232,7 +233,7 @@ usageToFloat = (tables=<-) =>
})
)
from(bucket: "default")
from(bucket: "example-bucket")
|> range(start: timeRangeStart, stop: timeRangeStop)
|> filter(fn: (r) =>
r._measurement == "cpu" and
@ -243,7 +244,7 @@ from(bucket: "default")
|> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
|> usageToFloat()
|> map(fn: (r) => ({
_time: r._time,
r with
usage_user: r.usage_user / (r.usage_user + r.usage_system) * 100.0,
usage_system: r.usage_system / (r.usage_user + r.usage_system) * 100.0
})

View File

@ -12,7 +12,7 @@ menu:
weight: 206
---
The [`sort()`function](/v2.0/reference/flux/functions/built-in/transformations/sort)
The [`sort()` function](/v2.0/reference/flux/stdlib/built-in/transformations/sort)
orders the records within each table.
The following example orders system uptime first by region, then host, then value.
@ -26,7 +26,7 @@ from(bucket:"example-bucket")
|> sort(columns:["region", "host", "_value"])
```
The [`limit()` function](/v2.0/reference/flux/functions/built-in/transformations/limit)
The [`limit()` function](/v2.0/reference/flux/stdlib/built-in/transformations/limit)
limits the number of records in output tables to a fixed number, `n`.
The following example shows up to 10 records from the past hour.
@ -52,6 +52,6 @@ from(bucket:"example-bucket")
```
You now have created a Flux query that sorts and limits data.
Flux also provides the [`top()`](/v2.0/reference/flux/functions/built-in/transformations/selectors/top)
and [`bottom()`](/v2.0/reference/flux/functions/built-in/transformations/selectors/bottom)
Flux also provides the [`top()`](/v2.0/reference/flux/stdlib/built-in/transformations/selectors/top)
and [`bottom()`](/v2.0/reference/flux/stdlib/built-in/transformations/selectors/bottom)
functions to perform both of these operations at the same time.
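For example, a sketch using `top()` (the bucket, measurement, and field names are assumptions):

```js
from(bucket: "example-bucket")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "system" and r._field == "uptime")
  |> top(n: 10, columns: ["_value"])
```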

View File

@ -0,0 +1,217 @@
---
title: Query SQL data sources
seotitle: Query SQL data sources with InfluxDB
description: >
The Flux `sql` package provides functions for working with SQL data sources.
  Use `sql.from()` to query SQL databases like PostgreSQL and MySQL.
v2.0/tags: [query, flux, sql]
menu:
v2_0:
parent: How-to guides
weight: 207
---
The [Flux](/v2.0/reference/flux) `sql` package provides functions for working with SQL data sources.
[`sql.from()`](/v2.0/reference/flux/stdlib/sql/from/) lets you query SQL data sources
like [PostgreSQL](https://www.postgresql.org/) and [MySQL](https://www.mysql.com/)
and use the results with InfluxDB dashboards, tasks, and other operations.
- [Query a SQL data source](#query-a-sql-data-source)
- [Join SQL data with data in InfluxDB](#join-sql-data-with-data-in-influxdb)
- [Use SQL results to populate dashboard variables](#use-sql-results-to-populate-dashboard-variables)
- [Sample sensor data](#sample-sensor-data)
## Query a SQL data source
To query a SQL data source:
1. Import the `sql` package in your Flux query.
2. Use the `sql.from()` function to specify the driver, data source name (DSN),
   and query used to retrieve data from your SQL data source:
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[PostgreSQL](#)
[MySQL](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```js
import "sql"
sql.from(
driverName: "postgres",
dataSourceName: "postgresql://user:password@localhost",
query: "SELECT * FROM example_table"
)
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```js
import "sql"
sql.from(
driverName: "mysql",
dataSourceName: "user:password@tcp(localhost:3306)/db",
query: "SELECT * FROM example_table"
)
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
_See the [`sql.from()` documentation](/v2.0/reference/flux/stdlib/sql/from/) for
information about required function parameters._
## Join SQL data with data in InfluxDB
One of the primary benefits of querying SQL data sources from InfluxDB
is the ability to enrich query results with data stored outside of InfluxDB.
Using the [air sensor sample data](#sample-sensor-data) below, the following query
joins air sensor metrics stored in InfluxDB with sensor information stored in PostgreSQL.
The joined data lets you query and filter results based on sensor information
that isn't stored in InfluxDB.
```js
// Import the "sql" package
import "sql"
// Query data from PostgreSQL
sensorInfo = sql.from(
driverName: "postgres",
dataSourceName: "postgresql://localhost?sslmode=disable",
query: "SELECT * FROM sensors"
)
// Query data from InfluxDB
sensorMetrics = from(bucket: "example-bucket")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "airSensors")
// Join InfluxDB query results with PostgreSQL query results
join(tables: {metric: sensorMetrics, info: sensorInfo}, on: ["sensor_id"])
```
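With the sensor information joined in, you can filter on columns that exist only in PostgreSQL, for example (a sketch; the `location` value is an assumption):

```js
join(tables: {metric: sensorMetrics, info: sensorInfo}, on: ["sensor_id"])
  |> filter(fn: (r) => r.location == "main-lobby")
```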
## Use SQL results to populate dashboard variables
Use `sql.from()` to [create dashboard variables](/v2.0/visualize-data/variables/create-variable/)
from SQL query results.
The following example uses the [air sensor sample data](#sample-sensor-data) below to
create a variable that lets you select the location of a sensor.
```js
import "sql"
sql.from(
driverName: "postgres",
dataSourceName: "postgresql://localhost?sslmode=disable",
query: "SELECT * FROM sensors"
)
|> rename(columns: {location: "_value"})
|> keep(columns: ["_value"])
```
Use the variable to manipulate queries in your dashboards.
{{< img-hd src="/img/2-0-sql-dashboard-variable.png" alt="Dashboard variable from SQL query results" />}}
---
## Sample sensor data
The [sample data generator](#download-and-run-the-sample-data-generator) and
[sample sensor information](#import-the-sample-sensor-information) simulate a
group of sensors that measure temperature, humidity, and carbon monoxide
in rooms throughout a building.
Each collected data point is stored in InfluxDB with a `sensor_id` tag that identifies
the specific sensor it came from.
Sample sensor information is stored in PostgreSQL.
**Sample data includes:**
- Simulated data collected from each sensor and stored in the `airSensors` measurement in **InfluxDB**:
- temperature
- humidity
- co
- Information about each sensor stored in the `sensors` table in **PostgreSQL**:
- sensor_id
- location
- model_number
- last_inspected
### Import and generate sample sensor data
#### Download and run the sample data generator
`air-sensor-data.rb` is a script that generates air sensor data and stores the data in InfluxDB.
To use `air-sensor-data.rb`:
1. [Create a bucket](/v2.0/organizations/buckets/create-bucket/) to store the data.
2. Download the sample data generator. _This tool requires [Ruby](https://www.ruby-lang.org/en/)._
<a class="btn download" href="/downloads/air-sensor-data.rb" download>Download Air Sensor Generator</a>
3. Give `air-sensor-data.rb` executable permissions:
```
chmod +x air-sensor-data.rb
```
4. Start the generator. Specify your organization, bucket, and authorization token.
_For information about retrieving your token, see [View tokens](/v2.0/security/tokens/view-tokens/)._
```
./air-sensor-data.rb -o your-org -b your-bucket -t YOURAUTHTOKEN
```
The generator begins to write data to InfluxDB and will continue until stopped.
Use `ctrl-c` to stop the generator.
_**Note:** Use the `--help` flag to view other configuration options._
5. [Query your target bucket](/v2.0/query-data/execute-queries/) to ensure the
generated data is writing successfully.
The generator doesn't catch errors from write requests, so it will continue running
even if data is not writing to InfluxDB successfully.
```
from(bucket: "example-bucket")
|> range(start: -1m)
|> filter(fn: (r) => r._measurement == "airSensors")
```
#### Import the sample sensor information
1. [Download and install PostgreSQL](https://www.postgresql.org/download/).
2. Download the sample sensor information CSV.
<a class="btn download" href="/downloads/sample-sensor-info.csv" download>Download Sample Data</a>
3. Use a PostgreSQL client (`psql` or a GUI) to create the `sensors` table:
```
CREATE TABLE sensors (
sensor_id character varying(50),
location character varying(50),
model_number character varying(50),
last_inspected date
);
```
4. Import the downloaded CSV sample data.
_Update the `FROM` file path to the path of the downloaded CSV sample data._
```
COPY sensors(sensor_id,location,model_number,last_inspected)
FROM '/path/to/sample-sensor-info.csv' DELIMITER ',' CSV HEADER;
```
5. Query the table to ensure the data was imported correctly:
```
SELECT * FROM sensors;
```
#### Import the sample data dashboard
Download and import the Air Sensors dashboard to visualize the generated data:
<a class="btn download" href="/downloads/air-sensors-dashboard.json" download>Download Air Sensors dashboard</a>
_For information about importing a dashboard, see [Create a dashboard](/v2.0/visualize-data/dashboards/create-dashboard/#create-a-new-dashboard)._

View File

@ -86,7 +86,7 @@ Table: keys: [_start, _stop, _field, _measurement]
{{% /truncate %}}
## Windowing data
Use the [`window()` function](/v2.0/reference/flux/functions/built-in/transformations/window)
Use the [`window()` function](/v2.0/reference/flux/stdlib/built-in/transformations/window)
to group your data based on time bounds.
The most common parameter passed with the `window()` is `every` which
defines the duration of time between windows.
@ -170,14 +170,14 @@ When visualized in the InfluxDB UI, each window table is displayed in a differen
![Windowed data](/img/simple-windowed-data.png)
## Aggregate data
[Aggregate functions](/v2.0/reference/flux/functions/built-in/transformations/aggregates) take the values
[Aggregate functions](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates) take the values
of all rows in a table and use them to perform an aggregate operation.
The result is output as a new value in a single-row table.
Since windowed data is split into separate tables, aggregate operations run against
each table separately and output new tables containing only the aggregated value.
For this example, use the [`mean()` function](/v2.0/reference/flux/functions/built-in/transformations/aggregates/mean)
For this example, use the [`mean()` function](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/mean)
to output the average of each window:
```js
@ -241,7 +241,7 @@ These represent the lower and upper bounds of the time window.
Many Flux functions rely on the `_time` column.
To further process your data after an aggregate function, you need to re-add `_time`.
Use the [`duplicate()` function](/v2.0/reference/flux/functions/built-in/transformations/duplicate) to
Use the [`duplicate()` function](/v2.0/reference/flux/stdlib/built-in/transformations/duplicate) to
duplicate either the `_start` or `_stop` column as a new `_time` column.
```js
@ -329,7 +329,7 @@ With the aggregate values in a single table, data points in the visualization ar
You have now created a Flux query that windows and aggregates data.
The data transformation process outlined in this guide should be used for all aggregation operations.
Flux also provides the [`aggregateWindow()` function](/v2.0/reference/flux/functions/built-in/transformations/aggregates/aggregatewindow)
Flux also provides the [`aggregateWindow()` function](/v2.0/reference/flux/stdlib/built-in/transformations/aggregates/aggregatewindow)
which performs all these separate functions for you.
The following Flux query will return the same results:

View File

@ -0,0 +1,220 @@
---
title: Annotated CSV syntax
description: >
Annotated CSV format is used to encode HTTP responses and results returned to the Flux `csv.from()` function.
weight: 6
menu:
v2_0_ref:
name: Annotated CSV
---
Annotated CSV (comma-separated values) format is used to encode HTTP responses and results returned to the Flux [`csv.from()` function](https://v2.docs.influxdata.com/v2.0/reference/flux/stdlib/csv/from/).
CSV tables must be encoded in UTF-8 and Unicode Normal Form C as defined in [UAX15](http://www.unicode.org/reports/tr15/). Line endings must be CRLF (Carriage Return Line Feed) as defined by the `text/csv` MIME type in [RFC 4180](https://tools.ietf.org/html/rfc4180).
## Examples
In this topic, you'll find examples of valid CSV syntax for responses to the following query:
```js
from(bucket:"mydb/autogen")
|> range(start:2018-05-08T20:50:00Z, stop:2018-05-08T20:51:00Z)
|> group(columns:["_start","_stop", "region", "host"])
|> yield(name:"my-result")
```
## CSV response format
Flux supports the encodings listed below.
### Tables
A table may have the following rows and columns.
#### Rows
- **Annotation rows**: describe column properties.
- **Header row**: defines column labels (one header row per table).
- **Record row**: describes data in the table (one record per row).
##### Example
Encoding of a table with and without a header row.
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[Header row](#)
[Without header row](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```js
result,table,_start,_stop,_time,region,host,_value
my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:00Z,east,A,15.43
my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:20Z,east,B,59.25
my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:40Z,east,C,52.62
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```js
my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:00Z,east,A,15.43
my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:20Z,east,B,59.25
my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:40Z,east,C,52.62
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
#### Columns
In addition to the data columns, a table may include the following columns:
- **Annotation column**: Only used in annotation rows. Always the first column. Displays the name of an annotation. Value can be empty or a supported [annotation](#annotations). You'll notice a space for this column for the entire length of the table, so rows appear to start with `,`.
- **Result column**: Contains the name of the result specified by the query.
- **Table column**: Contains a unique ID for each table in a result.
### Multiple tables and results
If a file or data stream contains multiple tables or results, the following requirements must be met:
- A table column indicates which table a row belongs to.
- All rows in a table are contiguous.
- An empty row delimits a new table boundary in the following cases:
- Between tables in the same result that do not share a common table schema.
- Between concatenated CSV files.
- Each new table boundary starts with new annotation and header rows.
##### Example
Encoding of two tables in the same result with the same schema (header row) and different schema.
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[Same schema](#)
[Different schema](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```js
result,table,_start,_stop,_time,region,host,_value
my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:00Z,east,A,15.43
my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:20Z,east,B,59.25
my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:40Z,east,C,52.62
my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:00Z,west,A,62.73
my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:20Z,west,B,12.83
my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:40Z,west,C,51.62
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```js
,result,table,_start,_stop,_time,region,host,_value
,my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:00Z,east,A,15.43
,my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:20Z,east,B,59.25
,my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:40Z,east,C,52.62
,result,table,_start,_stop,_time,location,device,min,max
,my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:00Z,USA,5825,62.73,68.42
,my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:20Z,USA,2175,12.83,56.12
,my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:40Z,USA,6913,51.62,54.25
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
### Dialect options
Flux supports the following dialect options for `text/csv` format.
| Option | Description| Default |
| :-------- | :--------- | :-------|
| **header** | If true, the header row is included.| `true`|
| **delimiter** | Character used to delimit columns. | `,`|
| **quoteChar** | Character used to quote values containing the delimiter. |`"`|
| **annotations** | List of annotations to encode (datatype, group, or default). |`empty`|
| **commentPrefix** | String prefix to identify a comment. Always added to annotations. |`#`|
### Annotations
Annotation rows describe column properties and start with `#` (or the `commentPrefix` value). The first column in an annotation row always contains the annotation name. Subsequent columns contain annotation values as shown in the table below.
|Annotation name | Values| Description |
| :-------- | :--------- | :-------|
| **datatype** | a [valid data type](#valid-data-types) | Describes the type of data. |
| **group** | boolean flag `true` or `false` | Indicates the column is part of the group key.|
| **default** | a [valid data type](#valid-data-types) |Value to use for rows with an empty string value.|
{{% note %}}
To encode a table with its group key, the `datatype`, `group`, and `default` annotations must be included. If a table has no rows, the `default` annotation provides the group key values.
{{% /note %}}
##### Example
Example encoding of datatype, group, and default annotations.
```js
import "csv"
a = "#datatype,string,long,dateTime:RFC3339,dateTime:RFC3339,dateTime:RFC3339,string,string,double
#group,false,false,false,false,false,false,false,false
#default,,,,,,,,
,result,table,_start,_stop,_time,region,host,_value
,my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:00Z,east,A,15.43
,my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:20Z,east,B,59.25
,my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:40Z,east,C,52.62
,my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:00Z,west,A,62.73
,my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:20Z,west,B,12.83
,my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:40Z,west,C,51.62
"
csv.from(csv:a) |> yield()
```
### Valid data types
| Datatype | Flux type | Description |
| :-------- | :--------- | :-----------------------------------------------------------------------------|
| boolean | bool | a truth value, one of "true" or "false" |
| unsignedLong | uint | an unsigned 64-bit integer |
| long | int | a signed 64-bit integer |
| double | float | an IEEE-754 64-bit floating-point number |
| string | string | a UTF-8 encoded string |
| base64Binary | bytes | a base64 encoded sequence of bytes as defined in RFC 4648 |
| dateTime     | time      | an instant in time; may be followed by a colon (`:`) and a description of the format |
| duration | duration | a length of time represented as an unsigned 64-bit integer number of nanoseconds |
## Errors
If an error occurs during execution, a table returns with:
- An error column that contains an error message.
- A reference column with a unique reference code to identify more information about the error.
- A second row with error properties.
If an error occurs:
- Before results materialize, the HTTP status code indicates an error. Error details are encoded in the CSV table.
- After partial results are sent to the client, the error is encoded as the next table and remaining results are discarded. In this case, the HTTP status code remains 200 OK.
##### Example
Encoding for an error with the datatype annotation:
```js
#datatype,string,long
,error,reference
,Failed to parse query,897
```
Encoding for an error that occurs after a valid table has been encoded:
```js
#datatype,string,long,dateTime:RFC3339,dateTime:RFC3339,dateTime:RFC3339,string,string,double
,result,table,_start,_stop,_time,region,host,_value
,my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:00Z,west,A,62.73
,my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:20Z,west,B,12.83
,my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:40Z,west,C,51.62
```
```js
#datatype,string,long
,error,reference
,query terminated: reached maximum allowed memory limits,576
```
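Because an error that occurs after partial results is encoded as just another table while the HTTP status remains `200 OK`, a client has to look for it explicitly. Below is a minimal Python sketch of that check, assuming the response body is held in memory (real clients process tables as they stream, and official client libraries do this for you):

```python
import csv
import io

def find_error(annotated_csv):
    """Return (message, reference) if the annotated CSV contains an error
    table, otherwise None.

    Illustrative sketch only: an error table is identified by a header row
    whose columns are named 'error' and 'reference'; the next row carries
    the error properties.
    """
    rows = [r for r in csv.reader(io.StringIO(annotated_csv)) if r]
    for i, row in enumerate(rows):
        if row[:3] == ["", "error", "reference"] and i + 1 < len(rows):
            return rows[i + 1][1], rows[i + 1][2]
    return None

response = "#datatype,string,long\n,error,reference\n,Failed to parse query,897"
print(find_error(response))  # ('Failed to parse query', '897')
```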
@@ -4,7 +4,7 @@ description: >
The InfluxDB v2 API provides a programmatic interface for interactions with InfluxDB.
Access the InfluxDB API using the `/api/v2/` endpoint.
menu: v2_0_ref
weight: 2
weight: 3
v2.0/tags: [api]
---
@@ -15,15 +15,37 @@ Access the InfluxDB API using the `/api/v2/` endpoint.
InfluxDB uses [authentication tokens](/v2.0/security/tokens/) to authorize API requests.
Include your authentication token as an `Authorization` header in each request.
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[InfluxDB OSS](#)
[InfluxDB Cloud](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```sh
curl --request GET \
--url http://localhost:9999/api/v2/ \
curl --request POST \
--url "http://localhost:9999/api/v2/write?org=my-org&bucket=example-bucket" \
--header 'Authorization: Token YOURAUTHTOKEN'
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```sh
# Use the hostname of your InfluxDB Cloud UI
curl --request POST \
--url "https://us-west-2-1.aws.cloud2.influxdata.com/api/v2/write?org=my-org&bucket=example-bucket" \
--header 'Authorization: Token YOURAUTHTOKEN'
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
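The same authenticated write request can be assembled with only the Python standard library. This is a sketch, not an official client (see the supported client libraries below): the org, bucket, and token values are placeholders, and the request is only constructed here, not sent.

```python
from urllib import parse, request

token = "YOURAUTHTOKEN"  # placeholder; use your own authentication token
params = parse.urlencode({"org": "my-org", "bucket": "example-bucket"})
url = "http://localhost:9999/api/v2/write?" + params

# Line protocol payload: measurement,tag=value field=value
body = b"mem,host=host1 used_percent=23.43"

req = request.Request(
    url,
    data=body,
    method="POST",
    headers={"Authorization": "Token " + token},
)

print(req.get_header("Authorization"))  # Token YOURAUTHTOKEN
# request.urlopen(req)  # sends the write to a running InfluxDB instance
```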
## View Influx v2 API Documentation
Full InfluxDB v2 API documentation is built into the `influxd` service.
To view the API documentation, [start InfluxDB](/v2.0/get-started/#start-influxdb)
and visit the `/docs` endpoint in a browser.
## View InfluxDB v2 API Documentation
<a class="btn" href="/v2.0/api/">InfluxDB v2.0 API documentation</a>
<a class="btn" href="http://localhost:9999/docs" target="_blank">localhost:9999/docs</a>
### View InfluxDB API documentation locally
InfluxDB API documentation is built into the `influxd` service and represents
the API specific to the current version of InfluxDB.
To view the API documentation locally, [start InfluxDB](/v2.0/get-started/#start-influxdb)
and visit the `/docs` endpoint in a browser ([localhost:9999/docs](http://localhost:9999/docs)).
## InfluxDB client libraries
InfluxDB client libraries are language-specific packages that integrate with the InfluxDB v2 API.
For information about supported client libraries, see [InfluxDB client libraries](/v2.0/reference/client-libraries/).
@@ -8,7 +8,7 @@ v2.0/tags: [cli]
menu:
v2_0_ref:
name: Command line tools
weight: 3
weight: 4
---
InfluxDB provides command line tools designed to aid in managing and working
@@ -19,6 +19,12 @@ from which you can run Flux commands.
influx repl [flags]
```
{{% note %}}
Use **ctrl + d** to exit the REPL.
{{% /note %}}
To use the Flux REPL, you must first authenticate with a [token](/v2.0/security/tokens/view-tokens/).
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------:|
@@ -12,15 +12,24 @@ The `influxd inspect` commands and subcommands inspecting on-disk InfluxDB time
## Usage
```sh
influxd inspect [command]
influxd inspect [subcommand]
```
## Subcommands
| Subcommand | Description |
|:---------- |:----------- |
| [report-tsm](/v2.0/reference/cli/influxd/inspect/report-tsm/) | Run TSM report |
| Subcommand | Description |
|:---------- |:----------- |
| [build-tsi](/v2.0/reference/cli/influxd/inspect/build-tsi/) | Rebuild the TSI index and series file. |
| [dump-tsi](/v2.0/reference/cli/influxd/inspect/dump-tsi/) | Output low level TSI information |
| [dumpwal](/v2.0/reference/cli/influxd/inspect/dumpwal/) | Output TSM data from WAL files |
| [export-blocks](/v2.0/reference/cli/influxd/inspect/export-blocks/) | Export block data |
| [export-index](/v2.0/reference/cli/influxd/inspect/export-index/) | Export TSI index data |
| [report-tsi](/v2.0/reference/cli/influxd/inspect/report-tsi/) | Report the cardinality of TSI files |
| [report-tsm](/v2.0/reference/cli/influxd/inspect/report-tsm/) | Run TSM report |
| [verify-seriesfile](/v2.0/reference/cli/influxd/inspect/verify-seriesfile/) | Verify the integrity of series files |
| [verify-tsm](/v2.0/reference/cli/influxd/inspect/verify-tsm/) | Check the consistency of TSM files |
| [verify-wal](/v2.0/reference/cli/influxd/inspect/verify-wal/) | Check for corrupt WAL files |
## Flags
| Flag | Description |
|:---- |:----------- |
| `-h`, `--help` | help for inspect |
| Flag | Description |
|:---- |:----------- |
| `-h`, `--help` | Help for `inspect` |
@@ -0,0 +1,58 @@
---
title: influxd inspect build-tsi
description: >
The `influxd inspect build-tsi` command rebuilds the TSI index and, if necessary,
the series file.
v2.0/tags: [tsi]
menu:
v2_0_ref:
parent: influxd inspect
weight: 301
---
The `influxd inspect build-tsi` command rebuilds the TSI index and, if necessary,
the series file.
## Usage
```sh
influxd inspect build-tsi [flags]
```
InfluxDB builds the index by reading all Time-Structured Merge tree (TSM) indexes
and Write Ahead Log (WAL) entries in the TSM and WAL data directories.
If the series file directory is missing, it rebuilds the series file.
If the TSI index directory already exists, the command will fail.
### Adjust performance
Use the following options to adjust the performance of the indexing process:
##### --max-log-file-size
`--max-log-file-size` determines how much of an index to store in memory before
compacting it into memory-mappable index files.
If you find the memory requirements of your TSI index are too high, consider
decreasing this setting.
##### --max-cache-size
`--max-cache-size` defines the maximum cache size.
The indexing process replays WAL files into a `tsm1.Cache`.
If the maximum cache size is too low, the indexing process will fail.
Increase `--max-cache-size` to account for the size of your WAL files.
##### --batch-size
`--batch-size` defines the size of the batches written into the index.
Altering the batch size can improve performance but may result in significantly
higher memory usage.
## Flags
| Flag | Description | Input Type |
|:---- |:----------- |:----------:|
| `--batch-size` | The size of the batches to write to the index. Defaults to `10000`. [See above](#batch-size). | integer |
| `--concurrency` | Number of workers to dedicate to shard index building. Defaults to `GOMAXPROCS` (8 by default). | integer |
| `-h`, `--help` | Help for `build-tsi`. | |
| `--max-cache-size` | Maximum cache size. Defaults to `1073741824`. [See above](#max-cache-size). | uinteger |
| `--max-log-file-size` | Maximum log file size. Defaults to `1048576`. [See above](#max-log-file-size). | integer |
| `--sfile-path` | Path to the series file directory. Defaults to `~/.influxdbv2/engine/_series`. | string |
| `--tsi-path` | Path to the TSI index directory. Defaults to `~/.influxdbv2/engine/index`. | string |
| `--tsm-path` | Path to the TSM data directory. Defaults to `~/.influxdbv2/engine/data`. | string |
| `-v`, `--verbose` | Enable verbose output. | |
| `--wal-path` | Path to the WAL data directory. Defaults to `~/.influxdbv2/engine/wal`. | string |
@@ -0,0 +1,33 @@
---
title: influxd inspect dump-tsi
description: >
The `influxd inspect dump-tsi` command outputs low-level information about `tsi1` files.
v2.0/tags: [tsi, inspect]
menu:
v2_0_ref:
parent: influxd inspect
weight: 301
---
The `influxd inspect dump-tsi` command outputs low-level information about
Time Series Index (`tsi1`) files.
## Usage
```sh
influxd inspect dump-tsi [flags]
```
## Flags
| Flag | Description | Input Type |
|:---- |:----------- |:----------:|
| `-h`, `--help` | Help for `dump-tsi`. | |
| `--index-path` | Path to data engine index directory (defaults to `~/.influxdbv2/engine/index`). | string |
| `--measurement-filter` | Regular expression measurement filter. | string |
| `--measurements` | Show raw measurement data. | |
| `--series` | Show raw series data. | |
| `--series-path` | Path to series file (defaults to `~/.influxdbv2/engine/_series`). | string |
| `--tag-key-filter` | Regular expression tag key filter. | string |
| `--tag-keys` | Show raw tag key data. | |
| `--tag-value-filter` | Regular expression tag value filter. | string |
| `--tag-value-series` | Show raw series data for each value. | |
| `--tag-values` | Show raw tag value data. | |
@@ -0,0 +1,68 @@
---
title: influxd inspect dumpwal
description: >
The `influxd inspect dumpwal` command outputs data from WAL files.
v2.0/tags: [wal, inspect]
menu:
v2_0_ref:
parent: influxd inspect
weight: 301
---
The `influxd inspect dumpwal` command outputs data from Write Ahead Log (WAL) files.
Given a list of file path globs (patterns that match `.wal` file paths),
the command parses and prints out entries in each file.
## Usage
```sh
influxd inspect dumpwal [flags] <globbing-patterns>
```
## Output details
The `--find-duplicates` flag determines the `influxd inspect dumpwal` output.
**Without `--find-duplicates`**, the command outputs the following for each file
that matches the specified [globbing patterns](#globbing-patterns):
- The file name
- For each entry in a file:
- The type of the entry (`[write]` or `[delete-bucket-range]`)
- The formatted entry contents
**With `--find-duplicates`**, the command outputs the following for each file
that matches the specified [globbing patterns](#globbing-patterns):
- The file name
- A list of keys with timestamps in the wrong order
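Conceptually, "out of order" means a series key whose WAL entries have non-monotonic timestamps. A rough Python sketch of that check (illustration only; the real check runs inside `influxd` against parsed WAL entries):

```python
def out_of_order_keys(entries):
    """Return series keys whose timestamps are not monotonically increasing.

    `entries` is a list of (series_key, timestamp) pairs in WAL order.
    Conceptual sketch only; influxd implements this check internally.
    """
    last_seen = {}
    bad = set()
    for key, ts in entries:
        if key in last_seen and ts < last_seen[key]:
            bad.add(key)
        last_seen[key] = ts
    return sorted(bad)

entries = [
    ("cpu,host=A", 100), ("cpu,host=A", 200),  # in order
    ("mem,host=A", 300), ("mem,host=A", 250),  # out of order
]
print(out_of_order_keys(entries))  # ['mem,host=A']
```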
## Arguments
### Globbing patterns
Globbing patterns provide partial paths used to match file paths and names.
##### Example globbing patterns
```sh
# Match any file or folder starting with "foo"
foo*
# Match any file or folder starting with "foo" and ending with .txt
foo*.txt
# Match any file or folder ending with "foo"
*foo
# Match foo/bar/baz but not foo/bar/bin/baz
foo/*/baz
# Match foo/baz and foo/bar/baz and foo/bar/bin/baz
foo/**/baz
# Matches cat but not can or c/t
/c?t
```
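These single-level patterns behave like standard path-aware globs: `*` and `?` never cross a `/` separator. Python's `pathlib` applies the same semantics for the non-`**` forms, which makes the rules above easy to verify (illustration only; the matcher built into `influxd` may differ in detail):

```python
from pathlib import PurePath

# '*' matches within one path level only; '?' matches a single character.
print(PurePath("foo/bar/baz").match("foo/*/baz"))      # True
print(PurePath("foo/bar/bin/baz").match("foo/*/baz"))  # False: '*' is one level
print(PurePath("cat").match("c?t"))                    # True
print(PurePath("can").match("c?t"))                    # False
```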
## Flags
| Flag | Description |
|:---- |:----------- |
| `--find-duplicates` | Ignore dumping entries; only report keys in the WAL that are out of order. |
| `-h`, `--help` | Help for `dumpwal`. |
@@ -0,0 +1,24 @@
---
title: influxd inspect export-blocks
description: >
The `influxd inspect export-blocks` command exports all blocks in one or more
TSM1 files to another format for easier inspection and debugging.
v2.0/tags: [inspect]
menu:
v2_0_ref:
parent: influxd inspect
weight: 301
---
The `influxd inspect export-blocks` command exports all blocks in one or more
TSM1 files to another format for easier inspection and debugging.
## Usage
```sh
influxd inspect export-blocks [flags]
```
## Flags
| Flag | Description |
|:---- |:----------- |
| `-h`, `--help` | Help for `export-blocks`. |
@@ -0,0 +1,26 @@
---
title: influxd inspect export-index
description: >
The `influxd inspect export-index` command exports all series in a TSI index to
SQL format for inspection and debugging.
v2.0/tags: [inspect]
menu:
v2_0_ref:
parent: influxd inspect
weight: 301
---
The `influxd inspect export-index` command exports all series in a TSI index to
SQL format for inspection and debugging.
## Usage
```sh
influxd inspect export-index [flags]
```
## Flags
| Flag | Description | Input type |
|:---- |:----------- |:----------:|
| `-h`, `--help` | Help for `export-index`. | |
| `--index-path` | Path to the index directory. Defaults to `~/.influxdbv2/engine/index`. | string |
| `--series-path` | Path to series file. Defaults to `~/.influxdbv2/engine/_series`. | string |
@@ -0,0 +1,44 @@
---
title: influxd inspect report-tsi
description: >
The `influxd inspect report-tsi` command analyzes Time Series Index (TSI) files
in a storage directory and reports the cardinality of data stored in the files.
v2.0/tags: [tsi, cardinality, inspect]
menu:
v2_0_ref:
parent: influxd inspect
weight: 301
---
The `influxd inspect report-tsi` command analyzes Time Series Index (TSI) files
in a storage directory and reports the cardinality of data stored in the files
by organization and bucket.
## Output details
`influxd inspect report-tsi` outputs the following:
- All organizations and buckets in the index.
- The series cardinality within each organization and bucket.
- Time to read the index.
When the `--measurements` flag is included, series cardinality is grouped by:
- organization
- bucket
- measurement
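Series cardinality here is the number of unique series keys per group. The grouping logic can be sketched in Python (conceptual illustration only; `influxd` derives these counts from the TSI files themselves):

```python
from collections import defaultdict

def cardinality(series, by_measurement=False):
    """Count unique series keys per (org, bucket), or per
    (org, bucket, measurement) when by_measurement is True.

    `series` is an iterable of (org, bucket, measurement, tagset) tuples.
    Conceptual sketch of the report only.
    """
    counts = defaultdict(set)
    for org, bucket, measurement, tagset in series:
        group = (org, bucket, measurement) if by_measurement else (org, bucket)
        counts[group].add((measurement, tagset))
    return {group: len(keys) for group, keys in counts.items()}

series = [
    ("org1", "bkt1", "cpu", "host=A"),
    ("org1", "bkt1", "cpu", "host=B"),
    ("org1", "bkt1", "mem", "host=A"),
]
print(cardinality(series))
# {('org1', 'bkt1'): 3}
```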
## Usage
```sh
influxd inspect report-tsi [flags]
```
## Flags
| Flag | Description | Input Type |
|:---- |:----------- |:----------:|
| `--bucket-id` | Process data for specified bucket ID. _Requires `org-id` flag to be set._ | string |
| `-h`, `--help` | View help for `report-tsi`. | |
| `-m`, `--measurements` | Group cardinality by measurements. | |
| `-o`, `--org-id` | Process data for specified organization ID. | string |
| `--path` | Specify path to index. Defaults to `~/.influxdbv2/engine/index`. | string |
| `--series-file` | Specify path to series file. Defaults to `~/.influxdbv2/engine/_series`. | string |
| `-t`, `--top` | Limit results to the top n. | integer |