IOx documentation (#4730)

* fix iox ui details (#4660)

* fixed left nav for iox

* updated nav order

* one more nav fix

* added sql data types doc to iox

* removed, need to create separate branch

* IOx get started (#4676)

* WIP iox get started

* WIP iox get started

* WIP iox get started

* WIP iox get-started

* WIP get-started docs

* iox get started setup

* added custom times and datepicker to iox getting started

* finished sample data date picker

* WIP get started querying

* wrapped up new getting started content

* fixed unclosed shortcode

* fixed js bug, updated get started to address PR feedback

* removed influxdbu banner from iox-get-started

* fixed typos

* Migrate data to IOx (#4704)

* WIP iox get started

* WIP iox get started

* WIP iox get started

* WIP iox get-started

* WIP get-started docs

* iox get started setup

* added custom times and datepicker to iox getting started

* finished sample data date picker

* WIP get started querying

* wrapped up new getting started content

* fixed unclosed shortcode

* fixed js bug, updated get started to address PR feedback

* removed influxdbu banner from iox-get-started

* add tsm to iox migration guide

* WIP 1.x iox migration

* WIP iox migration guides

* iox migration landing page content

* updated migration docs to address PR feedback

* one last PR feedback update

* added sql reference for review

* moved reference to sql folder

* removed file

* Schema recommendations for IOx (#4701)

* WIP iox get started

* WIP iox get started

* WIP iox get started

* WIP iox get-started

* WIP get-started docs

* iox get started setup

* added custom times and datepicker to iox getting started

* finished sample data date picker

* WIP get started querying

* wrapped up new getting started content

* fixed unclosed shortcode

* fixed js bug, updated get started to address PR feedback

* removed influxdbu banner from iox-get-started

* schema design recommendations

* add heading color styles

* fixed typos and formatting

* fixed typos

* fixed line protocol discrepancies

* fixed typo

* IOx landing page and notification (#4717)

* updated cloud iox landing page

* added state of the docs notification, removed additional resources from nav

* updated iox page titles

* updated duplicate oss and product data

* add order by doc (#4710)

* add order by doc

* Update content/influxdb/cloud-iox/order-by.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/order-by.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/order-by.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/order-by.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/order-by.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/order-by.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/order-by.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/order-by.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/order-by.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/order-by.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/order-by.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/order-by.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/order-by.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/order-by.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* added select doc (#4708)

* added select doc

* Update content/influxdb/cloud-iox/select.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/select.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/select.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/select.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/select.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/select.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/select.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/select.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/select.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/select.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/select.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/select.md

Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>
Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>

* having clause (#4713)

* added having clause

* Update content/influxdb/cloud-iox/having.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/having.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/having.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/having.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/having.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/having.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/having.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* added sql-data-types branch and corresponding doc (#4700)

* added sql-data-types branch and corresponding doc

* Update content/influxdb/cloud-iox/sql-data-types.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/sql-data-types.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/sql-data-types.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/sql-data-types.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/sql-data-types.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/sql-data-types.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/sql-data-types.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/sql-data-types.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/sql-data-types.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/sql-data-types.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/sql-data-types.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/sql-data-types.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/sql-data-types.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/sql-data-types.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/sql-data-types.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/sql-data-types.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/sql-data-types.md

Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>

* Update content/influxdb/cloud-iox/sql-data-types.md

Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>

* Update content/influxdb/cloud-iox/sql-data-types.md

Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>

* added interval

* fixed formatting

* more format fixes

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>
Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>

* add limit doc (#4711)

* add limit doc

* Update content/influxdb/cloud-iox/limit.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/limit.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/limit.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/limit.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/limit.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/limit.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/limit.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/limit.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/limit.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/limit.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/limit.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* added group by (#4721)

* added group by

* Update content/influxdb/cloud-iox/group-by.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/group-by.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/group-by.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/group-by.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-iox/group-by.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

---------

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* IOx SQL query guides (#4720)

* WIP basic sql query guide

* WIP query guides

* WIP query docs, updated query format

* fleshed out sql aggregate query doc

* updated aggregate query guide, added explore schema guide

* fixed getting started link

* IOx Grafana and Superset documentation (#4723)

* iox grafana and superset documentation

* updates to the superset and grafana docs

* chore(grafana): Rework the documentation for a release instead of from source. (#4724)

* chore(grafana): Rework the documentation for a release instead of from source.

* chore: Typo.

* chore: v0.1.0 will be the first release.

* updates to address PR feedback

* a few minor updates to the grafana doc

* another minor update to grafana

* fixed grafana archive name

---------

Co-authored-by: Brett Buddin <brett@buddin.org>

* rearranged docs

* fix order by description

* updated more sql reference doc descriptions

* Add SQL selector functions (#4725)

* WIP selector functions

* WIP selector fns

* wrapped up sql selector functions

* relocated function docs

* add iox regions doc

* add messaging to guide users to the correct docs (#4728)

* minor changes

* added Flux reference

* updated algolia tagging

* add delete information to iox docs (#4727)

* fixed typos

* Add write content to the IOx docs (#4729)

* ported telegraf write docs to iox

* write content and updated reference

* updated node deps

* added link to selectors reference

---------

Co-authored-by: lwandzura <51929958+lwandzura@users.noreply.github.com>
Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>
Co-authored-by: Brett Buddin <brett@buddin.org>
pull/4735/head
Scott Anderson 2023-01-31 11:07:26 -07:00 committed by GitHub
parent d5208abfa3
commit 3071b80a11
108 changed files with 7209 additions and 315 deletions

View File

@ -137,7 +137,7 @@ $('.article--content table').each(function() {
table.find('td').each(function() {
let cellContent = $(this)[0].innerText
if (/\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}.*Z/.test(cellContent)) {
if (/^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}.*Z/.test(cellContent)) {
$(this).addClass('nowrap')
}
})

View File

@ -0,0 +1,152 @@
// Placeholder start date used in InfluxDB getting started docs
const defaultStartDate = "2022-01-01"
// Return yyyy-mm-dd formatted string from a Date object
function formatDate(dateObj) {
return dateObj.toISOString().replace(/T.*$/, '')
}
// Return yesterday's date
function yesterday() {
const yesterday = new Date()
yesterday.setDate(yesterday.getDate() - 1)
return formatDate(yesterday)
}
// Split a date string into year, month, and day
function datePart(date) {
datePartRegex = /(\d{4})-(\d{2})-(\d{2})/
year = date.replace(datePartRegex, "$1")
month = date.replace(datePartRegex, "$2")
day = date.replace(datePartRegex, "$3")
return {year: year, month: month, day: day}
}
////////////////////////// SESSION / COOKIE MANAGEMENT //////////////////////////
cookieID = 'influxdb_get_started_date'
function setStartDate(setDate) {
Cookies.set(cookieID, setDate)
}
function getStartDate() {
return Cookies.get(cookieID)
}
function removeStartDate() {
Cookies.remove(cookieID)
}
////////////////////////////////////////////////////////////////////////////////
// If the user has not set the startDate cookie, default the startDate to yesterday
var startDate = (getStartDate() != undefined) ? getStartDate() : yesterday()
// Convert a time value to a Unix timestamp (seconds)
function timeToUnixSeconds(time) {
unixSeconds = new Date(time).getTime() / 1000
return unixSeconds
}
// Default time values in getting started sample data
let times = [
{"rfc3339": `${defaultStartDate}T08:00:00Z`, "unix": timeToUnixSeconds(`${defaultStartDate}T08:00:00Z`)}, // 1641024000
{"rfc3339": `${defaultStartDate}T09:00:00Z`, "unix": timeToUnixSeconds(`${defaultStartDate}T09:00:00Z`)}, // 1641027600
{"rfc3339": `${defaultStartDate}T10:00:00Z`, "unix": timeToUnixSeconds(`${defaultStartDate}T10:00:00Z`)}, // 1641031200
{"rfc3339": `${defaultStartDate}T11:00:00Z`, "unix": timeToUnixSeconds(`${defaultStartDate}T11:00:00Z`)}, // 1641034800
{"rfc3339": `${defaultStartDate}T12:00:00Z`, "unix": timeToUnixSeconds(`${defaultStartDate}T12:00:00Z`)}, // 1641038400
{"rfc3339": `${defaultStartDate}T13:00:00Z`, "unix": timeToUnixSeconds(`${defaultStartDate}T13:00:00Z`)}, // 1641042000
{"rfc3339": `${defaultStartDate}T14:00:00Z`, "unix": timeToUnixSeconds(`${defaultStartDate}T14:00:00Z`)}, // 1641045600
{"rfc3339": `${defaultStartDate}T15:00:00Z`, "unix": timeToUnixSeconds(`${defaultStartDate}T15:00:00Z`)}, // 1641049200
{"rfc3339": `${defaultStartDate}T16:00:00Z`, "unix": timeToUnixSeconds(`${defaultStartDate}T16:00:00Z`)}, // 1641052800
{"rfc3339": `${defaultStartDate}T17:00:00Z`, "unix": timeToUnixSeconds(`${defaultStartDate}T17:00:00Z`)}, // 1641056400
{"rfc3339": `${defaultStartDate}T18:00:00Z`, "unix": timeToUnixSeconds(`${defaultStartDate}T18:00:00Z`)}, // 1641060000
{"rfc3339": `${defaultStartDate}T19:00:00Z`, "unix": timeToUnixSeconds(`${defaultStartDate}T19:00:00Z`)}, // 1641063600
{"rfc3339": `${defaultStartDate}T20:00:00Z`, "unix": timeToUnixSeconds(`${defaultStartDate}T20:00:00Z`)} // 1641067200
]
function updateTimestamps(newStartDate) {
// Update the times array with replacement times
times = times.map(x => {
var newStartTimestamp = x.rfc3339.replace(/^.*T/, newStartDate + "T")
return {
"rfc3339": x.rfc3339,
"unix": x.unix,
"rfc3339_new": newStartTimestamp,
"unix_new": timeToUnixSeconds(newStartTimestamp)
}
})
$('.get-started-timestamps').each(function() {
var wrapper = $(this)[0]
times.forEach(function(x) {
oldDatePart = datePart(x.rfc3339.replace(/T.*$/, ""))
newDatePart = datePart(x.rfc3339_new.replace(/T.*$/, ""))
rfc3339Regex = new RegExp(`${oldDatePart.year}-${oldDatePart.month}-${oldDatePart.day}`, 'g')
rfc3339Repl = `${newDatePart.year}-${newDatePart.month}-${newDatePart.day}`
wrapper.innerHTML =
wrapper.innerHTML
.replaceAll(x.unix, x.unix_new)
.replaceAll(rfc3339Regex, rfc3339Repl)
})
console.log(times)
})
// Create a new seed times array with new start time for next change
times = times.map(x => {
var newStartTimestamp = x.rfc3339.replace(/^.*T/, newStartDate + "T")
return {
"rfc3339": newStartTimestamp,
"unix": timeToUnixSeconds(newStartTimestamp)
}
})
}
/////////////////////// MODAL INTERACTIONS / DATE PICKER ///////////////////////
// Date picker form element
var datePickerEl = $('#custom-date-selector')
// Initialize the date picker with the current startDate
const elem = datePickerEl[0];
const datepicker = new Datepicker(elem, {
defaultViewDate: startDate,
format: 'yyyy-mm-dd',
nextArrow: '>',
prevArrow: '<',
});
//////////////////////////////////// ACTIONS ///////////////////////////////////
// Initial update to yesterday's date ON PAGE LOAD
// Conditionally set the start date cookie if startDate is equal to the default value
updateTimestamps(startDate);
if (startDate === yesterday()) {
setStartDate(startDate);
}
// Submit new date
$('#submit-custom-date').click(function() {
let newDate = datepicker.getDate()
if (newDate != undefined) {
newDate = formatDate(newDate)
updateTimestamps(newDate);
setStartDate(newDate)
toggleModal('#influxdb-gs-date-select')
} else {
toggleModal('#influxdb-gs-date-select')
}
})

View File

@ -12,7 +12,7 @@ var elementSelector = ".article--content pre:not(.preserve)"
// Return the page context (cloud, oss/enterprise, other)
function context() {
if (/\/influxdb\/cloud\//.test(window.location.pathname)) {
if (/\/influxdb\/cloud/.test(window.location.pathname)) {
return "cloud"
} else if (/\/(enterprise_|influxdb).*\/v[1-2]\.[0-9]{1,2}\//.test(window.location.pathname)) {
return "oss/enterprise"

View File

@ -23,6 +23,8 @@
& + .highlight pre { margin-top: .5rem }
& + pre { margin-top: .5rem }
& + .code-tabs-wrapper { margin-top: 0; }
&.green { color: $gr-rainforest; }
&.orange { color: $r-dreamsicle; }
}
h1 {
font-weight: normal;

View File

@ -0,0 +1,65 @@
.custom-time-trigger {
display: block;
position: fixed;
bottom: 1.5rem;
right: 1.5rem;
border-radius: $radius;
box-shadow: 2px 2px 6px $sidebar-search-shadow;
z-index: 1;
color: $g20-white;
background: $g5-pepper;
&:before {
content: "";
position: absolute;
bottom: 0;
right: 0;
width: 100%;
height: 100%;
border-radius: $radius;
@include gradient($article-btn-gradient, 320deg);
z-index: -1;
opacity: 0;
transition: opacity .2s;
}
&:hover:before {opacity: 1}
a {
display: block;
padding: 1rem;
line-height: 1rem;
&:before {
content: "Select custom date for sample data";
display: inline-block;
overflow: hidden;
font-size: .9rem;
font-style: italic;
white-space: nowrap;
width: 0;
opacity: 0;
transition: width .2s, opacity .2s;
}
&:hover {
cursor: pointer;
&:before{
width: 240px;
opacity: 1;
}
}
}
}
///////////////////////////////// MEDIA QUERIES ////////////////////////////////
@include media(small) {
.custom-time-trigger {
bottom: .75rem;
right: .75rem;
}
}

View File

@ -336,10 +336,11 @@ body.home {
@include gradient($grad-coolDusk);
&:first-child {
border-bottom: 1px solid rgba($body-bg, .5);
a:after {border-radius: 6px 6px 0 0;}
}
&:not(:last-child) {border-bottom: 1px solid rgba($body-bg, .5);}
&:last-child a:after {border-radius: 0 0 6px 6px;}
a {

View File

@ -112,6 +112,7 @@
@import "modals/url-selector";
@import "modals/page-feedback";
@import "modals/flux-versions";
@import "modals/_influxdb-gs-datepicker"
}

View File

@ -1,6 +1,6 @@
#docs-notifications {
position: fixed;
top: 10px;
top: 65px;
right: 10px;
z-index: 100;
width: calc(100vw - 20px);
@ -19,7 +19,7 @@
.notification-content {
padding: 1.25rem 2.35rem .5rem 1.25rem;
margin-bottom: 10px;
font-size: .95rem;
font-size: 1.05rem;
color: $g20-white;
}
@ -27,7 +27,7 @@
position: absolute;
top: 8px;
right: 8px;
font-size: 1.1rem;
font-size: 1.2rem;
cursor: pointer;
transition: color .2s;
font-weight: bold;
@ -57,13 +57,13 @@
&:first-child { margin-top: 0; }
}
h1,h2 { font-size: 1.5rem; }
h3 { font-size: 1.25rem; }
h4 { font-size: 1.1rem; }
h5 { font-size: 1rem; }
h6 { font-size: .95rem; font-style: italic; }
h1,h2 { font-size: 1.6rem; }
h3 { font-size: 1.35rem; }
h4 { font-size: 1.2rem; }
h5 { font-size: 1.1rem; }
h6 { font-size: 1.05rem; font-style: italic; }
p,li { line-height: 1.4rem; }
p,li { line-height: 1.5rem; }
p { margin: 0 0 .75rem; }

View File

@ -135,7 +135,7 @@
///////////////// Data model table in InfluxDB 2.4+ get started ////////////////
#series-diagram {
.series-diagram {
display: flex;
width: fit-content;
max-width: 100%;
@ -146,12 +146,12 @@
table {margin: 0;}
&:after {
content: "series";
content: "Series";
top: 4rem;
right: -3.4rem;
right: -3.5rem;
}
& + #series-diagram {margin-bottom: 3rem;}
&:last-child {margin-bottom: 3rem;}
}
@ -159,13 +159,13 @@ table tr.point{
border: 2px solid $tooltip-bg;
&:after {
content: "point";
content: "Point";
bottom: -.8rem;
left: 1rem;
}
}
#series-diagram, table tr.point {
.series-diagram, table tr.point {
position: relative;
&:after {
color: $tooltip-text;
@ -179,6 +179,69 @@ table tr.point{
}
}
.sql table {
tr.points {
position: relative;
td:first-child{
&:before, &:after {
display: block;
border-radius: $radius;
position: absolute;
font-size: .9rem;
font-weight: $medium;
padding: .2rem .5rem;
line-height: .9rem;
z-index: 1;
top: -.25rem;
opacity: 0;
transition: opacity .2s, top .2s;
}
&:before {
content: "Point 1";
color: $g20-white;
background: $br-new-magenta;
}
&:after {
content: "Point 2";
color: $tooltip-text;
background: $tooltip-bg;
left: 5rem;
}
}
&:hover { td:first-child {
&:before, &:after {
opacity: 1;
top: -.65rem;
}
}}
}
span.point{
position: relative;
display: inline-block;
&.one:before{
content: "";
display: block;
position: absolute;
width: 100%;
height: 2px;
border-top: 2px solid $br-new-magenta;
bottom: -2px;
}
&.two:after{
content: "";
display: block;
position: absolute;
width: 100%;
height: 2px;
border-top: 2px solid $tooltip-bg;
bottom: -8px;
}
}
}
////////////// Line protocol anatomy in InfluxDB 2.4+ get started //////////////
#line-protocol-anatomy {
@ -286,7 +349,7 @@ table tr.point{
}
}
#series-diagram {
.series-diagram {
width: auto;
}
};

View File

@ -0,0 +1,47 @@
// This stylesheet covers the date picker modal window
// Datepicker-specific styles are in /assets/styles/tools/_datepicker.scss
#influxdb-gs-date-select {
width: auto;
max-width: 260px;
p {margin-bottom: 1.5rem;}
a.btn {
position: relative;
display: inline-block;
margin: 1.25rem 0 .5rem;
padding: 0.85rem 1.5rem;
color: $article-btn-text !important;
border-radius: $radius;
text-transform: uppercase;
letter-spacing: .06rem;
font-size: 1rem;
float: right;
z-index: 1;
@include gradient($article-btn-gradient);
&:after {
content: "";
position: absolute;
display: block;
top: 0;
right: 0;
width: 100%;
height: 100%;
border-radius: $radius;
@include gradient($article-btn-gradient-hover);
opacity: 0;
transition: opacity .2s;
z-index: -1;
}
&:hover {
cursor: pointer;
&:after {
opacity: 1;
}
}
}
}

View File

@ -5,6 +5,7 @@
"tools/media-queries.scss",
"tools/mixins.scss",
"tools/tooltips",
"tools/datepicker.scss",
"tools/normalize.scss";
// Import default light theme
@ -28,6 +29,7 @@
"layouts/feature-callouts",
"layouts/v1-overrides",
"layouts/notifications",
"layouts/custom-time-trigger",
"layouts/fullscreen-code";
// Import Product-specifc color schemes

View File

@ -183,9 +183,9 @@ $home-icon-c1: $br-dark-blue !default;
$home-icon-c2: $g20-white !default;
// Tooltip colors
$tooltip-color: $p-amethyst !default;
$tooltip-color: $br-new-purple !default;
$tooltip-color-alt: $p-twilight !default;
$tooltip-bg: $p-amethyst !default;
$tooltip-bg: $br-new-purple !default;
$tooltip-text: $g20-white !default;
// Support and feedback buttons

View File

@ -0,0 +1,260 @@
.datepicker {
display: none;
&.active {
display: block;
}
}
.datepicker-picker {
background-color: $article-bg;
border-radius: 2px;
display: inline-block;
span {
-webkit-touch-callout: none;
border: 0;
border-radius: 2px;
cursor: default;
display: block;
flex: 1;
text-align: center;
-webkit-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
user-select: none;
}
}
.datepicker-main {
padding: 2px;
}
.datepicker-controls {
display: flex;
.button {
align-items: center;
background-color: $article-bg;
border: 1px solid #dbdbdb;
border-radius: 2px;
box-shadow: none;
color: $article-bold;
cursor: pointer;
display: inline-flex;
font-size: 1rem;
height: 2.25em;
justify-content: center;
line-height: 1.5;
margin: 0;
padding: calc(.375em - 1px) .75em;
position: relative;
text-align: center;
vertical-align: top;
white-space: nowrap;
transition: all .2s;
&:active {
outline: none;
border-color: #4a4a4a;
color: $article-text;
}
&:focus {
outline: none;
border-color: #3273dc;
color: $article-text;
&:not(:active) {
box-shadow: 0 0 0 .125em rgba(50,115,220,.25);
}
}
&:hover {
border-color: #b5b5b5;
color: $article-bold;
}
}
.button[disabled] {
cursor: not-allowed;
}
.view-switch {
flex: auto;
}
.next-btn {
padding-left: .375rem;
padding-right: .375rem;
width: 2.25rem;
}
.prev-btn {
padding-left: .375rem;
padding-right: .375rem;
width: 2.25rem;
}
.next-btn.disabled {
visibility: hidden;
}
.prev-btn.disabled {
visibility: hidden;
}
}
.datepicker-grid {
display: flex;
flex-wrap: wrap;
width: 15.75rem;
}
.datepicker-view {
display: flex;
.days-of-week {
display: flex;
}
.days {
.datepicker-cell {
flex-basis: 14.2857142857%;
}
}
.dow {
flex-basis: 14.2857142857%;
font-size: .875rem;
font-weight: 700;
height: 1.5rem;
line-height: 1.5rem;
}
.week {
height: 2.25rem;
line-height: 2.25rem;
color: #b5b5b5;
font-size: .75rem;
width: 2.25rem;
}
}
.datepicker-view.datepicker-grid {
.datepicker-cell {
flex-basis: 25%;
height: 4.5rem;
line-height: 4.5rem;
}
}
.datepicker-cell {
height: 2.25rem;
line-height: 2.25rem;
&:not(.disabled) {
&:hover {
background-color: rgba($article-text, .15);
cursor: pointer;
}
}
}
.datepicker-title {
background-color: #f5f5f5;
box-shadow: inset 0 -1px 1px hsla(0,0%,4%,.1);
font-weight: 700;
padding: .375rem .75rem;
text-align: center;
}
.datepicker-header {
.datepicker-controls {
padding: 2px 2px 0;
.button {
border-color: transparent;
font-weight: 700;
&:hover {
background-color: rgba($article-text, .15);
}
&:focus {
&:not(:active) {
box-shadow: 0 0 0 .125em hsla(0,0%,100%,.25);
}
}
&:active {
background-color: $article-link;
}
}
.button[disabled] {
box-shadow: none;
}
}
}
.datepicker-cell.focused {
&:not(.selected) {
background-color: $article-link;
color: #fff;
}
}
.datepicker-cell.selected {
background-color: $article-link;
color: #fff;
font-weight: 600;
&:hover {
background-color: $article-link;
color: #fff;
font-weight: 600;
}
}
.datepicker-cell.disabled {
color: #dbdbdb;
}
.datepicker-cell.next {
&:not(.disabled) {
color: #7a7a7a;
}
}
.datepicker-cell.prev {
&:not(.disabled) {
color: #7a7a7a;
}
}
.datepicker-cell.next.selected {
color: #e6e6e6;
}
.datepicker-cell.prev.selected {
color: #e6e6e6;
}
.datepicker-cell.highlighted {
&:not(.selected) {
&:not(.range) {
&:not(.today) {
background-color: rgba($article-text, .25);
border-radius: 0;
&:not(.disabled) {
&:hover {
background-color: #eee;
}
}
}
&:not(.today).focused {
background-color: $article-link;
color: #fff;
}
}
}
}
.datepicker-cell.today {
&:not(.selected) {
background-color: #00d1b2;
&:not(.disabled) {
color: #fff;
}
}
}
.datepicker-cell.today.focused {
&:not(.selected) {
background-color: #00c4a7;
}
}
.datepicker-input.in-edit {
border-color: #2366d1;
&:active {
box-shadow: 0 0 .25em .25em rgba(35,102,209,.2);
}
&:focus {
box-shadow: 0 0 .25em .25em rgba(35,102,209,.2);
}
}
@media (max-width:22.5rem) {
.datepicker-view {
.week {
width: 1.96875rem;
}
}
.calendar-weeks+.days {
.datepicker-grid {
width: 13.78125rem;
}
}
}

View File

@ -8,11 +8,12 @@ $bold: 700;
.tooltip {
position: relative;
display: block;
display: inline-block;
font-weight: $medium;
color: $tooltip-color;
&:hover {
cursor: help;
.tooltip-container { visibility: visible; }
.tooltip-text {
opacity: 1;
@ -56,6 +57,64 @@ $bold: 700;
border-left: 8px solid transparent;
}
}
&.shift-left {
.tooltip-text {
left: 75%;
transform: translate(-75%, -1.75rem);
&:after {
left: 75%;
transform: translateX(-75%);
}
}
&:hover .tooltip-text {
transform: translate(-75%, -2.5rem);
}
}
&.shift-right {
.tooltip-text {
left: 25%;
transform: translate(-25%, -1.75rem);
&:after {
left: 25%;
transform: translateX(-25%);
}
}
&:hover .tooltip-text {
transform: translate(-25%, -2.5rem);
}
}
&.right {
&:hover {
.tooltip-container { visibility: visible; }
.tooltip-text {
opacity: 1;
transform: translate(70%);
}
}
.tooltip-container {
left: 0%;
transform: translateX(60%);
}
.tooltip-text {
left: 60%;
transform: translate(60%);
transition: all 0.2s ease;
&:after {
top: 50%;
left: -14px;
transform: translateY(-50%);
border-top: 16px solid transparent;
border-right: 16px solid $tooltip-bg;
border-bottom: 16px solid transparent;
border-left: 8px solid transparent;
}
}
}
}
th .tooltip {

View File

@ -29,6 +29,7 @@ smartDashes = false
"influxdb/v2.1/tag" = "influxdb/v2.1/tags"
"influxdb/v2.0/tag" = "influxdb/v2.0/tags"
"influxdb/cloud/tag" = "influxdb/cloud/tags"
"influxdb/cloud-iox/tag" = "influxdb/cloud-iox/tags"
"flux/v0.x/tag" = "flux/v0.x/tags"
[markup]

View File

@ -0,0 +1,50 @@
---
title: InfluxDB Cloud backed by InfluxDB IOx documentation
description: >
InfluxDB Cloud is a hosted and managed version of InfluxDB backed by InfluxDB IOx,
the time series platform designed to handle high write and query loads.
Learn how to use and leverage InfluxDB Cloud in use cases such as monitoring
metrics, IoT data, and events.
menu:
influxdb_cloud_iox:
name: InfluxDB Cloud (IOx)
weight: 1
---
{{% note %}}
This InfluxDB Cloud documentation applies to all organizations created through
**cloud2.influxdata.com** on or after **January 31, 2023** that are powered by
the InfluxDB IOx storage engine. If your organization was created before this
date or through a Cloud provider marketplace, see the
[TSM-based InfluxDB Cloud documentation](/influxdb/cloud/).
View the right column of your [InfluxDB Cloud organization homepage](https://cloud2.influxdata.com/)
to see which storage engine your InfluxDB Cloud organization is powered by.
{{% /note %}}
InfluxDB Cloud is a hosted and managed version of InfluxDB backed by InfluxDB IOx,
the time series platform designed to handle high write and query loads.
Learn how to use and leverage InfluxDB Cloud in use cases such as monitoring
metrics, IoT data, and events.
<a class="btn" href="/influxdb/cloud-iox/get-started/">Get started with InfluxDB Cloud (IOx)</a>
## The InfluxDB IOx storage engine
**InfluxDB IOx** is InfluxDB's next-generation storage engine that removes the series
cardinality limitations present in the Time Structured Merge Tree (TSM) storage engine.
InfluxDB IOx allows nearly infinite series cardinality without any impact on
overall database performance. It also brings with it native
**SQL support**<!-- and improved InfluxQL performance -->.
View the following video for more information about InfluxDB IOx:
{{< youtube "CzWVcDxmWbM" >}}
## How do you use InfluxDB IOx?
All InfluxDB Cloud accounts and organizations created through
[cloud2.influxdata.com](https://cloud2.influxdata.com) on or after **January 31, 2023**
are backed by the InfluxDB IOx storage engine.
You can also see which storage engine your organization is using on the
homepage of your [InfluxDB Cloud user interface (UI)](https://cloud2.influxdata.com).

View File

@ -0,0 +1,128 @@
---
title: Get started with InfluxDB Cloud
list_title: Get started
description: >
Start collecting, querying, processing, and visualizing data in InfluxDB Cloud backed by IOx.
menu:
influxdb_cloud_iox:
name: Get started
weight: 3
influxdb/cloud-iox/tags: [get-started]
---
InfluxDB {{< current-version >}} is the platform purpose-built to collect, store,
process and visualize time series data.
The InfluxDB IOx storage engine provides a number of benefits including nearly
unlimited series cardinality, improved query performance, and interoperability
with widely used data processing tools and platforms.
**Time series data** is a sequence of data points indexed in time order.
Data points typically consist of successive measurements made from the same
source and are used to track changes over time.
Examples of time series data include:
- Industrial sensor data
- Server performance metrics
- Heartbeats per minute
- Electrical activity in the brain
- Rainfall measurements
- Stock prices
This multi-part tutorial walks you through writing time series data to InfluxDB {{< current-version >}},
querying, and then visualizing that data.
## Key concepts before you get started
Before you get started using InfluxDB, it's important to understand how time series
data is organized and stored in InfluxDB and some key definitions that are used
throughout this documentation.
- [Data organization](#data-organization)
- [Schema on write](#schema-on-write)
- [Important definitions](#important-definitions)
### Data organization
The InfluxDB data model organizes time series data into buckets and measurements.
A bucket can contain multiple measurements. Measurements contain multiple
tags and fields.
- **Bucket**: Named location where time series data is stored.
A bucket can contain multiple _measurements_.
- **Measurement**: Logical grouping for time series data.
All _points_ in a given measurement should have the same _tags_.
A measurement contains multiple _tags_ and _fields_.
- **Tags**: Key-value pairs that provide metadata for each point--for example,
something to identify the source or context of the data like host,
location, station, etc.
- **Fields**: Key-value pairs with values that change over time--for example,
temperature, pressure, stock price, etc.
- **Timestamp**: Timestamp associated with the data.
When stored on disk and queried, all data is ordered by time.
<!-- _For detailed information and examples of the InfluxDB data model, see
[Data elements](/influxdb/v2.5/reference/key-concepts/data-elements/)._ -->
### Schema on write
When using InfluxDB, you define your schema as you write your data.
You don't need to create measurements (equivalent to relational tables) or
explicitly define the schema of the measurement.
Measurement schemas are defined by the schema of data as it is written to the measurement.
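For example, writing a single point with the `influx` CLI both creates the measurement and defines its columns. The following is a minimal sketch that assumes the **get-started** bucket and the `home` sample data used later in this tutorial, plus an active CLI connection configuration:
```sh
# Writing this point creates the "home" measurement with a "room" tag column,
# "temp" and "hum" field columns, and a time column--no schema is defined beforehand.
influx write \
  --bucket get-started \
  --precision s \
  "home,room=Kitchen temp=22.7,hum=36.1 1641031200"
```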
### Important definitions
The following definitions are important to understand when using InfluxDB:
- **Point**: Single data record identified by its _measurement, tag keys, tag values, field key, and timestamp_.
- **Series**: A group of points with the same _measurement, tag keys and values, and field key_.
- **Primary key**: Columns used to uniquely identify each row in a table.
Rows are uniquely identified by their _timestamp and tag set_.
##### Example InfluxDB query results
{{< influxdb/points-series-sql >}}
## Tools to use
Throughout this tutorial, there are multiple tools you can use to interact with
InfluxDB {{< current-version >}}. Examples are provided for each of the following:
- [InfluxDB user interface](#influxdb-user-interface)
- [`influx` CLI](#influx-cli)
- [InfluxDB HTTP API](#influxdb-http-api)
### InfluxDB user interface
The InfluxDB user interface (UI) provides a web-based visual interface for interacting with and managing InfluxDB.
{{% oss-only %}}The InfluxDB UI is packaged with InfluxDB and runs as part of the InfluxDB service. To access the UI, with InfluxDB running, visit [localhost:8086](http://localhost:8086) in your browser.{{% /oss-only %}}
{{% cloud-only %}}To access the InfluxDB Cloud UI, [log into your InfluxDB Cloud account](https://cloud2.influxdata.com).{{% /cloud-only %}}
### `influx` CLI
The `influx` CLI lets you interact with and manage InfluxDB {{< current-version >}} from a command line.
{{% oss-only %}}The CLI is packaged separately from InfluxDB and must be downloaded and installed separately.{{% /oss-only %}}
For detailed CLI installation instructions, see
[Use the influx CLI](/influxdb/v2.5/tools/influx-cli/).
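Once the CLI is installed and configured, a quick way to verify that it can reach your InfluxDB organization is to list your buckets--a minimal sketch, assuming an active connection configuration:
```sh
# Lists buckets in the active organization; a successful response confirms
# your host, organization, and token are configured correctly.
influx bucket list
```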
### InfluxDB HTTP API
The [InfluxDB API](/influxdb/v2.5/reference/api/) provides a simple way to
interact with the InfluxDB {{< current-version >}} using HTTP(S) clients.
Examples in this tutorial use cURL, but any HTTP(S) client will work.
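For example, the following request checks that the InfluxDB endpoint is reachable and ready--a sketch that assumes `https://cloud2.influxdata.com` as the host; substitute your own InfluxDB Cloud region URL:
```sh
# Returns an HTTP 204 status code when the instance is ready to accept requests.
curl --request GET "https://cloud2.influxdata.com/ping" --include
```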
{{% note %}}
#### InfluxDB client libraries
[InfluxDB client libraries](/influxdb/v2.5/api-guide/client-libraries/) are
language-specific clients that interact with the InfluxDB HTTP API.
Client library examples are not provided in this tutorial, but client libraries
can be used to perform all the actions outlined here.
{{% /note %}}
## Authorization
**InfluxDB {{< current-version >}} requires authentication** using [API tokens](/influxdb/v2.5/security/tokens/).
Each API token is associated with a user and a specific set of permissions for InfluxDB resources.
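A minimal authenticated request includes the API token in an `Authorization` header. For example, the following sketch lists buckets and assumes your token is stored in the `INFLUX_TOKEN` environment variable:
```sh
# A 200 response listing buckets confirms the token is valid;
# a 401 response means the token is missing or invalid.
curl --request GET "https://cloud2.influxdata.com/api/v2/buckets" \
  --header "Authorization: Token $INFLUX_TOKEN"
```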
{{< page-nav next="/influxdb/cloud-iox/get-started/setup/" >}}

View File

@ -0,0 +1,18 @@
---
title: Get started processing data
seotitle: Process data | Get started with InfluxDB
list_title: Process data
description: >
Learn how to process time series data to do things like downsample and alert
on data.
menu:
influxdb_cloud_iox:
name: Process data
parent: Get started
identifier: get-started-process-data
weight: 103
metadata: [4 / 5]
draft: true
---
<!-- PLACEHOLDER -->

View File

@ -0,0 +1,394 @@
---
title: Get started querying data
seotitle: Query data | Get started with InfluxDB
list_title: Query data
description: >
Get started querying data in InfluxDB by learning about Flux and InfluxQL and
using tools like the InfluxDB UI, `influx` CLI, and InfluxDB API.
menu:
influxdb_cloud_iox:
name: Query data
parent: Get started
identifier: get-started-query-data
weight: 102
metadata: [3 / 3]
related:
- /influxdb/cloud-iox/query-data/
---
InfluxDB Cloud backed by InfluxDB IOx supports multiple query languages:
- **SQL**: Traditional SQL powered by the [Apache Arrow DataFusion](https://arrow.apache.org/datafusion/)
query engine. The supported SQL syntax is similar to PostgreSQL.
- **Flux**: A functional scripting language designed to query and process data
from InfluxDB and other data sources.
<!-- - **InfluxQL**: A SQL-like query language designed to query time series data from
InfluxDB. -->
This tutorial walks you through the fundamentals of querying data in InfluxDB and
**focuses on using SQL** to query your time series data.
<!-- For information about using InfluxQL and Flux, see
[Query data in InfluxDB](/influxdb/cloud-iox/query-data/). -->
{{% note %}}
The examples in this section of the tutorial query the data written in the
[Get started writing data](/influxdb/cloud-iox/get-started/write/#write-line-protocol-to-influxdb) section.
{{% /note %}}
## Tools to execute queries
InfluxDB supports many different tools for querying data, including:
{{< req type="key" text="Covered in this tutorial" >}}
- InfluxDB user interface (UI){{< req "\* " >}}
- [InfluxDB HTTP API](/influxdb/cloud-iox/reference/api/){{< req "\* " >}}
- [`influx` CLI](/influxdb/cloud-iox/tools/influx-cli/){{< req "\* " >}}
- [Superset](https://superset.apache.org/)
- [Grafana](/influxdb/cloud-iox/tools/grafana/)
- [Chronograf](/{{< latest "Chronograf" >}}/)
- [InfluxDB client libraries](/influxdb/cloud-iox/api-guide/client-libraries/)
## SQL query basics
InfluxDB Cloud's SQL implementation is powered by the [Apache Arrow DataFusion](https://arrow.apache.org/datafusion/)
query engine which provides a SQL syntax similar to PostgreSQL.
{{% note %}}
This is a brief introduction to writing SQL queries for InfluxDB.
For more in-depth details, see the [SQL reference documentation](/influxdb/cloud-iox/reference/sql/).
{{% /note %}}
InfluxDB SQL queries most commonly include the following clauses:
{{< req type="key" >}}
- {{< req "\*">}} `SELECT`: Identify specific fields and tags to query from a
measurement or use the wild card alias (`*`) to select all fields and tags
from a measurement.
- {{< req "\*">}} `FROM`: Identify the measurement to query.
If coming from a SQL background, an InfluxDB measurement is the equivalent
of a relational table.
- `WHERE`: Only return data that meets defined conditions such as falling within
a time range, containing specific tag values, etc.
- `GROUP BY`: Group data into SQL partitions and apply an aggregate or selector
function to each group.
{{% influxdb/custom-timestamps %}}
```sql
-- Return the average temperature and humidity from each room
SELECT
mean(temp),
mean(hum),
room
FROM
home
WHERE
time >= '2022-01-01T08:00:00Z'
AND time <= '2022-01-01T20:00:00Z'
GROUP BY
room
```
{{% /influxdb/custom-timestamps %}}
##### Example SQL queries
{{< expand-wrapper >}}
{{% expand "Select all data in a measurement" %}}
```sql
SELECT * FROM measurement
```
{{% /expand %}}
{{% expand "Select all data in a measurement within time bounds" %}}
```sql
SELECT
*
FROM
measurement
WHERE
time >= '2022-01-01T08:00:00Z'
AND time <= '2022-01-01T20:00:00Z'
```
{{% /expand %}}
{{% expand "Select a specific field within relative time bounds" %}}
```sql
SELECT field1 FROM measurement WHERE time >= now() - INTERVAL '1 day'
```
{{% /expand %}}
{{% expand "Select specific fields and tags from a measurement" %}}
```sql
SELECT field1, field2, tag1 FROM measurement
```
{{% /expand %}}
{{% expand "Select data based on tag value" %}}
```sql
SELECT * FROM measurement WHERE tag1 = 'value1'
```
{{% /expand %}}
{{% expand "Select data based on tag value within time bounds" %}}
```sql
SELECT
*
FROM
measurement
WHERE
time >= '2022-01-01T08:00:00Z'
AND time <= '2022-01-01T20:00:00Z'
AND tag1 = 'value1'
```
{{% /expand %}}
{{% expand "Downsample data by applying interval-based aggregates" %}}
```sql
SELECT
DATE_BIN(INTERVAL '1 hour', time, '2022-01-01T00:00:00Z'::TIMESTAMP) as time,
mean(field1),
sum(field2),
tag1
FROM
measurement
GROUP BY
time,
tag1
```
{{% /expand %}}
{{< /expand-wrapper >}}
### Execute a SQL query
Use the **InfluxDB UI**, **`influx` CLI**, or **InfluxDB API** to execute SQL queries.
For this example, use the following query to select all the data written to the
**get-started** bucket between
{{% influxdb/custom-timestamps-span %}}
**2022-01-01T08:00:00Z** and **2022-01-01T20:00:00Z**.
{{% /influxdb/custom-timestamps-span %}}
{{% influxdb/custom-timestamps %}}
```sql
SELECT
*
FROM
home
WHERE
time >= '2022-01-01T08:00:00Z'
AND time <= '2022-01-01T20:00:00Z'
```
{{% /influxdb/custom-timestamps %}}
{{< tabs-wrapper >}}
{{% tabs %}}
[InfluxDB UI](#)
[influx CLI](#)
[InfluxDB API](#)
{{% /tabs %}}
{{% tab-content %}}
<!--------------------------- BEGIN FLUX UI CONTENT --------------------------->
1. Go to
{{% oss-only %}}[localhost:8086](http://localhost:8086){{% /oss-only %}}
{{% cloud-only %}}[cloud2.influxdata.com](https://cloud2.influxdata.com){{% /cloud-only %}}
in a browser to log in and access the InfluxDB UI.
2. In the left navigation bar, click **Data Explorer**.
{{< nav-icon "data-explorer" "v4" >}}
3. In the schema browser on the left, select the **get-started** bucket from the
**bucket** drop-down menu.
The displayed measurements and fields are read-only and are meant to show
you the schema of data stored in the selected bucket.
4. Enter the SQL query in the text editor.
5. Click **{{< icon "play" >}} {{% caps %}}Run{{% /caps %}}**.
Results are displayed under the text editor.
<!---------------------------- END FLUX UI CONTENT ---------------------------->
{{% /tab-content %}}
{{% tab-content %}}
<!-------------------------- BEGIN FLUX CLI CONTENT --------------------------->
The [`influx query` command](/influxdb/cloud-iox/reference/cli/influx/query/)
uses the InfluxDB `/api/v2/query` endpoint to query InfluxDB.
This endpoint only accepts Flux queries. To use SQL with the `influx` CLI, wrap
your SQL query in Flux and use [`iox.sql()`](/flux/v0.x/stdlib/experimental/iox/)
to query the InfluxDB IOx storage engine with SQL.
Provide the following:
- **Bucket name** with the `bucket` parameter
- **SQL query** with the `query` parameter
{{< expand-wrapper >}}
{{% expand "View `iox.sql()` Flux example" %}}
```js
import "experimental/iox"
iox.sql(
bucket: "example-bucket",
query: "SELECT * FROM measurement'"
)
```
{{% /expand %}}
{{< /expand-wrapper >}}
1. If you haven't already, [download, install, and configure the `influx` CLI](/influxdb/cloud-iox/tools/influx-cli/).
2. Use the [`influx query` command](/influxdb/cloud-iox/reference/cli/influx/query/)
to query InfluxDB using Flux.
**Provide the following**:
- String-encoded Flux query that uses `iox.sql()` to query the InfluxDB IOx
storage engine with SQL.
- [Connection and authentication credentials](/influxdb/cloud-iox/get-started/setup/?t=influx+CLI#configure-authentication-credentials)
{{% influxdb/custom-timestamps %}}
```sh
influx query "
import \"experimental/iox\"
iox.sql(
bucket: \"get-started\",
query: \"
SELECT
*
FROM
home
WHERE
time >= '2022-01-01T08:00:00Z'
AND time <= '2022-01-01T20:00:00Z'
\",
)"
```
{{% /influxdb/custom-timestamps %}}
<!--------------------------- END FLUX CLI CONTENT ---------------------------->
{{% /tab-content %}}
{{% tab-content %}}
<!-------------------------- BEGIN FLUX API CONTENT --------------------------->
To query data from InfluxDB using SQL and the InfluxDB HTTP API, send a request
to the InfluxDB API [`/api/v2/query` endpoint](/influxdb/cloud-iox/api/#operation/PostQuery)
using the `POST` request method.
{{< api-endpoint endpoint="http://localhost:8086/api/v2/query" method="post" >}}
The `/api/v2/query` endpoint only accepts Flux queries.
To query data with SQL, wrap your SQL query in Flux and use [`iox.sql()`](/flux/v0.x/stdlib/experimental/iox/)
to query the InfluxDB IOx storage engine with SQL.
Provide the following:
- **Bucket name** with the `bucket` parameter
- **SQL query** with the `query` parameter
{{< expand-wrapper >}}
{{% expand "View `iox.sql()` Flux example" %}}
```js
import "experimental/iox"
iox.sql(
bucket: "example-bucket",
query: "SELECT * FROM measurement'"
)
```
{{% /expand %}}
{{< /expand-wrapper >}}
Include the following with your request:
- **Headers**:
- **Authorization**: Token <INFLUX_TOKEN>
- **Content-Type**: application/vnd.flux
- **Accept**: application/csv
- _(Optional)_ **Accept-Encoding**: gzip
- **Request body**: Flux query as plain text. In the Flux query, use `iox.sql()`
and provide your bucket name and your SQL query.
The following example uses cURL and the InfluxDB API to query data with Flux:
{{% influxdb/custom-timestamps %}}
```sh
curl --request POST \
"$INFLUX_HOST/api/v2/query" \
--header "Authorization: Token $INFLUX_TOKEN" \
--header "Content-Type: application/vnd.flux" \
--header "Accept: application/csv" \
--data "
import \"experimental/iox\"
iox.sql(
bucket: \"get-started\",
query: \"
SELECT
*
FROM
home
WHERE
time >= '2022-01-01T08:00:00Z'
AND time <= '2022-01-01T20:00:00Z'
\",
)"
```
{{% /influxdb/custom-timestamps %}}
{{% note %}}
The InfluxDB `/api/v2/query` endpoint returns query results in
[annotated CSV](/influxdb/cloud-iox/reference/syntax/annotated-csv/).
{{% /note %}}
<!--------------------------- END FLUX API CONTENT ---------------------------->
{{% /tab-content %}}
{{< /tabs-wrapper >}}
### Query results
{{< expand-wrapper >}}
{{% expand "View query results" %}}
{{% influxdb/custom-timestamps %}}
| time | room | co | hum | temp |
| :------------------- | :---------- | --: | ---: | ---: |
| 2022-01-01T08:00:00Z | Kitchen | 0 | 35.9 | 21 |
| 2022-01-01T09:00:00Z | Kitchen | 0 | 36.2 | 23 |
| 2022-01-01T10:00:00Z | Kitchen | 0 | 36.1 | 22.7 |
| 2022-01-01T11:00:00Z | Kitchen | 0 | 36 | 22.4 |
| 2022-01-01T12:00:00Z | Kitchen | 0 | 36 | 22.5 |
| 2022-01-01T13:00:00Z | Kitchen | 1 | 36.5 | 22.8 |
| 2022-01-01T14:00:00Z | Kitchen | 1 | 36.3 | 22.8 |
| 2022-01-01T15:00:00Z | Kitchen | 3 | 36.2 | 22.7 |
| 2022-01-01T16:00:00Z | Kitchen | 7 | 36 | 22.4 |
| 2022-01-01T17:00:00Z | Kitchen | 9 | 36 | 22.7 |
| 2022-01-01T18:00:00Z | Kitchen | 18 | 36.9 | 23.3 |
| 2022-01-01T19:00:00Z | Kitchen | 22 | 36.6 | 23.1 |
| 2022-01-01T20:00:00Z | Kitchen | 26 | 36.5 | 22.7 |
| 2022-01-01T08:00:00Z | Living Room | 0 | 35.9 | 21.1 |
| 2022-01-01T09:00:00Z | Living Room | 0 | 35.9 | 21.4 |
| 2022-01-01T10:00:00Z | Living Room | 0 | 36 | 21.8 |
| 2022-01-01T11:00:00Z | Living Room | 0 | 36 | 22.2 |
| 2022-01-01T12:00:00Z | Living Room | 0 | 35.9 | 22.2 |
| 2022-01-01T13:00:00Z | Living Room | 0 | 36 | 22.4 |
| 2022-01-01T14:00:00Z | Living Room | 0 | 36.1 | 22.3 |
| 2022-01-01T15:00:00Z | Living Room | 1 | 36.1 | 22.3 |
| 2022-01-01T16:00:00Z | Living Room | 4 | 36 | 22.4 |
| 2022-01-01T17:00:00Z | Living Room | 5 | 35.9 | 22.6 |
| 2022-01-01T18:00:00Z | Living Room | 9 | 36.2 | 22.8 |
| 2022-01-01T19:00:00Z | Living Room | 14 | 36.3 | 22.5 |
| 2022-01-01T20:00:00Z | Living Room | 17 | 36.4 | 22.2 |
{{% /influxdb/custom-timestamps %}}
{{% /expand %}}
{{< /expand-wrapper >}}
**Congratulations!** You've learned the basics of querying data in InfluxDB with SQL.
For a deep dive into all the ways you can query InfluxDB, see the
[Query data in InfluxDB](/influxdb/cloud-iox/query-data/) section of documentation.
{{< page-nav prev="/influxdb/cloud-iox/get-started/write/" keepTab=true >}}

View File

@ -0,0 +1,285 @@
---
title: Set up InfluxDB
seotitle: Set up InfluxDB | Get started with InfluxDB
list_title: Set up InfluxDB
description: >
Learn how to set up InfluxDB for the "Get started with InfluxDB" tutorial.
menu:
influxdb_cloud_iox:
name: Set up InfluxDB
parent: Get started
identifier: get-started-set-up
weight: 101
metadata: [1 / 3]
related:
- /influxdb/cloud-iox/security/tokens/
- /influxdb/cloud-iox/organizations/buckets/
- /influxdb/cloud-iox/tools/influx-cli/
- /influxdb/cloud-iox/reference/api/
---
As you get started with this tutorial, do the following to make sure everything
you need is in place.
1. {{< req text="(Optional)" color="magenta" >}} **Download, install, and configure the `influx` CLI**.
The `influx` CLI provides a simple way to interact with InfluxDB from a
command line. For detailed installation and setup instructions,
see [Use the influx CLI](/influxdb/cloud-iox/tools/influx-cli/).
2. **Create an All Access API token.**
<span id="create-an-all-access-api-token"></span>
1. Go to
{{% oss-only %}}[localhost:8086](http://localhost:8086){{% /oss-only %}}
{{% cloud-only %}}[cloud2.influxdata.com](https://cloud2.influxdata.com){{% /cloud-only %}}
in a browser to log in and access the InfluxDB UI.
2. Navigate to **Load Data** > **API Tokens** using the left navigation bar.
{{< nav-icon "load data" >}}
3. Click **+ {{% caps %}}Generate API token{{% /caps %}}** and select
**All Access API Token**.
4. Enter a description for the API token and click **{{< icon "check" >}} {{% caps %}}Save{{% /caps %}}**.
5. Copy the generated token and store it for safekeeping.
{{% note %}}
We recommend using a password manager or a secret store to securely store
sensitive tokens.
{{% /note %}}
3. **Configure authentication credentials**. <span id="configure-authentication-credentials"></span>
As you go through this tutorial, interactions with InfluxDB {{< current-version >}}
require your InfluxDB **host**, **organization name or ID**, and your **API token**.
There are different methods for providing these credentials depending on
which client you use to interact with InfluxDB.
{{% note %}}
When configuring your token, if you [created an all access token](#create-an-all-access-api-token),
use that token to interact with InfluxDB. Otherwise, use your operator token.
{{% /note %}}
{{< tabs-wrapper >}}
{{% tabs %}}
[InfluxDB UI](#)
[influx CLI](#)
[InfluxDB API](#)
{{% /tabs %}}
{{% tab-content %}}
<!------------------------------ BEGIN UI CONTENT ----------------------------->
When managing InfluxDB {{< current-version >}} through the InfluxDB UI,
authentication credentials are provided automatically using credentials
associated with the user you log in with.
<!------------------------------- END UI CONTENT ------------------------------>
{{% /tab-content %}}
{{% tab-content %}}
<!---------------------------- BEGIN CLI CONTENT ----------------------------->
There are three ways to provide authentication credentials to the `influx` CLI:
{{< expand-wrapper >}}
{{% expand "CLI connection configurations <em>(<span class=\"req\">Recommended</span>)</em>" %}}
The `influx` CLI lets you specify connection configuration presets that let
you store and quickly switch between multiple sets of InfluxDB connection
credentials. Use the [`influx config create` command](/influxdb/cloud-iox/reference/cli/influx/config/create/)
to create a new CLI connection configuration. Include the following flags:
- `-n, --config-name`: Connection configuration name. This example uses `get-started`.
- `-u, --host-url`: [InfluxDB Cloud region URL](/influxdb/cloud-iox/reference/regions/).
- `-o, --org`: InfluxDB organization name.
- `-t, --token`: InfluxDB API token.
```sh
influx config create \
--config-name get-started \
--host-url https://cloud2.influxdata.com \
--org <YOUR_INFLUXDB_ORG_NAME> \
--token <YOUR_INFLUXDB_API_TOKEN>
```
_For more information about CLI connection configurations, see
[Install and use the `influx` CLI](/influxdb/cloud-iox/tools/influx-cli/#set-up-the-influx-cli)._
{{% /expand %}}
{{% expand "Environment variables" %}}
The `influx` CLI checks for specific environment variables and, if present,
uses those environment variables to populate authentication credentials.
Set the following environment variables in your command line session:
- `INFLUX_HOST`: [InfluxDB Cloud region URL](/influxdb/cloud-iox/reference/regions/).
- `INFLUX_ORG`: InfluxDB organization name.
- `INFLUX_ORG_ID`: InfluxDB [organization ID](/influxdb/cloud-iox/organizations/view-orgs/#view-your-organization-id).
- `INFLUX_TOKEN`: InfluxDB API token.
```sh
export INFLUX_HOST=https://cloud2.influxdata.com
export INFLUX_ORG=<YOUR_INFLUXDB_ORG_NAME>
export INFLUX_ORG_ID=<YOUR_INFLUXDB_ORG_ID>
export INFLUX_TOKEN=<YOUR_INFLUXDB_API_TOKEN>
```
{{% /expand %}}
{{% expand "Command flags" %}}
Use the following `influx` CLI flags to provide required credentials to commands:
- `--host`: [InfluxDB Cloud region URL](/influxdb/cloud-iox/reference/regions/).
- `-o`, `--org` or `--org-id`: InfluxDB organization name or
[ID](/influxdb/cloud-iox/organizations/view-orgs/#view-your-organization-id).
- `-t`, `--token`: InfluxDB API token.
{{% /expand %}}
{{< /expand-wrapper >}}
{{% note %}}
All `influx` CLI examples in this getting started tutorial assume your InfluxDB
**host**, **organization**, and **token** are provided by either the
[active `influx` CLI configuration](/influxdb/cloud-iox/reference/cli/influx/#provide-required-authentication-credentials)
or by environment variables.
{{% /note %}}
<!------------------------------ END CLI CONTENT ------------------------------>
{{% /tab-content %}}
{{% tab-content %}}
<!----------------------------- BEGIN API CONTENT ----------------------------->
When using the InfluxDB API, provide the required connection credentials in the
following ways:
- **InfluxDB host**: [InfluxDB Cloud region URL](/influxdb/cloud-iox/reference/regions/)
- **InfluxDB API Token**: Include an `Authorization` header that uses either
`Bearer` or `Token` scheme and your InfluxDB API token. For example:
`Authorization: Bearer 0xxx0o0XxXxx00Xxxx000xXXxoo0==`.
- **InfluxDB organization name or ID**: Depending on the API endpoint used, pass
this as part of the URL path, query string, or in the request body.
All API examples in this tutorial use **cURL** from a command line.
To provide all the necessary credentials to the example cURL commands, set
the following environment variables in your command line session.
```sh
export INFLUX_HOST=https://cloud2.influxdata.com
export INFLUX_ORG=<YOUR_INFLUXDB_ORG_NAME>
export INFLUX_ORG_ID=<YOUR_INFLUXDB_ORG_ID>
export INFLUX_TOKEN=<YOUR_INFLUXDB_API_TOKEN>
```
<!------------------------------ END API CONTENT ------------------------------>
{{% /tab-content %}}
{{< /tabs-wrapper >}}
6. {{< req text="(Optional)" color="magenta" >}} **Create a bucket**.
You can use an existing bucket or create a new one specifically for this
getting started tutorial. All examples in this tutorial assume a bucket named
**"get-started"**.
Use the **InfluxDB UI**, **`influx` CLI**, or **InfluxDB API** to create a
new bucket.
{{< tabs-wrapper >}}
{{% tabs %}}
[InfluxDB UI](#)
[influx CLI](#)
[InfluxDB API](#)
{{% /tabs %}}
{{% tab-content %}}
<!------------------------------ BEGIN UI CONTENT ----------------------------->
1. Go to
{{% oss-only %}}[localhost:8086](http://localhost:8086){{% /oss-only %}}
{{% cloud-only %}}[cloud2.influxdata.com](https://cloud2.influxdata.com){{% /cloud-only %}}
in a browser to log in and access the InfluxDB UI.
2. Navigate to **Load Data** > **Buckets** using the left navigation bar.
{{< nav-icon "load data" >}}
3. Click **+ {{< caps >}}Create bucket{{< /caps >}}**.
4. Provide a bucket name (get-started) and select a
[retention period](/influxdb/cloud-iox/reference/glossary/#retention-period).
Supported retention periods depend on your InfluxDB Cloud plan.
5. Click **{{< caps >}}Create{{< /caps >}}**.
<!------------------------------- END UI CONTENT ------------------------------>
{{% /tab-content %}}
{{% tab-content %}}
<!----------------------------- BEGIN CLI CONTENT ----------------------------->
1. If you haven't already, [download, install, and configure the `influx` CLI](/influxdb/cloud-iox/tools/influx-cli/).
2. Use the [`influx bucket create` command](/influxdb/cloud-iox/reference/cli/influx/bucket/create/)
to create a new bucket.
**Provide the following**:
- `-n, --name` flag with the bucket name.
- `-r, --retention` flag with the bucket's retention period duration.
Supported retention periods depend on your InfluxDB Cloud plan.
- [Connection and authentication credentials](#configure-authentication-credentials)
```sh
influx bucket create \
--name get-started \
--retention 7d
```
<!------------------------------ END CLI CONTENT ------------------------------>
{{% /tab-content %}}
{{% tab-content %}}
<!----------------------------- BEGIN API CONTENT ----------------------------->
To create a bucket using the InfluxDB HTTP API, send a request to
the InfluxDB API `/api/v2/buckets` endpoint using the `POST` request method.
{{< api-endpoint endpoint="https://cloud2.influxdata.com/api/v2/buckets" method="post" >}}
Include the following with your request:
- **Headers**:
- **Authorization**: Token `INFLUX_TOKEN`
- **Content-Type**: `application/json`
- **Request body**: JSON object with the following properties:
- **org**: InfluxDB organization name
- **name**: Bucket name
- **retentionRules**: List of retention rule objects that define the bucket's retention period.
Each retention rule object has the following properties:
- **type**: `"expire"`
- **everySeconds**: Retention period duration in seconds.
{{% cloud-only %}}Supported retention periods depend on your InfluxDB Cloud plan.{{% /cloud-only %}}
{{% oss-only %}}`0` indicates the retention period is infinite.{{% /oss-only %}}
```sh
export INFLUX_HOST=https://cloud2.influxdata.com
export INFLUX_ORG_ID=<YOUR_INFLUXDB_ORG_ID>
export INFLUX_TOKEN=<YOUR_INFLUXDB_API_TOKEN>
curl --request POST \
"$INFLUX_HOST/api/v2/buckets" \
--header "Authorization: Token $INFLUX_TOKEN" \
--header "Content-Type: application/json" \
--data '{
"orgID": "'"$INFLUX_ORG_ID"'",
"name": "get-started",
"retentionRules": [
{
"type": "expire",
"everySeconds": 604800
}
]
}'
```
<!------------------------------ END API CONTENT ------------------------------>
{{% /tab-content %}}
{{< /tabs-wrapper >}}
{{< page-nav prev="/influxdb/cloud-iox/get-started/" next="/influxdb/cloud-iox/get-started/write/" keepTab=true >}}

View File

@ -0,0 +1,17 @@
---
title: Get started visualizing data
seotitle: Visualize data | Get started with InfluxDB
list_title: Visualize data
description: >
...
menu:
influxdb_cloud_iox:
name: Visualize data
parent: Get started
identifier: get-started-visualize-data
weight: 104
metadata: [5 / 5]
draft: true
---
<!-- PLACEHOLDER -->

View File

@ -0,0 +1,340 @@
---
title: Get started writing data
seotitle: Write data | Get started with InfluxDB
list_title: Write data
description: >
Get started writing data to InfluxDB by learning about line protocol and using
tools like the InfluxDB UI, `influx` CLI, and InfluxDB API.
menu:
influxdb_cloud_iox:
name: Write data
parent: Get started
identifier: get-started-write-data
weight: 101
metadata: [2 / 3]
related:
- /influxdb/cloud-iox/write-data/
- /influxdb/cloud-iox/write-data/best-practices/
- /influxdb/cloud-iox/reference/syntax/line-protocol/
- /{{< latest "telegraf" >}}/
---
InfluxDB provides many different options for ingesting or writing data, including
the following:
- Influx user interface (UI)
- [InfluxDB HTTP API](/influxdb/cloud-iox/reference/api/)
- [`influx` CLI](/influxdb/cloud-iox/tools/influx-cli/)
- [Telegraf](/{{< latest "telegraf" >}}/)
- [InfluxDB client libraries](/influxdb/cloud-iox/api-guide/client-libraries/)
This tutorial walks you through the fundamentals of using **line protocol** to write
data to InfluxDB. Tools like Telegraf and InfluxDB client libraries build the
line protocol for you, but it's still good to understand how line protocol works.
## Line protocol
All data written to InfluxDB is written using **line protocol**, a text-based
format that lets you provide the necessary information to write a data point to InfluxDB.
_This tutorial covers the basics of line protocol, but for detailed information,
see the [Line protocol reference](/influxdb/cloud-iox/reference/syntax/line-protocol/)._
### Line protocol elements
Each line of line protocol contains the following elements:
{{< req type="key" >}}
- {{< req "\*" >}} **measurement**: String that identifies the [measurement]() to store the data in.
- **tag set**: Comma-delimited list of key value pairs, each representing a tag.
Tag keys and values are unquoted strings. _Spaces, commas, and equal signs must be escaped._
- {{< req "\*" >}} **field set**: Comma-delimited list of key value pairs, each representing a field.
Field keys are unquoted strings. _Spaces and commas must be escaped._
Field values can be [strings](/influxdb/cloud-iox/reference/syntax/line-protocol/#string) (quoted),
[floats](/influxdb/cloud-iox/reference/syntax/line-protocol/#float),
[integers](/influxdb/cloud-iox/reference/syntax/line-protocol/#integer),
[unsigned integers](/influxdb/cloud-iox/reference/syntax/line-protocol/#uinteger),
or [booleans](/influxdb/cloud-iox/reference/syntax/line-protocol/#boolean).
- **timestamp**: [Unix timestamp](/influxdb/cloud-iox/reference/syntax/line-protocol/#unix-timestamp)
associated with the data. InfluxDB supports up to nanosecond precision.
_If the precision of the timestamp is not in nanoseconds, you must specify the
precision when writing the data to InfluxDB._
#### Line protocol element parsing
- **measurement**: Everything before the _first unescaped comma before the first whitespace_.
- **tag set**: Key-value pairs between the _first unescaped comma_ and the _first unescaped whitespace_.
- **field set**: Key-value pairs between the _first and second unescaped whitespaces_.
- **timestamp**: Integer value after the _second unescaped whitespace_.
- Lines are separated by the newline character (`\n`).
Line protocol is whitespace sensitive.
---
{{< influxdb/line-protocol >}}
---
_For schema design recommendations, see [InfluxDB schema design](/influxdb/cloud-iox/write-data/best-practices/schema-design/)._
## Construct line protocol
With a basic understanding of line protocol, you can now construct line protocol
and write data to InfluxDB.
Consider a use case where you collect data from sensors in your home.
Each sensor collects temperature, humidity, and carbon monoxide readings.
To collect this data, use the following schema:
- **measurement**: `home`
- **tags**
- `room`: Living Room or Kitchen
- **fields**
- `temp`: temperature in °C (float)
- `hum`: percent humidity (float)
- `co`: carbon monoxide in parts per million (integer)
- **timestamp**: Unix timestamp in _second_ precision
Data is collected hourly beginning at
{{% influxdb/custom-timestamps-span %}}**2022-01-01T08:00:00Z (UTC)** until **2022-01-01T20:00:00Z (UTC)**{{% /influxdb/custom-timestamps-span %}}.
The resulting line protocol would look something like the following:
{{% influxdb/custom-timestamps %}}
##### Home sensor data line protocol
```sh
home,room=Living\ Room temp=21.1,hum=35.9,co=0i 1641024000
home,room=Kitchen temp=21.0,hum=35.9,co=0i 1641024000
home,room=Living\ Room temp=21.4,hum=35.9,co=0i 1641027600
home,room=Kitchen temp=23.0,hum=36.2,co=0i 1641027600
home,room=Living\ Room temp=21.8,hum=36.0,co=0i 1641031200
home,room=Kitchen temp=22.7,hum=36.1,co=0i 1641031200
home,room=Living\ Room temp=22.2,hum=36.0,co=0i 1641034800
home,room=Kitchen temp=22.4,hum=36.0,co=0i 1641034800
home,room=Living\ Room temp=22.2,hum=35.9,co=0i 1641038400
home,room=Kitchen temp=22.5,hum=36.0,co=0i 1641038400
home,room=Living\ Room temp=22.4,hum=36.0,co=0i 1641042000
home,room=Kitchen temp=22.8,hum=36.5,co=1i 1641042000
home,room=Living\ Room temp=22.3,hum=36.1,co=0i 1641045600
home,room=Kitchen temp=22.8,hum=36.3,co=1i 1641045600
home,room=Living\ Room temp=22.3,hum=36.1,co=1i 1641049200
home,room=Kitchen temp=22.7,hum=36.2,co=3i 1641049200
home,room=Living\ Room temp=22.4,hum=36.0,co=4i 1641052800
home,room=Kitchen temp=22.4,hum=36.0,co=7i 1641052800
home,room=Living\ Room temp=22.6,hum=35.9,co=5i 1641056400
home,room=Kitchen temp=22.7,hum=36.0,co=9i 1641056400
home,room=Living\ Room temp=22.8,hum=36.2,co=9i 1641060000
home,room=Kitchen temp=23.3,hum=36.9,co=18i 1641060000
home,room=Living\ Room temp=22.5,hum=36.3,co=14i 1641063600
home,room=Kitchen temp=23.1,hum=36.6,co=22i 1641063600
home,room=Living\ Room temp=22.2,hum=36.4,co=17i 1641067200
home,room=Kitchen temp=22.7,hum=36.5,co=26i 1641067200
```
{{% /influxdb/custom-timestamps %}}
## Write line protocol to InfluxDB
Use the **InfluxDB UI**, **`influx` CLI**, or **InfluxDB API** to write the
line protocol above to InfluxDB.
{{< tabs-wrapper >}}
{{% tabs %}}
[InfluxDB UI](#)
[influx CLI](#)
[InfluxDB API](#)
{{% /tabs %}}
{{% tab-content %}}
<!------------------------------ BEGIN UI CONTENT ----------------------------->
1. Go to
{{% oss-only %}}[localhost:8086](http://localhost:8086){{% /oss-only %}}
{{% cloud-only %}}[cloud2.influxdata.com](https://cloud2.influxdata.com){{% /cloud-only %}}
in a browser to log in and access the InfluxDB UI.
2. Navigate to **Load Data** > **Buckets** using the left navigation bar.
{{< nav-icon "load data" >}}
3. Click **{{< icon "plus" >}} {{< caps >}}Add Data{{< /caps >}}** on the bucket
you want to write the data to and select **Line Protocol**.
4. Select **{{< caps >}}Enter Manually{{< /caps >}}**.
5. {{< req "Important" >}} In the **Precision** drop-down menu above the line
protocol text field, select **Seconds** (to match the precision of the
timestamps in the line protocol).
6. Copy the [line protocol above](#home-sensor-data-line-protocol) and paste it
into the line protocol text field.
7. Click **{{< caps >}}Write Data{{< /caps >}}**.
The UI will confirm that the data has been written successfully.
<!------------------------------- END UI CONTENT ------------------------------>
{{% /tab-content %}}
{{% tab-content %}}
<!---------------------------- BEGIN CLI CONTENT ----------------------------->
1. If you haven't already, [download, install, and configure the `influx` CLI](/influxdb/cloud-iox/tools/influx-cli/).
2. Use the [`influx write` command](/influxdb/cloud-iox/reference/cli/influx/write/)
to write the [line protocol above](#home-sensor-data-line-protocol) to InfluxDB.
**Provide the following**:
- `-b, --bucket` or `--bucket-id` flag with the bucket name or ID to write to.
- `-p, --precision` flag with the timestamp precision (`s`).
- String-encoded line protocol.
- [Connection and authentication credentials](/influxdb/cloud-iox/get-started/setup/?t=influx+CLI#configure-authentication-credentials)
{{% influxdb/custom-timestamps %}}
```sh
influx write \
--bucket get-started \
--precision s "
home,room=Living\ Room temp=21.1,hum=35.9,co=0i 1641024000
home,room=Kitchen temp=21.0,hum=35.9,co=0i 1641024000
home,room=Living\ Room temp=21.4,hum=35.9,co=0i 1641027600
home,room=Kitchen temp=23.0,hum=36.2,co=0i 1641027600
home,room=Living\ Room temp=21.8,hum=36.0,co=0i 1641031200
home,room=Kitchen temp=22.7,hum=36.1,co=0i 1641031200
home,room=Living\ Room temp=22.2,hum=36.0,co=0i 1641034800
home,room=Kitchen temp=22.4,hum=36.0,co=0i 1641034800
home,room=Living\ Room temp=22.2,hum=35.9,co=0i 1641038400
home,room=Kitchen temp=22.5,hum=36.0,co=0i 1641038400
home,room=Living\ Room temp=22.4,hum=36.0,co=0i 1641042000
home,room=Kitchen temp=22.8,hum=36.5,co=1i 1641042000
home,room=Living\ Room temp=22.3,hum=36.1,co=0i 1641045600
home,room=Kitchen temp=22.8,hum=36.3,co=1i 1641045600
home,room=Living\ Room temp=22.3,hum=36.1,co=1i 1641049200
home,room=Kitchen temp=22.7,hum=36.2,co=3i 1641049200
home,room=Living\ Room temp=22.4,hum=36.0,co=4i 1641052800
home,room=Kitchen temp=22.4,hum=36.0,co=7i 1641052800
home,room=Living\ Room temp=22.6,hum=35.9,co=5i 1641056400
home,room=Kitchen temp=22.7,hum=36.0,co=9i 1641056400
home,room=Living\ Room temp=22.8,hum=36.2,co=9i 1641060000
home,room=Kitchen temp=23.3,hum=36.9,co=18i 1641060000
home,room=Living\ Room temp=22.5,hum=36.3,co=14i 1641063600
home,room=Kitchen temp=23.1,hum=36.6,co=22i 1641063600
home,room=Living\ Room temp=22.2,hum=36.4,co=17i 1641067200
home,room=Kitchen temp=22.7,hum=36.5,co=26i 1641067200
"
```
{{% /influxdb/custom-timestamps %}}
<!------------------------------ END CLI CONTENT ------------------------------>
{{% /tab-content %}}
{{% tab-content %}}
<!----------------------------- BEGIN API CONTENT ----------------------------->
To write data to InfluxDB using the InfluxDB HTTP API, send a request to
the InfluxDB API `/api/v2/write` endpoint using the `POST` request method.
{{< api-endpoint endpoint="https://cloud2.influxdata.com/api/v2/write" method="post" >}}
Include the following with your request:
- **Headers**:
- **Authorization**: Token <INFLUX_TOKEN>
- **Content-Type**: text/plain; charset=utf-8
- **Accept**: application/json
- **Query parameters**:
- **org**: InfluxDB organization name
- **bucket**: InfluxDB bucket name
- **precision**: timestamp precision (default is `ns`)
- **Request body**: Line protocol as plain text
The following example uses cURL and the InfluxDB API to write line protocol
to InfluxDB:
{{% influxdb/custom-timestamps %}}
```sh
export INFLUX_HOST=https://cloud2.influxdata.com
export INFLUX_ORG=<YOUR_INFLUXDB_ORG_NAME>
export INFLUX_TOKEN=<YOUR_INFLUXDB_API_TOKEN>
curl --request POST \
"$INFLUX_HOST/api/v2/write?org=$INFLUX_ORG&bucket=get-started&precision=s" \
--header "Authorization: Token $INFLUX_TOKEN" \
--header "Content-Type: text/plain; charset=utf-8" \
--header "Accept: application/json" \
--data-binary "
home,room=Living\ Room temp=21.1,hum=35.9,co=0i 1641024000
home,room=Kitchen temp=21.0,hum=35.9,co=0i 1641024000
home,room=Living\ Room temp=21.4,hum=35.9,co=0i 1641027600
home,room=Kitchen temp=23.0,hum=36.2,co=0i 1641027600
home,room=Living\ Room temp=21.8,hum=36.0,co=0i 1641031200
home,room=Kitchen temp=22.7,hum=36.1,co=0i 1641031200
home,room=Living\ Room temp=22.2,hum=36.0,co=0i 1641034800
home,room=Kitchen temp=22.4,hum=36.0,co=0i 1641034800
home,room=Living\ Room temp=22.2,hum=35.9,co=0i 1641038400
home,room=Kitchen temp=22.5,hum=36.0,co=0i 1641038400
home,room=Living\ Room temp=22.4,hum=36.0,co=0i 1641042000
home,room=Kitchen temp=22.8,hum=36.5,co=1i 1641042000
home,room=Living\ Room temp=22.3,hum=36.1,co=0i 1641045600
home,room=Kitchen temp=22.8,hum=36.3,co=1i 1641045600
home,room=Living\ Room temp=22.3,hum=36.1,co=1i 1641049200
home,room=Kitchen temp=22.7,hum=36.2,co=3i 1641049200
home,room=Living\ Room temp=22.4,hum=36.0,co=4i 1641052800
home,room=Kitchen temp=22.4,hum=36.0,co=7i 1641052800
home,room=Living\ Room temp=22.6,hum=35.9,co=5i 1641056400
home,room=Kitchen temp=22.7,hum=36.0,co=9i 1641056400
home,room=Living\ Room temp=22.8,hum=36.2,co=9i 1641060000
home,room=Kitchen temp=23.3,hum=36.9,co=18i 1641060000
home,room=Living\ Room temp=22.5,hum=36.3,co=14i 1641063600
home,room=Kitchen temp=23.1,hum=36.6,co=22i 1641063600
home,room=Living\ Room temp=22.2,hum=36.4,co=17i 1641067200
home,room=Kitchen temp=22.7,hum=36.5,co=26i 1641067200
"
```
{{% /influxdb/custom-timestamps %}}
<!------------------------------ END API CONTENT ------------------------------>
{{% /tab-content %}}
{{< /tabs-wrapper >}}
{{< expand-wrapper >}}
{{% expand "View the written data" %}}
{{% influxdb/custom-timestamps %}}
| time | room | co | hum | temp |
| :------------------- | :---------- | --: | ---: | ---: |
| 2022-01-01T08:00:00Z | Kitchen | 0 | 35.9 | 21 |
| 2022-01-01T09:00:00Z | Kitchen | 0 | 36.2 | 23 |
| 2022-01-01T10:00:00Z | Kitchen | 0 | 36.1 | 22.7 |
| 2022-01-01T11:00:00Z | Kitchen | 0 | 36 | 22.4 |
| 2022-01-01T12:00:00Z | Kitchen | 0 | 36 | 22.5 |
| 2022-01-01T13:00:00Z | Kitchen | 1 | 36.5 | 22.8 |
| 2022-01-01T14:00:00Z | Kitchen | 1 | 36.3 | 22.8 |
| 2022-01-01T15:00:00Z | Kitchen | 3 | 36.2 | 22.7 |
| 2022-01-01T16:00:00Z | Kitchen | 7 | 36 | 22.4 |
| 2022-01-01T17:00:00Z | Kitchen | 9 | 36 | 22.7 |
| 2022-01-01T18:00:00Z | Kitchen | 18 | 36.9 | 23.3 |
| 2022-01-01T19:00:00Z | Kitchen | 22 | 36.6 | 23.1 |
| 2022-01-01T20:00:00Z | Kitchen | 26 | 36.5 | 22.7 |
| 2022-01-01T08:00:00Z | Living Room | 0 | 35.9 | 21.1 |
| 2022-01-01T09:00:00Z | Living Room | 0 | 35.9 | 21.4 |
| 2022-01-01T10:00:00Z | Living Room | 0 | 36 | 21.8 |
| 2022-01-01T11:00:00Z | Living Room | 0 | 36 | 22.2 |
| 2022-01-01T12:00:00Z | Living Room | 0 | 35.9 | 22.2 |
| 2022-01-01T13:00:00Z | Living Room | 0 | 36 | 22.4 |
| 2022-01-01T14:00:00Z | Living Room | 0 | 36.1 | 22.3 |
| 2022-01-01T15:00:00Z | Living Room | 1 | 36.1 | 22.3 |
| 2022-01-01T16:00:00Z | Living Room | 4 | 36 | 22.4 |
| 2022-01-01T17:00:00Z | Living Room | 5 | 35.9 | 22.6 |
| 2022-01-01T18:00:00Z | Living Room | 9 | 36.2 | 22.8 |
| 2022-01-01T19:00:00Z | Living Room | 14 | 36.3 | 22.5 |
| 2022-01-01T20:00:00Z | Living Room | 17 | 36.4 | 22.2 |
{{% /influxdb/custom-timestamps %}}
{{% /expand %}}
{{< /expand-wrapper >}}
**Congratulations!** You have written data to InfluxDB.
With data now stored in InfluxDB, let's query it.
<!-- The method described
above is the manual way of writing data, but there are other options available:
- [Write data to InfluxDB using no-code solutions](/influxdb/cloud-iox/write-data/no-code/)
- [Write data to InfluxDB using developer tools](/influxdb/cloud-iox/write-data/developer-tools/) -->
{{< page-nav prev="/influxdb/cloud-iox/get-started/setup/" next="/influxdb/cloud-iox/get-started/query/" keepTab=true >}}

View File

@ -0,0 +1,19 @@
---
title: Query data in InfluxDB Cloud
seotitle: Query data stored in InfluxDB Cloud
description: >
Learn to query data stored in InfluxDB using SQL, InfluxQL, and Flux using tools
like the InfluxDB user interface and the 'influx' command line interface.
menu:
influxdb_cloud_iox:
name: Query data
weight: 4
influxdb/cloud-iox/tags: [query, flux]
---
Learn to query data stored in InfluxDB.
<!-- using SQL, InfluxQL, and Flux using tools
like the InfluxDB user interface and the 'influx' command line interface. -->
{{< children >}}

View File

@ -0,0 +1,20 @@
---
title: Query data with SQL
seotitle: Query data with SQL
description: >
Learn to query data stored in InfluxDB Cloud using SQL.
menu:
influxdb_cloud_iox:
name: Query with SQL
parent: Query data
weight: 101
influxdb/cloud-iox/tags: [query, sql]
---
Learn to query data stored in InfluxDB using SQL.
{{< children type="anchored-list" >}}
---
{{< children readmore=true hr=true >}}

View File

@ -0,0 +1,268 @@
---
title: Aggregate or apply selector functions to data
seotitle: Perform a basic SQL query in InfluxDB Cloud
description: >
Use aggregate and selector functions to perform aggregate operations on your
time series data.
menu:
influxdb_cloud_iox:
name: Aggregate data
parent: Query with SQL
identifier: query-sql-aggregate
weight: 203
influxdb/cloud-iox/tags: [query, sql]
list_code_example: |
##### Aggregate fields by groups
```sql
SELECT
mean(field1) AS mean,
selector_first(field2)['value'] as first,
tag1
FROM home
GROUP BY tag
```
##### Aggregate by time-based intervals
```sql
SELECT
DATE_BIN(INTERVAL '1 hour', time, '2022-01-01T00:00:00Z'::TIMESTAMP) AS time,
mean(field1),
sum(field2),
tag1
FROM home
GROUP BY
time,
tag1
```
---
A SQL query that aggregates data includes the following clauses:
{{< req type="key" >}}
- {{< req "\*">}} `SELECT`: Identify specific fields and tags to query from a
measurement or use the wild card alias (`*`) to select all fields and tags
from a measurement. Include any columns you want to group by in the `SELECT`
clause.
- {{< req "\*">}} `FROM`: Identify the measurement to query data from.
- `WHERE`: Only return data that meets defined conditions such as falling within
a time range, containing specific tag values, etc.
- `GROUP BY`: Group data into SQL partitions by specific columns and apply an
aggregate or selector function to each group.
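For example, a query that combines these clauses might look like the following.
This is a minimal sketch that uses the `home` sample measurement from the
[Get started writing data guide](/influxdb/cloud-iox/get-started/write/):
```sql
SELECT
  room,
  avg(temp) AS 'average temp'
FROM home
WHERE time >= now() - INTERVAL '1 day'
GROUP BY room
```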
{{% note %}}
For simplicity, the term **"aggregate"** in this guide refers to applying
both aggregate and selector functions to a dataset.
{{% /note %}}
Learn how to apply aggregate operations to your queried data:
- [Aggregate and selector functions](#aggregate-and-selector-functions)
- [Aggregate functions](#aggregate-functions)
- [Selector functions](#selector-functions)
- [Example aggregate queries](#example-aggregate-queries)
## Aggregate and selector functions
Both aggregate and selector functions return a single row from each SQL partition
or group. For example, if you `GROUP BY room` and perform an aggregate operation
in your `SELECT` clause, results include an aggregate value for each unique
value of `room`.
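For example, the following sketch (using the `home` sample data) returns one row
per unique `room` value, each containing the maximum `co` reading for that group:
```sql
SELECT
  room,
  max(co) AS 'max co'
FROM home
GROUP BY room
```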
### Aggregate functions
Use **aggregate functions** to aggregate values in a specified column for each
group and return a single row per group containing the aggregate value.
[View aggregate functions](#)
##### Basic aggregate query
```sql
SELECT AVG(co) from home
```
### Selector functions
Use **selector functions** to "select" a value from a specified column.
The available selector functions are designed to work with time series data.
[View selector functions](/influxdb/cloud-iox/reference/sql/functions/selectors/)
Each selector function returns a Rust _struct_ (similar to a JSON object)
representing a single time and value from the specified column in each group.
The time and value returned depend on the logic in the selector function.
For example, `selector_first` returns the value of the specified column in the first row of the group.
`selector_max` returns the maximum value of the specified column in the group.
#### Selector struct schema
The struct returned from a selector function has two properties:
- **time**: `time` value in the selected row
- **value**: value of the specified column in the selected row
```js
{time: 2023-01-01T00:00:00Z, value: 72.1}
```
#### Use selector functions
Each selector function has two arguments:
- The first is the column to operate on.
- The second is the time column to use in the selection logic.
In your `SELECT` statement, execute a selector function and use bracket notation
to reference properties of the [returned struct](#selector-struct-schema) to
populate the column value:
```sql
SELECT
selector_first(temp, time)['time'] AS time,
selector_first(temp, time)['value'] AS temp,
room
FROM home
GROUP BY room
```
## Example aggregate queries
- [Perform an ungrouped aggregation](#perform-an-ungrouped-aggregation)
- [Group and aggregate data](#group-and-aggregate-data)
- [Downsample data by applying interval-based aggregates](#downsample-data-by-applying-interval-based-aggregates)
- [Query rows based on aggregate values](#query-rows-based-on-aggregate-values)
{{% note %}}
#### Sample data
The following examples use the sample data written in the
[Get started writing data guide](/influxdb/cloud-iox/get-started/write/).
To run the example queries and return results,
[write the sample data](/influxdb/cloud-iox/get-started/write/#write-line-protocol-to-influxdb)
to your InfluxDB Cloud bucket before running the example queries.
{{% /note %}}
### Perform an ungrouped aggregation
To aggregate _all_ queried values in a specified column:
- Use aggregate or selector functions in your `SELECT` statement
- Do not include a `GROUP BY` clause to leave your data ungrouped
```sql
SELECT avg(co) AS 'average co' from home
```
{{< expand-wrapper >}}
{{% expand "View example results" %}}
| average co |
| :---------------: |
| 5.269230769230769 |
{{% /expand %}}
{{< /expand-wrapper >}}
### Group and aggregate data
To group data and apply aggregate or selector functions to each group:
- Use aggregate or selector functions in your `SELECT` statement
- Include columns to group by in your `SELECT` statement
- Include a `GROUP BY` clause with a comma-delimited list of columns to group by
```sql
SELECT
room,
avg(temp) AS 'average temp'
FROM home
GROUP BY room
```
{{< expand-wrapper >}}
{{% expand "View example results" %}}
| room | average temp |
| :---------- | -----------------: |
| Living Room | 22.16923076923077 |
| Kitchen | 22.623076923076926 |
{{% /expand %}}
{{< /expand-wrapper >}}
### Downsample data by applying interval-based aggregates
A common use case when querying time series is downsampling data by applying
aggregates to time-based groups. To group and aggregate data into time-based
groups:
- In your `SELECT` clause:
- Use `DATE_BIN` to calculate windows of time based on a specified interval
and update the timestamp in the `time` column based on the start
boundary of the window that the original timestamp is in.
For example, if you use `DATE_BIN` to window data into one day intervals,
{{% influxdb/custom-timestamps-span %}}`2022-01-01T12:34:56Z`{{% /influxdb/custom-timestamps-span %}}
will be updated to
{{% influxdb/custom-timestamps-span %}}`2022-01-01T00:00:00Z`{{% /influxdb/custom-timestamps-span %}}.
- Use aggregate or selector functions on specified columns.
- Include columns to group by.
- Include a `GROUP BY` clause with `time` and other columns to group by.
```sql
SELECT
DATE_BIN(INTERVAL '2 hours', time, '1970-01-01T00:00:00Z'::TIMESTAMP) AS time,
room,
selector_max(temp, time)['value'] AS 'max temp',
selector_min(temp, time)['value'] AS 'min temp',
avg(temp) AS 'average temp'
FROM home
GROUP BY time, room
```
{{< expand-wrapper >}}
{{% expand "View example results" %}}
{{% influxdb/custom-timestamps %}}
| time | room | max temp | min temp | average temp |
| :------------------- | :---------- | -------: | -------: | -----------------: |
| 2022-01-01T08:00:00Z | Kitchen | 23 | 21 | 22 |
| 2022-01-01T10:00:00Z | Kitchen | 22.7 | 22.4 | 22.549999999999997 |
| 2022-01-01T12:00:00Z | Kitchen | 22.8 | 22.5 | 22.65 |
| 2022-01-01T14:00:00Z | Kitchen | 22.8 | 22.7 | 22.75 |
| 2022-01-01T16:00:00Z | Kitchen | 22.7 | 22.4 | 22.549999999999997 |
| 2022-01-01T18:00:00Z | Kitchen | 23.3 | 23.1 | 23.200000000000003 |
| 2022-01-01T20:00:00Z | Kitchen | 22.7 | 22.7 | 22.7 |
| 2022-01-01T08:00:00Z | Living Room | 21.4 | 21.1 | 21.25 |
| 2022-01-01T10:00:00Z | Living Room | 22.2 | 21.8 | 22 |
| 2022-01-01T12:00:00Z | Living Room | 22.4 | 22.2 | 22.299999999999997 |
| 2022-01-01T14:00:00Z | Living Room | 22.3 | 22.3 | 22.3 |
| 2022-01-01T16:00:00Z | Living Room | 22.6 | 22.4 | 22.5 |
| 2022-01-01T18:00:00Z | Living Room | 22.8 | 22.5 | 22.65 |
| 2022-01-01T20:00:00Z | Living Room | 22.2 | 22.2 | 22.2 |
{{% /influxdb/custom-timestamps %}}
{{% /expand %}}
{{< /expand-wrapper >}}
### Query rows based on aggregate values
To query data based on values after an aggregate operation, include a `HAVING`
clause with defined predicate conditions such as a value threshold.
Predicates in the `WHERE` clause are applied _before_ data is aggregated.
Predicates in the `HAVING` clause are applied _after_ data is aggregated.
```sql
SELECT
room,
avg(co) AS 'average co'
FROM home
GROUP BY room
HAVING "average co" > 5
```
{{< expand-wrapper >}}
{{% expand "View example results" %}}
| room | average co |
| :------ | -----------------: |
| Kitchen | 6.6923076923076925 |
{{% /expand %}}
{{< /expand-wrapper >}}

View File

@ -0,0 +1,194 @@
---
title: Perform a basic SQL query
seotitle: Perform a basic SQL query in InfluxDB Cloud
description: >
A basic SQL query that queries data from InfluxDB most commonly includes
`SELECT`, `FROM`, and `WHERE` clauses.
menu:
influxdb_cloud_iox:
name: Basic query
parent: Query with SQL
identifier: query-sql-basic
weight: 202
influxdb/cloud-iox/tags: [query, sql]
list_code_example: |
```sql
SELECT temp, room FROM home WHERE time >= now() - INTERVAL '1 day'
```
---
InfluxDB Cloud's SQL implementation is powered by the [Apache Arrow DataFusion](https://arrow.apache.org/datafusion/)
query engine which provides a SQL syntax similar to other relational query languages.
A basic SQL query that queries data from InfluxDB most commonly includes the
following clauses:
{{< req type="key" >}}
- {{< req "\*">}} `SELECT`: Identify specific fields and tags to query from a
measurement or use the wild card alias (`*`) to select all fields and tags
from a measurement.
- {{< req "\*">}} `FROM`: Identify the measurement to query from.
- `WHERE`: Only return data that meets defined conditions such as falling within
a time range, containing specific tag values, etc.
{{% influxdb/custom-timestamps %}}
```sql
SELECT
temp,
hum,
room
FROM home
WHERE
time >= '2022-01-01T08:00:00Z'
AND time <= '2022-01-01T20:00:00Z'
```
{{% /influxdb/custom-timestamps %}}
## Basic query examples
- [Query data within time boundaries](#query-data-within-time-boundaries)
- [Query data without time boundaries](#query-data-without-time-boundaries)
- [Query specific fields and tags](#query-specific-fields-and-tags)
- [Query fields based on tag values](#query-fields-based-on-tag-values)
- [Query points based on field values](#query-points-based-on-field-values)
- [Alias queried fields and tags](#alias-queried-fields-and-tags)
{{% note %}}
#### Sample data
The following examples use the sample data written in the
[Get started writing data guide](/influxdb/cloud-iox/get-started/write/).
To run the example queries and return results,
[write the sample data](/influxdb/cloud-iox/get-started/write/#write-line-protocol-to-influxdb)
to your InfluxDB Cloud bucket before running the example queries.
{{% /note %}}
### Query data within time boundaries
- Use the `SELECT` clause to specify what tags and fields to return.
To return all tags and fields, use the wildcard alias (`*`).
- Specify the measurement to query in the `FROM` clause.
- Specify time boundaries in the `WHERE` clause.
Include time-based predicates that compare the value of the `time` column to a timestamp.
Use the `AND` logical operator to chain multiple predicates together.
{{% influxdb/custom-timestamps %}}
```sql
SELECT *
FROM home
WHERE
time >= '2022-01-01T08:00:00Z'
AND time <= '2022-01-01T12:00:00Z'
```
{{% /influxdb/custom-timestamps %}}
Query time boundaries can be relative or absolute.
{{< expand-wrapper >}}
{{% expand "Query with relative time boundaries" %}}
To query data from relative time boundaries, compare the value of the `time`
column to a timestamp calculated by subtracting an interval from a timestamp.
Use `now()` to return the timestamp for the current time (UTC).
##### Query all data from the last month
```sql
SELECT * FROM home WHERE time >= now() - INTERVAL '1 month'
```
##### Query one day of data from a week ago
```sql
SELECT *
FROM home
WHERE
time >= now() - INTERVAL '7 days'
AND time <= now() - INTERVAL '6 days'
```
{{% /expand %}}
{{% expand "Query with absolute time boundaries" %}}
To query data from absolute time boundaries, compare the value of the `time` column
to timestamp literals.
Use the `AND` logical operator to chain together multiple predicates and define
both start and stop boundaries for the query.
{{% influxdb/custom-timestamps %}}
```sql
SELECT
*
FROM
home
WHERE
time >= '2022-01-01T08:00:00Z'
AND time <= '2022-01-01T20:00:00Z'
```
{{% /influxdb/custom-timestamps %}}
{{% /expand %}}
{{< /expand-wrapper >}}
### Query data without time boundaries
To query data without time boundaries, do not include any time-based predicates
in your `WHERE` clause.
{{% warn %}}
Querying data _without time bounds_ can return an unexpected amount of data.
The query may take a long time to complete and results may be truncated.
{{% /warn %}}
```sql
SELECT * FROM home
```
### Query specific fields and tags
To query specific fields, include them in the `SELECT` clause.
If querying multiple fields or tags, comma-delimit each.
If a field or tag key includes special characters or spaces, or is case-sensitive,
wrap the key in _double quotes_.
```sql
SELECT time, room, temp, hum FROM home
```
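For example, the following sketch assumes a hypothetical field key that contains
a space (`living temp` is not part of the sample data) and must be double-quoted:
```sql
-- "living temp" is a hypothetical field key used only to illustrate quoting
SELECT time, room, "living temp" FROM home
```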
### Query fields based on tag values
- Include the fields you want to query and the tags you want to base conditions
on in the `SELECT` clause.
- Include predicates in the `WHERE` clause that compare the tag identifier to
a string literal.
Use [logical operators](#) to chain multiple predicates together and apply
multiple conditions.
```sql
SELECT * FROM home WHERE room = 'Kitchen'
```
### Query points based on field values
- Include the fields you want to query in the `SELECT` clause.
- Include predicates in the `WHERE` clause that compare the field identifier to
another value.
Use [logical operators](#) (`AND`, `OR`) to chain multiple predicates together
and apply multiple conditions.
```sql
SELECT co, time FROM home WHERE co >= 10 OR co <= -10
```
### Alias queried fields and tags
To alias or rename fields and tags that you query, pass a string literal after
the field or tag identifier in the `SELECT` clause.
You can use the `AS` clause to define the alias, but it isn't necessary.
The following queries are functionally the same:
```sql
SELECT temp 'temperature', hum 'humidity' FROM home
SELECT temp AS 'temperature', hum AS 'humidity' FROM home
```

View File

@ -0,0 +1,79 @@
---
title: Explore your schema with SQL
description: >
When working with InfluxDB's implementation of SQL, a **bucket** is equivalent
to a database, a **measurement** is structured as a table, and **time**,
**fields**, and **tags** are structured as columns.
menu:
influxdb_cloud_iox:
name: Explore your schema
parent: Query with SQL
identifier: query-sql-schema
weight: 201
influxdb/cloud-iox/tags: [query, sql]
list_code_example: |
##### List measurements
```sql
SHOW TABLES
```
##### List columns in a measurement
```sql
SHOW COLUMNS IN measurement
```
---
When working with InfluxDB's implementation of SQL, a **bucket** is equivalent
to a database, a **measurement** is structured as a table, and **time**,
**fields**, and **tags** are structured as columns.
## List measurements in a bucket
Use `SHOW TABLES` to list measurements in your InfluxDB bucket.
```sql
SHOW TABLES
```
{{< expand-wrapper >}}
{{% expand "View example output" %}}
Tables listed with the `table_schema` of `iox` are measurements.
Tables with `system` or `information_schema` table schemas are system tables that
store internal metadata.
| table_catalog | table_schema | table_name | table_type |
| :------------ | :----------------- | :---------- | ---------: |
| public | iox | home | BASE TABLE |
| public | iox | noaa | BASE TABLE |
| public | system | queries | BASE TABLE |
| public | information_schema | tables | VIEW |
| public | information_schema | views | VIEW |
| public | information_schema | columns | VIEW |
| public | information_schema | df_settings | VIEW |
{{% /expand %}}
{{< /expand-wrapper >}}
## List columns in a measurement
Use the `SHOW COLUMNS` statement to view what columns are in a measurement.
Use the `IN` clause to specify the measurement.
```sql
SHOW COLUMNS IN home
```
{{< expand-wrapper >}}
{{% expand "View example output" %}}
| table_catalog | table_schema | table_name | column_name | data_type | is_nullable |
| :------------ | :----------- | :--------- | :---------- | :-------------------------- | ----------: |
| public | iox | home | co | Int64 | YES |
| public | iox | home | hum | Float64 | YES |
| public | iox | home | room | Dictionary(Int32, Utf8) | YES |
| public | iox | home | temp | Float64 | YES |
| public | iox | home | time | Timestamp(Nanosecond, None) | NO |
{{% /expand %}}
{{< /expand-wrapper >}}

View File

@ -0,0 +1,12 @@
---
title: InfluxDB Cloud reference documentation
description: >
Reference documentation for InfluxDB Cloud including updates, API documentation,
tools, syntaxes, and more.
menu:
influxdb_cloud_iox:
name: Reference
weight: 20
---
{{< children >}}

View File

@ -0,0 +1,76 @@
---
title: Flux reference documentation
description: >
Learn the Flux syntax and structure used to query InfluxDB.
menu:
influxdb_cloud_iox:
name: Flux reference
parent: Reference
weight: 103
---
All Flux reference material is provided in the Flux documentation:
<a class="btn" href="/flux/v0.x/">View the Flux documentation</a>
## Flux with the InfluxDB IOx storage engine
When querying data from an InfluxDB bucket backed by InfluxDB IOx, use the following
input functions:
- [`iox.from()`](/flux/v0.x/stdlib/experimental/iox/from/): alternative to
[`from()`](/flux/v0.x/stdlib/influxdata/influxdb/from/).
- [`iox.sql()`](/flux/v0.x/stdlib/experimental/iox/sql/): execute a SQL query
with Flux.
Both IOx-based input functions return pivoted data with a column for each field
in the output. To unpivot the data:
1. Group by measurement and tag columns.
2. Use [`experimental.unpivot()`](/flux/v0.x/stdlib/experimental/unpivot/) to
unpivot the data. All columns not in the group key (other than `_time`) are
treated as fields.
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[iox.from()](#)
[iox.sql()](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```js
import "experimental"
import "experimental/iox"
iox.from(bucket: "example-bucket", measurement: "example-measurement")
|> range(start: -1d)
|> group(columns: ["tag1", "tag2", "tag3"])
|> experimental.unpivot()
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```js
import "experimental"
import "experimental/iox"
query = "SELECT * FROM \"example-measurement\" WHERE time >= INTERVAL '1 day'"
iox.sql(bucket: "example-bucket", query: query)
|> group(columns: ["tag1", "tag2", "tag3"])
|> experimental.unpivot()
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
{{% warn %}}
#### Flux performance with InfluxDB IOx
When querying data from an InfluxDB bucket backed by InfluxDB IOx, using `iox.from()`
is **less performant** than querying a TSM-backed bucket with `from()`.
For better Flux query performance, use `iox.sql()`.
{{% /warn %}}

View File

@ -0,0 +1,13 @@
---
title: Glossary
description: >
Terms related to InfluxData products and platforms.
weight: 120
menu:
influxdb_cloud_iox:
name: Glossary
parent: Reference
influxdb/cloud-iox/tags: [glossary]
---
{{< duplicate-oss >}}

View File

@ -0,0 +1,47 @@
---
title: InfluxDB Cloud regions
description: >
InfluxDB Cloud is available on multiple cloud providers and in multiple regions.
Each region has a unique InfluxDB Cloud URL and API endpoint.
aliases:
- /influxdb/cloud-iox/reference/urls/
weight: 106
menu:
influxdb_cloud_iox:
name: InfluxDB Cloud regions
parent: Reference
---
InfluxDB Cloud backed by InfluxDB IOx is available on the following cloud providers and regions.
Each region has a unique InfluxDB Cloud URL and API endpoint.
Use the URLs below to interact with your InfluxDB Cloud instances with the
[InfluxDB API](/influxdb/cloud/reference/api/), [InfluxDB client libraries](/influxdb/cloud/api-guide/client-libraries/),
[`influx` CLI](/influxdb/cloud/reference/cli/influx/), or [Telegraf](/influxdb/cloud/write-data/no-code/use-telegraf/).
{{% note %}}
#### InfluxDB IOx-enabled cloud regions
We are in the process of deploying and enabling the InfluxDB IOx storage engine
on other cloud providers and regions.
{{% /note %}}
<a href="https://www.influxdata.com/influxdb-cloud-2-0-provider-region/" target="_blank" class="btn">Request a cloud region</a>
<!-- ** Uncomment this when we add an IOx region with multiple clusters **
{{% note %}}
#### Regions with multiple clusters
Some InfluxDB Cloud regions have multiple Cloud clusters, each with a unique URL.
To find your cluster URL, [log in to your InfluxDB Cloud organization](https://cloud2.influxdata.com)
and review your organization URL. The first subdomain identifies your
InfluxDB Cloud cluster. For example:
{{< code-callout "us-west-2-1" >}}
```sh
https://us-west-2-1.aws.cloud2.influxdata.com/orgs/03a2bbf46249a000/...
```
{{< /code-callout >}}
{{% /note %}} -->
{{< cloud_regions type="iox-table" >}}

View File

@ -0,0 +1,12 @@
---
title: SQL reference documentation
description: >
Learn the SQL syntax and structure used to query InfluxDB.
menu:
influxdb_cloud_iox:
name: SQL reference
parent: Reference
weight: 101
---
{{< children >}}

View File

@ -0,0 +1,160 @@
---
title: SQL data types
list_title: Data types
description: >
The InfluxDB SQL implementation supports a number of data types including 64-bit integers,
double-precision floating point numbers, strings, and more.
menu:
influxdb_cloud_iox:
name: Data types
parent: SQL reference
weight: 220
---
InfluxDB Cloud backed by InfluxDB IOx uses the [Apache Arrow DataFusion](https://arrow.apache.org/datafusion/) implementation of SQL.
Data types define the type of values that can be stored in table columns.
In InfluxDB's SQL implementation, a **measurement** is structured as a table,
and **tags**, **fields** and **timestamps** are exposed as columns.
DataFusion uses the [Arrow](https://arrow.apache.org/) type system for query execution.
Data types stored in InfluxDB's storage engine are mapped to SQL data types at query time.
{{% note %}}
When performing casting operations, cast to the **name** of the data type, not the actual data type.
Names and identifiers in SQL are _case-insensitive_ by default. For example:
```sql
SELECT
'99'::BIGINT,
'2019-09-18T00:00:00Z'::timestamp
```
{{% /note %}}
## Character types
| Name | Data type | Description |
| :------ | :-------- | --------------------------------- |
| CHAR | UTF8 | Character string, fixed-length |
| VARCHAR | UTF8 | Character string, variable-length |
| TEXT | UTF8 | Variable unlimited length |
##### Example character types
```sql
abcdefghijk
time
"h2o_temperature"
```
## Numeric types
The following numeric types are supported:
| Name | Data type | Description |
| :-------------- | :-------- | :--------------------------- |
| BIGINT | INT64 | 64-bit signed integer |
| BIGINT UNSIGNED | UINT64 | 64-bit unsigned integer |
| DOUBLE | FLOAT64 | 64-bit floating-point number |
Minimum signed integer: `-9223372036854775808`
Maximum signed integer: `9223372036854775807`
Minimum unsigned integer (uinteger): `0`
Maximum unsigned integer (uinteger): `18446744073709551615`
Float values can be written as decimal integers or decimal fractions (with a decimal point).
##### Example float types
```sql
23.8
-446.89
5.00
0.033
```
## Date and time data types
InfluxDB SQL supports the following DATE/TIME data types:
| Name | Data type | Description |
| :-------- | :-------- | :------------------------------------------------------------------- |
| TIMESTAMP | TIMESTAMP | TimeUnit::Nanosecond, None |
| INTERVAL | INTERVAL | Interval(IntervalUnit::YearMonth) or Interval(IntervalUnit::DayTime) |
#### Timestamp
A time type is a single point in time using nanosecond precision.
The following date and time formats are supported:
```sql
-- Examples
YYYY-MM-DDT00:00:00.000Z
YYYY-MM-DDT00:00:00.000-00:00
YYYY-MM-DD 00:00:00.000-00:00
YYYY-MM-DDT00:00:00Z
YYYY-MM-DD 00:00:00.000
YYYY-MM-DD 00:00:00
```
#### Interval
The INTERVAL data type can be used with the following precision:
- year
- month
- day
- hour
- minute
- second
```sql
-- Examples
WHERE time > now() - interval '10 minutes'
time >= now() - interval '1 year'
```
## Boolean types
Booleans store TRUE or FALSE values.
| Name | Data type | Description |
| :------ | :-------- | :-------------------------------------------------------------------- |
| BOOLEAN | BOOLEAN | TRUE or FALSE for strings, 0 and 1 for integers, uintegers and floats |
Booleans are parsed in the following manner:
- string: `TRUE` or `FALSE`
- integer: 0 is false, non-zero is true
- uinteger: 0 is false, non-zero is true
- float: 0.0 is false, non-zero is true
##### Example boolean types
```sql
true
TRUE
false
FALSE
```
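Booleans can also be produced by casting, following the parsing rules above.
The following is a minimal sketch, assuming casts behave as described in the
parsing rules (string and integer casts shown):
```sql
SELECT
  'true'::BOOLEAN AS from_string,  -- parsed from the string literal 'true'
  123::BOOLEAN AS from_integer     -- non-zero integers are treated as true
```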
## Unsupported SQL types
The following SQL types are not currently supported:
- UUID
- BLOB
- CLOB
- BINARY
- VARBINARY
- REGCLASS
- NVARCHAR
- CUSTOM
- ARRAY
- ENUM
- SET
- DATETIME
- BYTEA

View File

@ -0,0 +1,14 @@
---
title: SQL functions
list_title: Functions
description: >
Use SQL functions to transform queried values.
menu:
influxdb_cloud_iox:
name: Functions
parent: SQL reference
identifier: sql-functions
weight: 220
---
{{< children >}}

View File

@ -0,0 +1,178 @@
---
title: SQL selector functions
list_title: Selector functions
description: >
Select data with SQL selector functions.
menu:
influxdb_cloud_iox:
name: Selectors
parent: sql-functions
weight: 220
---
SQL selector functions are designed to work with time series data.
They behave similarly to aggregate functions in that they take a collection of
data and return a single value.
However, selectors are unique in that they return a _struct_ that contains
a **time value** in addition to the computed value.
## How do selector functions work?
Each selector function returns a Rust _struct_ (similar to a JSON object)
representing a single time and value from the specified column in each group.
The time and value returned depend on the logic in the selector function.
For example, `selector_first` returns the value of the specified column in the first row of the group.
`selector_max` returns the maximum value of the specified column in the group.
### Selector struct schema
The struct returned from a selector function has two properties:
- **time**: `time` value in the selected row
- **value**: value of the specified column in the selected row
```js
{time: 2023-01-01T00:00:00Z, value: 72.1}
```
### Selector functions in use
In your `SELECT` statement, execute a selector function and use bracket notation
to reference properties of the [returned struct](#selector-struct-schema) to
populate the column value:
```sql
SELECT
selector_first(temp, time)['time'] AS time,
selector_first(temp, time)['value'] AS temp,
room
FROM home
GROUP BY room
```
## Selector functions
- [selector_min](#selector_min)
- [selector_max](#selector_max)
- [selector_first](#selector_first)
- [selector_last](#selector_last)
### selector_min
The `selector_min()` function returns the smallest value of a selected column and its associated timestamp.
##### Arguments:
- **value**: Column to operate on or a literal value.
- **timestamp**: Time column or timestamp literal.
```sql
selector_min(<value>, <timestamp>)
```
{{< expand-wrapper >}}
{{% expand "View `selector_min` query example" %}}
```sql
SELECT
selector_min(water_level, time)['time'] AS time,
selector_min(water_level, time)['value'] AS water_level
FROM h2o_feet
```
| time | water_level |
| :------------------- | ----------: |
| 2019-08-28T14:30:00Z | -0.61 |
{{% /expand %}}
{{< /expand-wrapper >}}
### selector_max
The `selector_max()` function returns the largest value of a selected column and its associated timestamp.
##### Arguments:
- **value**: Column to operate on or a literal value.
- **timestamp**: Time column or timestamp literal.
```sql
selector_max(<value>, <timestamp>)
```
{{< expand-wrapper >}}
{{% expand "View `selector_max` query example" %}}
```sql
SELECT
selector_max(water_level, time)['time'] AS time,
selector_max(water_level, time)['value'] AS water_level
FROM h2o_feet
```
| time | water_level |
| :------------------- | ----------: |
| 2019-08-28T07:24:00Z | 9.964 |
{{% /expand %}}
{{< /expand-wrapper >}}
### selector_first
`selector_first()` returns the first value ordered by time ascending.
##### Arguments:
- **value**: Column to operate on or a literal value.
- **timestamp**: Time column or timestamp literal.
```sql
selector_first(<value>, <timestamp>)
```
{{< expand-wrapper >}}
{{% expand "View `selector_first` query example" %}}
```sql
SELECT
selector_first(water_level, time)['time'] AS time,
selector_first(water_level, time)['value'] AS water_level
FROM h2o_feet
```
| time | water_level |
| :------------------- | ----------: |
| 2019-08-28T07:24:00Z | 9.964 |
{{% /expand %}}
{{< /expand-wrapper >}}
### selector_last
`selector_last()` returns the last value ordered by time ascending.
##### Arguments:
- **value**: Column to operate on or a value literal.
- **timestamp**: Time column or timestamp literal.
```sql
selector_last(<value>, <timestamp>)
```
{{< expand-wrapper >}}
{{% expand "View `selector_last` query example" %}}
```sql
SELECT
selector_last(water_level, time)['time'] AS time,
selector_last(water_level, time)['value'] AS water_level
FROM h2o_feet
```
| time | water_level |
| :------------------- | ----------: |
| 2019-09-17T21:42:00Z | 4.938 |
{{% /expand %}}
{{< /expand-wrapper >}}

View File

@ -0,0 +1,90 @@
---
title: GROUP BY clause
description: >
Use the `GROUP BY` clause to group query data by column values.
menu:
influxdb_cloud_iox:
name: GROUP BY clause
parent: SQL reference
weight: 203
---
Use the `GROUP BY` clause to group data by column values.
`GROUP BY` requires an aggregate or selector function in the `SELECT` statement.
- [Syntax](#syntax)
- [Examples](#examples)
## Syntax
```sql
SELECT
AGGREGATE_FN(field1),
tag1
FROM measurement
GROUP BY tag1
```
## Examples
### Group data by tag values
```sql
SELECT
AVG("water_level") AS "avg_water_level",
"location"
FROM "h2o_feet"
GROUP BY "location"
```
{{< expand-wrapper >}}
{{% expand "View example results" %}}
| avg_water_level | location |
| ----------------: | ------------ |
| 5.359142420303919 | coyote_creek |
| 3.530712094245885 | santa_monica |
{{% /expand %}}
{{< /expand-wrapper >}}
Group results in 15 minute time intervals by tag:
```sql
SELECT
"location",
DATE_BIN(INTERVAL '15 minutes', time, TIMESTAMP '2022-01-01 00:00:00Z') AS time,
COUNT("water_level") AS count
FROM "h2o_feet"
WHERE
time >= timestamp '2019-09-17T00:00:00Z'
AND time <= timestamp '2019-09-17T01:00:00Z'
GROUP BY
time,
location
ORDER BY
location,
time
```
{{< expand-wrapper >}}
{{% expand "View example results" %}}
The query uses the `COUNT()` function to count the number of `water_level` points per 15 minute interval.
Results are then ordered by location and time.
| location | time | count |
| :----------- | :------------------- | ----: |
| coyote_creek | 2019-09-16T23:45:00Z | 1 |
| coyote_creek | 2019-09-17T00:00:00Z | 2 |
| coyote_creek | 2019-09-17T00:15:00Z | 3 |
| coyote_creek | 2019-09-17T00:30:00Z | 2 |
| coyote_creek | 2019-09-17T00:45:00Z | 3 |
| santa_monica | 2019-09-16T23:45:00Z | 1 |
| santa_monica | 2019-09-17T00:00:00Z | 2 |
| santa_monica | 2019-09-17T00:15:00Z | 3 |
| santa_monica | 2019-09-17T00:30:00Z | 2 |
| santa_monica | 2019-09-17T00:45:00Z | 3 |
{{% /expand %}}
{{< /expand-wrapper >}}

View File

@ -0,0 +1,85 @@
---
title: HAVING clause
description: >
Use the `HAVING` clause to filter query results based on values returned from
an aggregate operation.
menu:
influxdb_cloud_iox:
name: HAVING clause
parent: SQL reference
weight: 205
---
The `HAVING` clause places conditions on results created by an aggregate operation on groups.
The `HAVING` clause must follow the `GROUP BY` clause and precede the `ORDER BY` clause.
{{% note %}}
The `WHERE` clause filters rows based on specified conditions _before_ the aggregate operation.
The `HAVING` clause filters rows based on specified conditions _after_ the aggregate operation has taken place.
{{% /note %}}
- [Syntax](#syntax)
- [Examples](#examples)
## Syntax
```sql
SELECT_clause FROM_clause [WHERE_clause] [GROUP_BY_clause] [HAVING_clause] [ORDER_BY_clause]
```
## Examples
### Return rows with an aggregate value greater than a specified number
```sql
SELECT
MEAN("water_level") AS "mean_water_level", "location"
FROM
"h2o_feet"
GROUP BY
"location"
HAVING
"mean_water_level" > 5
```
{{< expand-wrapper >}}
{{% expand "View example results" %}}
The query returns only rows with a `mean_water_level` value greater than 5 _after_ the aggregate operation.
| location | mean_water_level |
| :----------- | :---------------- |
| coyote_creek | 5.359142420303919 |
{{% /expand %}}
{{< /expand-wrapper >}}
### Return the average result greater than a specified number from a specific time range
```sql
SELECT
AVG("water_level") AS "avg_water_level",
"time"
FROM
"h2o_feet"
WHERE
time >= '2019-09-01T00:00:00Z' AND time <= '2019-09-02T00:00:00Z'
GROUP BY
"time"
HAVING
"avg_water_level" > 6.82
ORDER BY
"time"
```
{{< expand-wrapper >}}
{{% expand "View example results" %}}
The query calculates the average water level per time and only returns rows with an average greater than 6.82 during the specified time range.
| time | avg_water_level |
| :------------------- | -----------------: |
| 2019-09-01T22:06:00Z | 6.8225 |
| 2019-09-01T22:12:00Z | 6.8405000000000005 |
| 2019-09-01T22:30:00Z | 6.8505 |
| 2019-09-01T22:36:00Z | 6.8325 |
{{% /expand %}}
{{< /expand-wrapper >}}

View File

@ -0,0 +1,76 @@
---
title: LIMIT clause
description: >
Use the `LIMIT` clause to limit the number of results returned by a query.
menu:
influxdb_cloud_iox:
name: LIMIT clause
parent: SQL reference
weight: 206
---
The `LIMIT` clause limits the number of rows returned by a query to a specified non-negative integer.
- [Syntax](#syntax)
- [Examples](#examples)
## Syntax
```sql
SELECT_clause FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] LIMIT <N>
```
## Examples
### Limit results to a maximum of five rows
```sql
SELECT
"water_level","location", "time"
FROM
"h2o_feet"
LIMIT
5
```
{{< expand-wrapper >}}
{{% expand "View example results" %}}
The query returns a maximum of 5 results.
| location | time | water_level |
| :----------- | :----------------------- | ----------- |
| coyote_creek | 2019-08-28T00:00:00.000Z | 4.206 |
| coyote_creek | 2019-08-28T00:06:00.000Z | 4.052 |
| coyote_creek | 2019-08-28T00:12:00.000Z | 3.901 |
| coyote_creek | 2019-08-28T00:18:00.000Z | 3.773 |
| coyote_creek | 2019-08-28T00:24:00.000Z | 3.632 |
{{% /expand %}}
{{< /expand-wrapper >}}
### Sort and limit results
Use the `ORDER BY` and `LIMIT` clauses to first sort results by specified columns,
then limit the sorted results by a specified number.
```sql
SELECT
"water_level", "location", "time"
FROM
"h2o_feet"
ORDER BY
"water_level" DESC
LIMIT
3
```
{{< expand-wrapper >}}
{{% expand "View example results" %}}
The query returns the highest 3 `water_level` readings in the `h2o_feet` measurement.
| location | time | water_level |
| :----------- | :----------------------- | ----------- |
| coyote_creek | 2019-08-27T13:42:00.000Z | -0.561 |
| coyote_creek | 2019-08-29T15:24:00.000Z | -0.571 |
| coyote_creek | 2019-08-28T14:24:00.000Z | -0.587 |
{{% /expand %}}
{{< /expand-wrapper >}}

View File

@ -0,0 +1,96 @@
---
title: ORDER BY clause
list_title: ORDER BY clause
description: >
Use the `ORDER BY` clause to sort results by specified columns and order.
menu:
influxdb_cloud_iox:
name: ORDER BY clause
parent: SQL reference
weight: 204
---
The `ORDER BY` clause sorts results by specified columns and order.
Sort data based on fields, tags, and timestamps.
The following orders are supported:
- `ASC`: ascending _(default)_
- `DESC`: descending
- [Syntax](#syntax)
- [Examples](#examples)
## Syntax
```sql
[SELECT CLAUSE] [FROM CLAUSE] [ ORDER BY expression [ ASC | DESC ][, …] ]
```
{{% note %}}
**Note:** If your query includes a `GROUP BY` clause, the `ORDER BY` clause must appear **after** the `GROUP BY` clause.
{{% /note %}}
## Examples
### Sort data by time with the most recent first
```sql
SELECT
"water_level", "time"
FROM
"h2o_feet"
WHERE
"location" = 'coyote_creek'
ORDER BY
time DESC
```
{{< expand-wrapper >}}
{{% expand "View example results" %}}
| time | water_level |
| :----------------------- | :----------- |
| 2019-09-17T16:24:00.000Z | 3.235 |
| 2019-09-17T16:18:00.000Z | 3.314 |
| 2019-09-17T16:12:00.000Z | 3.402 |
| 2019-09-17T16:06:00.000Z | 3.497 |
| 2019-09-17T16:00:00.000Z | 3.599 |
| 2019-09-17T15:54:00.000Z | 3.704 |
{{% /expand %}}
{{< /expand-wrapper >}}
### Sort data by tag or field values
```sql
SELECT
"water_level", "time", "location"
FROM
"h2o_feet"
ORDER BY
"location", "water_level" DESC
```
### Sort data by selection order
```sql
SELECT
"location","water_level", "time"
FROM
"h2o_feet"
ORDER BY
1, 2
```
{{< expand-wrapper >}}
{{% expand "View example results" %}}
The query sorts results by the position of each column in the `SELECT` statement:
first by `location` (1), and then by `water_level` (2).
| location | time | water_level |
| :----------- | :----------------------- | :---------- |
| coyote_creek | 2019-08-28T14:30:00.000Z | -0.61 |
| coyote_creek | 2019-08-29T15:18:00.000Z | -0.594 |
| coyote_creek | 2019-08-28T14:36:00.000Z | -0.591 |
| coyote_creek | 2019-08-28T14:24:00.000Z | -0.587 |
| coyote_creek | 2019-08-29T15:24:00.000Z | -0.571 |
| coyote_creek | 2019-08-27T13:42:00.000Z | -0.561 |

View File

@ -0,0 +1,112 @@
---
title: SELECT statement
description: >
Use the SQL `SELECT` statement to query data from a measurement.
menu:
influxdb_cloud_iox:
name: SELECT statement
parent: SQL reference
weight: 201
---
Use the `SELECT` statement to query data from an InfluxDB measurement.
The `SELECT` clause is required when querying data in SQL.
- [Syntax](#syntax)
- [Examples](#examples)
## Syntax
```sql
SELECT a, b, "time" FROM <measurement>
```
{{% note %}}
**Note:** When querying InfluxDB, the `SELECT` statement **always requires** a `FROM` clause.
{{% /note %}}
The `SELECT` clause supports the following:
- `SELECT *` - returns all tags, fields, and timestamps.
- `SELECT DISTINCT` - returns all distinct (different) values.
- `SELECT <"field" or "tag">` - returns a specified field or tag.
- `SELECT <"field" or "tag">, <"field" or "tag">` - returns more than one tag or field.
- `SELECT <"field"> AS a` - returns the field aliased as `a` (see the example sketch below).
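For example, a minimal sketch (using the NOAA `h2o_feet` sample data) that combines `DISTINCT` with an alias:
```sql
SELECT DISTINCT
  "location" AS "site"
FROM
  "h2o_feet"
```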
## Examples
The following examples use data from the NOAA database.
To download the NOAA test data, see [NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data).
### Select all fields and tags from a measurement
```sql
SELECT * FROM h2o_feet LIMIT 10
```
{{< expand-wrapper >}}
{{% expand "View example results" %}}
| level description | location | time | water_level |
| :------------------------ | :----------- | :----------------------- | :---------- |
| at or greater than 9 feet | coyote_creek | 2019-09-01T00:00:00.000Z | 9.126144144 |
| at or greater than 9 feet | coyote_creek | 2019-09-01T00:06:00.000Z | 9.009 |
| between 6 and 9 feet | coyote_creek | 2019-09-01T00:12:00.000Z | 8.862 |
| between 6 and 9 feet | coyote_creek | 2019-09-01T00:18:00.000Z | 8.714 |
{{% /expand %}}
{{< /expand-wrapper >}}
### Select specific tags and fields from a measurement
```sql
SELECT "location", "water_level" FROM "h2o_feet"
```
{{< expand-wrapper >}}
{{% expand "View example results" %}}
| location | water_level |
| :----------- | :---------- |
| coyote_creek | 9.126144144 |
| coyote_creek | 9.009 |
| coyote_creek | 8.862 |
| coyote_creek | 8.714 |
| coyote_creek | 8.547 |
{{% /expand %}}
{{< /expand-wrapper >}}
### Select a field, tag and timestamp from a measurement
```sql
SELECT "water_level", "location", "time" FROM "h2o_feet"
```
{{< expand-wrapper >}}
{{% expand "View example results" %}}
| location | time | water_level |
| :----------- | :----------------------- | :---------- |
| coyote_creek | 2019-08-20T00:00:00.000Z | 8.638 |
| coyote_creek | 2019-08-20T00:06:00.000Z | 8.658 |
| coyote_creek | 2019-08-20T00:12:00.000Z | 8.678 |
{{% /expand %}}
{{< /expand-wrapper >}}
### Select a field and perform basic arithmetic
The following query takes the value of `water_level`, multiplies it by 3, and adds 5 to the result.
```sql
SELECT ("water_level" * 3) + 5 FROM "h2o_feet"
```
{{< expand-wrapper >}}
{{% expand "View example results" %}}
| water_level |
| :----------------- |
| 30.128 |
| 30.641000000000002 |
| 31.142000000000003 |
| 31.586 |
| 32.027 |
| 32.378432432 |
{{% /expand %}}
{{< /expand-wrapper >}}

View File

@ -0,0 +1,14 @@
---
title: InfluxDB syntaxes
description: >
InfluxDB uses a handful of languages and syntaxes to perform tasks such as
writing, querying, processing, and deleting data.
weight: 105
menu:
influxdb_cloud_iox:
name: Other syntaxes
parent: Reference
influxdb/cloud-iox/tags: [syntax]
---
{{< duplicate-oss >}}

View File

@ -0,0 +1,230 @@
---
title: Annotated CSV
description: >
The InfluxDB `/api/v2/query` API returns query results in annotated CSV format.
You can write data to InfluxDB using annotated CSV and the `influx write` command.
weight: 103
menu:
influxdb_cloud_iox:
parent: Other syntaxes
influxdb/cloud-iox/tags: [csv, syntax]
related:
- /influxdb/cloud-iox/reference/syntax/annotated-csv/extended/
---
The InfluxDB `/api/v2/query` API returns query results in annotated CSV format.
You can also write data to InfluxDB using annotated CSV and the `influx write` command,
or [upload a CSV file](/influxdb/cloud-iox/write-data/csv/user-interface) in the InfluxDB UI.
CSV tables must be encoded in UTF-8 and Unicode Normal Form C as defined in [UAX15](http://www.unicode.org/reports/tr15/).
InfluxDB removes carriage returns before newline characters.
## CSV response format
InfluxDB annotated CSV supports the encodings listed below.
### Tables
A table may have the following rows and columns.
#### Rows
- **Annotation rows**: describe column properties.
- **Header row**: defines column labels (one header row per table).
- **Record row**: describes data in the table (one record per row).
##### Example
```sh
#group,false,false,true,true,false,false,true,true,true,true
#datatype,string,long,dateTime:RFC3339,dateTime:RFC3339,dateTime:RFC3339,double,string,string,string,string
#default,mean,,,,,,,,,
,result,table,_start,_stop,_time,_value,_field,_measurement,host,region
,,0,2022-12-31T05:41:24Z,2023-01-31T05:41:24.001Z,2023-01-01T00:52:00Z,15.43,mem,m,A,east
,,1,2022-12-31T05:41:24Z,2023-01-31T05:41:24.001Z,2023-01-01T00:52:00Z,59.25,mem,m,B,east
,,2,2022-12-31T05:41:24Z,2023-01-31T05:41:24.001Z,2023-01-01T00:52:00Z,52.62,mem,m,C,east
```
#### Columns
In addition to the data columns, a table may include the following columns:
- **Annotation column**: Only used in annotation rows. Always the first column.
Displays the name of an annotation. Value can be empty or a supported [annotation](#annotations).
You'll notice a space for this column for the entire length of the table,
so rows appear to start with `,`.
- **Result column**: Contains the name of the result specified by the query.
- **Table column**: Contains a unique ID for each table in a result.
### Multiple tables and results
If a file or data stream contains multiple tables or results, the following requirements must be met:
- A table column indicates which table a row belongs to.
- All rows in a table are contiguous.
- An empty row delimits a new table boundary in the following cases:
- Between tables in the same result that do not share a common table schema.
- Between concatenated CSV files.
- Each new table boundary starts with new annotation and header rows.
##### Example
```sh
#group,false,false,true,true,false,false,true,true,true,true
#datatype,string,long,dateTime:RFC3339,dateTime:RFC3339,dateTime:RFC3339,double,string,string,string,string
#default,_result,,,,,,,,,
,result,table,_start,_stop,_time,_value,_field,_measurement,host,region
,,0,2022-12-31T05:41:24Z,2023-01-31T05:41:24.001Z,2023-01-01T00:00:00Z,15.43,mem,m,A,east
,,1,2022-12-31T05:41:24Z,2023-01-31T05:41:24.001Z,2023-01-01T00:00:00Z,59.25,mem,m,B,east
,,2,2022-12-31T05:41:24Z,2023-01-31T05:41:24.001Z,2023-01-01T00:00:00Z,52.62,mem,m,C,east
#group,false,false,true,true,true,true,false,false,true,true
#datatype,string,long,string,string,dateTime:RFC3339,dateTime:RFC3339,dateTime:RFC3339,string,string,string
#default,_result,,,,,,,,,
,result,table,_field,_measurement,_start,_stop,_time,_value,host,region
,,3,mem_level,m,2022-12-31T05:41:24Z,2023-01-31T05:41:24.001Z,2023-01-01T00:00:00Z,ok,A,east
,,4,mem_level,m,2022-12-31T05:41:24Z,2023-01-31T05:41:24.001Z,2023-01-01T00:00:00Z,info,B,east
,,5,mem_level,m,2022-12-31T05:41:24Z,2023-01-31T05:41:24.001Z,2023-01-01T00:00:00Z,info,C,east
```
## Dialect options
Flux supports the following dialect options for `text/csv` format.
| Option | Description | Default |
| :-------- | :--------- |:-------:|
| **header** | If true, the header row is included. | `true` |
| **delimiter** | Character used to delimit columns. | `,` |
| **quoteChar** | Character used to quote values containing the delimiter. | `"` |
| **annotations** | List of annotations to encode (datatype, group, or default). | `empty` |
| **commentPrefix** | String prefix to identify a comment. Always added to annotations. | `#` |
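For example, a request sketch (the region URL, organization, bucket, and API token are placeholders) that sets dialect options when querying the `/api/v2/query` API:
```sh
curl --request POST "https://us-east-1-1.aws.cloud2.influxdata.com/api/v2/query?org=example-org" \
  --header "Authorization: Token $INFLUX_TOKEN" \
  --header "Accept: application/csv" \
  --header "Content-Type: application/json" \
  --data '{
    "query": "from(bucket: \"example-bucket\") |> range(start: -1h)",
    "dialect": {
      "header": true,
      "delimiter": ",",
      "annotations": ["datatype", "group", "default"]
    }
  }'
```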
## Annotations
Annotation rows describe column properties and start with `#` (or the `commentPrefix` value).
The first column in an annotation row always contains the annotation name.
Subsequent columns contain annotation values as shown in the table below.
| Annotation name | Values | Description |
|:-------- |:--------- | :------- |
| **datatype** | a [data type](#data-types) or [line protocol element](#line-protocol-elements) | Describes the type of data or which line protocol element the column represents. |
| **group** | boolean flag `true` or `false` | Indicates the column is part of the group key. |
| **default** | a value of the column's data type | Value to use for rows with an empty value. |
{{% note %}}
To encode a table with its [group key](/influxdb/cloud-iox/reference/glossary/#group-key),
the `datatype`, `group`, and `default` annotations must be included.
If a table has no rows, the `default` annotation provides the group key values.
{{% /note %}}
## Data types
| Datatype | Flux type | Description |
| :-------- | :--------- | :---------- |
| boolean | bool | "true" or "false" |
| unsignedLong | uint | unsigned 64-bit integer |
| long | int | signed 64-bit integer |
| double | float | IEEE-754 64-bit floating-point number |
| string | string | UTF-8 encoded string |
| base64Binary | bytes | base64 encoded sequence of bytes as defined in RFC 4648 |
| dateTime | time | instant in time, may be followed with a colon : and a description of the format (number, RFC3339, RFC3339Nano) |
| duration | duration | length of time represented as an unsigned 64-bit integer number of nanoseconds |
## Line protocol elements
The `datatype` annotation accepts [data types](#data-types) and **line protocol elements**.
Line protocol elements identify how columns are converted into line protocol when using the
[`influx write` command](/influxdb/cloud-iox/reference/cli/influx/write/) to write annotated CSV to InfluxDB.
| Line protocol element | Description |
|:--------------------- |:----------- |
| `measurement` | column value is the measurement |
| `field` _(default)_ | column header is the field key, column value is the field value |
| `tag` | column header is the tag key, column value is the tag value |
| `time` | column value is the timestamp _(alias for `dateTime`)_ |
| `ignore` or `ignored` | column is ignored and not included in line protocol |
### Mixing data types and line protocol elements
Columns with [data types](#data-types) (other than `dateTime`) in the
`#datatype` annotation are treated as **fields** when converted to line protocol.
Columns without a specified data type default to `field` when converted to line protocol
and **column values are left unmodified** in line protocol.
_See an example [below](#example-of-mixing-data-types-line-protocol-elements) and
[line protocol data types and format](/influxdb/cloud-iox/reference/syntax/line-protocol/#data-types-and-format)._
### Time columns
A column with a `time` or `dateTime` `#datatype` annotation is used as the timestamp
when converted to line protocol.
If there are multiple `time` or `dateTime` columns, the last column (on the right)
is used as the timestamp in line protocol.
Other time columns are ignored and the `influx write` command outputs a warning.
Time column values should be **Unix timestamps** (in an [accepted timestamp precision](/influxdb/cloud-iox/write-data/#timestamp-precision)),
**RFC3339**, or **RFC3339Nano**.
##### Example line protocol elements in datatype annotation
```
#group,false,false,false,false,false,false,false
#datatype,measurement,tag,tag,field,field,ignored,time
#default,,,,,,,
m,cpu,host,time_steal,usage_user,nothing,time
cpu,cpu1,host1,0,2.7,a,1482669077000000000
cpu,cpu1,host2,0,2.2,b,1482669087000000000
```
Resulting line protocol:
```
cpu,cpu=cpu1,host=host1 time_steal=0,usage_user=2.7 1482669077000000000
cpu,cpu=cpu1,host=host2 time_steal=0,usage_user=2.2 1482669087000000000
```
##### Example of mixing data types line protocol elements
```
#group,false,false,false,false,false,false,false,false,false
#datatype,measurement,tag,string,double,boolean,long,unsignedLong,duration,dateTime
#default,test,annotatedDatatypes,,,,,,
m,name,s,d,b,l,ul,dur,time
,,str1,1.0,true,1,1,1ms,1
,,str2,2.0,false,2,2,2us,2020-01-11T10:10:10Z
```
Resulting line protocol:
```
test,name=annotatedDatatypes s="str1",d=1,b=true,l=1i,ul=1u,dur=1000000i 1
test,name=annotatedDatatypes s="str2",d=2,b=false,l=2i,ul=2u,dur=2000i 1578737410000000000
```
## Errors
If an error occurs during execution, a table returns with:
- An error column that contains an error message.
- A reference column with a unique reference code to identify more information about the error.
- A second row with error properties.
If an error occurs:
- Before results materialize, the HTTP status code indicates an error. Error details are encoded in the CSV table.
- After partial results are sent to the client, the error is encoded as the next table and remaining results are discarded. In this case, the HTTP status code remains 200 OK.
##### Example
Encoding for an error with the datatype annotation:
```
#datatype,string,long
,error,reference
,Failed to parse query,897
```

View File

@ -0,0 +1,20 @@
---
title: Extended annotated CSV
description: >
Extended annotated CSV provides additional annotations and options that specify
how CSV data should be converted to [line protocol](/influxdb/cloud/reference/syntax/line-protocol/)
and written to InfluxDB.
menu:
influxdb_cloud_iox:
name: Extended annotated CSV
parent: Annotated CSV
weight: 201
influxdb/cloud-iox/tags: [csv, syntax, write]
related:
- /influxdb/cloud-iox/write-data/csv/
- /influxdb/cloud-iox/reference/cli/influx/write/
- /influxdb/cloud-iox/reference/syntax/line-protocol/
- /influxdb/cloud-iox/reference/syntax/annotated-csv/
---
{{< duplicate-oss >}}

View File

@ -0,0 +1,15 @@
---
title: Line protocol
description: >
InfluxDB uses line protocol to write data points.
It is a text-based format that provides the measurement, tag set, field set, and timestamp of a data point.
menu:
influxdb_cloud_iox:
parent: Other syntaxes
weight: 102
influxdb/cloud-iox/tags: [write, line protocol, syntax]
related:
- /influxdb/cloud-iox/write-data/
---
{{< duplicate-oss >}}

View File

@ -0,0 +1,17 @@
---
title: Visualize data
seotitle: Visualize data stored in InfluxDB
description: >
Use tools like Grafana and Apache Superset to visualize time series data
stored in InfluxDB.
weight: 5
menu:
influxdb_cloud_iox:
name: Visualize data
influxdb/cloud-iox/tags: [visualization]
---
Use visualization tools like Grafana and Apache Superset to visualize your
time series data stored in InfluxDB.
{{< children >}}

View File

@ -0,0 +1,196 @@
---
title: Use Grafana to visualize data
seotitle: Use Grafana to visualize data stored in InfluxDB
list_title: Grafana
description: >
Use [Grafana](https://grafana.com/) to query and visualize data stored in an
InfluxDB bucket backed by InfluxDB IOx.
weight: 101
menu:
influxdb_cloud_iox:
name: Grafana
parent: Visualize data
influxdb/cloud-iox/tags: [visualization]
---
Use [Grafana](https://grafana.com/) to query and visualize data stored in an
InfluxDB bucket backed by InfluxDB IOx.
> [Grafana] enables you to query, visualize, alert on, and explore your metrics,
> logs, and traces wherever they are stored.
> [Grafana] provides you with tools to turn your time-series database (TSDB)
> data into insightful graphs and visualizations.
>
> {{% caption %}}[Grafana documentation](https://grafana.com/docs/grafana/latest/introduction/){{% /caption %}}
For the most performant queries, use SQL and the
[Flight SQL protocol](https://arrow.apache.org/blog/2022/02/16/introducing-arrow-flight-sql/)
to query InfluxDB.
## Download the FlightSQL plugin
{{% warn %}}
The Grafana FlightSQL plugin is experimental and is subject to change.
{{% /warn %}}
```sh
curl -L https://github.com/influxdata/grafana-flightsql-datasource/releases/download/v0.1.0/influxdata-flightsql-datasource-0.1.0.zip \
-o influxdata-flightsql-datasource-0.1.0.zip
```
## Install the Grafana FlightSQL plugin
Install the custom-built FlightSQL plugin in a local or Docker-based instance
of Grafana OSS or Grafana Enterprise.
{{% warn %}}
#### Grafana Cloud does not support custom plugins
Only plugins that are uploaded publicly to the Grafana Plugins repo that
include the ability to use “click to install” from the site can be added to
your Grafana Cloud instance. Private, custom-built, or third-party plugins
that require manual uploading or manually modifying Grafana backend files
cannot be installed on or used with Grafana Cloud.
{{% caption %}}
[Grafana Cloud documentation](https://grafana.com/docs/grafana-cloud/fundamentals/find-and-use-plugins/)
{{% /caption %}}
{{% /warn %}}
{{< tabs-wrapper >}}
{{% tabs %}}
[Local](#)
[Docker](#)
{{% /tabs %}}
{{% tab-content %}}
<!---------------------------- BEGIN LOCAL CONTENT ---------------------------->
<div id="custom-grafana-plugins-directory"></div>
1. **Unzip the FlightSQL plugin archive to your Grafana custom plugin directory**.
The custom plugin directory can exist anywhere in your filesystem as long as
the Grafana process can access it.
```sh
unzip influxdata-flightsql-datasource-0.1.0.zip -d /path/to/grafana-plugins/
```
2. **Edit your Grafana configuration**.
Configure Grafana using its configuration file or environment variables.
For information about where to find your Grafana configuration file or what
environment variables are available, see the
[Configure Grafana documentation](https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/).
1. **Point Grafana to your custom plugin directory**.
Do one of the following:
- Edit the `paths.plugins` directive in your Grafana configuration file
to point to the path of your [custom plugins directory](#custom-grafana-plugins-directory):
```ini
[paths]
plugins = /path/to/grafana-plugins/
```
- Set the `GF_PATHS_PLUGINS` environment variable to point to the path
of your [custom plugins directory](#custom-grafana-plugins-directory):
```sh
GF_PATHS_PLUGINS=/path/to/grafana-plugins/
```
2. **Allow Grafana to load unsigned plugins**.
The FlightSQL plugin is unsigned and cannot be loaded by default.
Do one of the following:
- Edit the `plugins.allow_loading_unsigned_plugins` directive in your
Grafana configuration file to allow the `influxdata-flightsql-datasource`:
```ini
[plugins]
allow_loading_unsigned_plugins = influxdata-flightsql-datasource
```
- Set the `GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS` environment variable
to `influxdata-flightsql-datasource`:
```sh
GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=influxdata-flightsql-datasource
```
4. **Restart Grafana to apply the configuration changes**.
<!----------------------------- END LOCAL CONTENT ----------------------------->
{{% /tab-content %}}
{{% tab-content %}}
<!---------------------------- BEGIN DOCKER CONTENT --------------------------->
To add the FlightSQL plugin to your pre-existing Grafana Docker deployment,
mount the following volume to your Grafana container:
```bash
docker run \
--volume $PWD/dist:/var/lib/grafana/plugins/influxdata-flightsql-datasource \
--publish 3000:3000 \
--name grafana \
--env GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=influxdata-flightsql-datasource \
grafana/grafana:latest
```
{{% note %}}
It's important to set the `GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS`
environment variable. The FlightSQL plugin is unsigned and Grafana requires you
to explicitly load unsigned plugins.
{{% /note %}}
<!----------------------------- END DOCKER CONTENT ---------------------------->
{{% /tab-content %}}
{{< /tabs-wrapper >}}
## Configure the Flight SQL datasource
1. In your Grafana user interface (UI), navigate to **Data Sources**.
2. Click **Add data source**.
3. Search for and select the **FlightSQL** plugin.
4. Provide a name for your datasource.
5. Add your connection credentials:
- **Host**: Provide the host and port of your Flight SQL client.
For InfluxDB {{< current-version >}}, this is your
{{% cloud-only %}}[InfluxDB Cloud region domain](/influxdb/cloud-iox/reference/regions/){{% /cloud-only %}}
{{% oss-only %}}InfluxDB domain{{% /oss-only %}}
and port 443. For example:
```
us-east-1-1.aws.cloud2.influxdata.com:443
```
- **AuthType**: Select **token**.
- **Token**: Provide your InfluxDB API token with read access to the buckets
you want to query.
- **Require TLS/SSL**:
{{% cloud-only %}}Enable this toggle.{{% /cloud-only %}}
{{% oss-only %}}If TLS is configured and enabled on your InfluxDB instance, enable this toggle.{{% /oss-only %}}
6. Add connection **MetaData**.
Provide optional key-value pairs to send to your Flight SQL client.
InfluxDB {{< current-version >}} requires your **bucket name** or **bucket-id**:
- **Key**: `bucket-name` or `bucket-id`
- **Value**: Bucket name or bucket ID
7. Select **Save & Test**.
{{< img-hd src="/img/influxdb/cloud-iox-grafana-flightsql-datasource.png" alt="Grafana FlightSQL datasource" />}}
8. Click **Explore** to begin exploring your schema and querying InfluxDB with SQL.
## Build visualizations with Grafana
For a comprehensive walk-through of creating visualizations with
Grafana, see the [Grafana documentation](https://grafana.com/docs/grafana/latest/).

View File

@ -0,0 +1,163 @@
---
title: Use Superset to visualize data
seotitle: Use Apache Superset to visualize data stored in InfluxDB
list_title: Superset
description: >
Use [Apache Superset](https://superset.apache.org/) to query and visualize data
stored in an InfluxDB bucket backed by InfluxDB IOx.
weight: 101
menu:
influxdb_cloud_iox:
name: Superset
parent: Visualize data
influxdb/cloud-iox/tags: [visualization]
---
Use [Apache Superset](https://superset.apache.org/) to query and visualize data
stored in an InfluxDB bucket backed by InfluxDB IOx.
> Apache Superset is a modern, enterprise-ready business intelligence web application.
> It is fast, lightweight, intuitive, and loaded with options that make it easy for
> users of all skill sets to explore and visualize their data, from simple pie
> charts to highly detailed deck.gl geospatial charts.
>
> {{% caption %}}[Apache Superset documentation](https://superset.apache.org/docs/intro){{% /caption %}}
## Install and start Superset
We recommend using **Docker and docker-compose** to run Superset.
1. **Download and install Docker Engine and docker-compose**:
- **macOS**: [Install Docker for macOS](https://docs.docker.com/desktop/install/mac-install/)
- **Linux**: [Install Docker for Linux](https://docs.docker.com/desktop/install/linux-install/)
- **Windows**: [Install Docker for Windows](https://docs.docker.com/desktop/install/windows-install/)
{{% warn %}}
**Superset** is not officially supported on Windows. For more information, see the
[Superset documentation](https://superset.apache.org/docs/installation/installing-superset-using-docker-compose#1-install-a-docker-engine-and-docker-compose).
{{% /warn %}}
2. **Download Apache Superset**.
Clone the Superset repository and navigate into the repository directory.
```sh
git clone https://github.com/apache/superset.git && cd ./superset
```
3. **Add FlightSQL SQL Alchemy to your Superset Docker configuration**.
FlightSQL SQL Alchemy is a Python library that provides a
[DB API 2](https://peps.python.org/pep-0249/) interface and
[SQLAlchemy](https://www.sqlalchemy.org/) dialect for
[Flight SQL](https://arrow.apache.org/docs/format/FlightSql.html).
{{% warn %}}
The `flightsql-dbapi` library is experimental and under active development.
The APIs it provides could change at any time.
{{% /warn %}}
```sh
cat <<EOF >./docker/requirements-local.txt
flightsql-dbapi
EOF
```
4. Use docker-compose to create and start all the Docker containers necessary
to run Superset. _Superset does require multiple Docker containers._
```sh
docker-compose -f docker-compose-non-dev.yml pull
docker-compose -f docker-compose-non-dev.yml up
```
Once completed, Superset is running.
## Log in to Superset
1. Navigate to [localhost:8088](http://localhost:8088) in your browser.
If Superset is configured to use a custom domain, navigate to your custom domain.
2. If this is your first time logging into Superset, use the following username
and password:
- **Username**: admin
- **Password**: admin
3. _(Optional)_ Create a new admin user with a unique password.
1. In the Superset user interface (UI), click **Settings** in the top right
and select **List Users**.
2. Click **{{< icon "plus" >}}** in the top right.
3. Select the **Admin** role and provide the remaining credentials for the new user.
4. Click **Save**.
5. Delete the default **admin** user.
## Set up a new database connection
1. In the Superset UI, click **Settings** in the top right and select
**Database Connections**.
2. Click **+ Database** in the top right.
3. In the **Connect a Database** window, click on the **Supported Databases**
drop-down menu and select **Other**.
{{< img-hd src="/img/influxdb/cloud-iox-superset-connect.png" alt="Configure InfluxDB connection in Superset" />}}
4. Enter a **Display Name** for the database connection.
5. Enter your **SQL Alchemy URI**, composed of the following:
- **Protocol**: `datafusion+flightsql`
- **Domain**: [InfluxDB Cloud region domain](/influxdb/cloud-iox/reference/regions/)
- **Port**:
{{% cloud-only %}}443{{% /cloud-only %}}
{{% oss-only %}}8086 or your custom-configured bind address{{% /oss-only %}}
##### Query parameters
- **bucket-name**: URL-encoded InfluxDB bucket name
- **token**: InfluxDB API token with read access to the specified bucket
{{< code-callout "&lt;(influxdb-url|port|bucket-name|token)&gt;" >}}
{{< code-callout "us-east-1-1\.aws\.cloud2\.influxdata\.com|443|example-bucket|example-token" >}}
```sh
# Syntax
datafusion+flightsql://<influxdb-url>:<port>?bucket-name=<bucket-name>&token=<token>
# Example
datafusion+flightsql://us-east-1-1.aws.cloud2.influxdata.com:443?bucket-name=example-bucket&token=example-token
```
{{< /code-callout >}}
{{< /code-callout >}}
6. Click **Test Connection** to ensure the connection works.
7. Click **Connect** to save the database connection.
## Query InfluxDB with Superset
With a connection to InfluxDB {{< current-version >}} established, you can begin
to query and visualize data from InfluxDB.
1. In the Superset UI, click **SQL ▾** in the top navigation bar and select **SQL Lab**.
2. In the left pane:
1. Under **Database**, select your InfluxDB connection.
2. Under **Schema**, select **iox**.
3. Under **See table schema**, select the InfluxDB measurement to query.
The measurement schema appears in the left pane:
{{< img-hd src="/img/influxdb/cloud-iox-superset-schema.png" alt="Select your InfluxDB schema in Superset" />}}
3. Use the **query editor** to write a SQL query that queries data from your
InfluxDB bucket.
4. Click **Run** to execute the query.
Query results appear below the query editor.
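For example, a query sketch against a hypothetical `home` measurement (adjust the measurement and column names to match your schema):
```sql
SELECT "time", "location", "temp" FROM "home" ORDER BY "time" DESC LIMIT 10
```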
## Build visualizations with Superset
With a connection to InfluxDB {{< current-version >}} established and a query
that returns results, you can begin building out data visualizations and dashboards
in Superset. For a comprehensive walk-through of creating visualizations with
Superset, see the [Creating Charts and Dashboards in Superset documentation](https://superset.apache.org/docs/creating-charts-dashboards/creating-your-first-dashboard).
{{< img-hd src="/img/influxdb/cloud-iox-superset-dashboard.png" alt="Build InfluxDB dashboards in Apache Superset" />}}

View File

@ -0,0 +1,19 @@
---
title: Write data to InfluxDB
list_title: Write data
description: >
Collect and write time series data to InfluxDB Cloud and InfluxDB OSS.
weight: 3
menu:
influxdb_cloud_iox:
name: Write data
influxdb/cloud-iox/tags: [write, line protocol]
# related:
# - /influxdb/cloud/api/#tag/Write, InfluxDB API /write endpoint
# - /influxdb/cloud/reference/syntax/line-protocol
# - /influxdb/cloud/reference/syntax/annotated-csv
# - /influxdb/cloud/reference/cli/influx/write
# - /resources/videos/ingest-data/, How to Ingest Data in InfluxDB (Video)
---
{{< children >}}

View File

@ -0,0 +1,17 @@
---
title: Best practices for writing data
seotitle: Best practices for writing data to InfluxDB Cloud
description: >
Learn about the recommendations and best practices for writing data to InfluxDB.
weight: 105
menu:
influxdb_cloud_iox:
name: Best practices
identifier: write-best-practices
parent: Write data
---
The following articles walk through recommendations and best practices for writing
data to InfluxDB.
{{< children >}}

View File

@ -0,0 +1,368 @@
---
title: InfluxDB schema design recommendations
seotitle: InfluxDB schema design recommendations and best practices
description: >
Design your schema for simpler and more performant queries.
menu:
influxdb_cloud_iox:
name: Schema design
weight: 201
parent: write-best-practices
---
Use the following guidelines to design your [schema](/influxdb/cloud-iox/reference/glossary/#schema)
for simpler and more performant queries.
- [InfluxDB data structure](#influxdb-data-structure)
- [Tags versus fields](#tags-versus-fields)
- [Schema restrictions](#schema-restrictions)
- [Do not use duplicate names for tags and fields](#do-not-use-duplicate-names-for-tags-and-fields)
- [Measurements can contain up to 200 columns](#measurements-can-contain-up-to-200-columns)
- [Design for performance](#design-for-performance)
- [Avoid wide schemas](#avoid-wide-schemas)
- [Avoid sparse schemas](#avoid-sparse-schemas)
- [Measurement schemas should be homogenous](#measurement-schemas-should-be-homogenous)
- [Design for query simplicity](#design-for-query-simplicity)
- [Keep measurement names, tag keys, and field keys simple](#keep-measurement-names-tag-keys-and-field-keys-simple)
- [Avoid keywords and special characters](#avoid-keywords-and-special-characters)
---
## InfluxDB data structure
The InfluxDB data model organizes time series data into buckets and measurements.
A bucket can contain multiple measurements. Measurements contain multiple
tags and fields.
- **Bucket**: Named location where time series data is stored.
A bucket can contain multiple _measurements_.
- **Measurement**: Logical grouping for time series data.
All _points_ in a given measurement should have the same _tags_.
A measurement contains multiple _tags_ and _fields_.
- **Tags**: Key-value pairs that provide metadata for each point--for example,
something to identify the source or context of the data like host,
location, station, etc.
- **Fields**: Key-value pairs with values that change over time--for example,
temperature, pressure, stock price, etc.
- **Timestamp**: Timestamp associated with the data.
When stored on disk and queried, all data is ordered by time.
### Tags versus fields
When designing your schema for InfluxDB, a common question is, "what should be a
tag and what should be a field?" The following guidelines should help answer that
question as you design your schema.
- Use tags to store identifying information about the source or context of the data.
- Use fields to store values that change over time.
- Tag values can only be strings.
- Field values can be any of the following data types:
- Integer
- Unsigned integer
- Float
- String
- Boolean
{{% note %}}
If coming from a version of InfluxDB backed by the TSM storage engine, **tag value**
cardinality no longer affects the overall performance of your database.
The InfluxDB IOx engine supports nearly infinite tag value and series cardinality.
{{% /note %}}
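For example, a minimal line protocol sketch (hypothetical `home` measurement) that stores identifying metadata in tags and changing values in fields:
```
home,location=kitchen,sensor_id=1726ZA temp=72.1,hum=35.9 1672531200000000000
home,location=bath,sensor_id=2635YB temp=71.8,hum=41.2 1672531200000000000
```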
---
## Schema restrictions
### Do not use duplicate names for tags and fields
Tags and fields within the same measurement cannot have the same name.
All tags and fields are stored as unique columns in a table representing the
measurement on disk. Tags and fields with the same name cause a column conflict.
{{% note %}}
Use [explicit bucket schemas](/influxdb/cloud-iox/...) to enforce unique tag and
field keys within a schema.
{{% /note %}}
### Measurements can contain up to 200 columns
A measurement can contain **up to 200 columns**. Each row requires a time column,
but the rest represent tags and fields stored in the measurement.
Therefore, a measurement can contain one time column and 199 total field and tag columns.
If you attempt to write to a measurement and exceed the 200 column limit, the
write request fails and InfluxDB returns an error.
---
## Design for performance
How you structure your schema within a measurement can affect the overall
performance of queries against that measurement.
The following guidelines help to optimize query performance:
- [Avoid wide schemas](#avoid-wide-schemas)
- [Avoid sparse schemas](#avoid-sparse-schemas)
- [Measurement schemas should be homogenous](#measurement-schemas-should-be-homogenous)
### Avoid wide schemas
A wide schema is one with many tags and fields and corresponding columns for each.
At query time, InfluxDB evaluates each row in the queried measurement to
determine what rows to return. The "wider" the measurement (more columns), the
less performant queries are against that measurement.
To ensure queries stay performant, the InfluxDB IOx storage engine has a
[limit of 200 columns per measurement](#measurements-can-contain-up-to-200-columns).
To avoid a wide schema, limit the number of tags and fields stored in a measurement.
If you need to store more than 199 total tags and fields, consider segmenting
your fields into a separate measurement.
### Avoid sparse schemas
A sparse schema is one where, for many rows, columns contain null values.
These generally stem from [non-homogenous measurement schemas](#measurement-schemas-should-be-homogenous)
or individual fields for a tag set being reported at separate times.
Sparse schemas require the InfluxDB query engine to evaluate many
null columns, adding unnecessary overhead to storing and querying data.
_For an example of a sparse schema,
[view the non-homogenous schema example below](#view-example-of-a-sparse-non-homogenous-schema)._
### Measurement schemas should be homogenous
Data stored within a measurement should be "homogenous," meaning each row should
have the same tag and field keys.
All rows stored in a measurement share the same columns, but if a point doesn't
include a value for a column, the column value is null.
A measurement full of null values has a ["sparse" schema](#avoid-sparse-schemas).
{{< expand-wrapper >}}
{{% expand "View example of a sparse, non-homogenous schema" %}}
Non-homogenous schemas are often caused by writing points to a measurement with
inconsistent tag or field sets. For example, let's say data is collected from two
different sources and each source returns data with different tag and field sets.
{{< flex >}}
{{% flex-content %}}
##### Source 1 tags and fields:
- tags:
- source
- code
- crypto
- fields:
- price
{{% /flex-content %}}
{{% flex-content %}}
##### Source 2 tags and fields:
- tags:
- src
- currency
- crypto
- fields:
- cost
- volume
{{% /flex-content %}}
{{< /flex >}}
These sets of data written to the same measurement will result in a measurement
full of null values (also known as a sparse schema):
| time | source | src | code | currency | crypto | price | cost | volume |
| :------------------- | :----- | --: | :--- | :------- | :------ | ----------: | ---------: | ----------: |
| 2023-01-01T12:00:00Z | src1 | | USD | | bitcoin | 16588.45865 | | |
| 2023-01-01T12:00:00Z | | 2 | | EUR | bitcoin | | 16159.5806 | 16749450200 |
| 2023-01-01T13:00:00Z | src1 | | USD | | bitcoin | 16559.49871 | | |
| 2023-01-01T13:00:00Z | | 2 | | EUR | bitcoin | | 16131.3694 | 16829683245 |
| 2023-01-01T14:00:00Z | src1 | | USD | | bitcoin | 16577.46667 | | |
| 2023-01-01T14:00:00Z | | 2 | | EUR | bitcoin | | 16148.8727 | 17151722208 |
| 2023-01-01T15:00:00Z | src1 | | USD | | bitcoin | 16591.36998 | | |
| 2023-01-01T15:00:00Z | | 2 | | EUR | bitcoin | | 16162.4167 | 17311854919 |
{{% /expand %}}
{{< /expand-wrapper >}}
## Design for query simplicity
Naming conventions for measurements, tag keys, and field keys can simplify or
complicate the process of writing queries for your data.
The following guidelines help to ensure writing queries for your data is as
simple as possible.
- [Keep measurement names, tag keys, and field keys simple](#keep-measurement-names-tag-keys-and-field-keys-simple)
- [Avoid keywords and special characters](#avoid-keywords-and-special-characters)
### Keep measurement names, tag keys, and field keys simple
Measurement names, tag keys, and field keys should be simple and accurately
describe what each contains.
The most common cause of a complex naming convention is when you try to "embed"
data attributes into a measurement name, tag key, or field key.
#### Not recommended {.orange}
As a basic example, consider the following [line protocol](/influxdb/cloud-iox/reference/syntax/line-protocol/)
that embeds sensor metadata (location, model, and ID) into a tag key:
```
home,sensor=loc-kitchen.model-A612.id-1726ZA temp=72.1
home,sensor=loc-bath.model-A612.id-2635YB temp=71.8
```
{{< expand-wrapper >}}
{{% expand "View written data" %}}
{{% influxql/table-meta %}}
**name**: home
{{% /influxql/table-meta %}}
| time | sensor | temp |
| :------------------- | :------------------------------- | ---: |
| 2023-01-01T00:00:00Z | loc-kitchen.model-A612.id-1726ZA | 72.1 |
| 2023-01-01T00:00:00Z | loc-bath.model-A612.id-2635YB | 71.8 |
{{% /expand %}}
{{< /expand-wrapper >}}
To query data from the sensor with ID `1726ZA`, you have to use either SQL pattern
matching or regular expressions to evaluate the `sensor` tag:
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[SQL](#)
[InfluxQL](#)
[Flux](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```sql
SELECT * FROM home WHERE sensor LIKE '%id-1726ZA%'
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```sql
SELECT * FROM home WHERE sensor =~ /id-1726ZA/
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```js
import "experimental/iox"
iox.from(bucket: "example-bucket")
|> range(start: -1y)
|> filter(fn: (r) => r._measurement == "home")
|> filter(fn: (r) => r.sensor =~ /id-1726ZA/)
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
SQL pattern matching and regular expressions both complicate the query and
are less performant than simple equality expressions.
#### Recommended {.green}
The better approach would be to write each sensor attribute as an individual tag:
```
home,location=kitchen,sensor_model=A612,sensor_id=1726ZA temp=72.1
home,location=bath,sensor_model=A612,sensor_id=2635YB temp=71.8
```
{{< expand-wrapper >}}
{{% expand "View written data" %}}
{{% influxql/table-meta %}}
**name**: home
{{% /influxql/table-meta %}}
| time | location | sensor_model | sensor_id | temp |
| :------------------- | :------- | :----------- | :-------- | ---: |
| 2023-01-01T00:00:00Z | kitchen | A612 | 1726ZA | 72.1 |
| 2023-01-01T00:00:00Z | bath | A612 | 2635YB | 71.8 |
{{% /expand %}}
{{< /expand-wrapper >}}
To query data from the sensor with ID `1726ZA` using this schema, you can use a
simple equality expression:
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[SQL & InfluxQL](#)
[Flux](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```sql
SELECT * FROM home WHERE sensor_id = '1726ZA'
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```js
import "experimental/iox"
iox.from(bucket: "example-bucket")
|> range(start: -1y)
|> filter(fn: (r) => r._measurement == "home")
|> filter(fn: (r) => r.sensor_id == "1726ZA")
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
This query is easier to write and is more performant than using pattern matching
or regular expressions.
### Avoid keywords and special characters
To simplify query writing, avoid using reserved keywords or special characters
in measurement names, tag keys, and field keys.
- [SQL keywords](#)
- [InfluxQL keywords](/influxdb/cloud-iox/reference/syntax/influxql/spec/#keywords)
- [Flux keywords](/{{< latest "flux" >}}/spec/lexical-elements/#keywords)
When using SQL or InfluxQL to query measurements, tags, and fields with special
characters or keywords, you have to wrap these keys in **double quotes**.
In Flux, if using special characters in tag keys, you have to use
[bracket notation](/{{< latest "flux" >}}/data-types/composite/record/#bracket-notation)
to reference those columns.
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[SQL & InfluxQL](#)
[Flux](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```sql
SELECT
"example-field", "tag@1-23"
FROM
"example-measurement"
WHERE
"tag@1-23" = 'ABC'
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```js
import "experimental/iox"
iox.from(bucket: "example-bucket")
|> range(start: -1y)
|> filter(fn: (r) => r._measurement == "example-measurement")
|> filter(fn: (r) => r["tag@1-23"] == "ABC")
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}

View File

@ -0,0 +1,17 @@
---
title: Write CSV data to InfluxDB
description: >
Use the `influx CLI`, InfluxDB user interface, or Telegraf to write CSV data
to InfluxDB.
menu:
influxdb_cloud_iox:
name: Write CSV data
parent: Write data
weight: 103
related:
- /influxdb/cloud-iox/reference/syntax/line-protocol/
- /influxdb/cloud-iox/reference/syntax/annotated-csv/
- /influxdb/cloud-iox/reference/cli/influx/write/
---
{{< children >}}

View File

@ -0,0 +1,568 @@
---
title: Write CSV data with the influx CLI
description: >
Use the [`influx write` command](/influxdb/cloud-iox/reference/cli/influx/write/) to write CSV data
to InfluxDB. Include annotations with the CSV data to determine how the data translates
into [line protocol](/influxdb/cloud-iox/reference/syntax/line-protocol/).
menu:
influxdb_cloud_iox:
name: Use the influx CLI
parent: Write CSV data
weight: 201
related:
- /influxdb/cloud-iox/reference/syntax/line-protocol/
- /influxdb/cloud-iox/reference/syntax/annotated-csv/
- /influxdb/cloud-iox/reference/cli/influx/write/
---
Use the [`influx write` command](/influxdb/cloud-iox/reference/cli/influx/write/) to write CSV data
to InfluxDB. Include [Extended annotated CSV](/influxdb/cloud-iox/reference/syntax/annotated-csv/extended/)
annotations to specify how the data translates into [line protocol](/influxdb/cloud-iox/reference/syntax/line-protocol/).
Include annotations in the CSV file or inject them using the `--header` flag of
the `influx write` command.
##### Example write command
```sh
influx write -b example-bucket -f path/to/example.csv
```
##### example.csv
```
#datatype measurement,tag,double,dateTime:RFC3339
m,host,used_percent,time
mem,host1,64.23,2020-01-01T00:00:00Z
mem,host2,72.01,2020-01-01T00:00:00Z
mem,host1,62.61,2020-01-01T00:00:10Z
mem,host2,72.98,2020-01-01T00:00:10Z
mem,host1,63.40,2020-01-01T00:00:20Z
mem,host2,73.77,2020-01-01T00:00:20Z
```
##### Resulting line protocol
```
mem,host=host1 used_percent=64.23 1577836800000000000
mem,host=host2 used_percent=72.01 1577836800000000000
mem,host=host1 used_percent=62.61 1577836810000000000
mem,host=host2 used_percent=72.98 1577836810000000000
mem,host=host1 used_percent=63.40 1577836820000000000
mem,host=host2 used_percent=73.77 1577836820000000000
```
{{% note %}}
To test the CSV to line protocol conversion process, use the `influx write dryrun`
command to print the resulting line protocol to stdout rather than write to InfluxDB.
{{% /note %}}
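For example, a dry-run sketch (the file path is a placeholder):
```sh
influx write dryrun -f path/to/example.csv
```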
- [CSV Annotations](#csv-annotations)
- [Inject annotation headers](#inject-annotation-headers)
- [Skip annotation headers](#skip-annotation-headers)
- [Process input as CSV](#process-input-as-csv)
- [Specify CSV character encoding](#specify-csv-character-encoding)
- [Skip rows with errors](#skip-rows-with-errors)
- [Advanced examples](#advanced-examples)
## CSV Annotations
Use **CSV annotations** to specify which element of line protocol each CSV column
represents and how to format the data. CSV annotations are rows at the beginning
of a CSV file that describe column properties.
The `influx write` command supports [Extended annotated CSV](/influxdb/cloud-iox/reference/syntax/annotated-csv/extended)
which provides options for specifying how CSV data should be converted into line
protocol and how data is formatted.
To write data to InfluxDB, data must include the following:
- [measurement](/influxdb/cloud-iox/reference/syntax/line-protocol/#measurement)
- [field set](/influxdb/cloud-iox/reference/syntax/line-protocol/#field-set)
- [timestamp](/influxdb/cloud-iox/reference/syntax/line-protocol/#timestamp) _(Optional but recommended)_
- [tag set](/influxdb/cloud-iox/reference/syntax/line-protocol/#tag-set) _(Optional)_
Use CSV annotations to specify which of these elements each column represents.
## Write raw query results back to InfluxDB
Flux returns query results in [annotated CSV](/influxdb/cloud-iox/reference/syntax/annotated-csv/).
These results include all annotations necessary to write the data back to InfluxDB.
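A minimal sketch of this round trip (bucket names and the Flux query are placeholders), using the `influx query` command's `--raw` flag to keep the annotated CSV output:
```sh
# Query data and save the raw annotated CSV results
influx query 'from(bucket: "example-bucket") |> range(start: -1d)' --raw > query-results.csv
# Write the annotated CSV results to another bucket
influx write -b example-bucket-2 -f query-results.csv
```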
## Inject annotation headers
If the CSV data you want to write to InfluxDB does not contain the annotations
required to properly convert the data to line protocol, use the `--header` flag
to inject annotation rows into the CSV data.
```sh
influx write -b example-bucket \
-f path/to/example.csv \
--header "#constant measurement,birds" \
--header "#datatype dateTime:2006-01-02,long,tag"
```
{{< flex >}}
{{% flex-content %}}
##### example.csv
```
date,sighted,loc
2020-01-01,12,Boise
2020-06-01,78,Boise
2020-01-01,54,Seattle
2020-06-01,112,Seattle
2020-01-01,9,Detroit
2020-06-01,135,Detroit
```
{{% /flex-content %}}
{{% flex-content %}}
##### Resulting line protocol
```
birds,loc=Boise sighted=12i 1577836800000000000
birds,loc=Boise sighted=78i 1590969600000000000
birds,loc=Seattle sighted=54i 1577836800000000000
birds,loc=Seattle sighted=112i 1590969600000000000
birds,loc=Detroit sighted=9i 1577836800000000000
birds,loc=Detroit sighted=135i 1590969600000000000
```
{{% /flex-content %}}
{{< /flex >}}
#### Use files to inject headers
The `influx write` command supports importing multiple files in a single command.
Include annotations and header rows in their own file and import them with the write command.
Files are read in the order in which they're provided.
```sh
influx write -b example-bucket \
-f path/to/headers.csv \
-f path/to/example.csv
```
{{< flex >}}
{{% flex-content %}}
##### headers.csv
```
#constant measurement,birds
#datatype dateTime:2006-01-02,long,tag
```
{{% /flex-content %}}
{{% flex-content %}}
##### example.csv
```
date,sighted,loc
2020-01-01,12,Boise
2020-06-01,78,Boise
2020-01-01,54,Seattle
2020-06-01,112,Seattle
2020-01-01,9,Detroit
2020-06-01,135,Detroit
```
{{% /flex-content %}}
{{< /flex >}}
##### Resulting line protocol
```
birds,loc=Boise sighted=12i 1577836800000000000
birds,loc=Boise sighted=78i 1590969600000000000
birds,loc=Seattle sighted=54i 1577836800000000000
birds,loc=Seattle sighted=112i 1590969600000000000
birds,loc=Detroit sighted=9i 1577836800000000000
birds,loc=Detroit sighted=135i 1590969600000000000
```
## Skip annotation headers
Some CSV data may include header rows that conflict with or lack the annotations
necessary to write CSV data to InfluxDB.
Use the `--skipHeader` flag to specify the **number of rows to skip** at the
beginning of the CSV data.
```sh
influx write -b example-bucket \
-f path/to/example.csv \
--skipHeader=2
```
You can then [inject new header rows](#inject-annotation-headers) to rename columns
and provide the necessary annotations.
## Process input as CSV
The `influx write` command automatically processes files with the `.csv` extension as CSV files.
If your CSV file uses a different extension, use the `--format` flag to explicitly
declare the format of the input file.
```sh
influx write -b example-bucket \
-f path/to/example.txt \
--format csv
```
{{% note %}}
The `influx write` command assumes all input files are line protocol unless they
include the `.csv` extension or you explicitly declare the `csv` format.
{{% /note %}}
## Specify CSV character encoding
The `influx write` command assumes CSV files contain UTF-8 encoded characters.
If your CSV data uses a different character encoding, specify the encoding
with the `--encoding` flag.
```sh
influx write -b example-bucket \
-f path/to/example.csv \
--encoding "UTF-16"
```
## Skip rows with errors
If a row in your CSV data is missing an
[element required to write to InfluxDB](/influxdb/cloud-iox/reference/syntax/line-protocol/#elements-of-line-protocol)
or contains incorrectly formatted data, the `influx write` command returns an error
when processing that row and cancels the write request.
To skip rows with errors, use the `--skipRowOnError` flag.
```sh
influx write -b example-bucket \
-f path/to/example.csv \
--skipRowOnError
```
{{% warn %}}
Skipped rows are ignored and are not written to InfluxDB.
{{% /warn %}}
Use the `--errors-file` flag to record errors to a file.
The error file identifies all rows that cannot be imported and includes error messages for debugging.
For example:
```
error : line 3: column 'a': '1.1' cannot fit into long data type
cpu,1.1
```
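For example, a sketch that combines `--skipRowOnError` with `--errors-file` (paths are placeholders):
```sh
influx write -b example-bucket \
  -f path/to/example.csv \
  --skipRowOnError \
  --errors-file path/to/errors.csv
```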
## Advanced examples
- [Define constants](#define-constants)
- [Annotation shorthand](#annotation-shorthand)
- [Ignore columns](#ignore-columns)
- [Use alternate numeric formats](#use-alternate-numeric-formats)
- [Use alternate boolean format](#use-alternate-boolean-format)
- [Use different timestamp formats](#use-different-timestamp-formats)
---
### Define constants
Use the Extended annotated CSV [`#constant` annotation](/influxdb/cloud-iox/reference/syntax/annotated-csv/extended/#constant)
to add a column and value to each row in the CSV data.
{{< flex >}}
{{% flex-content %}}
##### CSV with constants
```
#constant measurement,example
#constant tag,source,csv
#datatype long,dateTime:RFC3339
count,time
1,2020-01-01T00:00:00Z
4,2020-01-02T00:00:00Z
9,2020-01-03T00:00:00Z
18,2020-01-04T00:00:00Z
```
{{% /flex-content %}}
{{% flex-content %}}
##### Resulting line protocol
```
example,source=csv count=1 1577836800000000000
example,source=csv count=4 1577923200000000000
example,source=csv count=9 1578009600000000000
example,source=csv count=18 1578096000000000000
```
{{% /flex-content %}}
{{< /flex >}}
---
### Annotation shorthand
Extended annotated CSV supports [annotation shorthand](/influxdb/cloud-iox/reference/syntax/annotated-csv/extended/#annotation-shorthand),
which lets you define the **column label**, **datatype**, and **default value** in the column header.
{{< flex >}}
{{% flex-content %}}
##### CSV with annotation shorthand
```
m|measurement,count|long|0,time|dateTime:RFC3339
example,1,2020-01-01T00:00:00Z
example,4,2020-01-02T00:00:00Z
example,,2020-01-03T00:00:00Z
example,18,2020-01-04T00:00:00Z
```
{{% /flex-content %}}
{{% flex-content %}}
##### Resulting line protocol
```
example count=1 1577836800000000000
example count=4 1577923200000000000
example count=0 1578009600000000000
example count=18 1578096000000000000
```
{{% /flex-content %}}
{{< /flex >}}
#### Replace column header with annotation shorthand
It's possible to replace the column header row in a CSV file with annotation
shorthand without modifying the CSV file.
This lets you define column data types and default values while writing to InfluxDB.
To replace an existing column header row with annotation shorthand:
1. Use the `--skipHeader` flag to ignore the existing column header row.
2. Use the `--header` flag to inject a new column header row that uses annotation shorthand.
```sh
influx write -b example-bucket \
-f example.csv \
--skipHeader=1 \
--header="m|measurement,count|long|0,time|dateTime:RFC3339"
```
{{< flex >}}
{{% flex-content %}}
##### Unmodified example.csv
```
m,count,time
example,1,2020-01-01T00:00:00Z
example,4,2020-01-02T00:00:00Z
example,,2020-01-03T00:00:00Z
example,18,2020-01-04T00:00:00Z
```
{{% /flex-content %}}
{{% flex-content %}}
##### Resulting line protocol
```
example count=1i 1577836800000000000
example count=4i 1577923200000000000
example count=0i 1578009600000000000
example count=18i 1578096000000000000
```
{{% /flex-content %}}
{{< /flex >}}
---
### Ignore columns
Use the Extended annotated CSV [`#datatype ignored` annotation](/influxdb/cloud-iox/reference/syntax/annotated-csv/extended/#ignored)
to ignore columns when writing CSV data to InfluxDB.
{{< flex >}}
{{% flex-content %}}
##### CSV data with ignored column
```
#datatype measurement,long,time,ignored
m,count,time,foo
example,1,2020-01-01T00:00:00Z,bar
example,4,2020-01-02T00:00:00Z,bar
example,9,2020-01-03T00:00:00Z,baz
example,18,2020-01-04T00:00:00Z,baz
```
{{% /flex-content %}}
{{% flex-content %}}
##### Resulting line protocol
```
m count=1i 1577836800000000000
m count=4i 1577923200000000000
m count=9i 1578009600000000000
m count=18i 1578096000000000000
```
{{% /flex-content %}}
{{< /flex >}}
---
### Use alternate numeric formats
If your CSV data contains numeric values that use a fraction separator other than the default (`.`)
or that contain group separators, [define your numeric format](/influxdb/cloud-iox/reference/syntax/annotated-csv/extended/#double)
in the `double`, `long`, and `unsignedLong` datatype annotations.
{{% note %}}
If your **numeric format separators** include a comma (`,`), wrap the column annotation in double
quotes (`""`) to prevent the comma from being parsed as a column separator or delimiter.
You can also [define a custom column separator](/influxdb/cloud-iox/reference/syntax/annotated-csv/extended/#define-custom-column-separator).
{{% /note %}}
{{< tabs-wrapper >}}
{{% tabs %}}
[Floats](#)
[Integers](#)
[Uintegers](#)
{{% /tabs %}}
{{% tab-content %}}
{{< flex >}}
{{% flex-content %}}
##### CSV with non-default float values
```
#datatype measurement,"double:.,",dateTime:RFC3339
m,lbs,time
example,"1,280.7",2020-01-01T00:00:00Z
example,"1,352.5",2020-01-02T00:00:00Z
example,"1,862.8",2020-01-03T00:00:00Z
example,"2,014.9",2020-01-04T00:00:00Z
```
{{% /flex-content %}}
{{% flex-content %}}
##### Resulting line protocol
```
example lbs=1280.7 1577836800000000000
example lbs=1352.5 1577923200000000000
example lbs=1862.8 1578009600000000000
example lbs=2014.9 1578096000000000000
```
{{% /flex-content %}}
{{< /flex >}}
{{% /tab-content %}}
{{% tab-content %}}
{{< flex >}}
{{% flex-content %}}
##### CSV with non-default integer values
```
#datatype measurement,"long:.,",dateTime:RFC3339
m,lbs,time
example,"1,280.0",2020-01-01T00:00:00Z
example,"1,352.0",2020-01-02T00:00:00Z
example,"1,862.0",2020-01-03T00:00:00Z
example,"2,014.9",2020-01-04T00:00:00Z
```
{{% /flex-content %}}
{{% flex-content %}}
##### Resulting line protocol
```
example lbs=1280i 1577836800000000000
example lbs=1352i 1577923200000000000
example lbs=1862i 1578009600000000000
example lbs=2014i 1578096000000000000
```
{{% /flex-content %}}
{{< /flex >}}
{{% /tab-content %}}
{{% tab-content %}}
{{< flex >}}
{{% flex-content %}}
##### CSV with non-default uinteger values
```
#datatype measurement,"unsignedLong:.,",dateTime:RFC3339
m,lbs,time
example,"1,280.0",2020-01-01T00:00:00Z
example,"1,352.0",2020-01-02T00:00:00Z
example,"1,862.0",2020-01-03T00:00:00Z
example,"2,014.9",2020-01-04T00:00:00Z
```
{{% /flex-content %}}
{{% flex-content %}}
##### Resulting line protocol
```
example lbs=1280u 1577836800000000000
example lbs=1352u 1577923200000000000
example lbs=1862u 1578009600000000000
example lbs=2014u 1578096000000000000
```
{{% /flex-content %}}
{{< /flex >}}
{{% /tab-content %}}
{{< /tabs-wrapper >}}
---
### Use alternate boolean format
Line protocol supports only [specific boolean values](/influxdb/cloud-iox/reference/syntax/line-protocol/#boolean).
If your CSV data contains boolean values that line protocol does not support,
[define your boolean format](/influxdb/cloud-iox/reference/syntax/annotated-csv/extended/#boolean)
in the `boolean` datatype annotation.
{{< flex >}}
{{% flex-content %}}
##### CSV with non-default boolean values
```
#datatype measurement,"boolean:y,Y,1:n,N,0",dateTime:RFC3339
m,verified,time
example,y,2020-01-01T00:00:00Z
example,n,2020-01-02T00:00:00Z
example,1,2020-01-03T00:00:00Z
example,N,2020-01-04T00:00:00Z
```
{{% /flex-content %}}
{{% flex-content %}}
##### Resulting line protocol
```
example verified=true 1577836800000000000
example verified=false 1577923200000000000
example verified=true 1578009600000000000
example verified=false 1578096000000000000
```
{{% /flex-content %}}
{{< /flex >}}
---
### Use different timestamp formats
The `influx write` command automatically detects **RFC3339** and **number** formatted
timestamps when converting CSV to line protocol.
If using a different timestamp format, [define your timestamp format](/influxdb/cloud-iox/reference/syntax/annotated-csv/extended/#datetime)
in the `dateTime` datatype annotation.
{{< flex >}}
{{% flex-content %}}
##### CSV with non-default timestamps
```
#datatype measurement,dateTime:2006-01-02,field
m,time,lbs
example,2020-01-01,1280.7
example,2020-01-02,1352.5
example,2020-01-03,1862.8
example,2020-01-04,2014.9
```
{{% /flex-content %}}
{{% flex-content %}}
##### Resulting line protocol
```
example lbs=1280.7 1577836800000000000
example lbs=1352.5 1577923200000000000
example lbs=1862.8 1578009600000000000
example lbs=2014.9 1578096000000000000
```
{{% /flex-content %}}
{{< /flex >}}
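To verify that a custom `dateTime` format parses your timestamps as expected, you can preview the conversion without writing any data. The following sketch uses `influx write dryrun`, which outputs the resulting line protocol; the file path is a placeholder:
```sh
# Preview the line protocol generated from the annotated CSV without writing it
influx write dryrun \
  --format csv \
  --file path/to/example.csv
```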

View File

@ -0,0 +1,121 @@
---
title: Use Telegraf to write CSV data
description: >
Use the Telegraf `file` input plugin to read and parse CSV data into
[line protocol](/influxdb/cloud-iox/reference/syntax/line-protocol/)
and write it to InfluxDB.
menu:
influxdb_cloud_iox:
name: Use Telegraf
identifier: write-csv-telegraf
parent: Write CSV data
weight: 203
related:
- /{{< latest "telegraf" >}}/data_formats/input/csv/
- /influxdb/cloud-iox/write-data/use-telegraf/
---
Use the Telegraf `file` input plugin to read and parse CSV data into
[line protocol](/influxdb/cloud-iox/reference/syntax/line-protocol/)
and write it to InfluxDB.
[Telegraf](/{{< latest "telegraf" >}}/) is a plugin-based agent that collects
metrics from different sources and writes them to specified destinations.
## Configure Telegraf to read CSV files
1. Add and enable the [`inputs.file` plugin](/{{< latest "telegraf" >}}/plugins/#input-file)
in your Telegraf configuration file.
2. Use the `files` option to specify the list of CSV files to read.
CSV files must be accessible by the Telegraf agent.
3. Set the `data_format` option to `csv`.
4. Define all other `csv_` configuration options specific to the CSV data you
want to write to InfluxDB.
_For detailed information about each of the CSV format configuration options,
see [CSV input data format](/{{< latest "telegraf" >}}/data_formats/input/csv/)._
```toml
[[inputs.file]]
files = ["/path/to/example.csv"]
data_format = "csv"
csv_header_row_count = 0
csv_column_names = []
csv_column_types = []
csv_skip_rows = 0
csv_metadata_rows = 0
csv_metadata_separators = [":", "="]
csv_metadata_trim_set = ""
csv_skip_columns = 0
csv_delimiter = ","
csv_comment = ""
csv_trim_space = false
csv_tag_columns = []
csv_measurement_column = ""
csv_timestamp_column = ""
csv_timestamp_format = ""
csv_timezone = ""
csv_skip_values = []
csv_skip_errors = false
csv_reset_mode = "none"
```
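To check how Telegraf parses your CSV data before writing anything to InfluxDB, you can run the agent in test mode. The following is a minimal sketch; the configuration file path is a placeholder:
```sh
# Print parsed metrics to stdout without writing to any outputs
telegraf --config /path/to/telegraf.conf --test
```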
## Configure Telegraf to write to InfluxDB
1. Add and enable the [`outputs.influxdb_v2`](/{{< latest "telegraf" >}}/plugins/#output-influxdb_v2)
plugin in your Telegraf configuration file.
2. Include the following options:
- **urls**: List of
{{% cloud-only %}}[InfluxDB Cloud region URLs](/influxdb/cloud-iox/reference/regions/){{% /cloud-only %}}
{{% oss-only %}}[InfluxDB URLs](/{{< latest "influxdb" >}}/reference/regions/){{% /oss-only %}}
to write data to.
- **token**: InfluxDB API token.
- **organization**: InfluxDB organization name.
- **bucket**: InfluxDB bucket to write to.
```toml
[[outputs.influxdb_v2]]
urls = ["http://localhost:8086"]
token = "$INFLUX_TOKEN"
organization = "example-org"
bucket = "example-bucket"
```
{{< expand-wrapper >}}
{{% expand "View full example Telegraf configuration file" %}}
```toml
[[inputs.file]]
files = ["/path/to/example.csv"]
data_format = "csv"
csv_header_row_count = 0
csv_column_names = []
csv_column_types = []
csv_skip_rows = 0
csv_metadata_rows = 0
csv_metadata_separators = [":", "="]
csv_metadata_trim_set = ""
csv_skip_columns = 0
csv_delimiter = ","
csv_comment = ""
csv_trim_space = false
csv_tag_columns = []
csv_measurement_column = ""
csv_timestamp_column = ""
csv_timestamp_format = ""
csv_timezone = ""
csv_skip_values = []
csv_skip_errors = false
csv_reset_mode = "none"
[[outputs.influxdb_v2]]
urls = ["http://localhost:8086"]
token = "$INFLUX_TOKEN"
organization = "example-org"
bucket = "example-bucket"
```
{{% /expand %}}
{{< /expand-wrapper >}}
**Restart the Telegraf agent** to apply the configuration change and write the CSV
data to InfluxDB.
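For example, if Telegraf was installed as a systemd service (a common default, but verify how your agent is managed), restart it with `systemctl`:
```sh
# Restart the Telegraf service so it reloads the updated configuration
sudo systemctl restart telegraf

# Or run Telegraf in the foreground with the updated configuration file
telegraf --config /path/to/telegraf.conf
```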

View File

@ -0,0 +1,32 @@
---
title: Use the InfluxDB UI to write CSV data
description: >
Use the InfluxDB user interface (UI) to write CSV data to InfluxDB.
menu:
influxdb_cloud_iox:
name: Use the InfluxDB UI
identifier: write-csv-ui
parent: Write CSV data
weight: 202
related:
- /influxdb/cloud-iox/reference/syntax/annotated-csv/
---
Use the InfluxDB user interface (UI) to write CSV data to InfluxDB.
1. In the navigation menu on the left, click **Load Data** > **Sources**.
{{< nav-icon "data" >}}
2. Under **File Upload**, select **Upload a CSV**.
Verify your CSV file follows the supported
[annotated CSV](/influxdb/cloud-iox/reference/syntax/annotated-csv/) syntax.
3. Select the bucket to write to.
4. Do one of the following:
- To upload a file, drag and drop your file onto the UI.
- To enter data manually, select the **Enter Manually** tab and then paste
your annotated CSV data.
5. Click **Write Data**.

View File

@ -0,0 +1,21 @@
---
title: Delete data
description: >
To delete data from an IOx-backed InfluxDB Cloud bucket, please contact
[InfluxData support](https://support.influxdata.com).
menu:
influxdb_cloud_iox:
name: Delete data
parent: Write data
weight: 107
influxdb/cloud-iox/tags: [delete]
# related:
# - /influxdb/cloud-iox/reference/syntax/delete-predicate/
# - /influxdb/cloud-iox/reference/cli/influx/delete/
---
The InfluxDB `/api/v2/delete` API endpoint has been disabled for InfluxDB
IOx-backed organizations. To delete data from an IOx-backed bucket, please
contact InfluxData Support.
<a class="btn" href="https://support.influxdata.com">Contact InfluxData Support</a>

View File

@ -0,0 +1,82 @@
---
title: Migrate data to the InfluxDB IOx storage engine
description: >
Migrate data from InfluxDB backed by TSM (OSS, Enterprise, or Cloud) to
InfluxDB Cloud backed by InfluxDB IOx.
menu:
influxdb_cloud_iox:
name: Migrate data
parent: Write data
weight: 104
---
Migrate data to InfluxDB Cloud backed by InfluxDB IOx from other
InfluxDB instances backed by TSM including InfluxDB OSS 1.x, 2.x,
InfluxDB Enterprise, and InfluxDB Cloud.
- [Should you migrate?](#should-you-migrate)
- [Are you currently limited by series cardinality?](#are-you-currently-limited-by-series-cardinality)
- [Do you want to use SQL to query your data?](#do-you-want-to-use-sql-to-query-your-data)
- [Do you want better InfluxQL performance?](#do-you-want-better-influxql-performance)
- [Do you depend on a specific cloud provider or region?](#do-you-depend-on-a-specific-cloud-provider-or-region)
- [Are you reliant on Flux queries and Flux tasks?](#are-you-reliant-on-flux-queries-and-flux-tasks)
- [Data migration guides](#data-migration-guides)
## Should you migrate?
There are important things to consider when migrating to InfluxDB Cloud backed
by InfluxDB IOx. The following questions will help guide your decision to migrate.
#### Are you currently limited by series cardinality?
**Yes, you should migrate**. Series cardinality is a major limiting factor with
the InfluxDB TSM storage engine. The more unique series in your data, the less
performant your database.
The IOx storage engine supports nearly limitless series cardinality and is,
without question, the better solution for high series cardinality workloads.
#### Do you want to use SQL to query your data?
**Yes, you should migrate**. InfluxDB {{< current-version >}} backed by InfluxDB
IOx lets you query your time series data with SQL. For more information about
querying your data with SQL, see:
- [Query data with SQL](/influxdb/cloud-iox/...)
- [InfluxDB SQL reference](/influxdb/cloud-iox/...)
#### Do you want better InfluxQL performance?
**Yes, you should migrate**. One of the primary goals when designing the InfluxDB
IOx storage engine was to enable performant implementations of both SQL and InfluxQL.
When compared to querying InfluxDB backed by TSM (InfluxDB OSS 1.x, 2.x, and Enterprise),
InfluxQL queries are more performant when querying InfluxDB backed by InfluxDB IOx.
#### Do you depend on a specific cloud provider or region?
**You should maybe migrate**. InfluxDB Cloud instances backed by InfluxDB IOx
are available from the following providers:
{{< cloud_regions type=iox-list >}}
If your deployment requires other cloud providers or regions, you may need to
wait until the IOx storage engine is available in a region that meets your requirements.
We are currently working to make InfluxDB IOx available on more providers and
in more regions around the world.
#### Are you reliant on Flux queries and Flux tasks?
**You should maybe migrate**. Flux queries are less performant against the IOx
storage engine. Flux is optimized to work with the TSM storage engine, but these
optimizations do not apply to the on-disk structure of InfluxDB IOx.
To maintain performant Flux queries against the IOx storage engine, you need to
update Flux queries to use a mixture of both SQL and Flux—SQL to query the base
dataset and Flux to perform other transformations that SQL does not support.
For information about using SQL and Flux together for performant queries, see
[Use SQL and Flux to query data](/influxdb/cloud-iox/...).
---
## Data migration guides
{{< children >}}

View File

@ -0,0 +1,286 @@
---
title: Migrate data from InfluxDB 1.x to IOx in InfluxDB Cloud
description: >
To migrate data from a TSM-backed InfluxDB 1.x (OSS or Enterprise) to an
InfluxDB Cloud IOx-backed organization, export the data as line protocol and
write the exported data to an IOx bucket in your InfluxDB Cloud organization.
menu:
influxdb_cloud_iox:
name: Migrate from 1.x to IOx
parent: Migrate data
weight: 103
---
To migrate data from an InfluxDB 1.x OSS or Enterprise instance to InfluxDB Cloud
backed by InfluxDB IOx, export the data as line protocol and write the exported
data to an IOx bucket in your InfluxDB Cloud organization.
Because full data migrations will likely exceed your organization's limits and
adjustable quotas, migrate your data in batches.
{{% cloud %}}
All write requests are subject to your InfluxDB Cloud organization's
[rate limits and adjustable quotas](/influxdb/cloud-iox/account-management/limits/).
{{% /cloud %}}
## Tools to use
The migration process uses the following tools:
- **`influx_inspect` utility**:
The [`influx_inspect` utility](/{{< latest "influxdb" "v1" >}}/tools/influx_inspect/#export)
is packaged with InfluxDB 1.x OSS and Enterprise.
- **InfluxDB 2.x `influx` CLI**:
The [2.x `influx` CLI]((/influxdb/cloud/tools/influx-cli/)) is packaged
separately from InfluxDB OSS 2.x and InfluxDB Cloud.
[Download and install the 2.x CLI](/influxdb/cloud/tools/influx-cli/).
- **InfluxDB Cloud user interface (UI)**:
Visit [cloud2.influxdata.com](https://cloud2.influxdata.com) to access the
InfluxDB Cloud UI.
{{% note %}}
#### InfluxDB 1.x and 2.x CLIs are unique
If both the **InfluxDB 1.x and 2.x `influx` CLIs** are installed in your `$PATH`,
rename one of the binaries to ensure you're executing commands with the
correct CLI.
{{% /note %}}
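To confirm which binary a given command resolves to, check the version it reports. The following is a quick sketch; note that the 2.x CLI reports its version with `influx version`, while the 1.x CLI uses `influx -version`:
```sh
# Show which influx binary is first on your PATH
which influx

# The 2.x CLI reports its version with:
influx version

# The 1.x CLI reports its version with:
influx -version
```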
## Migrate data
1. **Export data from your InfluxDB 1.x instance as line protocol.**
Use the **InfluxDB 1.x `influx_inspect export` utility** to export data as
line protocol and store it in a file.
Include the following:
- ({{< req "Required" >}}) `-lponly` flag to export line protocol without InfluxQL DDL or DML.
- ({{< req "Required" >}}) `-out` flag with a path to an output file.
Default is `~/.influxdb/export`. _Any subsequent export commands without
the output file defined will overwrite the existing export file._
- `-compress` flag to use gzip to compress the output.
- `-datadir` flag with the path to your InfluxDB 1.x `data` directory.
Only required if the `data` directory is at a non-default location.
For information about default locations, see
[InfluxDB OSS 1.x file system layout](/{{< latest "influxdb" "v1" >}}/concepts/file-system-layout/#file-system-layout)
or [InfluxDB Enterprise 1.x file system layout](/{{< latest "enterprise_influxdb" >}}/concepts/file-system-layout/#file-system-layout).
- `-waldir` flag with the path to your InfluxDB 1.x `wal` directory.
Only required if the `wal` directory is at a non-default location.
For information about default locations, see
[InfluxDB OSS 1.x file system layout](/{{< latest "influxdb" "v1" >}}/concepts/file-system-layout/#file-system-layout)
or [InfluxDB Enterprise 1.x file system layout](/{{< latest "enterprise_influxdb" >}}/concepts/file-system-layout/#file-system-layout).
- `-database` flag with a specific database name to export.
By default, all databases are exported.
- `-retention` flag with a specific retention policy to export.
By default, all retention policies are exported.
- `-start` flag with an RFC3339 timestamp that defines the earliest time to export.
Default is `1677-09-20T16:27:54-07:44`.
- `-end` flag with an RFC3339 timestamp that defines the latest time to export.
Default is `2262-04-11T16:47:16-07:00`.
{{% note %}}
We recommend exporting each database and retention policy combination separately
to easily write the exported data into corresponding InfluxDB {{< current-version >}}
buckets.
{{% /note %}}
##### Export all data in a database and retention policy to a file
```sh
influx_inspect export \
-lponly \
-database example-db \
-retention example-rp \
-out path/to/export-file.lp
```
##### View more export command examples:
{{< expand-wrapper >}}
{{% expand "Export all data to a file" %}}
```sh
influx_inspect export \
-lponly \
-out path/to/export-file.lp
```
{{% /expand %}}
{{% expand "Export all data to a compressed file" %}}
```sh
influx_inspect export \
-lponly \
-compress \
-out path/to/export-file.lp.gzip
```
{{% /expand %}}
{{% expand "Export data within time bounds to a file" %}}
```sh
influx_inspect export \
-lponly \
-start 2020-01-01T00:00:00Z \
-end 2023-01-01T00:00:00Z \
-out path/to/export-file.lp
```
{{% /expand %}}
{{% expand "Export a database and all its retention policies to a file" %}}
```sh
influx_inspect export \
-lponly \
-database example-db \
-out path/to/export-file.lp
```
{{% /expand %}}
{{% expand "Export a specific database and retention policy to a file" %}}
```sh
influx_inspect export \
-lponly \
-database example-db \
-retention example-rp \
-out path/to/export-file.lp
```
{{% /expand %}}
{{% expand "Export all data from _non-default_ `data` and `wal` directories" %}}
```sh
influx_inspect export \
-lponly \
-datadir path/to/influxdb/data/ \
-waldir path/to/influxdb/wal/ \
-out path/to/export-file.lp
```
{{% /expand %}}
{{< /expand-wrapper >}}
2. Create InfluxDB Cloud buckets for each InfluxDB 1.x database and retention policy combination.
InfluxDB {{< current-version >}} combines InfluxDB 1.x databases and retention policies
into buckets--named locations for time series data with specified retention periods.
{{< expand-wrapper >}}
{{% expand "View example 1.x databases and retention policies as InfluxDB Cloud buckets" %}}
If you have the following InfluxDB 1.x data structure:
- example-db <span style="opacity:.5;">_(database)_</span>
- autogen <span style="opacity:.5;">_(retention policy)_</span>
- historical-1mo <span style="opacity:.5;">_(retention policy)_</span>
- historical-6mo <span style="opacity:.5;">_(retention policy)_</span>
- historical-1y <span style="opacity:.5;">_(retention policy)_</span>
You would create the following InfluxDB {{< current-version >}} buckets:
- example-db/autogen
- example-db/historical-1mo
- example-db/historical-6mo
- example-db/historical-1y
{{% /expand %}}
{{< /expand-wrapper >}}
Use the **InfluxDB 2.x `influx` CLI** or the **InfluxDB {{< current-version >}} user interface (UI)**
to create a bucket.
{{< tabs-wrapper >}}
{{% tabs %}}
[influx CLI](#)
[InfluxDB UI](#)
{{% /tabs %}}
{{% tab-content %}}
<!----------------------------- BEGIN CLI CONTENT ----------------------------->
Use the [`influx bucket create` command](/influxdb/cloud-iox/reference/cli/influx/bucket/create/)
to create a new bucket.
**Provide the following**:
- [InfluxDB Cloud connection and authentication credentials](#)
- `-n, --name` flag with the bucket name.
- `-r, --retention` flag with the bucket's retention period duration.
Supported retention periods depend on your InfluxDB Cloud plan.
```sh
influx bucket create \
--name example-db/autogen \
--retention 7d
```
<!------------------------------ END CLI CONTENT ------------------------------>
{{% /tab-content %}}
{{% tab-content %}}
<!------------------------------ BEGIN UI CONTENT ----------------------------->
1. Go to
{{% oss-only %}}[localhost:8086](http://localhost:8086){{% /oss-only %}}
{{% cloud-only %}}[cloud2.influxdata.com](https://cloud2.influxdata.com){{% /cloud-only %}}
in a browser to log in and access the InfluxDB UI.
2. Navigate to **Load Data** > **Buckets** using the left navigation bar.
{{< nav-icon "load data" >}}
3. Click **+ {{< caps >}}Create bucket{{< /caps >}}**.
4. Provide a bucket name (for example: `example-db/autogen`) and select a
[retention period](/influxdb/cloud-iox/reference/glossary/#retention-period).
Supported retention periods depend on your InfluxDB Cloud plan.
5. Click **{{< caps >}}Create{{< /caps >}}**.
<!------------------------------- END UI CONTENT ------------------------------>
{{% /tab-content %}}
{{< /tabs-wrapper >}}
3. **Write the exported line protocol to your InfluxDB Cloud organization backed by InfluxDB IOx.**
Use the **InfluxDB 2.x CLI** to write data to InfluxDB Cloud.
While you can use the `/api/v2/write` API endpoint to write data directly,
the `influx write` command lets you define the rate at which data is written
to avoid exceeding your organization's rate limits.
Use the `influx write` command and include the following:
- [InfluxDB Cloud connection and authentication credentials](#authentication-credentials)
- `-b, --bucket` flag to identify the target bucket.
- `-f, --file` flag with the path to the line protocol file to import.
- `-rate-limit` flag with a rate limit that matches your InfluxDB Cloud
organization's write rate limit.
- `--compression` flag to identify the compression type of the import file.
Options are `none` or `gzip`. Default is `none`.
{{< cli/influx-creds-note >}}
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[Uncompressed](#)
[Compressed](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```sh
influx write \
--bucket example-db/autogen \
--file path/to/export-file.lp \
--rate-limit "300 MB / 5 min"
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```sh
influx write \
--bucket example-db/autogen \
--file path/to/export-file.lp.gzip \
--rate-limit "300 MB / 5 min" \
--compression gzip
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
Repeat for each export file and target bucket.
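If you exported each database and retention policy to its own file, a simple shell loop can help you repeat the write for each export file and target bucket. The bucket names and the file-naming convention below are hypothetical; adjust them to match your exports:
```sh
# Hypothetical example: one export file per database/retention-policy bucket,
# named like "example-db-autogen.lp" for the "example-db/autogen" bucket.
for bucket in example-db/autogen example-db/historical-1mo; do
  influx write \
    --bucket "$bucket" \
    --file "path/to/${bucket//\//-}.lp" \
    --rate-limit "300 MB / 5 min"
done
```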

View File

@ -0,0 +1,392 @@
---
title: Migrate data from TSM to IOx in InfluxDB Cloud
description: >
To migrate data from a TSM-backed InfluxDB Cloud organization to an InfluxDB
IOx-backed organization, query the data in time-based batches and write the
queried data to an IOx bucket in your InfluxDB Cloud organization.
menu:
influxdb_cloud_iox:
name: Migrate from TSM to IOx
parent: Migrate data
weight: 102
---
To migrate data from an InfluxDB Cloud organization backed by TSM to an organization
backed by InfluxDB IOx, query the data from your TSM-backed buckets in time-based
batches and write the queried data to a bucket in your InfluxDB IOx-backed organization.
Because full data migrations will likely exceed your organization's limits and
adjustable quotas, migrate your data in batches.
The following guide provides instructions for setting up an InfluxDB task
that queries data from an InfluxDB Cloud TSM-backed bucket in time-based batches
and writes each batch to another InfluxDB Cloud IOx-backed bucket in another
organization.
{{% cloud %}}
All query and write requests are subject to your InfluxDB Cloud organization's
[rate limits and adjustable quotas](/influxdb/cloud-iox/account-management/limits/).
{{% /cloud %}}
- [Set up the migration](#set-up-the-migration)
- [Migration task](#migration-task)
- [Configure the migration](#configure-the-migration)
- [Migration Flux script](#migration-flux-script)
- [Configuration help](#configuration-help)
- [Monitor the migration progress](#monitor-the-migration-progress)
- [Troubleshoot migration task failures](#troubleshoot-migration-task-failures)
## Set up the migration
{{% note %}}
The migration process requires two buckets in your source InfluxDB
organization—one bucket to store the data you're migrating and a second bucket
to store migration metadata.
If you're using the [InfluxDB Cloud Free Plan](/influxdb/cloud/account-management/limits/#free-plan),
and have more than one bucket to migrate, you will exceed your plan's bucket limit.
To migrate more than one bucket, you need to [upgrade to the Usage-based plan](/influxdb/cloud/account-management/billing/#upgrade-to-usage-based-plan)
to complete the migration.
{{% /note %}}
1. **In the InfluxDB Cloud (IOx) organization you're migrating data _to_**:
1. [Create a bucket](/influxdb/cloud-iox/organizations/buckets/create-bucket/)
**to migrate data to**.
2. [Create an API token](/influxdb/cloud-iox/security/tokens/create-token/)
with **write access** to the bucket you want to migrate to.
2. **In the InfluxDB Cloud (TSM) organization you're migrating data _from_**:
1. Add the **InfluxDB Cloud API token from the IOx-backed organization _(created in step 1b)_**
as a secret using the key, `INFLUXDB_IOX_TOKEN`.
_See [Add secrets](/influxdb/cloud/security/secrets/add/) for more information._
2. [Create a bucket](/influxdb/cloud/organizations/buckets/create-bucket/)
**to store temporary migration metadata**.
3. [Create a new task](/influxdb/cloud/process-data/manage-tasks/create-task/)
using the provided [migration task](#migration-task).
Update the necessary [migration configuration options](#configure-the-migration).
4. _(Optional)_ Set up [migration monitoring](#monitor-the-migration-progress).
5. Save the task.
{{% note %}}
Newly-created tasks are enabled by default, so the data migration begins when you save the task.
{{% /note %}}
**After the migration is complete**, each subsequent migration task execution
will fail with the following error:
```
error exhausting result iterator: error calling function "die" @41:9-41:86:
Batch range is beyond the migration range. Migration is complete.
```
## Migration task
### Configure the migration
1. Specify how often you want the task to run using the `task.every` option.
_See [Determine your task interval](#determine-your-task-interval)._
2. Define the following properties in the `migration`
[record](/{{< latest "flux" >}}/data-types/composite/record/):
##### migration
- **start**: Earliest time to include in the migration.
_See [Determine your migration start time](#determine-your-migration-start-time)._
- **stop**: Latest time to include in the migration.
- **batchInterval**: Duration of each time-based batch.
_See [Determine your batch interval](#determine-your-batch-interval)._
- **batchBucket**: InfluxDB Cloud (TSM) bucket to store migration batch metadata in.
- **sourceBucket**: InfluxDB Cloud (TSM) bucket to migrate data from.
- **destinationHost**: [InfluxDB Cloud (IOx) region URL](/influxdb/cloud-iox/reference/regions)
to migrate data to.
- **destinationOrg**: InfluxDB Cloud (IOx) organization to migrate data to.
- **destinationToken**: InfluxDB Cloud (IOx) API token. To keep the API token secure, store
it as a secret in InfluxDB Cloud (TSM).
- **destinationBucket**: InfluxDB Cloud (IOx) bucket to migrate data to.
### Migration Flux script
```js
import "array"
import "experimental"
import "date"
import "influxdata/influxdb/secrets"
// Configure the task
option task = {every: 5m, name: "Migrate data from TSM to IOx"}
// Configure the migration
migration = {
start: 2022-01-01T00:00:00Z,
stop: 2022-02-01T00:00:00Z,
batchInterval: 1h,
batchBucket: "migration",
sourceBucket: "example-cloud-bucket",
destinationHost: "https://cloud2.influxdata.com",
destinationOrg: "example-destination-org",
destinationToken: secrets.get(key: "INFLUXDB_IOX_TOKEN"),
destinationBucket: "example-destination-bucket",
}
// batchRange dynamically returns a record with start and stop properties for
// the current batch. It queries migration metadata stored in the
// `migration.batchBucket` to determine the stop time of the previous batch.
// It uses the previous stop time as the new start time for the current batch
// and adds the `migration.batchInterval` to determine the current batch stop time.
batchRange = () => {
_lastBatchStop =
(from(bucket: migration.batchBucket)
|> range(start: migration.start)
|> filter(fn: (r) => r._field == "batch_stop")
|> filter(fn: (r) => r.dstOrg == migration.destinationOrg)
|> filter(fn: (r) => r.dstBucket == migration.destinationBucket)
|> last()
|> findRecord(fn: (key) => true, idx: 0))._value
_batchStart =
if exists _lastBatchStop then
time(v: _lastBatchStop)
else
migration.start
return {start: _batchStart, stop: date.add(d: migration.batchInterval, to: _batchStart)}
}
// Define a static record with batch start and stop time properties
batch = batchRange()
// Check to see if the current batch start time is beyond the migration.stop
// time and exit with an error if it is.
finished =
if batch.start >= migration.stop then
die(msg: "Batch range is beyond the migration range. Migration is complete.")
else
"Migration in progress"
// Query all data from the specified source bucket within the batch-defined time
// range. To limit migrated data by measurement, tag, or field, add a `filter()`
// function after `range()` with the appropriate predicate fn.
data = () =>
from(bucket: migration.sourceBucket)
|> range(start: batch.start, stop: batch.stop)
// rowCount is a stream of tables that contains the number of rows returned in
// the batch and is used to generate batch metadata.
rowCount =
data()
|> group(columns: ["_start", "_stop"])
|> count()
// emptyRange is a stream of tables that acts as filler data if the batch is
// empty. This is used to generate batch metadata for empty batches and is
// necessary to correctly increment the time range for the next batch.
emptyRange = array.from(rows: [{_start: batch.start, _stop: batch.stop, _value: 0}])
// metadata returns a stream of tables representing batch metadata.
metadata = () => {
_input =
if exists (rowCount |> findRecord(fn: (key) => true, idx: 0))._value then
rowCount
else
emptyRange
return
_input
|> map(
fn: (r) =>
({
_time: now(),
_measurement: "batches",
srcBucket: migration.sourceBucket,
dstOrg: migration.destinationOrg,
dstBucket: migration.destinationBucket,
batch_start: string(v: batch.start),
batch_stop: string(v: batch.stop),
rows: r._value,
percent_complete:
float(v: int(v: r._stop) - int(v: migration.start)) / float(
v: int(v: migration.stop) - int(v: migration.start),
) * 100.0,
}),
)
|> group(columns: ["_measurement", "srcOrg", "srcBucket", "dstBucket"])
}
// Write the queried data to the specified InfluxDB Cloud (IOx) bucket.
data()
|> to(
host: migration.destinationHost,
org: migration.destinationOrg,
token: migration.destinationToken,
bucket: migration.destinationBucket
)
// Generate and store batch metadata in the migration.batchBucket.
metadata()
|> experimental.to(bucket: migration.batchBucket)
```
### Configuration help
{{< expand-wrapper >}}
<!----------------------- BEGIN Determine task interval ----------------------->
{{% expand "Determine your task interval" %}}
The task interval determines how often the migration task runs and is defined by
the [`task.every` option](/influxdb/cloud/process-data/task-options/#every).
InfluxDB Cloud rate limits and quotas reset every five minutes, so
**we recommend a `5m` task interval**.
You can do shorter task intervals and execute the migration task more often,
but you need to balance the task interval with your [batch interval](#determine-your-batch-interval)
and the amount of data returned in each batch.
If the total amount of data queried in each five-minute interval exceeds your
InfluxDB Cloud organization's [rate limits and quotas](/influxdb/cloud/account-management/limits/),
the batch will fail until rate limits and quotas reset.
{{% /expand %}}
<!------------------------ END Determine task interval ------------------------>
<!---------------------- BEGIN Determine migration start ---------------------->
{{% expand "Determine your migration start time" %}}
The `migration.start` time should be at or near the same time as the earliest
data point you want to migrate.
All migration batches are determined using the `migration.start` time and
`migration.batchInterval` settings.
To find the time of the earliest point in your bucket, run the following query:
```js
from(bucket: "example-cloud-bucket")
|> range(start: 0)
|> group()
|> first()
|> keep(columns: ["_time"])
```
{{% /expand %}}
<!----------------------- END Determine migration start ----------------------->
<!----------------------- BEGIN Determine batch interval ---------------------->
{{% expand "Determine your batch interval" %}}
The `migration.batchInterval` setting controls the time range queried by each batch.
The "density" of the data in your InfluxDB Cloud bucket and your InfluxDB Cloud
organization's [rate limits and quotas](/influxdb/cloud/account-management/limits/)
determine what your batch interval should be.
For example, if you're migrating data collected from hundreds of sensors with
points recorded every second, your batch interval will need to be shorter.
If you're migrating data collected from five sensors with points recorded every
minute, your batch interval can be longer.
It all depends on how much data gets returned in a single batch.
If points occur at regular intervals, you can get a fairly accurate estimate of
how much data will be returned in a given time range by using the `/api/v2/query`
endpoint to execute a query for the time range duration and then measuring the
size of the response body.
The following `curl` command queries an InfluxDB Cloud bucket for the last day
and returns the size of the response body in bytes.
You can customize the range duration to match your specific use case and
data density.
```sh
INFLUXDB_CLOUD_ORG=<your_influxdb_cloud_org>
INFLUXDB_CLOUD_TOKEN=<your_influxdb_cloud_token>
INFLUXDB_CLOUD_BUCKET=<your_influxdb_cloud_bucket>
curl -so /dev/null --request POST \
https://cloud2.influxdata.com/api/v2/query?org=$INFLUXDB_CLOUD_ORG \
--header "Authorization: Token $INFLUXDB_CLOUD_TOKEN" \
--header "Accept: application/csv" \
--header "Content-type: application/vnd.flux" \
--data "from(bucket:\"$INFLUXDB_CLOUD_BUCKET\") |> range(start: -1d, stop: now())" \
--write-out '%{size_download}'
```
{{% note %}}
You can also use other HTTP API tools like [Postman](https://www.postman.com/)
that provide the size of the response body.
{{% /note %}}
Divide the output of this command by 1000000 to convert it to megabytes (MB).
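For example, a small shell snippet can do the conversion; the byte count below is a placeholder for the value returned by the `curl` command above:
```sh
# Placeholder value; substitute the byte count returned by the curl command above
size_bytes=52428800

# Convert bytes to megabytes (MB)
awk -v b="$size_bytes" 'BEGIN { printf "%.2f MB\n", b / 1000000 }'
```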
```
batchInterval = (write-rate-limit-mb / response-body-size-mb) * range-duration
```
For example, if the response body of your query that returns data from one day
is 1 MB and you're using the InfluxDB Cloud Free Plan with a write limit of
5 MB per five minutes:
```js
batchInterval = (5 / 1) * 1d
// batchInterval = 5d
```
You _could_ query 5 days of data before hitting your write limit, but this is just an estimate.
We recommend setting the `batchInterval` slightly lower than the calculated interval
to allow for variation between batches.
So in this example, **it would be best to set your `batchInterval` to `4d`**.
##### Important things to note
- This assumes no other queries are running in your source InfluxDB Cloud organization.
- This assumes no other writes are happening in your destination InfluxDB Cloud organization.
{{% /expand %}}
<!------------------------ END Determine batch interval ----------------------->
{{< /expand-wrapper >}}
## Monitor the migration progress
The [InfluxDB TSM to IOx Migration Community template](https://github.com/influxdata/community-templates/tree/master/influxdb-tsm-iox-migration/)
installs the migration task outlined in this guide as well as a dashboard
for monitoring running data migrations.
{{< img-hd src="/img/influxdb/cloud-iox-migration-dashboard.png" alt="InfluxDB Cloud migration dashboard" />}}
<a class="btn" href="https://github.com/influxdata/community-templates/tree/master/influxdb-tsm-iox-migration/#quick-install">Install the InfluxDB Cloud Migration template</a>
## Troubleshoot migration task failures
If the migration task fails, [view your task logs](/influxdb/cloud/process-data/manage-tasks/task-run-history/)
to identify the specific error. Below are common causes of migration task failures.
- [Exceeded rate limits](#exceeded-rate-limits)
- [Invalid API token](#invalid-api-token)
- [Query timeout](#query-timeout)
### Exceeded rate limits
If your data migration causes you to exceed your InfluxDB Cloud organization's
limits and quotas, the task will return an error similar to:
```
too many requests
```
**Possible solutions**:
- Update the `migration.batchInterval` setting in your migration task to use
a smaller interval. Each batch will then query less data.
### Invalid API token
If the API token you store as the `INFLUXDB_IOX_TOKEN` secret doesn't have write access
to your InfluxDB Cloud (IOx) bucket, the task will return an error similar to:
```
unauthorized access
```
**Possible solutions**:
- Ensure the API token has write access to your InfluxDB Cloud (IOx) bucket.
- Generate a new API token with write access to the bucket you want to migrate to.
Then, update the `INFLUXDB_IOX_TOKEN` secret in your InfluxDB Cloud (TSM)
instance with the new token.
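For example, if you manage secrets with the `influx` CLI against your InfluxDB Cloud (TSM) organization, a sketch of rotating the secret might look like the following (the token value is a placeholder):
```sh
# Replace the stored secret value with a new API token
influx secret update \
  --key INFLUXDB_IOX_TOKEN \
  --value <new-iox-api-token>
```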
### Query timeout
The InfluxDB Cloud query timeout is 90 seconds. If it takes longer than this to
return the data from the batch interval, the query will time out and the
task will fail.
**Possible solutions**:
- Update the `migration.batchInterval` setting in your migration task to use
a smaller interval. Each batch will then query less data and take less time
to return results.

View File

@ -0,0 +1,60 @@
---
title: Use Telegraf to write data
seotitle: Use the Telegraf agent to collect and write data
weight: 101
description: >
Use Telegraf to collect and write data to InfluxDB.
Create Telegraf configurations in the InfluxDB UI or manually configure Telegraf.
aliases:
- /influxdb/cloud-iox/collect-data/advanced-telegraf
- /influxdb/cloud-iox/collect-data/use-telegraf
- /influxdb/cloud-iox/write-data/use-telegraf/
menu:
influxdb_cloud_iox:
name: Use Telegraf
parent: Write data
---
[Telegraf](https://www.influxdata.com/time-series-platform/telegraf/) is InfluxData's
data collection agent for collecting and reporting metrics.
Its vast library of input plugins and "plug-and-play" architecture lets you quickly
and easily collect metrics from many different sources.
For a list of available plugins, see [Telegraf plugins](/{{< latest "telegraf" >}}/plugins/).
#### Requirements
- **Telegraf 1.9.2 or greater**.
_For information about installing Telegraf, see the
[Telegraf Installation instructions](/{{< latest "telegraf" >}}/install/)._
## Basic Telegraf usage
Telegraf is a plugin-based agent with plugins that are enabled and configured in
your Telegraf configuration file (`telegraf.conf`).
Each Telegraf configuration must **have at least one input plugin and one output plugin**.
Telegraf input plugins retrieve metrics from different sources.
Telegraf output plugins write those metrics to a destination.
Use the `outputs.influxdb_v2` plugin to write metrics collected by Telegraf to InfluxDB.
```toml
# ...
[[outputs.influxdb_v2]]
urls = ["http://localhost:8086"]
token = "$INFLUX_TOKEN"
organization = "example-org"
bucket = "example-bucket"
# ...
```
_For more information, see [Manually configure Telegraf](/influxdb/cloud-iox/write-data/use-telegraf/configure/manual-config/#enable-and-configure-the-influxdb-v2-output-plugin)._
## Use Telegraf with InfluxDB
{{< children >}}
{{< influxdbu "telegraf-102" >}}

View File

@ -0,0 +1,20 @@
---
title: Configure Telegraf for InfluxDB
description: >
Telegraf is a plugin-based agent with plugins that are enabled and configured in
your Telegraf configuration file (`telegraf.conf`).
Learn how to create Telegraf configuration files that work with InfluxDB.
menu:
influxdb_cloud_iox:
name: Configure Telegraf
parent: Use Telegraf
weight: 101
---
Telegraf is a plugin-based agent with plugins that are enabled and configured in
your Telegraf configuration file (`telegraf.conf`).
The following are options you have for creating Telegraf configuration files
that work with InfluxDB {{< current-version >}}.
{{< children >}}

View File

@ -0,0 +1,137 @@
---
title: Automatically configure Telegraf
seotitle: Automatically configure Telegraf for InfluxDB
description: >
Use the InfluxDB UI to automatically generate a Telegraf configuration,
then start Telegraf using the generated configuration file.
aliases:
- /influxdb/cloud-iox/collect-data/use-telegraf/auto-config
- /influxdb/cloud-iox/write-data/use-telegraf/auto-config
menu:
influxdb_cloud_iox:
parent: Configure Telegraf
name: Automatically
weight: 201
related:
- /influxdb/cloud-iox/use-telegraf/telegraf-configs/create/
---
The InfluxDB user interface (UI) can automatically create Telegraf configuration files based on user-selected Telegraf plugins.
This article describes how to create a Telegraf configuration in the InfluxDB UI and
start Telegraf using the generated configuration file.
{{< youtube M8KP7FAb2L0 >}}
{{% note %}}
_View the [requirements](/influxdb/cloud-iox/write-data/no-code/use-telegraf#requirements)
for using Telegraf with InfluxDB {{< current-version >}}._
{{% /note %}}
## Create a Telegraf configuration
1. Open the InfluxDB UI _(default: [localhost:8086](http://localhost:8086))_.
2. In the navigation menu on the left, select **Data** (**Load Data**) > **Telegraf**.
{{< nav-icon "load data" >}}
3. Click **{{< icon "plus" >}} Create Configuration**.
4. In the **Bucket** dropdown, select the bucket where Telegraf will store collected data.
5. Select one or more of the available plugin groups and click **Continue**.
6. Review the list of **Plugins to Configure** for configuration requirements.
Plugins listed with a <span style="color:#32B08C">{{< icon "check" >}}</span>
require no additional configuration.
To configure a plugin or access plugin documentation, click the plugin name.
7. Provide a **Telegraf Configuration Name** and an optional **Telegraf Configuration Description**.
8. Adjust configuration settings as needed. To find configuration settings for a specific plugin, see [Telegraf plugins](/telegraf/latest/plugins/).
9. Click **Save and Test**.
10. The **Test Your Configuration** page provides instructions for how to start Telegraf using the generated configuration.
_See [Start Telegraf](#start-telegraf) below for detailed information about what each step does._
11. Once Telegraf is running, click **Listen for Data** to confirm Telegraf is successfully sending data to InfluxDB.
Once confirmed, a **Connection Found!** message appears.
12. Click **Finish**. Your Telegraf configuration name and the associated bucket name appear in the list of Telegraf configurations.
### Windows
If you plan to monitor a Windows host using the System plugin, you must complete the following steps.
1. In the list of Telegraf configurations, double-click your
Telegraf configuration, and then click **Download Config**.
2. Open the downloaded Telegraf configuration file and replace the `[[inputs.processes]]` plugin with one of the following Windows plugins, depending on your Windows configuration:
- [`[[inputs.win_perf_counters]]`](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/win_perf_counters)
- [`[[inputs.win_services]]`](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/win_services)
3. Save the file and place it in a directory that **telegraf.exe** can access.
## Start Telegraf
Requests to the [InfluxDB v2 API](/influxdb/cloud/reference/api/) must include an API token.
A token identifies specific permissions to the InfluxDB instance.
### Configure your token as an environment variable
1. Find your API token. _For information about viewing tokens, see [View tokens](/influxdb/cloud-iox/security/tokens/view-tokens/)._
2. To configure your API token as the `INFLUX_TOKEN` environment variable, run the command appropriate for your operating system and command-line tool:
{{< tabs-wrapper >}}
{{% tabs %}}
[macOS or Linux](#)
[Windows](#)
{{% /tabs %}}
{{% tab-content %}}
```sh
export INFLUX_TOKEN=YourAuthenticationToken
```
{{% /tab-content %}}
{{% tab-content %}}
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[PowerShell](#)
[CMD](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```sh
$env:INFLUX_TOKEN = "YourAuthenticationToken"
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```sh
set INFLUX_TOKEN=YourAuthenticationToken
# Make sure to include a space character at the end of this command.
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
{{% /tab-content %}}
{{< /tabs-wrapper >}}
### Start the Telegraf service
Start the Telegraf service using the `-config` flag to specify the location of the generated Telegraf configuration file.
- For Windows, the location is always a local file path.
- For Linux and macOS, the location can be a local file path or URL.
Telegraf starts using the Telegraf configuration pulled from the InfluxDB API.
{{% note %}}
InfluxDB host URLs and ports differ between InfluxDB OSS and InfluxDB Cloud.
For the exact command, see the Telegraf configuration **Setup Instructions** in the InfluxDB UI.
{{% /note %}}
```sh
telegraf -config http://localhost:8086/api/v2/telegrafs/0xoX00oOx0xoX00o
```
## Manage Telegraf configurations
For more information about managing Telegraf configurations in InfluxDB, see
[Telegraf configurations](/influxdb/cloud-iox/write-data/use-telegraf/telegraf-configs/).

View File

@ -0,0 +1,175 @@
---
title: Manually configure Telegraf
seotitle: Manually configure Telegraf for InfluxDB
description: >
Update existing or create new Telegraf configurations to use the `influxdb_v2`
output plugin to write to InfluxDB.
Start Telegraf using the custom configuration.
menu:
influxdb_cloud_iox:
name: Manually
parent: Configure Telegraf
weight: 202
influxdb/cloud-iox/tags: [telegraf]
related:
- /{{< latest "telegraf" >}}/plugins//
- /influxdb/cloud-iox/use-telegraf/telegraf-configs/create/
- /influxdb/cloud-iox/use-telegraf/telegraf-configs/update/
---
Use the Telegraf `influxdb_v2` output plugin to collect and write metrics into
an InfluxDB {{< current-version >}} bucket.
This article describes how to enable the `influxdb_v2` output plugin in new and
existing Telegraf configurations,
then start Telegraf using the custom configuration file.
{{< youtube qFS2zANwIrc >}}
{{% note %}}
_View the [requirements](/influxdb/cloud-iox/write-data/use-telegraf#requirements)
for using Telegraf with InfluxDB {{< current-version >}}._
{{% /note %}}
## Configure Telegraf input and output plugins
Configure Telegraf input and output plugins in the Telegraf configuration file (typically named `telegraf.conf`).
Input plugins collect metrics.
Output plugins define destinations where metrics are sent.
_See [Telegraf plugins](/{{< latest "telegraf" >}}/plugins/) for a complete list of available plugins._
### Manually add Telegraf plugins
To manually add any of the available [Telegraf plugins](/{{< latest "telegraf" >}}/plugins/), follow the steps below.
1. Find the plugin you want to enable from the complete list of available
[Telegraf plugins](/{{< latest "telegraf" >}}/plugins/).
2. Click **View** to the right of the plugin name to open the plugin page on GitHub.
For example, view the [MQTT plugin GitHub page](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/mqtt_consumer/README.md).
3. Copy and paste the example configuration into your Telegraf configuration file
(typically named `telegraf.conf`).
### Enable and configure the InfluxDB v2 output plugin
To send data to an InfluxDB {{< current-version >}} instance, enable the
[`influxdb_v2` output plugin](https://github.com/influxdata/telegraf/blob/master/plugins/outputs/influxdb_v2/README.md)
in the `telegraf.conf`.
To find an example InfluxDB v2 output plugin configuration in the UI:
1. In the navigation menu on the left, select **Data (Load Data)** > **Telegraf**.
{{< nav-icon "load data" >}}
2. Click **InfluxDB Output Plugin**.
3. Click **Copy to Clipboard** to copy the example configuration or **Download Config** to save a copy.
4. Paste the example configuration into your `telegraf.conf` and specify the options below.
The InfluxDB output plugin configuration contains the following options:
##### urls
An array of URLs for your InfluxDB {{< current-version >}} instances.
See [InfluxDB Cloud regions](/influxdb/cloud-iox/reference/regions/) for
information about which URLs to use.
**{{< cloud-name "short">}} requires HTTPS**.
##### token
Your InfluxDB {{< current-version >}} authorization token.
For information about viewing tokens, see [View tokens](/influxdb/cloud-iox/security/tokens/view-tokens/).
{{% note %}}
###### Avoid storing tokens in `telegraf.conf`
We recommend storing your tokens by setting the `INFLUX_TOKEN` environment
variable and including the environment variable in your configuration file.
{{< tabs-wrapper >}}
{{% tabs %}}
[macOS or Linux](#)
[Windows](#)
{{% /tabs %}}
{{% tab-content %}}
```sh
export INFLUX_TOKEN=YourAuthenticationToken
```
{{% /tab-content %}}
{{% tab-content %}}
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[PowerShell](#)
[CMD](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```sh
$env:INFLUX_TOKEN = "YourAuthenticationToken"
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```sh
set INFLUX_TOKEN=YourAuthenticationToken
# Make sure to include a space character at the end of this command.
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
{{% /tab-content %}}
{{< /tabs-wrapper >}}
_See the [example `telegraf.conf` below](#example-influxdb_v2-configuration)._
{{% /note %}}
##### organization
The name of the organization that owns the target bucket.
##### bucket
The name of the bucket to write data to.
#### Example influxdb_v2 configuration
The example below illustrates an `influxdb_v2` configuration.
```toml
# ...
[[outputs.influxdb_v2]]
urls = ["http://localhost:8086"]
token = "$INFLUX_TOKEN"
organization = "example-org"
bucket = "example-bucket"
# ...
```
{{% note %}}
##### Write to InfluxDB v1.x and v2.6
If a Telegraf agent is already writing to an InfluxDB v1.x database,
enabling the InfluxDB v2 output plugin will write data to both v1.x and v2.6 instances.
{{% /note %}}
## Add a custom Telegraf configuration to InfluxDB
To add a custom or manually configured Telegraf configuration to your collection
of Telegraf configurations in InfluxDB, use the [`influx telegrafs create`](/influxdb/cloud-iox/reference/cli/influx/telegrafs/create/)
or [`influx telegrafs update`](/influxdb/cloud-iox/reference/cli/influx/telegrafs/update/) commands.
For more information, see:
- [Create a Telegraf configuration](/influxdb/cloud-iox/telegraf-configs/create/#use-the-influx-cli)
- [Update a Telegraf configuration](/influxdb/cloud-iox/telegraf-configs/update/#use-the-influx-cli)
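For example, a minimal sketch of uploading a custom `telegraf.conf` might look like the following; the configuration name and description are placeholders:
```sh
# Upload a manually configured telegraf.conf to InfluxDB as a new Telegraf configuration
influx telegrafs create \
  --name "CSV file ingest" \
  --description "Parses CSV files and writes them to InfluxDB" \
  --file /path/to/custom/telegraf.conf
```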
## Start Telegraf
Start the Telegraf service using the `--config` flag to specify the location of your `telegraf.conf`.
```sh
telegraf --config /path/to/custom/telegraf.conf
```

View File

@ -0,0 +1,62 @@
---
title: Dual write to InfluxDB OSS and InfluxDB Cloud
description: >
Configure Telegraf to write data to both InfluxDB OSS and InfluxDB Cloud simultaneously.
menu:
influxdb_cloud_iox:
name: Dual write to OSS & Cloud
parent: Use Telegraf
weight: 203
---
If you want to back up your data in two places, or if you're migrating from OSS to Cloud, you may want to set up dual write.
Use Telegraf to write to both InfluxDB OSS and InfluxDB Cloud simultaneously.
The sample configuration below uses:
- The [InfluxDB v2 output plugin](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/influxdb_v2) twice: first pointing to the OSS instance and then to the cloud instance.
- Two different tokens, one for OSS and one for Cloud. You'll need to configure both tokens as environment variables (see [Configure your token as an environment variable](/influxdb/v2.6/write-data/no-code/use-telegraf/auto-config/#configure-your-token-as-an-environment-variable)).
Use the configuration below to write your data to both OSS and Cloud instances simultaneously.
## Sample configuration
```toml
[[inputs.cpu]]
[[inputs.mem]]
## Any other inputs, processors, aggregators that you want to include in your configuration.
# Send data to InfluxDB OSS v2
[[outputs.influxdb_v2]]
## The URLs of the InfluxDB instance.
##
## Multiple URLs can be specified for a single cluster, only ONE of the
## urls will be written to each interval.
## urls exp: http://127.0.0.1:9999
urls = ["http://localhost:8086"]
## OSS token for authentication.
token = "$INFLUX_TOKEN_OSS"
## Organization is the name of the organization you want to write to. It must already exist.
organization = "influxdata"
## Destination bucket to write to.
bucket = "telegraf"
# Send data to InfluxDB Cloud - AWS West cloud instance
[[outputs.influxdb_v2]]
## The URLs of the InfluxDB Cloud instance.
urls = ["https://us-west-2-1.aws.cloud2.influxdata.com"]
## Cloud token for authentication.
token = "$INFLUX_TOKEN_CLOUD"
## Organization is the name of the organization you want to write to. It must already exist.
organization = "example@domain.com"
## Destination bucket to write into.
bucket = "telegraf"
```
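Before starting Telegraf with this configuration, export both tokens using the environment variable names referenced in the sample configuration above, then start the agent. The configuration file path below is a placeholder:
```sh
# Export the tokens referenced by the dual-write configuration
export INFLUX_TOKEN_OSS=YourOSSAuthenticationToken
export INFLUX_TOKEN_CLOUD=YourCloudAuthenticationToken

# Start Telegraf with the dual-write configuration
telegraf --config /path/to/dual-write-telegraf.conf
```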

View File

@ -0,0 +1,17 @@
---
title: Manage Telegraf configurations
description: >
InfluxDB Cloud lets you automatically generate Telegraf configurations or upload customized
Telegraf configurations that collect metrics and write them to InfluxDB Cloud.
weight: 102
menu:
influxdb_cloud_iox:
name: Manage Telegraf configs
parent: Use Telegraf
influxdb/cloud-iox/tags: [telegraf]
related:
- /influxdb/cloud-iox/write-data/use-telegraf/manual-config/
- /influxdb/cloud-iox/write-data/use-telegraf/auto-config/
---
{{< duplicate-oss "/telegraf-configs/" >}}

View File

@ -0,0 +1,24 @@
---
title: Create a Telegraf configuration
description: >
Use the InfluxDB UI or the [`influx` CLI](/influxdb/cloud/reference/cli/influx/)
to create a Telegraf configuration.
weight: 101
menu:
influxdb_cloud_iox:
name: Create a config
parent: Manage Telegraf configs
related:
- /influxdb/cloud-iox/write-data/no-code/use-telegraf/configure/manual-config/
- /influxdb/cloud-iox/write-data/no-code/use-telegraf/configure/auto-config/
- /influxdb/cloud-iox/write-data/use-telegraf/telegraf-configs/update/
aliases:
- /influxdb/cloud-iox/telegraf-configs/create/
- /influxdb/cloud-iox/telegraf-configs/clone/
---
{{< duplicate-oss "/telegraf-configs/create" >}}
## Clone an existing Telegraf configuration
{{< duplicate-oss "/telegraf-configs/clone" >}}

View File

@ -0,0 +1,15 @@
---
title: Remove a Telegraf configuration
description: >
Use the InfluxDB UI or the [`influx` CLI](/influxdb/cloud/reference/cli/influx/)
to remove Telegraf configurations from InfluxDB.
weight: 104
menu:
influxdb_cloud_iox:
name: Remove a config
parent: Manage Telegraf configs
aliases:
- /influxdb/cloud-iox/telegraf-configs/remove/
---
{{< duplicate-oss "/telegraf-configs/remove" >}}

View File

@ -0,0 +1,15 @@
---
title: Update a Telegraf configuration
description: >
Use the InfluxDB user interface (UI) or the [`influx` CLI](/influxdb/cloud/reference/cli/influx/)
to update InfluxDB Telegraf configurations.
weight: 103
menu:
influxdb_cloud_iox:
name: Update a config
parent: Manage Telegraf configs
aliases:
- /influxdb/cloud-iox/telegraf-configs/update/
---
{{< duplicate-oss "/telegraf-configs/update" >}}

View File

@ -0,0 +1,15 @@
---
title: View Telegraf configurations
description: >
Use the InfluxDB user interface (UI) or the [`influx` CLI](/influxdb/cloud/reference/cli/influx/)
to view and download InfluxDB Telegraf configurations.
weight: 102
menu:
influxdb_cloud_iox:
name: View configs
parent: Manage Telegraf configs
aliases:
- /influxdb/cloud-iox/telegraf-configs/view/
---
{{< duplicate-oss "/telegraf-configs/view" >}}

View File

@ -1,7 +1,7 @@
---
title: InfluxDB Cloud documentation
description: >
InfluxDB Cloud is an hosted and managed version of InfluxDB v2.0, the time series platform designed to handle high write and query loads.
InfluxDB Cloud is a hosted and managed version of InfluxDB v2.0, the time series platform designed to handle high write and query loads.
Learn how to use and leverage InfluxDB Cloud in use cases such as monitoring metrics, IoT data, and events.
layout: landing-influxdb
menu:

View File

@ -67,7 +67,7 @@ The following are important definitions to understand when using InfluxDB:
##### Example InfluxDB query results
{{< influxdb/points-series >}}
{{< influxdb/points-series-flux>}}
## Tools to use

View File

@ -67,7 +67,7 @@ The following are important definitions to understand when using InfluxDB:
##### Example InfluxDB query results
{{< influxdb/points-series >}}
{{< influxdb/points-series-flux>}}
## Tools to use

View File

@ -67,7 +67,7 @@ The following are important definitions to understand when using InfluxDB:
##### Example InfluxDB query results
{{< influxdb/points-series >}}
{{< influxdb/points-series-flux >}}
## Tools to use

View File

@ -754,6 +754,16 @@ If you write a point to a series with a timestamp that matches an existing point
Related entries: [measurement](#measurement), [tag set](#tag-set), [field set](#field-set), [timestamp](#timestamp)
{{% cloud-only %}}
### primary key
With the InfluxDB IOx storage engine, the primary key is the list of columns
used to uniquely identify each row in a table.
Rows are uniquely identified by their timestamp and tag set.
{{% /cloud-only %}}
### precision
The precision configuration setting determines the timestamp precision retained for input data points.

View File

@ -13,6 +13,7 @@ cloud:
providers:
- name: Amazon Web Services
short_name: AWS
iox: true
regions:
- name: US West (Oregon)
location: Oregon, USA
@ -25,9 +26,11 @@ cloud:
- name: US East (Virginia)
location: Virginia, USA
url: https://us-east-1-1.aws.cloud2.influxdata.com
iox: true
- name: EU Frankfurt
location: Frankfurt, Germany
url: https://eu-central-1-1.aws.cloud2.influxdata.com
iox: true
- name: Asia Pacific (Australia)
location: Sydney, Australia
url: https://ap-southeast-2-1.aws.cloud2.influxdata.com

View File

@ -57,3 +57,50 @@
For more information, see the
[Linux Package Signing Key Rotation blog post](https://www.influxdata.com/blog/linux-package-signing-key-rotation/).
- id: iox-doc-fork
level: note
scope:
- /influxdb/cloud-iox/
message: |
### InfluxDB Cloud backed by InfluxDB IOx
All InfluxDB Cloud organizations created on or after **January 31, 2023**
are backed by the new InfluxDB IOx storage engine.
Check the right column of your [InfluxDB Cloud organization homepage](https://cloud2.influxdata.com)
to see which InfluxDB storage engine you're using.
**If powered by IOx**, this is the correct documentation.
**If powered by TSM**, see the [TSM-based InfluxDB Cloud documentation](/influxdb/cloud/).
- id: tsm-doc-fork
level: note
scope:
- /influxdb/cloud/
message: |
### InfluxDB Cloud backed by InfluxDB TSM
All InfluxDB Cloud organizations created on or after **January 31, 2023**
are backed by the new InfluxDB IOx storage engine which enables nearly unlimited
series cardinality and SQL query support.
Check the right column of your [InfluxDB Cloud organization homepage](https://cloud2.influxdata.com)
to see which InfluxDB storage engine you're using.
**If powered by TSM**, this is the correct documentation.
**If powered by IOx**, see the [IOx-based InfluxDB Cloud documentation](/influxdb/cloud-iox/).
- id: iox-wip
level: warn
scope:
- /influxdb/cloud-iox/
message: |
### State of the InfluxDB Cloud (IOx) documentation
The new documentation for **InfluxDB Cloud backed by InfluxDB IOx** is a work
in progress. We are adding new information and content almost daily.
Thank you for your patience!
If there is specific information you're looking for, please
[submit a documentation issue](https://github.com/influxdata/docs-v2/issues/new/choose).

View File

@ -2,10 +2,9 @@ influxdb:
name: InfluxDB
altname: InfluxDB OSS
namespace: influxdb
list_order: 2
list_order: 3
versions: [v1.3, v1.4, v1.5, v1.6, v1.7, v1.8, v2.0, v2.1, v2.2, v2.3, v2.4, v2.5, v2.6]
latest: v2.6
latest_override: v2.6
latest_patches:
"2.6": 1
"2.5": 1
@ -33,14 +32,23 @@ influxdb_cloud:
altname: InfluxDB Cloud
namespace: influxdb
versions: [cloud]
list_order: 1
list_order: 2
latest: cloud
link: "https://cloud2.influxdata.com/signup"
influxdb_cloud_iox:
name: InfluxDB Cloud (IOx)
altname: InfluxDB Cloud (IOx)
namespace: influxdb
versions: [cloud-iox]
list_order: 1
latest: cloud-iox
link: "https://cloud2.influxdata.com/signup"
telegraf:
name: Telegraf
namespace: telegraf
list_order: 4
list_order: 5
versions: [v1.9, v1.10, v1.11, v1.12, v1.13, v1.14, v1.15, v1.16, v1.17, v1.18, v1.19, v1.20, v1.21, v1.22, v1.23, v1.24]
latest: v1.24
latest_patches:
@ -61,7 +69,7 @@ telegraf:
chronograf:
name: Chronograf
namespace: chronograf
list_order: 5
list_order: 6
versions: [v1.6, v1.7, v1.8, v1.9, v1.10]
latest: v1.10
latest_patches:
@ -74,7 +82,7 @@ chronograf:
kapacitor:
name: Kapacitor
namespace: kapacitor
list_order: 6
list_order: 7
versions: [v1.4, v1.5, v1.6]
latest: v1.6
latest_patches:
@ -85,7 +93,7 @@ kapacitor:
enterprise_influxdb:
name: "InfluxDB Enterprise"
namespace: enterprise_influxdb
list_order: 3
list_order: 4
versions: [v1.5, v1.6, v1.7, v1.8, v1.9, v1.10]
latest: v1.10
latest_patches:
@ -99,6 +107,6 @@ enterprise_influxdb:
flux:
name: Flux
namespace: flux
list_order: 7
list_order: 8
versions: [v0.x]
latest: v0.x
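
The `list_order` bumps above make room for the new `influxdb_cloud_iox` entry at position 1. A minimal sketch of how a template could consume this ordering (the partial that actually renders the product list is not part of this diff, so the loop below is illustrative only):

```go-html-template
{{/* Illustrative only: products render in ascending list_order, so the new
     InfluxDB Cloud (IOx) entry sorts first */}}
{{ range sort .Site.Data.products "list_order" }}
  <a href="/{{ .namespace }}/{{ .latest }}/">{{ .name }}</a>
{{ end }}
```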

View File

@ -4,7 +4,7 @@
{{ $productPathData := findRE "[^/]+.*?" .RelPermalink }}
{{ $product := index $productPathData 0 }}
{{ $currentVersion := index $productPathData 1 }}
{{ $isCloud := eq $currentVersion "cloud" }}
{{ $isCloud := in $currentVersion "cloud"}}
<div class="page-wrapper">
{{ partial "sidebar.html" . }}
@ -14,7 +14,7 @@
<div class="cards">
<div class="card main" id="get-started">
{{ if $isCloud }}
<h1>Get started with <span class="avoid-wrap">InfluxDB Cloud</span></h1>
<h1>Get started with <span class="avoid-wrap">InfluxDB Cloud{{ if in $currentVersion "iox" }} (IOx){{end}}</span></h1>
<a class="btn" href="get-started">Get started</a>
{{ else }}
<h1>Get started with <span class="avoid-wrap">InfluxDB OSS {{ replaceRE "v" "" $currentVersion }}</span></h1>
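
The `eq` to `in` swap above is the pattern repeated throughout this diff: Hugo's `in` function performs a substring check when its first argument is a string, so both `cloud` and `cloud-iox` paths count as Cloud. A quick sketch of the behavior:

```go-html-template
{{/* in SET ITEM is true when ITEM is a substring of SET (for strings) */}}
{{ $isCloud := in "cloud-iox" "cloud" }}  {{/* true: matches both cloud and cloud-iox */}}
{{ $isOSS := not (in "v2.6" "cloud") }}   {{/* true: OSS versions don't contain "cloud" */}}
```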

View File

@ -36,6 +36,7 @@
<div class="exp-btn">
<p>View the InfluxDB documentation</p>
<ul class="exp-btn-links">
<li><a href="/influxdb/cloud-iox/">InfluxDB Cloud (IOx)</a></li>
<li><a href="/influxdb/cloud/">InfluxDB Cloud</a></li>
<li><a href="/influxdb/{{ $influxdbVersionV2 }}/">InfluxDB OSS {{ replaceRE "v" "" $influxdbVersionV2 }}</a></li>
</ul>

View File

@ -1,8 +1,8 @@
{{ $productPathData := findRE "[^/]+.*?" .RelPermalink }}
{{ $product := index $productPathData 0 }}
{{ $version := index $productPathData 1 }}
{{ $influxdbOSS := and (eq $product "influxdb") (ne $version "cloud") }}
{{ $influxdbCloud := and (eq $product "influxdb") (eq $version "cloud") }}
{{ $influxdbOSS := and (eq $product "influxdb") (not (in $version "cloud")) }}
{{ $influxdbCloud := and (eq $product "influxdb") (in $version "cloud") }}
{{ if $influxdbOSS }}
{{ .Content | replaceRE `(?Us)(<li>\s*<(?:div|span) class=\'cloud\-only\'>.*<\/(?:div|span)><\!\-\- close \-\-\>\s*</li>)` "" | replaceRE `(?Us)(<(?:div|span) class=\'cloud\-only\'>.*<\/(?:div|span)><\!\-\- close \-\-\>)` "" | safeHTML}}

View File

@ -1,5 +1,5 @@
{{ $productPathData := findRE "[^/]+.*?" .RelPermalink }}
{{ $latestV2 := cond (isset .Site.Data.products.influxdb "latest_override") .Site.Data.products.influxdb.latest_override .Site.Data.products.influxdb.latest }}
{{ $latestV2 := .Site.Data.products.influxdb.latest }}
{{ .Scratch.Set "product" (index $productPathData 0) }}
{{ if in (.Scratch.Get "product") "influxdb" }}
{{ .Scratch.Set "product" "influxdb" }}

View File

@ -9,7 +9,7 @@
{{ $v2EquivalentURL := replaceRE `v[1-2]\.[0-9]{1,2}` $latestV2 .Page.Params.v2 }}
{{ $v2EquivalentPage := .GetPage (replaceRE `\/$` "" $v2EquivalentURL) }}
{{ $v2PageExists := gt (len $v2EquivalentPage.Title) 0 }}
{{ $isCloud := in .Page.RelPermalink "/influxdb/cloud/"}}
{{ $isCloud := in .Page.RelPermalink "/influxdb/cloud"}}
{{ if and (lt $currentVersion $stableVersion) (not $isCloud) }}
<div class="warn block old-version">

View File

@ -13,6 +13,9 @@
<!-- Docs Notifications -->
{{ partial "footer/notifications.html" . }}
<!-- Custom time modal trigger-->
{{ partial "footer/custom-time-trigger" . }}
</body>
{{ partial "footer/javascript.html" . }}
</html>

View File

@ -0,0 +1,5 @@
{{ if or (.Page.HasShortcode "influxdb/custom-timestamps") (.Page.HasShortcode "influxdb/custom-timestamps-span") }}
<div class="custom-time-trigger">
<a onclick='toggleModal("#influxdb-gs-date-select")'><span class="cf-icon Clock_New"></span></a>
</div>
{{ end }}

View File

@ -10,6 +10,7 @@
{{ $keybindings := resources.Get "js/keybindings.js" }}
{{ $fluxGroupKeys := resources.Get "js/flux-group-keys.js" }}
{{ $fluxCurrentTime := resources.Get "js/flux-current-time.js" }}
{{ $influxdbGSTimestamps := resources.Get "js/get-started-timestamps.js" }}
{{ $fullscreenCode := resources.Get "js/fullscreen-code.js" }}
{{ $pageFeedback := resources.Get "js/page-feedback.js" }}
{{ $homepageInteractions := resources.Get "js/home-interactions.js" }}
@ -17,6 +18,7 @@
{{ $footerjs := slice $versionSelector $contentInteractions $searchInteractions $listFilters $modals $influxdbURLs $featureCallouts $tabbedContent $notifications $keybindings $fullscreenCode $pageFeedback $homepageInteractions $fluxInfluxDBVersions | resources.Concat "js/footer.bundle.js" | resources.Fingerprint }}
{{ $fluxGroupKeyjs := slice $fluxGroupKeys | resources.Concat "js/flux-group-keys.js" | resources.Fingerprint }}
{{ $fluxCurrentTimejs := slice $fluxCurrentTime | resources.Concat "js/flux-current-time.js" | resources.Fingerprint }}
{{ $influxdbGSTimestampsjs := slice $influxdbGSTimestamps | resources.Concat "js/get-started-timestamps.js" | resources.Fingerprint }}
<!-- Load cloudUrls array -->
<script type="text/javascript">
@ -53,4 +55,10 @@
<!-- Load Flux current time js when the flux/current-time shortcode is present -->
{{ if .Page.HasShortcode "flux/current-time" }}
<script type="text/javascript" src="{{ $fluxCurrentTime.RelPermalink }}"></script>
{{ end }}
<!-- Load getting started timestamps js when the influxdb/custom-timestamps or influxdb/custom-timestamps-span shortcode is present -->
{{ if or (.Page.HasShortcode "influxdb/custom-timestamps") (.Page.HasShortcode "influxdb/custom-timestamps-span") }}
<script src="https://cdn.jsdelivr.net/npm/vanillajs-datepicker@1.2.0/dist/js/datepicker.min.js"></script>
<script type="text/javascript" src="{{ $influxdbGSTimestampsjs.RelPermalink }}"></script>
{{ end }}

View File

@ -7,6 +7,9 @@
<!-- Modal window content blocks-->
{{ partial "footer/modals/influxdb-url.html" . }}
{{ partial "footer/modals/page-feedback.html" . }}
{{ if or (.Page.HasShortcode "influxdb/custom-timestamps") (.Page.HasShortcode "influxdb/custom-timestamps-span") }}
{{ partial "footer/modals/influxdb-gs-date-select.html" }}
{{ end }}
{{ if $inStdlib }}
{{ partial "footer/modals/flux-influxdb-versions.html" . }}
{{ end }}

View File

@ -0,0 +1,6 @@
<div class="modal-content" id="influxdb-gs-date-select">
<h3>Select a new date</h3>
<p><em>Select a date in your bucket's retention period.</em></p>
<div id="custom-date-selector"></div>
<a class="btn" id="submit-custom-date" onclick="">Update</a>
</div>

View File

@ -1,11 +1,11 @@
{{ $latestInfluxDBVersion := cond (isset .Site.Data.products.influxdb "latest_override" ) .Site.Data.products.influxdb.latest_override .Site.Data.products.influxdb.latest }}
{{ $latestInfluxDBVersion := .Site.Data.products.influxdb.latest }}
{{ $latestEnterpriseVersion := .Site.Data.products.enterprise_influxdb.latest }}
{{ $OSSLink := print "/influxdb/" $latestInfluxDBVersion "/reference/urls/ "}}
{{ $CloudLink := "/influxdb/cloud/reference/regions/" }}
{{ $EnterpriseLink := print "/enterprise_influxdb/" $latestEnterpriseVersion "/administration/config-data-nodes/#http-endpoint-settings" }}
{{ $isInfluxDB := cond (gt (len (findRE `^/influxdb/|^/enterprise_influxdb/` .RelPermalink)) 0) true false }}
{{ $isOSS := cond (in .RelPermalink "/influxdb/v") true false }}
{{ $isCloud := cond (in .RelPermalink "/influxdb/cloud/") true false }}
{{ $isCloud := cond (in .RelPermalink "/influxdb/cloud") true false }}
{{ $isEnterprise := cond (in .RelPermalink "/enterprise_influxdb/") true false }}
{{ .Scratch.Set "modalTitle" "Where are you running InfluxDB?" }}

View File

@ -21,7 +21,9 @@ docsearch({
return '';
} else if (version === 'cloud') {
return 'Cloud';
} else if (/v\d\./.test(version)) {
} else if (version === 'cloud-iox') {
return 'Cloud (IOx)';
} else if (/v\d\./.test(version)) {
return version;
} else {
return '';

View File

@ -8,7 +8,7 @@
{{ $searchTag := print $product "-" $currentVersion }}
{{ if not .IsHome }}
{{ if or (eq $currentVersion (index $.Site.Data.products $product).latest) (or (eq $product "platform") (eq $product "resources")) (eq $currentVersion (index $.Site.Data.products $product).latest_override) (eq $currentVersion "cloud") }}
{{ if or (eq $currentVersion (index $.Site.Data.products $product).latest) (or (eq $product "platform") (eq $product "resources")) (in $currentVersion "cloud") }}
<meta name="docsearch:latest" content="true">
{{ end }}
{{ if and (ne $product "platform") (ne $product "resources") (ne $currentVersion "") }}

View File

@ -18,6 +18,8 @@
{{ end }}
{{ else if eq $currentVersion "cloud"}}
{{ $scratch.Set "siteTitle" "InfluxDB Cloud Documentation" }}
{{ else if eq $currentVersion "cloud-iox"}}
{{ $scratch.Set "siteTitle" "InfluxDB Cloud (IOx) Documentation" }}
{{ else if eq $currentVersion nil}}
{{ $scratch.Set "siteTitle" (print (index .Site.Data.products $product).name " Documentation") }}
{{ else }}

View File

@ -10,6 +10,8 @@
{{ .Scratch.Set "menuKey" "platform" }}
{{ else if eq $product "resources" }}
{{ .Scratch.Set "menuKey" "resources" }}
{{ else if in $currentVersion "iox" }}
{{ .Scratch.Set "menuKey" "influxdb_cloud_iox" }}
{{ else }}
{{ .Scratch.Set "menuKey" (print $product "_" (replaceRE `\.` "_" (replaceRE "v" "" $currentVersion))) }}
{{ end }}
@ -24,7 +26,11 @@
{{ else if (eq $currentVersion nil) }}
{{ .Scratch.Set "searchPlaceholder" (print "Search " (index .Site.Data.products $product).name) }}
{{ else if eq $product "influxdb" }}
{{ .Scratch.Set "searchPlaceholder" (print "Search " (index .Site.Data.products $product).name " " (cond (in $currentVersion "v") $currentVersion (title $currentVersion)) " & Flux") }}
{{ if in $currentVersion "cloud" }}
{{ .Scratch.Set "searchPlaceholder" (print "Search " (index .Site.Data.products $product).name " " (cond (in $currentVersion "iox") "Cloud (IOx)" "Cloud & Flux")) }}
{{ else }}
{{ .Scratch.Set "searchPlaceholder" (print "Search " (index .Site.Data.products $product).name " " (cond (in $currentVersion "v") $currentVersion (title $currentVersion)) " & Flux") }}
{{ end }}
{{ else }}
{{ .Scratch.Set "searchPlaceholder" (print "Search " (index .Site.Data.products $product).name " " $currentVersion) }}
{{ end }}
@ -84,7 +90,7 @@
{{ end }}
<!-- Additional resources for all docs -->
{{ if ne $product "resources" }}
{{ if and (ne $product "resources") (ne $currentVersion "cloud-iox") }}
<h4 class="resources">Additional resources</h4>
<li class="nav-category"><a href="/resources/videos/">Videos</a></li>
<li class="nav-category"><a href="/resources/how-to-guides/">How-to Guides</a></li>

View File

@ -2,14 +2,15 @@
{{ $productPathData := findRE "[^/]+.*?" .RelPermalink }}
{{ $product := index $productPathData 0 }}
{{ $currentVersion := index $productPathData 1 }}
{{ $isCloud := eq "influxdb/cloud" (print $product "/" $currentVersion )}}
{{ $isCloud := in (print $product "/" $currentVersion ) "influxdb/cloud" }}
{{ $isOSSv2 := in (print $product "/" $currentVersion ) "influxdb/v2."}}
<div class="dropdown">
{{ if or (eq $product nil) (eq $product "platform") (eq $product "resources") }}
<p class="selected">Select product</p>
{{ else if $isCloud }}
<p class="selected">{{ index .Site.Data.products.influxdb_cloud.altname }}</p>
{{ $cloudType := cond (in $currentVersion "iox") $.Site.Data.products.influxdb_cloud_iox.altname $.Site.Data.products.influxdb_cloud.altname}}
<p class="selected">{{ $cloudType }}</p>
{{ else }}
{{ $productData := (index .Site.Data.products $product) }}
<p class="selected">{{ if $productData.altname }}{{ $productData.altname }}{{ else }}{{ $productData.name }}{{ end }}</p>

View File

@ -50,4 +50,53 @@
</li>
{{ end }}
</ul>
{{ else if eq $type "iox-list" }}
<ul>
{{ range where .Site.Data.influxdb_urls.cloud.providers "iox" true }}
{{ $scratch.Set "title" .name }}
{{ if not (in .name .short_name) }}
{{ $scratch.Set "title" (print .name " (" .short_name ")")}}
{{ end }}
{{ $title := $scratch.Get "title" }}
<li><strong>{{ $title }}</strong>
<ul>
{{ range where .regions "iox" true }}
<li {{ if .status }}class="{{ .status }}"{{ end }}>{{ .name }}</li>
{{ end }}
</ul>
</li>
{{ end }}
</ul>
{{ else if eq $type "iox-table"}}
{{ range where .Site.Data.influxdb_urls.cloud.providers "iox" true }}
{{ $scratch.Set "title" .name }}
{{ if not (in .name .short_name) }}
{{ $scratch.Set "title" (print .name " (" .short_name ")")}}
{{ end }}
{{ $title := $scratch.Get "title" }}
{{ $titleID := anchorize $title }}
<h3 id="{{ $titleID }}">{{ $title }}</h3>
<table class="cloud-urls">
<thead>
<th align="left">Region</th>
<th align="left">Location</th>
<th align="left">URL(s)</th>
</thead>
{{ range where .regions "iox" true }}
<tr>
<td {{ if .status }}class="{{ .status }}"{{ end }}>{{ .name }}</td>
<td>{{ .location }}</td>
<td>
{{ if .clusters }}
{{ range .clusters }}
<p><span class="cluster-name">{{ .name }}:</span> <a href="{{ .url }}">{{ .url }}</a></p>
{{ end }}
{{ else }}
<a href="{{ .url }}">{{ .url }}</a>
{{ end }}
</td>
</tr>
{{ end }}
</table>
{{ end }}
{{ end }}
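
With the `iox: true` flags added to `influxdb_urls.yml` earlier in this diff, these two new branches render only IOx-enabled providers and regions. The shortcode name and the way `$type` is populated aren't shown here, so the following usage is a hedged sketch only:

```md
<!-- Hypothetical invocations; the shortcode and parameter names are assumed -->
{{< cloud-regions type="iox-list" >}}   <!-- nested list of IOx-enabled providers and regions -->
{{< cloud-regions type="iox-table" >}}  <!-- per-provider table of region, location, and URL(s) -->
```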

View File

@ -2,4 +2,6 @@
{{- $currentVersion := replaceRE "v" "" (index $productPathData 1) -}}
{{- $keep := .Get "keep" | default false -}}
{{- $keepClass := cond ( $keep ) " keep" "" -}}
<span class="current-version{{ $keepClass }}">{{ $currentVersion }}</span>
{{- $isCloud := in $currentVersion "cloud" -}}
{{- $versionText := cond ($isCloud) "Cloud" $currentVersion }}
<span class="current-version{{ $keepClass }}">{{ $versionText }}</span>

View File

@ -1,8 +1,11 @@
{{ $latestV2 := cond (isset .Site.Data.products.influxdb "latest_override") .Site.Data.products.influxdb.latest_override .Site.Data.products.influxdb.latest }}
{{ $productPathData := findRE "[^/]+.*?" .Page.RelPermalink }}
{{ $cloudVersion := index $productPathData 1 }}
{{ $latestV2 := .Site.Data.products.influxdb.latest }}
{{ $pathReplacement := print "/influxdb/" $latestV2 "/" }}
{{ $path := .Get 0 | default (replaceRE `^influxdb\/cloud\/` $pathReplacement .Page.File.Path) }}
{{ $relPath := .Get 0 | default "" }}
{{ $path := cond (eq $relPath "") (replaceRE `(?U)^influxdb\/cloud.*\/` $pathReplacement .Page.File.Path) (print $pathReplacement (replaceRE `^\/` "" $relPath))}}
{{ $page := .Site.GetPage $path }}
{{ with $page }}
{{ .Content | replaceRE `\/influxdb\/v2\.[0-9]{1,2}\/` "/influxdb/cloud/" | replaceRE `<span class="current-version">.*<\/span>` "Cloud" | safeHTML }}
{{ .Content | replaceRE `\/influxdb\/v2\.[0-9]{1,2}\/` (print "/influxdb/" $cloudVersion "/") | replaceRE `<span class="current-version">.*<\/span>` "Cloud" | safeHTML }}
{{ end }}
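
The reworked `$path` logic lets the cloud-duplicate shortcode serve both `cloud` and `cloud-iox` pages from the latest OSS source. A worked example, assuming `latest: v2.6` from the products data above (paths are illustrative):

```go-html-template
{{/* Example: a page at content/influxdb/cloud-iox/reference/glossary.md */}}
{{ $latestV2 := "v2.6" }}  {{/* .Site.Data.products.influxdb.latest */}}
{{ $path := replaceRE `(?U)^influxdb\/cloud.*\/` (print "/influxdb/" $latestV2 "/") "influxdb/cloud-iox/reference/glossary.md" }}
{{/* (?U) makes the match lazy, so it only consumes "influxdb/cloud-iox/";
     $path is now "/influxdb/v2.6/reference/glossary.md". Links in the fetched
     OSS content are then rewritten back to "/influxdb/cloud-iox/" before rendering. */}}
```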

View File

@ -0,0 +1 @@
<span class="get-started-timestamps">{{ .Inner }}</span>

View File

@ -0,0 +1,3 @@
<div class="get-started-timestamps">
{{ .Inner }}
</div>
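
These two new shortcodes only add a `get-started-timestamps` wrapper; the date-picker JavaScript loaded by the footer (when the shortcode is present) rewrites the hard-coded example timestamps inside the wrapper, and the modal above supplies the date selector. A sketch of how a getting-started page might use them, reusing the sample weather data from this diff (the exact markup in the getting-started docs isn't shown here):

```md
{{% influxdb/custom-timestamps %}}
| _time                | city   | temperature |
| :------------------- | :----- | ----------: |
| 2022-01-01T12:00:00Z | London |        12.0 |
{{% /influxdb/custom-timestamps %}}

Data is written starting at
{{% influxdb/custom-timestamps-span %}}2022-01-01T12:00:00Z{{% /influxdb/custom-timestamps-span %}}.
```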

View File

@ -0,0 +1,202 @@
{{- $productPathData := findRE "[^/]+.*?" .Page.RelPermalink -}}
{{- $currentVersion := index $productPathData 1 -}}
{{- $isOSS := cond (in $currentVersion "cloud") false true -}}
<div id="series-diagram-wrapper flux">
<div class="series-diagram">
<table>
<thead>
<tr>
<th align="left">_time</th>
<th align="left">_measurement</th>
<th align="left"><span class="tooltip" data-tooltip-text="Tag Key">city</span></th>
<th align="left"><span class="tooltip" data-tooltip-text="Tag Key">country</span></th>
<th align="left">_field</th>
<th align="right">_value</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left"><span class="tooltip" data-tooltip-text="Timestamp">2022-01-01T12:00:00Z</span></td>
<td align="left"><span class="tooltip" data-tooltip-text="Measurement">weather</span></td>
<td align="left"><span class="tooltip" data-tooltip-text="Tag Value">London</span></td>
<td align="left"><span class="tooltip" data-tooltip-text="Tag Value">UK</span></td>
<td align="left"><span class="tooltip" data-tooltip-text="Field Key">temperature</span></td>
<td align="right"><span class="tooltip shift-left" data-tooltip-text="Field Value">12.0</span></td>
</tr>
<tr class="point">
<td align="left">2022-02-01T12:00:00Z</td>
<td align="left">weather</td>
<td align="left">London</td>
<td align="left">UK</td>
<td align="left">temperature</td>
<td align="right">12.1</td>
</tr>
<tr>
<td align="left">2022-03-01T12:00:00Z</td>
<td align="left">weather</td>
<td align="left">London</td>
<td align="left">UK</td>
<td align="left">temperature</td>
<td align="right">11.5</td>
</tr>
<tr>
<td align="left">2022-04-01T12:00:00Z</td>
<td align="left">weather</td>
<td align="left">London</td>
<td align="left">UK</td>
<td align="left">temperature</td>
<td align="right">5.9</td>
</tr>
</tbody>
</table>
</div>
<div class="series-diagram">
<table>
<thead>
<tr>
<th align="left">_time</th>
<th align="left">_measurement</th>
<th align="left">city</th>
<th align="left">country</th>
<th align="left">_field</th>
<th align="right">_value</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">2022-01-01T12:00:00Z</td>
<td align="left">weather</td>
<td align="left">Cologne</td>
<td align="left">DE</td>
<td align="left">temperature</td>
<td align="right">13.2</td>
</tr>
<tr>
<td align="left">2022-02-01T12:00:00Z</td>
<td align="left">weather</td>
<td align="left">Cologne</td>
<td align="left">DE</td>
<td align="left">temperature</td>
<td align="right">11.5</td>
</tr>
<tr>
<td align="left">2022-03-01T12:00:00Z</td>
<td align="left">weather</td>
<td align="left">Cologne</td>
<td align="left">DE</td>
<td align="left">temperature</td>
<td align="right">10.2</td>
</tr>
<tr>
<td align="left">2022-04-01T12:00:00Z</td>
<td align="left">weather</td>
<td align="left">Cologne</td>
<td align="left">DE</td>
<td align="left">temperature</td>
<td align="right">7.9</td>
</tr>
</tbody>
</table>
</div>
{{ if not $isOSS }}
<div class="series-diagram">
<table>
<thead>
<tr>
<th align="left">_time</th>
<th align="left">_measurement</th>
<th align="left">city</th>
<th align="left">country</th>
<th align="left">_field</th>
<th align="right">_value</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">2022-01-01T12:00:00Z</td>
<td align="left">weather</td>
<td align="left">London</td>
<td align="left">UK</td>
<td align="left">humidity</td>
<td align="right">88.4</td>
</tr>
<tr>
<td align="left">2022-02-01T12:00:00Z</td>
<td align="left">weather</td>
<td align="left">London</td>
<td align="left">UK</td>
<td align="left">humidity</td>
<td align="right">94.0</td>
</tr>
<tr>
<td align="left">2022-03-01T12:00:00Z</td>
<td align="left">weather</td>
<td align="left">London</td>
<td align="left">UK</td>
<td align="left">humidity</td>
<td align="right">82.1</td>
</tr>
<tr>
<td align="left">2022-04-01T12:00:00Z</td>
<td align="left">weather</td>
<td align="left">London</td>
<td align="left">UK</td>
<td align="left">humidity</td>
<td align="right">87.6</td>
</tr>
</tbody>
</table>
</div>
<div class="series-diagram">
<table>
<thead>
<tr>
<th align="left">_time</th>
<th align="left">_measurement</th>
<th align="left">city</th>
<th align="left">country</th>
<th align="left">_field</th>
<th align="right">_value</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">2022-01-01T12:00:00Z</td>
<td align="left">weather</td>
<td align="left">Cologne</td>
<td align="left">DE</td>
<td align="left">humidity</td>
<td align="right">88.5</td>
</tr>
<tr>
<td align="left">2022-02-01T12:00:00Z</td>
<td align="left">weather</td>
<td align="left">Cologne</td>
<td align="left">DE</td>
<td align="left">humidity</td>
<td align="right">87.8</td>
</tr>
<tr>
<td align="left">2022-03-01T12:00:00Z</td>
<td align="left">weather</td>
<td align="left">Cologne</td>
<td align="left">DE</td>
<td align="left">humidity</td>
<td align="right">76.4</td>
</tr>
<tr>
<td align="left">2022-04-01T12:00:00Z</td>
<td align="left">weather</td>
<td align="left">Cologne</td>
<td align="left">DE</td>
<td align="left">humidity</td>
<td align="right">93.3</td>
</tr>
</tbody>
</table>
</div>
{{ end }}
</div>

View File

@ -0,0 +1,72 @@
<p style="margin-bottom: 0;">name: <span class="tooltip right" data-tooltip-text="measurement">weather</span></p>
<div class="sql">
<table>
<thead>
<tr>
<th align="left">time</th>
<th align="left"><span class="tooltip" data-tooltip-text="tag key">city</span></th>
<th align="left"><span class="tooltip" data-tooltip-text="tag key">country</span></th>
<th align="right"><span class="tooltip" data-tooltip-text="field key">temperature</span></th>
<th align="right"><span class="tooltip" data-tooltip-text="field key">humidity</span></th>
</tr>
</thead>
<tbody>
<tr>
<td align="left"><span class="tooltip" data-tooltip-text="timestamp">2022-01-01T12:00:00Z</span></td>
<td align="left"><span class="tooltip" data-tooltip-text="tag value">London</span></td>
<td align="left"><span class="tooltip" data-tooltip-text="tag value">UK</span></td>
<td align="right"><span class="tooltip" data-tooltip-text="field value">12.0</span></td>
<td align="right"><span class="tooltip shift-left" data-tooltip-text="field value">88.4</span></td>
</tr>
<tr class="points">
<td align="left"><span class="point one two">2022-01-01T12:00:00Z</span></td>
<td align="left"><span class="point one two">Cologne</span></td>
<td align="left"><span class="point one two">DE</span></td>
<td align="right"><span class="point one">13.2</span></td>
<td align="right"><span class="point two">88.5</span></td>
</tr>
<tr>
<td align="left">2022-02-01T12:00:00Z</td>
<td align="left">London</td>
<td align="left">UK</td>
<td align="right">12.1</td>
<td align="right">94.0</td>
</tr>
<tr>
<td align="left">2022-02-01T12:00:00Z</td>
<td align="left">Cologne</td>
<td align="left">DE</td>
<td align="right">11.5</td>
<td align="right">87.8</td>
</tr>
<tr>
<td align="left">2022-03-01T12:00:00Z</td>
<td align="left">London</td>
<td align="left">UK</td>
<td align="right">11.5</td>
<td align="right">82.1</td>
</tr>
<tr>
<td align="left">2022-03-01T12:00:00Z</td>
<td align="left">Cologne</td>
<td align="left">DE</td>
<td align="right">10.2</td>
<td align="right">76.4</td>
</tr>
<tr>
<td align="left">2022-04-01T12:00:00Z</td>
<td align="left">London</td>
<td align="left">UK</td>
<td align="right">5.9</td>
<td align="right">87.6</td>
</tr>
<tr>
<td align="left">2022-04-01T12:00:00Z</td>
<td align="left">Cologne</td>
<td align="left">DE</td>
<td align="right">7.9</td>
<td align="right">93.3</td>
</tr>
</tbody>
</table>
</div>

View File

@ -1,87 +0,0 @@
<div id="series-diagram">
<table>
<thead>
<tr>
<th align="left">_time</th>
<th align="left">_measurement</th>
<th align="left"><span class="tooltip" data-tooltip-text="tag key">location</span></th>
<th align="left">_field</th>
<th align="right">_value</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left"><span class="tooltip" data-tooltip-text="timestamp">2022-01-01T12:00:00Z</span></td>
<td align="left"><span class="tooltip" data-tooltip-text="measurement">weather</span></td>
<td align="left"><span class="tooltip" data-tooltip-text="tag value">London, UK</span></td>
<td align="left"><span class="tooltip" data-tooltip-text="field key">temperature</span></td>
<td align="right"><span class="tooltip" data-tooltip-text="field value">12.0</span></td>
</tr>
<tr class="point">
<td align="left">2022-02-01T12:00:00Z</td>
<td align="left">weather</td>
<td align="left">London, UK</td>
<td align="left">temperature</td>
<td align="right">12.1</td>
</tr>
<tr>
<td align="left">2022-03-01T12:00:00Z</td>
<td align="left">weather</td>
<td align="left">London, UK</td>
<td align="left">temperature</td>
<td align="right">11.5</td>
</tr>
<tr>
<td align="left">2022-04-01T12:00:00Z</td>
<td align="left">weather</td>
<td align="left">London, UK</td>
<td align="left">temperature</td>
<td align="right">5.9</td>
</tr>
</tbody>
</table>
</div>
<div id="series-diagram">
<table>
<thead>
<tr>
<th align="left">_time</th>
<th align="left">_measurement</th>
<th align="left">location</th>
<th align="left">_field</th>
<th align="right">_value</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">2022-01-01T12:00:00Z</td>
<td align="left">weather</td>
<td align="left">Cologne, DE</td>
<td align="left">temperature</td>
<td align="right">13.2</td>
</tr>
<tr>
<td align="left">2022-02-01T12:00:00Z</td>
<td align="left">weather</td>
<td align="left">Cologne, DE</td>
<td align="left">temperature</td>
<td align="right">11.5</td>
</tr>
<tr>
<td align="left">2022-03-01T12:00:00Z</td>
<td align="left">weather</td>
<td align="left">Cologne, DE</td>
<td align="left">temperature</td>
<td align="right">10.2</td>
</tr>
<tr>
<td align="left">2022-04-01T12:00:00Z</td>
<td align="left">weather</td>
<td align="left">Cologne, DE</td>
<td align="left">temperature</td>
<td align="right">7.9</td>
</tr>
</tbody>
</table>
</div>

Some files were not shown because too many files have changed in this diff.