Merge branch 'master' into notebooks

pull/1953/head
noramullen1 2020-12-04 09:16:33 -08:00
commit 832dec0f9f
171 changed files with 3278 additions and 1795 deletions

View File

@ -333,6 +333,57 @@ WHERE time > now() - 15m
{{< /code-tabs-wrapper >}}
~~~
### Required elements
Use the `{{< req >}}` shortcode to identify required elements in documentation with
orange text and/or asterisks. By default, the shortcode outputs the text, "Required," but
you can customize the text by passing a string argument with the shortcode.
```md
{{< req >}}
```
**Output:** Required
```md
{{< req "This is Required" >}}
```
**Output:** This is Required
#### Required elements in a list
When identifying required elements in a list, use `{{< req type="key" >}}` to generate
a "* Required" key before the list. For required elements in the list, include
`{{< req "\*" >}}` before the text of the list item. For example:
```md
{{< req type="key" >}}
- {{< req "\*" >}} **This element is required**
- {{< req "\*" >}} **This element is also required**
- **This element is NOT required**
```
### Keybinds
Use the `{{< keybind >}}` shortcode to include OS-specific keybindings/hotkeys.
The following parameters are available:
- mac
- linux
- win
- all
- other
```md
<!-- Provide keybinding for one OS and another for all others -->
{{< keybind mac="⇧⌘P" other="Ctrl+Shift+P" >}}
<!-- Provide a keybind for all OSs -->
{{< keybind all="Ctrl+Shift+P" >}}
<!-- Provide unique keybindings for each OS -->
{{< keybind mac="⇧⌘P" linux="Ctrl+Shift+P" win="Ctrl+Shift+Alt+P" >}}
```
### Related content
Use the `related` frontmatter to include links to specific articles at the bottom of an article.
@ -529,6 +580,8 @@ Below is a list of available icons (some are aliases):
- nav-orgs
- nav-tasks
- note
- notebook
- notebooks
- org
- orgs
- pause
@ -555,13 +608,14 @@ In many cases, documentation references an item in the left nav of the InfluxDB
Provide a visual example of the navigation item using the `nav-icon` shortcode.
```
{{< nav-icon "Tasks" >}}
{{< nav-icon "tasks" >}}
```
The following case insensitive values are supported:
- admin, influx
- data-explorer, data explorer
- notebooks, books
- dashboards
- tasks
- monitor, alerts, bell

View File

@ -1873,6 +1873,7 @@ paths:
$ref: "#/components/schemas/Error"
/delete:
post:
operationId: PostDelete
summary: Delete time series data from InfluxDB
requestBody:
description: Predicate delete request
@ -1990,7 +1991,7 @@ paths:
operationId: PostSources
tags:
- Sources
summary: Creates a source
summary: Create a source
parameters:
- $ref: "#/components/parameters/TraceSpan"
requestBody:
@ -2814,7 +2815,7 @@ paths:
operationId: GetDashboardsIDLabels
tags:
- Dashboards
summary: list all labels for a dashboard
summary: List all labels for a dashboard
parameters:
- $ref: "#/components/parameters/TraceSpan"
- in: path
@ -3305,7 +3306,7 @@ paths:
operationId: DeleteAuthorizationsID
tags:
- Authorizations
summary: Delete a authorization
summary: Delete an authorization
parameters:
- $ref: "#/components/parameters/TraceSpan"
- in: path
@ -3681,7 +3682,7 @@ paths:
operationId: DeleteBucketsIDLabelsID
tags:
- Buckets
summary: delete a label from a bucket
summary: Delete a label from a bucket
parameters:
- $ref: "#/components/parameters/TraceSpan"
- in: path
@ -8479,45 +8480,6 @@ components:
TaskStatusType:
type: string
enum: [active, inactive]
Invite:
properties:
id:
description: the idpe id of the invite
readOnly: true
type: string
email:
type: string
role:
type: string
enum:
- member
- owner
expiresAt:
format: date-time
type: string
links:
type: object
readOnly: true
example:
self: "/api/v2/invites/1"
properties:
self:
type: string
format: uri
required: [id, email, role]
Invites:
type: object
properties:
links:
type: object
properties:
self:
type: string
format: uri
invites:
type: array
items:
$ref: "#/components/schemas/Invite"
User:
properties:
id:
@ -8978,8 +8940,32 @@ components:
$ref: "#/components/schemas/Legend"
xColumn:
type: string
generateXAxisTicks:
type: array
items:
type: string
xTotalTicks:
type: integer
xTickStart:
type: number
format: float
xTickStep:
type: number
format: float
yColumn:
type: string
generateYAxisTicks:
type: array
items:
type: string
yTotalTicks:
type: integer
yTickStart:
type: number
format: float
yTickStep:
type: number
format: float
shadeBelow:
type: boolean
hoverDimension:
@ -8990,6 +8976,8 @@ components:
enum: [overlaid, stacked]
geom:
$ref: "#/components/schemas/XYGeom"
legendColorizeRows:
type: boolean
legendOpacity:
type: number
format: float
@ -9039,8 +9027,32 @@ components:
$ref: "#/components/schemas/Legend"
xColumn:
type: string
generateXAxisTicks:
type: array
items:
type: string
xTotalTicks:
type: integer
xTickStart:
type: number
format: float
xTickStep:
type: number
format: float
yColumn:
type: string
generateYAxisTicks:
type: array
items:
type: string
yTotalTicks:
type: integer
yTickStart:
type: number
format: float
yTickStep:
type: number
format: float
upperColumn:
type: string
mainColumn:
@ -9052,6 +9064,8 @@ components:
enum: [auto, x, y, xy]
geom:
$ref: "#/components/schemas/XYGeom"
legendColorizeRows:
type: boolean
legendOpacity:
type: number
format: float
@ -9101,8 +9115,32 @@ components:
$ref: "#/components/schemas/Legend"
xColumn:
type: string
generateXAxisTicks:
type: array
items:
type: string
xTotalTicks:
type: integer
xTickStart:
type: number
format: float
xTickStep:
type: number
format: float
yColumn:
type: string
generateYAxisTicks:
type: array
items:
type: string
yTotalTicks:
type: integer
yTickStart:
type: number
format: float
yTickStep:
type: number
format: float
shadeBelow:
type: boolean
hoverDimension:
@ -9117,6 +9155,8 @@ components:
type: string
decimalPlaces:
$ref: "#/components/schemas/DecimalPlaces"
legendColorizeRows:
type: boolean
legendOpacity:
type: number
format: float
@ -9167,6 +9207,18 @@ components:
type: boolean
xColumn:
type: string
generateXAxisTicks:
type: array
items:
type: string
xTotalTicks:
type: integer
xTickStart:
type: number
format: float
xTickStep:
type: number
format: float
ySeriesColumns:
type: array
items:
@ -9197,6 +9249,8 @@ components:
type: string
ySuffix:
type: string
legendColorizeRows:
type: boolean
legendOpacity:
type: number
format: float
@ -9248,8 +9302,32 @@ components:
type: boolean
xColumn:
type: string
generateXAxisTicks:
type: array
items:
type: string
xTotalTicks:
type: integer
xTickStart:
type: number
format: float
xTickStep:
type: number
format: float
yColumn:
type: string
generateYAxisTicks:
type: array
items:
type: string
yTotalTicks:
type: integer
yTickStart:
type: number
format: float
yTickStep:
type: number
format: float
fillColumns:
type: array
items:
@ -9280,6 +9358,8 @@ components:
type: string
ySuffix:
type: string
legendColorizeRows:
type: boolean
legendOpacity:
type: number
format: float
@ -9330,8 +9410,32 @@ components:
type: boolean
xColumn:
type: string
generateXAxisTicks:
type: array
items:
type: string
xTotalTicks:
type: integer
xTickStart:
type: number
format: float
xTickStep:
type: number
format: float
yColumn:
type: string
generateYAxisTicks:
type: array
items:
type: string
yTotalTicks:
type: integer
yTickStart:
type: number
format: float
yTickStep:
type: number
format: float
xDomain:
type: array
items:
@ -9356,6 +9460,8 @@ components:
type: string
binSize:
type: number
legendColorizeRows:
type: boolean
legendOpacity:
type: number
format: float
@ -9409,11 +9515,6 @@ components:
$ref: "#/components/schemas/Legend"
decimalPlaces:
$ref: "#/components/schemas/DecimalPlaces"
legendOpacity:
type: number
format: float
legendOrientationThreshold:
type: integer
HistogramViewProperties:
type: object
required:
@ -9468,6 +9569,8 @@ components:
enum: [overlaid, stacked]
binCount:
type: integer
legendColorizeRows:
type: boolean
legendOpacity:
type: number
format: float
@ -9635,6 +9738,8 @@ components:
type: array
items:
$ref: "#/components/schemas/DashboardColor"
legendColorizeRows:
type: boolean
legendOpacity:
type: number
format: float
@ -10159,6 +10264,10 @@ components:
bucketID:
type: string
description: The ID of the bucket to write to.
allowInsecure:
type: boolean
description: Skip TLS verification on endpoint.
default: false
ScraperTargetResponse:
type: object
allOf:
@ -11704,12 +11813,7 @@ components:
type: string
enum: ["slack", "pagerduty", "http", "telegram"]
DBRP:
required:
- orgID
- org
- bucketID
- database
- retention_policy
type: object
properties:
id:
type: string
@ -11735,6 +11839,17 @@ components:
description: Specify if this mapping represents the default retention policy for the database specified.
links:
$ref: "#/components/schemas/Links"
oneOf:
- required:
- orgID
- bucketID
- database
- retention_policy
- required:
- org
- bucketID
- database
- retention_policy
DBRPs:
properties:
notificationEndpoints:

View File

@ -1,17 +1,17 @@
openapi: "3.0.0"
info:
title: InfluxDB API Service (v1 compatible endpoints)
title: Influx API Service (V1 compatible endpoints)
version: 0.1.0
servers:
- url: /
description: InfluxDB v1 compatible API endpoints.
description: V1 compatible api endpoints.
paths:
/write:
post: # technically this functions with other methods as well
operationId: PostWriteV1
tags:
- Write
summary: Write time series data into InfluxDB in a v1 compatible format
summary: Write time series data into InfluxDB in a V1 compatible format
requestBody:
description: Line protocol body
required: true
@ -103,7 +103,7 @@ paths:
operationId: PostQueryV1
tags:
- Query
summary: Query InfluxDB in a v1 compatible format
summary: Query InfluxDB in a V1 compatible format
requestBody:
description: InfluxQL query to execute.
content:
@ -305,4 +305,4 @@ components:
description: Max length in bytes for a body of line-protocol.
type: integer
format: int32
required: [code, message, maxLength]
required: [code, message, maxLength]

assets/js/keybindings.js (new file, 41 lines)
View File

@ -0,0 +1,41 @@
// Dynamically update keybindings or hotkeys
// Detect the visitor's operating system from navigator.platform
function getPlatform() {
  if (/Mac/.test(navigator.platform)) {
    return "osx"
  } else if (/Win/.test(navigator.platform)) {
    return "win"
  } else if (/Linux/.test(navigator.platform)) {
    return "linux"
  } else {
    return "other"
  }
}

const platform = getPlatform()

// Add the detected OS as a class on every keybinding element
function addOSClass(osClass) {
  $('.keybinding').addClass(osClass)
}

// Replace each keybinding element's contents with the hotkey(s) for the detected OS
function updateKeyBindings() {
  $('.keybinding').each(function() {
    var osx = $(this).data("osx")
    var linux = $(this).data("linux")
    var win = $(this).data("win")

    if (platform === "other") {
      // Unknown OS: list the hotkeys for every platform
      if (win != linux) {
        var keybind = '<code class="osx">' + osx + '</code> for macOS, <code>' + linux + '</code> for Linux, and <code>' + win + '</code> for Windows';
      } else {
        var keybind = '<code>' + linux + '</code> for Linux and Windows and <code class="osx">' + osx + '</code> for macOS';
      }
    } else {
      var keybind = '<code>' + $(this).data(platform) + '</code>'
    }

    $(this).html(keybind)
  })
}

addOSClass(platform)
updateKeyBindings()

View File

@ -106,6 +106,7 @@
"article/expand",
"article/feedback",
"article/flex",
"article/keybinding",
"article/lists",
"article/note",
"article/pagination-btns",
@ -127,8 +128,18 @@
.required, .req {
color:#FF8564;
font-weight:700;
font-weight:$medium;
font-style: italic;
margin: 0 .15rem 0 .1rem;
&.asterisk {
margin: 0 -.1rem 0 -.5rem;
}
&.key {
font-size: .9rem;
font-weight: $medium;
}
}
a.q-link {

View File

@ -0,0 +1,8 @@
$osxFont: -apple-system, BlinkMacSystemFont, $rubik, 'Helvetica Neue', Arial, sans-serif;
.keybinding {
font-family: $rubik;
code { font-family: $rubik; }
&.osx code {font-family: $osxFont;}
code.osx {font-family: $osxFont;}
}

View File

@ -35,6 +35,15 @@
color: $article-code-accent7;
}
}
&.external {
margin: 0 0 0 -.25rem;
display: inline-block;
padding: .1rem .75rem;
background-color: rgba($article-text, .1);
border-radius: 1rem;
font-size: .9rem;
font-weight: $medium;
}
}
& .info {

View File

@ -1,10 +1,10 @@
@font-face {
font-family: 'icomoon';
src: url('fonts/icomoon.eot?dzkzbu');
src: url('fonts/icomoon.eot?dzkzbu#iefix') format('embedded-opentype'),
url('fonts/icomoon.ttf?dzkzbu') format('truetype'),
url('fonts/icomoon.woff?dzkzbu') format('woff'),
url('fonts/icomoon.svg?dzkzbu#icomoon') format('svg');
src: url('fonts/icomoon.eot?lj8dxa');
src: url('fonts/icomoon.eot?lj8dxa#iefix') format('embedded-opentype'),
url('fonts/icomoon.ttf?lj8dxa') format('truetype'),
url('fonts/icomoon.woff?lj8dxa') format('woff'),
url('fonts/icomoon.svg?lj8dxa#icomoon') format('svg');
font-weight: normal;
font-style: normal;
font-display: block;
@ -25,8 +25,8 @@
-moz-osx-font-smoothing: grayscale;
}
.icon-alert-circle:before {
content: "\e933";
.icon-book-pencil:before {
content: "\e965";
}
.icon-influx-logo:before {
content: "\e900";
@ -112,6 +112,9 @@
.icon-menu:before {
content: "\e91b";
}
.icon-download:before {
content: "\e91c";
}
.icon-minus:before {
content: "\e91d";
}
@ -139,7 +142,7 @@
.icon-data-explorer:before {
content: "\e925";
}
.icon-download:before {
.icon-ui-download:before {
content: "\e926";
}
.icon-duplicate:before {
@ -178,6 +181,9 @@
.icon-remove:before {
content: "\e932";
}
.icon-alert-circle:before {
content: "\e933";
}
.icon-trash:before {
content: "\e935";
}

View File

@ -205,8 +205,31 @@ Environment variable: `$TLS_PRIVATE_KEY`
List of etcd endpoints.
##### CLI example
```sh
## Single parameter
--etcd-endpoints=localhost:2379
## Multiple parameters
--etcd-endpoints=localhost:2379 \
--etcd-endpoints=192.168.1.61:2379 \
--etcd-endpoints=192.168.1.100:2379
```
Environment variable: `$ETCD_ENDPOINTS`
##### Environment variable example
```sh
## Single parameter
ETCD_ENDPOINTS=localhost:2379
## Multiple parameters
ETCD_ENDPOINTS=localhost:2379,192.168.1.61:2379,192.168.1.100:2379
```
#### `--etcd-username=`
Username to log into etcd.
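For example, a minimal sketch (the username value is a placeholder; supply the credentials configured for your etcd cluster):

```sh
## Hypothetical value; replace with your etcd username
--etcd-username=chronograf-user
```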
@ -332,8 +355,30 @@ Environment variable: `$GH_CLIENT_SECRET`
[Optional] Specify a GitHub organization membership required for a user.
##### CLI example
```sh
## Single parameter
--github-organization=org1
## Multiple parameters
--github-organization=org1 \
--github-organization=org2 \
--github-organization=org3
```
Environment variable: `$GH_ORGS`
##### Environment variable example
```sh
## Single parameter
GH_ORGS=org1
## Multiple parameters
GH_ORGS=org1,org2,org3
```
### Google-specific OAuth 2.0 authentication options
See [Configuring Google authentication](/chronograf/v1.8/administration/managing-security/#configure-google-authentication) for more information.
@ -354,8 +399,29 @@ Environment variable: `$GOOGLE_CLIENT_SECRET`
[Optional] Restricts authorization to users from specified Google email domains.
##### CLI example
```sh
## Single parameter
--google-domains=delorean.com
## Multiple parameters
--google-domains=delorean.com \
--google-domains=savetheclocktower.com
```
Environment variable: `$GOOGLE_DOMAINS`
##### Environment variable example
```sh
## Single parameter
GOOGLE_DOMAINS=delorean.com
## Multiple parameters
GOOGLE_DOMAINS=delorean.com,savetheclocktower.com
```
### Auth0-specific OAuth 2.0 authentication options
See [Configuring Auth0 authentication](/chronograf/v1.8/administration/managing-security/#configure-auth0-authentication) for more information.
@ -386,8 +452,30 @@ Environment variable: `$AUTH0_CLIENT_SECRET`
Organizations are set using an "organization" key in the user's `app_metadata`.
Lists are comma-separated and are only available when using environment variables.
##### CLI example
```sh
## Single parameter
--auth0-organizations=org1
## Multiple parameters
--auth0-organizations=org1 \
--auth0-organizations=org2 \
--auth0-organizations=org3
```
Environment variable: `$AUTH0_ORGS`
##### Environment variable example
```sh
## Single parameter
AUTH0_ORGS=org1
## Multiple parameters
AUTH0_ORGS=org1,org2,org3
```
### Heroku-specific OAuth 2.0 authentication options
See [Configuring Heroku authentication](/chronograf/v1.8/administration/managing-security/#configure-heroku-authentication) for more information.
@ -404,10 +492,31 @@ The Heroku Secret for OAuth 2.0 support.
### `--heroku-organization=`
The Heroku organization memberships required for access to Chronograf.
Lists are comma-separated.
##### CLI example
```sh
## Single parameter
--heroku-organization=org1
## Multiple parameters
--heroku-organization=org1 \
--heroku-organization=org2 \
--heroku-organization=org3
```
**Environment variable:** `$HEROKU_ORGS`
##### Environment variable example
```sh
## Single parameter
HEROKU_ORGS=org1
## Multiple parameters
HEROKU_ORGS=org1,org2,org3
```
### Generic OAuth 2.0 authentication options
See [Configure OAuth 2.0](/chronograf/v1.8/administration/managing-security/#configure-oauth-2-0) for more information.
@ -437,16 +546,59 @@ The scopes requested by provider of web client.
Default value: `user:email`
##### CLI example
```sh
## Single parameter
--generic-scopes=api
## Multiple parameters
--generic-scopes=api \
--generic-scopes=openid \
--generic-scopes=read_user
```
Environment variable: `$GENERIC_SCOPES`
##### Environment variable example
```sh
## Single parameter
GENERIC_SCOPES=api
## Multiple parameters
GENERIC_SCOPES=api,openid,read_user
```
#### `--generic-domains=`
The email domain required for user email addresses.
Example: `--generic-domains=example.com`
##### CLI example
```sh
## Single parameter
--generic-domains=delorean.com
## Multiple parameters
--generic-domains=delorean.com \
--generic-domains=savetheclocktower.com
```
Environment variable: `$GENERIC_DOMAINS`
##### Environment variable example
```sh
## Single parameter
GENERIC_DOMAINS=delorean.com
## Multiple parameters
GENERIC_DOMAINS=delorean.com,savetheclocktower.com
```
#### `--generic-auth-url=`
The authorization endpoint URL for the OAuth 2.0 provider.
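For example, a hedged sketch (the URL is a placeholder for your provider's authorization endpoint):

```sh
## Hypothetical endpoint; substitute your OAuth 2.0 provider's authorization URL
--generic-auth-url=https://provider.example.com/oauth/authorize
```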

View File

@ -26,16 +26,36 @@ Have an existing Chronograf configuration store that you want to use with a Chro
2. Extract the `etcd` binary and place it in your system PATH.
3. Start etcd.
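For example, a minimal local sketch (the flags and URLs shown are assumptions; adjust them for your etcd deployment):

```sh
# Assumes the etcd binary is on your PATH and serves clients on the default port
etcd \
  --listen-client-urls=http://localhost:2379 \
  --advertise-client-urls=http://localhost:2379
```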
## Start Chronograf
Run the following command to start Chronograf using etcd as the storage layer:
Run the following command to start Chronograf using `etcd` as the storage layer. The syntax depends on whether you're using command line flags or the `ETCD_ENDPOINTS` environment variable.
##### Define etcd endpoints with command line flags
```sh
# Sytnax
# Syntax
chronograf --etcd-endpoints=<etcd-host>
# Examples
# Add a single etcd endpoint when starting Chronograf
# Example
chronograf --etcd-endpoints=localhost:2379
# Add multiple etcd endpoints when starting Chronograf
chronograf \
--etcd-endpoints=localhost:2379 \
--etcd-endpoints=192.168.1.61:2379 \
--etcd-endpoints=192.168.1.100:2379
```
##### Define etcd endpoints with the ETCD_ENDPOINTS environment variable
```sh
# Provide etcd endpoints in a comma-separated list
export ETCD_ENDPOINTS=localhost:2379,192.168.1.61:2379,192.168.1.100:2379
# Start Chronograf
chronograf
```
For more information, see [Chronograf etcd configuration options](/chronograf/v1.8/administration/config-options#etcd-options).

View File

@ -187,7 +187,7 @@ Separate multiple domains using commas.
For example, to permit access only from `biffspleasurepalace.com` and `savetheclocktower.com`, set the environment variable as follows:
```sh
export GOOGLE_DOMAINS=biffspleasurepalance.com,savetheclocktower.com
export GOOGLE_DOMAINS=biffspleasurepalace.com,savetheclocktower.com
```
#### Configure Auth0 authentication

View File

@ -5,6 +5,7 @@ menu:
chronograf_1_8:
weight: 20
parent: Guides
draft: true
---
TICKscript logs data to a log file for debugging purposes.

View File

@ -84,7 +84,7 @@ chronograf [flags]
|:-------------------------------|:------------------------------------------------------------------------|:--------------------|
| `-i`, `--github-client-id` | GitHub client ID value for OAuth 2.0 support | `$GH_CLIENT_ID` |
| `-s`, `--github-client-secret` | GitHub client secret value for OAuth 2.0 support | `$GH_CLIENT_SECRET` |
| `-o`, `--github-organization` | Specify a GitHub organization membership required for a user. Optional. | `$GH_ORGS` |
| `-o`, `--github-organization` | Restricts authorization to users from specified GitHub organizations. To add more than one organization, add multiple flags. Optional. | `$GH_ORGS` |
### Google-specific OAuth 2.0 authentication flags
@ -92,7 +92,7 @@ chronograf [flags]
|:-------------------------|:--------------------------------------------------------------------------------|:------------------------|
| `--google-client-id` | Google client ID value for OAuth 2.0 support | `$GOOGLE_CLIENT_ID` |
| `--google-client-secret` | Google client secret value for OAuth 2.0 support | `$GOOGLE_CLIENT_SECRET` |
| `--google-domains` | Restricts authorization to users from specified Google email domains. Optional. | `$GOOGLE_DOMAINS` |
| `--google-domains` | Restricts authorization to users from specified Google email domains. To add more than one domain, add multiple flags. Optional. | `$GOOGLE_DOMAINS` |
### Auth0-specific OAuth 2.0 authentication flags
@ -102,7 +102,7 @@ chronograf [flags]
| `--auth0-domain` | Subdomain of your Auth0 client. Available on the configuration page for your Auth0 client. | `$AUTH0_DOMAIN` |
| `--auth0-client-id` | Auth0 client ID value for OAuth 2.0 support | `$AUTH0_CLIENT_ID` |
| `--auth0-client-secret` | Auth0 client secret value for OAuth 2.0 support | `$AUTH0_CLIENT_SECRET` |
| `--auth0-organizations` | Auth0 organization membership required to access Chronograf. Organizations are set using an organization key in the users `app_metadata`. Lists are comma-separated and are only available when using environment variables. | `$AUTH0_ORGS` |
| `--auth0-organizations` | Restricts authorization to users from specified Auth0 organizations. To add more than one organization, add multiple flags. Optional. Organizations are set using an organization key in the user's `app_metadata`. | `$AUTH0_ORGS` |
### Heroku-specific OAuth 2.0 authentication flags
@ -110,7 +110,7 @@ chronograf [flags]
|:------------------------|:-----------------------------------------------------------------------------------------|:--------------------|
| `--heroku-client-id` | Heroku client ID value for OAuth 2.0 support | `$HEROKU_CLIENT_ID` |
| `--heroku-secret` | Heroku secret for OAuth 2.0 support | `$HEROKU_SECRET` |
| `--heroku-organization` | Heroku organization membership required to access Chronograf. Lists are comma-separated. | `$HEROKU_ORGS` |
| `--heroku-organization` | Restricts authorization to users from specified Heroku organizations. To add more than one organization, add multiple flags. Optional. | `$HEROKU_ORGS` |
### Generic OAuth 2.0 authentication flags

View File

@ -1,12 +1,8 @@
---
title: Example post
description: This is just an example post to show the format of new 2.0 posts
menu:
influxdb_2_0:
name: Example post
weight: 1
draft: true
"v2.0/tags": [influxdb, functions]
related:
- /influxdb/v2.0/write-data/
- /influxdb/v2.0/write-data/quick-start
@ -26,6 +22,7 @@ This is **bold** text. This is _italic_ text. This is _**bold and italic**_.
{{< nav-icon "tasks" >}}
{{< nav-icon "alerts" >}}
{{< nav-icon "settings" >}}
{{< nav-icon "notebooks" >}}
{{< icon "add-cell" >}} add-cell
{{< icon "add-label" >}} add-label
@ -66,6 +63,8 @@ This is **bold** text. This is _italic_ text. This is _**bold and italic**_.
{{< icon "nav-orgs" >}} nav-orgs
{{< icon "nav-tasks" >}} nav-tasks
{{< icon "note" >}} note
{{< icon "notebook" >}} notebook
{{< icon "notebooks" >}} notebooks
{{< icon "org" >}} org
{{< icon "orgs" >}} orgs
{{< icon "pause" >}} pause

View File

@ -90,7 +90,7 @@ To use the `influx` CLI to manage and interact with your InfluxDB Cloud instance
Click the following button to download and install `influx` CLI for macOS.
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb_client_2.0.1_darwin_amd64.tar.gz" download>influx CLI (macOS)</a>
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb_client_2.0.2_darwin_amd64.tar.gz" download>influx CLI (macOS)</a>
#### Step 2: Unpackage the influx binary
@ -102,7 +102,7 @@ or run the following command in a macOS command prompt application such
```sh
# Unpackage contents to the current working directory
tar zxvf ~/Downloads/influxdb_client_2.0.1_darwin_amd64.tar.gz
tar zxvf ~/Downloads/influxdb_client_2.0.2_darwin_amd64.tar.gz
```
#### Step 3: (Optional) Place the binary in your $PATH
@ -114,7 +114,7 @@ prefix the executable with `./` to run in place. If the binary is on your $PATH,
```sh
# Copy the influx binary to your $PATH
sudo cp influxdb_client_2.0.1_darwin_amd64/influx /usr/local/bin/
sudo cp influxdb_client_2.0.2_darwin_amd64/influx /usr/local/bin/
```
{{% note %}}
@ -158,8 +158,8 @@ To see all available `influx` commands, type `influx -h` or check out [influx -
Click one of the following buttons to download and install the `influx` CLI appropriate for your chipset.
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb_client_2.0.1_linux_amd64.tar.gz" download >influx CLI (amd64)</a>
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb_client_2.0.1_linux_arm64.tar.gz" download >influx CLI (arm)</a>
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb_client_2.0.2_linux_amd64.tar.gz" download >influx CLI (amd64)</a>
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb_client_2.0.2_linux_arm64.tar.gz" download >influx CLI (arm)</a>
#### Step 2: Unpackage the influx binary
@ -167,7 +167,7 @@ Click one of the following buttons to download and install the `influx` CLI appr
```sh
# Unpackage contents to the current working directory
tar xvfz influxdb_client_2.0.1_linux_amd64.tar.gz
tar xvfz influxdb_client_2.0.2_linux_amd64.tar.gz
```
#### Step 3: (Optional) Place the binary in your $PATH
@ -179,7 +179,7 @@ prefix the executable with `./` to run in place. If the binary is on your $PATH,
```sh
# Copy the influx and influxd binary to your $PATH
sudo cp influxdb_client_2.0.1_linux_amd64/influx /usr/local/bin/
sudo cp influxdb_client_2.0.2_linux_amd64/influx /usr/local/bin/
```
{{% note %}}

View File

@ -15,4 +15,160 @@ related:
- /influxdb/cloud/reference/api/influxdb-1x/dbrp
---
{{< duplicate-oss >}}
In InfluxDB 1.x, data is stored in [databases](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#database)
and [retention policies](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#retention-policy-rp).
In InfluxDB Cloud, data is stored in [buckets](/influxdb/cloud/reference/glossary/#bucket).
Because InfluxQL uses the 1.x data model, before querying in InfluxQL, a bucket must be mapped to a database and retention policy (DBRP).
**Complete the following steps:**
1. [Verify buckets have a mapping](#verify-buckets-have-a-mapping).
2. [Map unmapped buckets](#map-unmapped-buckets).
3. [Query a mapped bucket with InfluxQL](#query-a-mapped-bucket-with-influxql).
## Verify buckets have a mapping
{{% note %}}
When writing to an InfluxDB Cloud bucket using the `/write` 1.x compatibility API,
InfluxDB Cloud automatically creates a DBRP mapping for the bucket that matches the `db/rp` naming convention.
For more information, see [Database and retention policy mapping](/influxdb/cloud/reference/api/influxdb-1x/dbrp/).
If you're not sure how data was written into a bucket, verify the bucket has a mapping.
{{% /note %}}
Use the [`/api/v2/dbrps` API endpoint](/influxdb/cloud/api/#operation/GetDBRPs) to list DBRP mappings.
Include the following:
- **Request method:** `GET`
- **Headers:**
- **Authorization:** `Token` schema with your InfluxDB [authentication token](/influxdb/cloud/security/tokens/)
- **Query parameters:**
{{< req type="key" >}}
- {{< req "\*" >}} **organization_id:** [organization ID](/influxdb/cloud/organizations/view-orgs/#view-your-organization-id)
- **bucket_id:** [bucket ID](/influxdb/cloud/organizations/buckets/view-buckets/) _(to list DBRP mappings for a specific bucket)_
- **database:** database name _(to list DBRP mappings with a specific database name)_
- **retention_policy:** retention policy name _(to list DBRP mappings with a specific retention policy name)_
- **id:** DBRP mapping ID _(to list a specific DBRP mapping)_
##### View all DBRP mappings
```sh
curl --request GET \
"https://cloud2.influxdata.com/api/v2/dbrps?organization_id=00oxo0oXx000x0Xo" \
--header "Authorization: Token YourAuthToken"
```
##### Filter DBRP mappings by database
```sh
curl --request GET \
"https://cloud2.influxdata.com/api/v2/dbrps?organization_id=00oxo0oXx000x0Xo&database=example-db" \
--header "Authorization: Token YourAuthToken"
```
##### Filter DBRP mappings by bucket ID
```sh
curl --request GET \
"https://cloud2.influxdata.com/api/v2/dbrps?organization_id=00oxo0oXx000x0Xo&bucket_id=00oxo0oXx000x0Xo" \
--header "Authorization: Token YourAuthToken"
```
If you **do not find a DBRP mapping for a bucket**, complete the next procedure to map the unmapped bucket.
_For more information on the DBRP mapping API, see the [`/api/v2/dbrps` endpoint documentation](/influxdb/cloud/api/#tag/DBRPs)._
## Map unmapped buckets
Use the [`/api/v2/dbrps` API endpoint](/influxdb/cloud/api/#operation/PostDBRP)
to create a new DBRP mapping for a bucket.
Include the following:
- **Request method:** `POST`
- **Headers:**
- **Authorization:** `Token` schema with your InfluxDB [authentication token](/influxdb/cloud/security/tokens/)
- **Content-type:** `application/json`
- **Request body:** JSON object with the following fields:
{{< req type="key" >}}
- {{< req "\*" >}} **bucket_id:** [bucket ID](/influxdb/cloud/organizations/buckets/view-buckets/)
- {{< req "\*" >}} **database:** database name
- **default:** set the provided retention policy as the default retention policy for the database
- {{< req "\*" >}} **organization** or **organization_id:** organization name or [organization ID](/influxdb/cloud/organizations/view-orgs/#view-your-organization-id)
- {{< req "\*" >}} **retention_policy:** retention policy name
<!-- -->
```sh
curl --request POST https://cloud2.influxdata.com/api/v2/dbrps \
--header "Authorization: Token YourAuthToken" \
--header 'Content-type: application/json' \
--data '{
"bucket_id": "00oxo0oXx000x0Xo",
"database": "example-db",
"default": true,
"organization_id": "00oxo0oXx000x0Xo",
"retention_policy": "example-rp"
}'
```
After you've verified the bucket is mapped, query the bucket using the `query` 1.x compatibility endpoint.
## Query a mapped bucket with InfluxQL
The [InfluxDB 1.x compatibility API](/influxdb/cloud/reference/api/influxdb-1x/) supports
all InfluxDB 1.x client libraries and integrations in InfluxDB Cloud.
To query a mapped bucket with InfluxQL, use the [`/query` 1.x compatibility endpoint](/influxdb/cloud/reference/api/influxdb-1x/query/).
Include the following in your request:
- **Request method:** `GET`
- **Headers:**
- **Authorization:** _See [compatibility API authentication](/influxdb/cloud/reference/api/influxdb-1x/#authentication)_
- **Query parameters:**
- **db**: 1.x database to query
- **rp**: 1.x retention policy to query _(if no retention policy is specified, InfluxDB uses the default retention policy for the specified database)_
- **q**: InfluxQL query
{{% note %}}
**URL-encode** the InfluxQL query to ensure it's formatted correctly when submitted to InfluxDB.
{{% /note %}}
```sh
curl --request GET https://cloud2.influxdata.com/query?db=example-db \
--header "Authorization: Token YourAuthToken" \
--data-urlencode "q=SELECT used_percent FROM example-db.example-rp.example-measurement WHERE host=host1"
```
By default, the `/query` compatibility endpoint returns results in **JSON**.
To return results as **CSV**, include the `Accept: application/csv` header.
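For example, a hedged variation of the request above (the simplified query and the use of `curl --get` to append URL-encoded parameters are assumptions):

```sh
curl --get "https://cloud2.influxdata.com/query" \
  --header "Authorization: Token YourAuthToken" \
  --header "Accept: application/csv" \
  --data-urlencode "db=example-db" \
  --data-urlencode "q=SELECT used_percent FROM example-measurement"
```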
## InfluxQL support
InfluxDB Cloud supports InfluxQL **read-only** queries. See supported and unsupported queries below.
To learn more about InfluxQL, see [Influx Query Language (InfluxQL)](/{{< latest "influxdb" "v1" >}}/query_language/).
{{< flex >}}
{{< flex-content >}}
{{% note %}}
##### Supported InfluxQL queries
- `DELETE`*
- `DROP MEASUREMENT`*
- `SELECT` _(read-only)_
- `SHOW DATABASES`
- `SHOW MEASUREMENTS`
- `SHOW TAG KEYS`
- `SHOW TAG VALUES`
- `SHOW FIELD KEYS`
\* These commands delete data.
{{% /note %}}
{{< /flex-content >}}
{{< flex-content >}}
{{% warn %}}
##### Unsupported InfluxQL queries
- `SELECT INTO`
- `ALTER`
- `CREATE`
- `DROP` _(limited support)_
- `GRANT`
- `KILL`
- `REVOKE`
{{% /warn %}}
{{< /flex-content >}}
{{< /flex >}}

View File

@ -14,4 +14,47 @@ related:
- /influxdb/cloud/api/#tag/DBRPs, InfluxDB v2 API /dbrps endpoint
---
{{< duplicate-oss >}}
The InfluxDB 1.x data model includes [databases](/influxdb/v1.8/concepts/glossary/#database)
and [retention policies](/influxdb/v1.8/concepts/glossary/#retention-policy-rp).
InfluxDB Cloud replaces both with [buckets](/influxdb/v2.0/reference/glossary/#bucket).
To support InfluxDB 1.x query and write patterns in InfluxDB Cloud, databases and retention
policies are mapped to buckets using the **database and retention policy (DBRP) mapping service**.
The DBRP mapping service uses the **database** and **retention policy** specified in
[1.x compatibility API](/influxdb/v2.0/reference/api/influxdb-1x/) requests to route operations to a bucket.
{{% note %}}
For more information, see [Map unmapped buckets](/influxdb/v2.0/query-data/influxql/#map-unmapped-buckets).
{{% /note %}}
### Default retention policies
A database can have multiple retention policies with one set as default.
If no retention policy is specified in a query or write request, InfluxDB uses
the default retention policy for the specified database.
### When writing data
When writing data using the
[`/write` compatibility endpoint](/influxdb/v2.0/reference/api/influxdb-1x/write/),
the DBRP mapping service checks for a bucket mapped to the database and retention policy:
- If a mapped bucket is found, data is written to the bucket.
- If an unmapped bucket with a name matching:
- **database/retention policy** exists, a DBRP mapping is added to the bucket,
and data is written to the bucket.
- **database** exists (without a specified retention policy), the default
database retention policy is used, a DBRP mapping is added to the bucket,
and data is written to the bucket.
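For example, a hedged sketch of such a write (the URL, token, and names are placeholders; InfluxDB Cloud users would substitute their region URL):

```sh
# Write line protocol through the 1.x-compatible /write endpoint,
# specifying the target database and retention policy as query parameters
curl --request POST "http://localhost:8086/write?db=example-db&rp=example-rp" \
  --header "Authorization: Token YourAuthToken" \
  --data-binary "example-measurement,host=host1 used_percent=23.2"
```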
### When querying data
When querying data from InfluxDB Cloud and InfluxDB OSS 2.0 using the
[`/query` compatibility endpoint](/influxdb/v2.0/reference/api/influxdb-1x/query/),
the DBRP mapping service checks for the specified database and retention policy
(if no retention policy is specified, the database's default retention policy is used):
- If a mapped bucket exists, data is queried from the mapped bucket.
- If no mapped bucket exists, InfluxDB returns an error. See how to [Map unmapped buckets](/influxdb/v2.0/query-data/influxql/#map-unmapped-buckets).
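For example, a hedged sketch of such a query (the URL, token, and names are placeholders; omit `rp` to fall back to the database's default retention policy):

```sh
# Query through the 1.x-compatible /query endpoint;
# --get appends the URL-encoded parameters to the request URL
curl --get "http://localhost:8086/query" \
  --header "Authorization: Token YourAuthToken" \
  --data-urlencode "db=example-db" \
  --data-urlencode "rp=example-rp" \
  --data-urlencode "q=SELECT * FROM example-measurement LIMIT 10"
```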
_For more information on the DBRP mapping API, see the [`/api/v2/dbrps` endpoint documentation](/influxdb/v2.0/api/#tag/DBRPs)._

View File

@ -0,0 +1,13 @@
---
title: influx v1 dbrp
description: >
The `influx v1 dbrp` subcommands provide database retention policy (DBRP) mapping management for the InfluxDB 1.x compatibility API.
menu:
influxdb_cloud_ref:
name: influx v1 dbrp
parent: influx v1
weight: 101
influxdb/cloud/tags: [DBRP]
---
{{< duplicate-oss >}}

View File

@ -0,0 +1,13 @@
---
title: influx v1 dbrp create
description: >
The `influx v1 dbrp create` command creates a DBRP mapping in the InfluxDB 1.x compatibility API.
menu:
influxdb_cloud_ref:
name: influx v1 dbrp create
parent: influx v1 dbrp
weight: 101
influxdb/cloud/tags: [DBRP]
---
{{< duplicate-oss >}}

View File

@ -0,0 +1,13 @@
---
title: influx v1 dbrp delete
description: >
The `influx v1 dbrp delete` command deletes a DBRP mapping in the InfluxDB 1.x compatibility API.
menu:
influxdb_cloud_ref:
name: influx v1 dbrp delete
parent: influx v1 dbrp
weight: 101
influxdb/cloud/tags: [DBRP]
---
{{< duplicate-oss >}}

View File

@ -0,0 +1,13 @@
---
title: influx v1 dbrp list
description: >
The `influx v1 dbrp list` command lists and searches DBRP mappings in the InfluxDB 1.x compatibility API.
menu:
influxdb_cloud_ref:
name: influx v1 dbrp list
parent: influx v1 dbrp
weight: 101
influxdb/cloud/tags: [DBRP]
---
{{< duplicate-oss >}}

View File

@ -0,0 +1,13 @@
---
title: influx v1 dbrp update
description: >
The `influx v1 dbrp update` command updates a DBRP mapping in the InfluxDB 1.x compatibility API.
menu:
influxdb_cloud_ref:
name: influx v1 dbrp update
parent: influx v1 dbrp
weight: 101
influxdb/cloud/tags: [DBRP]
---
{{< duplicate-oss >}}

View File

@ -2,7 +2,7 @@
title: influx write
description: >
The `influx write` command writes data to InfluxDB via stdin or from a specified file.
Write data using line protocol or annotated CSV.
Write data using line protocol, annotated CSV, or extended annotated CSV.
menu:
influxdb_cloud_ref:
name: influx write
@ -11,7 +11,10 @@ weight: 101
influxdb/cloud/tags: [write]
related:
- /influxdb/cloud/write-data/
- /influxdb/cloud/write-data/csv/
- /influxdb/cloud/write-data/developer-tools/csv/
- /influxdb/cloud/reference/syntax/line-protocol/
- /influxdb/cloud/reference/syntax/annotated-csv/
- /influxdb/cloud/reference/syntax/annotated-csv/extended/
---
{{< duplicate-oss >}}

View File

@ -9,6 +9,12 @@ menu:
parent: influx write
weight: 101
influxdb/cloud/tags: [write]
related:
- /influxdb/cloud/write-data/
- /influxdb/cloud/write-data/developer-tools/csv/
- /influxdb/cloud/reference/syntax/line-protocol/
- /influxdb/cloud/reference/syntax/annotated-csv/
- /influxdb/cloud/reference/syntax/annotated-csv/extended/
---
{{< duplicate-oss >}}

View File

@ -15,91 +15,4 @@ related:
- /{{< latest "influxdb" "v1" >}}/query_language/functions/#percentile, InfluxQL PERCENTILE()
---
The `quantile()` function returns records from an input table with `_value`s that fall within
a specified quantile or it returns the record with the `_value` that represents the specified quantile.
Which it returns depends on the [method](#method) used.
`quantile()` supports columns with float values.
_**Function type:** Aggregate or Selector_
_**Output data type:** Float | Record_
```js
quantile(
column: "_value",
q: 0.99,
method: "estimate_tdigest",
compression: 1000.0
)
```
When using the `estimate_tdigest` or `exact_mean` methods, it outputs non-null
records with values that fall within the specified quantile.
When using the `exact_selector` method, it outputs the non-null record with the
value that represents the specified quantile.
## Parameters
### column
The column to use to compute the quantile.
Defaults to `"_value"`.
_**Data type:** String_
### q
A value between 0 and 1 indicating the desired quantile.
_**Data type:** Float_
### method
Defines the method of computation.
_**Data type:** String_
The available options are:
##### estimate_tdigest
An aggregate method that uses a [t-digest data structure](https://github.com/tdunning/t-digest)
to compute an accurate quantile estimate on large data sources.
##### exact_mean
An aggregate method that takes the average of the two points closest to the quantile value.
##### exact_selector
A selector method that returns the data point for which at least `q` points are less than.
### compression
Indicates how many centroids to use when compressing the dataset.
A larger number produces a more accurate result at the cost of increased memory requirements.
Defaults to `1000.0`.
_**Data type:** Float_
## Examples
###### Quantile as an aggregate
```js
from(bucket: "example-bucket")
|> range(start: -5m)
|> filter(fn: (r) =>
r._measurement == "cpu" and
r._field == "usage_system")
|> quantile(
q: 0.99,
method: "estimate_tdigest",
compression: 1000.0
)
```
###### Quantile as a selector
```js
from(bucket: "example-bucket")
|> range(start: -5m)
|> filter(fn: (r) =>
r._measurement == "cpu" and
r._field == "usage_system")
|> quantile(
q: 0.99,
method: "exact_selector"
)
```
{{< duplicate-oss >}}

View File

@ -24,7 +24,7 @@ covariance(columns: ["column_x", "column_y"], pearsonr: false, valueDst: "_value
## Parameters
### columns
A list of **two columns** on which to operate. <span class="required">Required</span>
({{< req >}}) A list of **two columns** on which to operate.
_**Data type:** Array of strings_

View File

@ -50,7 +50,7 @@ The resulting group keys for all tables will be: `[_time, _field_d1, _field_d2]`
## Parameters
### tables
The map of streams to be joined. <span class="required">Required</span>
({{< req >}}) The map of streams to be joined.
_**Data type:** Record_
@ -59,7 +59,7 @@ _**Data type:** Record_
{{% /note %}}
### on
The list of columns on which to join. <span class="required">Required</span>
({{< req >}}) The list of columns on which to join.
_**Data type:** Array of strings_

View File

@ -41,7 +41,8 @@ The output is constructed as follows:
determined from the input tables by the value in `valueColumn` at the row identified by the
`rowKey` values and the new column's label.
If no value is found, the value is set to `null`.
- Any column that is not part of the group key and not specified in the `rowKey`,
`columnKey`, and `valueColumn` parameters is dropped.
## Parameters

View File

@ -13,70 +13,4 @@ related:
- /{{< latest "influxdb" "v1" >}}/query_language/data_exploration/#the-where-clause, InfluxQL WHERE
---
The `range()` function filters records based on time bounds.
Each input table's records are filtered to contain only records that exist within the time bounds.
Records with a `null` value for their time are filtered.
Each input table's group key value is modified to fit within the time bounds.
Tables where all records exist outside the time bounds are filtered entirely.
_**Function type:** Transformation_
_**Output data type:** Record_
```js
range(start: -15m, stop: now())
```
## Parameters
### start
The earliest time to include in results.
Results **include** points that match the specified start time.
Use a relative duration, absolute time, or integer (Unix timestamp in seconds).
For example, `-1h`, `2019-08-28T22:00:00Z`, or `1567029600`.
Durations are relative to `now()`.
_**Data type:** Duration | Time | Integer_
### stop
The latest time to include in results.
Results **exclude** points that match the specified stop time.
Use a relative duration, absolute time, or integer (Unix timestamp in seconds).
For example, `-1h`, `2019-08-28T22:00:00Z`, or `1567029600`.
Durations are relative to `now()`.
Defaults to `now()`.
_**Data type:** Duration | Time | Integer_
{{% note %}}
Time values in Flux must be in [RFC3339 format](/influxdb/cloud/reference/flux/language/types#timestamp-format).
{{% /note %}}
## Examples
###### Time range relative to now
```js
from(bucket:"example-bucket")
|> range(start: -12h)
// ...
```
###### Relative time range
```js
from(bucket:"example-bucket")
|> range(start: -12h, stop: -15m)
// ...
```
###### Absolute time range
```js
from(bucket:"example-bucket")
|> range(start: 2018-05-22T23:30:00Z, stop: 2018-05-23T00:00:00Z)
// ...
```
###### Absolute time range with Unix timestamps
```js
from(bucket:"example-bucket")
|> range(start: 1527031800, stop: 1527033600)
// ...
```
{{< duplicate-oss >}}

View File

@ -14,27 +14,4 @@ related:
- /{{< latest "influxdb" "v1" >}}/query_language/functions/#last, InfluxQL LAST()
---
The `last()` function selects the last non-null record from an input table.
_**Function type:** Selector_
_**Output data type:** Record_
```js
last()
```
{{% warn %}}
#### Empty tables
`last()` drops empty tables.
{{% /warn %}}
## Examples
```js
from(bucket:"example-bucket")
|> range(start:-1h)
|> filter(fn: (r) =>
r._measurement == "cpu" and
r._field == "usage_system"
)
|> last()
```
{{< duplicate-oss >}}

View File

@ -48,12 +48,12 @@ querying data from a **different organization** or a **remote InfluxDB instance*
{{% /note %}}
### from
<span class="req">Required</span> Name of the bucket to query.
({{< req >}}) Name of the bucket to query.
_**Data type:** String_
### start
<span class="req">Required</span> Earliest time to include in results.
({{< req >}}) Earliest time to include in results.
Results **include** points that match the specified start time.
Use a relative duration, absolute time, or integer (Unix timestamp in seconds).
For example, `-1h`, `2019-08-28T22:00:00Z`, or `1567029600`.
@ -73,7 +73,7 @@ Defaults to `now()`.
_**Data type:** Duration | Time | Integer_
### m
<span class="req">Required</span> Name of the measurement to query.
({{< req >}}) Name of the measurement to query.
_**Data type:** String_

View File

@ -32,7 +32,7 @@ Defaults to `https://api.opsgenie.com/v2/alerts`.
_**Data type:** String_
### apiKey
<span class="req">Required</span>
({{< req >}})
Opsgenie API authorization key.
_**Data type:** String_

View File

@ -41,13 +41,13 @@ Defaults to `https://api.opsgenie.com/v2/alerts`.
_**Data type:** String_
### apiKey
<span class="req">Required</span>
({{< req >}})
Opsgenie API authorization key.
_**Data type:** String_
### message
<span class="req">Required</span>
({{< req >}})
Alert message text.
130 characters or less.

View File

@ -13,215 +13,4 @@ related:
- /influxdb/cloud/reference/flux/stdlib/built-in/transformations/map/
---
The `rows.map()` function is an alternate implementation of [`map()`](/influxdb/cloud/reference/flux/stdlib/built-in/transformations/map/)
that is faster, but more limited than `map()`.
`rows.map()` cannot modify [group keys](/influxdb/cloud/reference/glossary/#group-key) and,
therefore, does not need to regroup tables.
**Attempts to change columns in the group key are ignored.**
_**Function type:** Transformation_
```js
import "contrib/jsternberg/rows"
rows.map( fn: (r) => ({_value: r._value * 100.0}))
```
## Parameters
### fn
A single argument function to apply to each record.
The return value must be a record.
_**Data type:** Function_
{{% note %}}
Use the `with` operator to preserve columns **not** in the group key and **not**
explicitly mapped in the operation.
{{% /note %}}
## Examples
- [Perform mathematical operations on column values](#perform-mathematical-operations-on-column-values)
- [Preserve all columns in the operation](#preserve-all-columns-in-the-operation)
- [Attempt to remap columns in the group key](#attempt-to-remap-columns-in-the-group-key)
---
### Perform mathematical operations on column values
The following example returns the square of each value in the `_value` column:
```js
import "contrib/jsternberg/rows"
data
|> rows.map(fn: (r) => ({ _value: r._value * r._value }))
```
{{% note %}}
#### Important notes
The `_time` column is dropped because:
- It's not in the group key.
- It's not explicitly mapped in the operation.
- The `with` operator was not used to include existing columns.
{{% /note %}}
{{< flex >}}
{{% flex-content %}}
#### Input tables
**Group key:** `tag,_field`
| tag | _field | _time | _value |
|:--- |:------ |:----- | ------:|
| tag1 | foo | 0001 | 1.9 |
| tag1 | foo | 0002 | 2.4 |
| tag1 | foo | 0003 | 2.1 |
| tag | _field | _time | _value |
|:--- |:------ |:----- | ------:|
| tag2 | bar | 0001 | 3.1 |
| tag2 | bar | 0002 | 3.8 |
| tag2 | bar | 0003 | 1.7 |
{{% /flex-content %}}
{{% flex-content %}}
#### Output tables
**Group key:** `tag,_field`
| tag | _field | _value |
|:--- |:------ | ------:|
| tag1 | foo | 3.61 |
| tag1 | foo | 5.76 |
| tag1 | foo | 4.41 |
| tag | _field | _value |
|:--- |:------ | ------:|
| tag2 | bar | 9.61 |
| tag2 | bar | 14.44 |
| tag2 | bar | 2.89 |
{{% /flex-content %}}
{{< /flex >}}
---
### Preserve all columns in the operation
Use the `with` operator in your mapping operation to preserve all columns,
including those not in the group key, without explicitly remapping them.
```js
import "contrib/jsternberg/rows"
data
|> rows.map(fn: (r) => ({ r with _value: r._value * r._value }))
```
{{% note %}}
#### Important notes
- The mapping operation remaps the `_value` column.
- The `with` operator preserves all other columns not in the group key (`_time`).
{{% /note %}}
{{< flex >}}
{{% flex-content %}}
#### Input tables
**Group key:** `tag,_field`
| tag | _field | _time | _value |
|:--- |:------ |:----- | ------:|
| tag1 | foo | 0001 | 1.9 |
| tag1 | foo | 0002 | 2.4 |
| tag1 | foo | 0003 | 2.1 |
| tag | _field | _time | _value |
|:--- |:------ |:----- | ------:|
| tag2 | bar | 0001 | 3.1 |
| tag2 | bar | 0002 | 3.8 |
| tag2 | bar | 0003 | 1.7 |
{{% /flex-content %}}
{{% flex-content %}}
#### Output tables
**Group key:** `tag,_field`
| tag | _field | _time | _value |
|:--- |:------ |:----- | ------:|
| tag1 | foo | 0001 | 3.61 |
| tag1 | foo | 0002 | 5.76 |
| tag1 | foo | 0003 | 4.41 |
| tag | _field | _time | _value |
|:--- |:------ |:----- | ------:|
| tag2 | bar | 0001 | 9.61 |
| tag2 | bar | 0002 | 14.44 |
| tag2 | bar | 0003 | 2.89 |
{{% /flex-content %}}
{{< /flex >}}
---
### Attempt to remap columns in the group key
```js
import "contrib/jsternberg/rows"
data
|> rows.map(fn: (r) => ({ r with tag: "tag3" }))
```
{{% note %}}
#### Important notes
- Remapping the `tag` column to `"tag3"` is ignored because `tag` is part of the group key.
- The `with` operator preserves columns not in the group key (`_time` and `_value`).
{{% /note %}}
{{< flex >}}
{{% flex-content %}}
#### Input tables
**Group key:** `tag,_field`
| tag | _field | _time | _value |
|:--- |:------ |:----- | ------:|
| tag1 | foo | 0001 | 1.9 |
| tag1 | foo | 0002 | 2.4 |
| tag1 | foo | 0003 | 2.1 |
| tag | _field | _time | _value |
|:--- |:------ |:----- | ------:|
| tag2 | bar | 0001 | 3.1 |
| tag2 | bar | 0002 | 3.8 |
| tag2 | bar | 0003 | 1.7 |
{{% /flex-content %}}
{{% flex-content %}}
#### Output tables
**Group key:** `tag,_field`
| tag | _field | _time | _value |
|:--- |:------ |:----- | ------:|
| tag1 | foo | 0001 | 1.9 |
| tag1 | foo | 0002 | 2.4 |
| tag1 | foo | 0003 | 2.1 |
| tag | _field | _time | _value |
|:--- |:------ |:----- | ------:|
| tag2 | bar | 0001 | 3.1 |
| tag2 | bar | 0002 | 3.8 |
| tag2 | bar | 0003 | 1.7 |
{{% /flex-content %}}
{{< /flex >}}
---
{{% note %}}
#### Package author and maintainer
**Github:** [@jsternberg](https://github.com/jsternberg)
**InfluxDB Slack:** [@Jonathan Sternberg](https://influxdata.com/slack)
{{% /note %}}
{{< duplicate-oss >}}

View File

@ -36,14 +36,14 @@ sensu.endpoint(
## Parameters
### url
<span class="req">Required</span>
({{< req >}})
Base URL of [Sensu API](https://docs.sensu.io/sensu-go/latest/migrate/#architecture)
**without a trailing slash**. Example: `http://localhost:8080`.
_**Data type:** String_
### apiKey
<span class="req">Required</span>
({{< req >}})
Sensu [API Key](https://docs.sensu.io/sensu-go/latest/operations/control-access/).
_**Data type:** String_
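For illustration only, a minimal sketch of how these parameters are typically supplied to `sensu.endpoint()`. The secret key name, bucket, measurement, and map function values below are placeholders, not part of this reference:

```js
import "contrib/sranka/sensu"
import "influxdata/influxdb/secrets"

// Retrieve the Sensu API key from InfluxDB secrets (placeholder key name)
apiKey = secrets.get(key: "SENSU_API_KEY")

// Create a reusable endpoint with the required url and apiKey parameters
endpoint = sensu.endpoint(url: "http://localhost:8080", apiKey: apiKey)

crit_statuses = from(bucket: "example-bucket")
  |> range(start: -1m)
  |> filter(fn: (r) => r._measurement == "statuses" and r.status == "crit")

crit_statuses
  |> endpoint(mapFn: (r) => ({checkName: "critStatus", text: "Status is critical"}))()
```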

View File

@ -38,20 +38,20 @@ sensu.event(
## Parameters
### url
<span class="req">Required</span>
({{< req >}})
Base URL of [Sensu API](https://docs.sensu.io/sensu-go/latest/migrate/#architecture)
**without a trailing slash**. Example: `http://localhost:8080`.
_**Data type:** String_
### apiKey
<span class="req">Required</span>
({{< req >}})
Sensu [API Key](https://docs.sensu.io/sensu-go/latest/operations/control-access/).
_**Data type:** String_
### checkName
<span class="req">Required</span>
({{< req >}})
Check name.
Use alphanumeric characters, underscores (`_`), periods (`.`), and hyphens (`-`).
All other characters are replaced with an underscore.
@ -59,7 +59,7 @@ All other characters are replaced with an underscore.
_**Data type:** String_
### text
<span class="req">Required</span>
({{< req >}})
Event text.
Mapped to `output` in the Sensu Events API request.
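For illustration only, a minimal sketch of a `sensu.event()` call that supplies these required parameters. The URL, secret key name, check name, and event text below are placeholders:

```js
import "contrib/sranka/sensu"
import "influxdata/influxdb/secrets"

// Retrieve the Sensu API key from InfluxDB secrets (placeholder key name)
apiKey = secrets.get(key: "SENSU_API_KEY")

// Send a single event to the Sensu Events API
sensu.event(
  url: "http://localhost:8080",
  apiKey: apiKey,
  checkName: "diskUsage",
  text: "Disk usage is above the threshold."
)
```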

View File

@ -40,7 +40,7 @@ Default is `https://api.telegram.org/bot`.
_**Data type:** String_
### token
<span class="req">Required</span>
({{< req >}})
Telegram bot token.
_**Data type:** String_

View File

@ -43,13 +43,13 @@ Default is `https://api.telegram.org/bot`.
_**Data type:** String_
### token
<span class="req">Required</span>
({{< req >}})
Telegram bot token.
_**Data type:** String_
### channel
<span class="req">Required</span>
({{< req >}})
Telegram channel ID.
_**Data type:** String_
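As a rough sketch of how these parameters fit together in a `telegram.message()` call (the secret key name, channel ID, and message text are placeholders):

```js
import "contrib/sranka/telegram"
import "influxdata/influxdb/secrets"

// Retrieve the Telegram bot token from InfluxDB secrets (placeholder key name)
token = secrets.get(key: "TELEGRAM_TOKEN")

// Send a single message to a Telegram channel
telegram.message(
  token: token,
  channel: "-12345",
  text: "Disk usage is above the threshold."
)
```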

View File

@ -48,24 +48,6 @@ _**Data type:** Record_
## Examples
##### Test if geographic points are inside of a region
```js
import "experimental/geo"
region = {
minLat: 40.51757813,
maxLat: 40.86914063,
minLon: -73.65234375,
maxLon: -72.94921875
}
data
|> geo.toRows()
|> map(fn: (r) => ({
r with st_contains: geo.ST_Contains(region: region, geometry: {lat: r.lat, lon: r.lon})
}))
```
##### Calculate the distance between geographic points and a region
```js
import "experimental/geo"
@ -83,3 +65,17 @@ data
r with st_distance: geo.ST_Distance(region: region, geometry: {lat: r.lat, lon: r.lon})
}))
```
##### Find the point nearest to a geographic location
```js
import "experimental/geo"
fixedLocation = {lat: 40.7, lon: -73.3}
data
|> geo.toRows()
|> map(fn: (r) => ({ r with
_value: geo.ST_Distance(region: {lat: r.lat, lon: r.lon}, geometry: fixedLocation)
}))
|> min()
```

View File

@ -82,7 +82,7 @@ from(bucket: "telegraf")
|> range(start: -1h)
|> filter(fn: (r) =>
r._measurement == "disk" and
r._field = "used_percent"
r._field == "used_percent"
)
|> group(columns: ["_measurement"])
|> monitor.check(

View File

@ -45,7 +45,7 @@ Defaults to `""`.
_**Data type:** String_
### data
<span class="req">Required</span>
({{< req >}})
Data to send to the Pushbullet API.
The function JSON-encodes data before sending it to Pushbullet.
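For illustration only, a minimal sketch of a call that passes a `data` record to the Pushbullet API. The secret key name and record values below are placeholders:

```js
import "contrib/sranka/pushbullet"
import "influxdata/influxdb/secrets"

// Retrieve the Pushbullet token from InfluxDB secrets (placeholder key name)
token = secrets.get(key: "PUSHBULLET_TOKEN")

// Send a link-type push with arbitrary data fields
pushbullet.pushData(
  token: token,
  data: {
    "type": "link",
    "title": "Example notification",
    "body": "This notification came from Flux.",
    "url": "http://example.com"
  }
)
```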

View File

@ -42,13 +42,13 @@ Defaults to `""`.
_**Data type:** String_
### title
<span class="req">Required</span>
({{< req >}})
Title of the notification.
_**Data type:** String_
### text
<span class="req">Required</span>
({{< req >}})
Text to display in the notification.
_**Data type:** String_
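A minimal sketch of a call that supplies the required `title` and `text` parameters (the secret key name and strings below are placeholders):

```js
import "contrib/sranka/pushbullet"
import "influxdata/influxdb/secrets"

// Retrieve the Pushbullet token from InfluxDB secrets (placeholder key name)
token = secrets.get(key: "PUSHBULLET_TOKEN")

// Send a note-type push with a title and body text
pushbullet.pushNote(
  token: token,
  title: "Example notification",
  text: "This notification came from Flux."
)
```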

View File

@ -55,18 +55,20 @@ A token is only required if using the Slack chat.postMessage API.
_**Data type:** String_
### channel
The name of channel to post the message to. <span class="required">Required</span>
({{< req >}})
The name of channel to post the message to.
_**Data type:** String_
### text
The text to display in the Slack message. <span class="required">Required</span>
({{< req >}})
The text to display in the Slack message.
_**Data type:** String_
### color
({{< req >}})
The color to include with the message.
<span class="required">Required</span>
**Valid values include:**

View File

@ -185,11 +185,11 @@ sql.from(
To query an Amazon Athena database, use the following query parameters in your Athena
S3 connection string (DSN):
<span class="req">\* Required</span>
{{< req type="key" >}}
- **region** - AWS region <span class="req">\*</span>
- **accessID** - AWS IAM access ID <span class="req">\*</span>
- **secretAccessKey** - AWS IAM secret key <span class="req">\*</span>
- {{< req "\*" >}} **region** - AWS region
- {{< req "\*" >}} **accessID** - AWS IAM access ID
- {{< req "\*" >}} **secretAccessKey** - AWS IAM secret key
- **db** - database name
- **WGRemoteCreation** - controls workgroup and tag creation
- **missingAsDefault** - replace missing data with default values
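For example, these query parameters might be combined into an Athena DSN roughly like the following sketch. The S3 results bucket, credentials, database, and table name are placeholders:

```js
import "sql"

// Query Amazon Athena using a DSN that includes the required
// region, accessID, and secretAccessKey query parameters
sql.from(
  driverName: "awsathena",
  dataSourceName: "s3://myorgqueryresults/?region=us-west-1&accessID=myAccessID&secretAccessKey=mySecretAccessKey&db=mydb&WGRemoteCreation=false",
  query: "SELECT * FROM example_table"
)
```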

View File

@ -12,46 +12,4 @@ menu:
weight: 301
---
The `strings.substring()` function returns a substring based on `start` and `end` parameters.
These parameters represent indices of UTF code points in the string.
_**Output data type:** String_
```js
import "strings"
strings.substring(v: "influx", start: 0, end: 3)
// returns "infl"
```
## Parameters
### v
The string value to search.
_**Data type:** String_
### start
The starting index of the substring.
_**Data type:** Integer_
### end
The ending index of the substring.
_**Data type:** Integer_
## Examples
###### Store the first four characters of a string
```js
import "strings"
data
|> map(fn: (r) => ({
r with
abbr: strings.substring(v: r.name, start: 0, end: 3)
})
)
```
{{< duplicate-oss >}}

View File

@ -11,7 +11,7 @@ menu:
weight: 201
influxdb/cloud/tags: [csv, syntax, write]
related:
- /influxdb/cloud/write-data/csv/
- /influxdb/cloud/write-data/developer-tools/csv/
- /influxdb/cloud/reference/cli/influx/write/
- /influxdb/cloud/reference/syntax/line-protocol/
- /influxdb/cloud/reference/syntax/annotated-csv/

View File

@ -1,7 +1,8 @@
---
title: Telegraf configurations
description: >
placeholder
InfluxDB Cloud lets you automatically generate Telegraf configurations or upload customized
Telegraf configurations that collect metrics and write them to InfluxDB Cloud.
weight: 12
menu: influxdb_cloud
influxdb/cloud/tags: [telegraf]

View File

@ -0,0 +1,18 @@
---
title: Use Chronograf with InfluxDB Cloud
description: >
[Chronograf](/{{< latest "chronograf" >}}/) is a data visualization and dashboarding
tool designed to visualize data in InfluxDB 1.x. It is part of the [TICKstack](/platform/)
that provides an InfluxQL data explorer, Kapacitor integrations, and more.
Continue to use Chronograf with **InfluxDB Cloud** and **InfluxDB OSS 2.0** and the
[1.x compatibility API](/influxdb/v2.0/reference/api/influxdb-1x/).
menu:
influxdb_cloud:
name: Use Chronograf
parent: Tools & integrations
weight: 103
related:
- /{{< latest "chronograf" >}}/
---
{{< duplicate-oss >}}

View File

@ -14,12 +14,4 @@ menu:
influxdb/cloud/tags: [client libraries]
---
InfluxDB client libraries are language-specific packages that integrate with the InfluxDB v2 API.
The following **InfluxDB v2** client libraries are available:
{{% note %}}
These client libraries are in active development and may not be feature-complete.
This list will continue to grow as more client libraries are released.
{{% /note %}}
{{< children type="list" >}}
{{< duplicate-oss >}}

View File

@ -14,193 +14,4 @@ aliases:
- /influxdb/cloud/reference/api/client-libraries/go/
---
Use the [InfluxDB Go client library](https://github.com/influxdata/influxdb-client-go) to integrate InfluxDB into Go scripts and applications.
This guide presumes some familiarity with Go and InfluxDB.
If just getting started, see [Get started with InfluxDB](/influxdb/cloud/get-started/).
## Before you begin
1. [Install Go 1.13 or later](https://golang.org/doc/install).
2. Download the client package in your $GOPATH and build the package.
```sh
# Download the InfluxDB Go client package
go get github.com/influxdata/influxdb-client-go
# Build the package
go build
```
3. Ensure that InfluxDB is running and you can connect to it.
For information about what URL to use to connect to InfluxDB OSS or InfluxDB Cloud, see [InfluxDB URLs](/influxdb/cloud/reference/urls/).
## Boilerplate for the InfluxDB Go Client Library
Use the Go library to write and query data from InfluxDB.
1. In your Go program, import the necessary packages and specify the entry point of your executable program.
```go
package main
import (
"context"
"fmt"
"time"
influxdb2 "github.com/influxdata/influxdb-client-go"
)
```
2. Define variables for your InfluxDB [bucket](/influxdb/cloud/organizations/buckets/), [organization](/influxdb/cloud/organizations/), and [token](/influxdb/cloud/security/tokens/).
```go
bucket := "example-bucket"
org := "example-org"
token := "example-token"
// Store the URL of your InfluxDB instance
url := "https://cloud2.influxdata.com"
```
3. Create the InfluxDB Go client and pass in the `url` and `token` parameters.
```go
client := influxdb2.NewClient(url, token)
```
4. Create a **write client** with the `WriteApiBlocking` method and pass in the `org` and `bucket` parameters.
```go
writeApi := client.WriteApiBlocking(org, bucket)
```
5. To query data, create an InfluxDB **query client** and pass in your InfluxDB `org`.
```go
queryApi := client.QueryApi(org)
```
## Write data to InfluxDB with Go
Use the Go library to write data to InfluxDB.
1. Create a [point](/influxdb/cloud/reference/glossary/#point) and write it to InfluxDB using the `WritePoint` method of the API writer struct.
2. Close the client to flush all pending writes and finish.
```go
p := influxdb2.NewPoint("stat",
map[string]string{"unit": "temperature"},
map[string]interface{}{"avg": 24.5, "max": 45},
time.Now())
writeApi.WritePoint(context.Background(), p)
client.Close()
```
### Complete example write script
```go
func main() {
bucket := "example-bucket"
org := "example-org"
token := "example-token"
// Store the URL of your InfluxDB instance
url := "https://cloud2.influxdata.com"
// Create a new client using the InfluxDB server URL and authentication token
client := influxdb2.NewClient(url, token)
// Use the blocking write client to write to the desired bucket
writeApi := client.WriteApiBlocking(org, bucket)
// Create point using full params constructor
p := influxdb2.NewPoint("stat",
map[string]string{"unit": "temperature"},
map[string]interface{}{"avg": 24.5, "max": 45},
time.Now())
// Write point immediately
writeApi.WritePoint(context.Background(), p)
// Ensure background processes finish
client.Close()
}
```
## Query data from InfluxDB with Go
Use the Go library to query data from InfluxDB.
1. Create a Flux query and supply your `bucket` parameter.
```js
from(bucket:"<bucket>")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "stat")
```
The query client sends the Flux query to InfluxDB and returns the results as a FluxRecord object with a table structure.
**The query client includes the following methods:**
- `Query`: Sends the Flux query to InfluxDB.
- `Next`: Iterates over the query response.
- `TableChanged`: Identifies when the group key changes.
- `Record`: Returns the last parsed FluxRecord and gives access to value and row properties.
- `Value`: Returns the actual field value.
```go
result, err := queryApi.Query(context.Background(), `from(bucket:"<bucket>")|> range(start: -1h) |> filter(fn: (r) => r._measurement == "stat")`)
if err == nil {
for result.Next() {
if result.TableChanged() {
fmt.Printf("table: %s\n", result.TableMetadata().String())
}
fmt.Printf("value: %v\n", result.Record().Value())
}
if result.Err() != nil {
fmt.Printf("query parsing error: %s\n", result.Err().Error())
}
} else {
panic(err)
}
```
**The FluxRecord object includes the following methods for accessing your data:**
- `Table()`: Returns the index of the table the record belongs to.
- `Start()`: Returns the inclusive lower time bound of all records in the current table.
- `Stop()`: Returns the exclusive upper time bound of all records in the current table.
- `Time()`: Returns the time of the record.
- `Value()`: Returns the actual field value.
- `Field()`: Returns the field name.
- `Measurement()`: Returns the measurement name of the record.
- `Values()`: Returns a map of column values.
- `ValueByKey(<your_tags>)`: Returns a value from the record for given column key.
### Complete example query script
```go
func main() {
// Create client
client := influxdb2.NewClient(url, token)
// Get query client
queryApi := client.QueryApi(org)
// Get QueryTableResult
result, err := queryApi.Query(context.Background(), `from(bucket:"my-bucket")|> range(start: -1h) |> filter(fn: (r) => r._measurement == "stat")`)
if err == nil {
// Iterate over query response
for result.Next() {
// Notice when group key has changed
if result.TableChanged() {
fmt.Printf("table: %s\n", result.TableMetadata().String())
}
// Access data
fmt.Printf("value: %v\n", result.Record().Value())
}
// Check for an error
if result.Err() != nil {
fmt.Printf("query parsing error: %s\n", result.Err().Error())
}
} else {
panic(err)
}
// Ensure background processes finish
client.Close()
}
```
For more information, see the [Go client README on GitHub](https://github.com/influxdata/influxdb-client-go).
{{< duplicate-oss >}}

View File

@ -14,159 +14,4 @@ aliases:
- /influxdb/cloud/reference/api/client-libraries/js/
---
Use the [InfluxDB JavaScript client library](https://github.com/influxdata/influxdb-client-js) to integrate InfluxDB into JavaScript scripts and applications. This client supports both client-side (browser) and server-side (NodeJS) environments.
This guide presumes some familiarity with JavaScript, browser environments, and InfluxDB.
If just getting started, see [Get started with InfluxDB](/influxdb/cloud/get-started/).
## Before you begin
1. Install [NodeJS](https://nodejs.org/en/download/package-manager/).
2. Ensure that InfluxDB is running and you can connect to it.
For information about what URL to use to connect to InfluxDB OSS or InfluxDB Cloud, see [InfluxDB URLs](/influxdb/cloud/reference/urls/).
## Easiest way to get started
1. Clone the [examples directory](https://github.com/influxdata/influxdb-client-js/tree/master/examples) in the [influxdb-client-js](https://github.com/influxdata/influxdb-client-js) repo.
2. Navigate to the `examples` directory:
```sh
cd examples
```
3. Install `yarn` or `npm` dependencies as needed:
```sh
yarn install
npm install
```
4. Update your `./env` and `index.html` with the name of your InfluxDB [bucket](/influxdb/cloud/organizations/buckets/), [organization](/influxdb/cloud/organizations/), [token](/influxdb/cloud/security/tokens/), and `proxy`, which forwards requests to the target InfluxDB instance.
5. Run the following command to run the application at [http://localhost:3001/examples/index.html](http://localhost:3001/examples/index.html):
```sh
npm run browser
```
## Boilerplate for the InfluxDB Javascript client library
Use the Javascript library to write data to and query data from InfluxDB.
1. To write a data point to InfluxDB using the JavaScript library, import the latest InfluxDB Javascript library in your script.
```js
import {InfluxDB, Point} from 'https://unpkg.com/@influxdata/influxdb-client/dist/index.browser.mjs'
```
2. Define constants for your InfluxDB [bucket](/influxdb/cloud/organizations/buckets/), [organization](/influxdb/cloud/organizations/), [token](/influxdb/cloud/security/tokens/), and `proxy`, which forwards requests to the target InfluxDB instance.
```js
const proxy = '/influx'
const token = 'example-token'
const org = 'example-org'
const bucket = 'example-bucket'
```
3. Instantiate the InfluxDB JavaScript client and pass in the `proxy` and `token` parameters.
```js
const influxDB = new InfluxDB({proxy, token})
```
## Write data to InfluxDB with JavaScript
Use the Javascript library to write data to InfluxDB.
1. Use the `getWriteApi` method of the InfluxDB client to create a **write client**. Provide your InfluxDB `org` and `bucket`.
```js
const writeApi = influxDB.getWriteApi(org, bucket)
```
The `useDefaultTags` method instructs the write api to use default tags when writing points. Create a [point](/influxdb/cloud/reference/glossary/#point) and write it to InfluxDB using the `writePoint` method. The `tag` and `floatField` methods add key value pairs for the tags and fields, respectively. Close the client to flush all pending writes and finish.
```js
writeApi.useDefaultTags({location: 'browser'})
const point1 = new Point('temperature')
.tag('example', 'index.html')
.floatField('value', 24)
writeApi.writePoint(point1)
console.log(`${point1}`)
writeApi.close()
```
### Complete example write script
```js
const influxDB = new InfluxDB({proxy, token})
const writeApi = influxDB.getWriteApi(org, bucket)
// setup default tags for all writes through this API
writeApi.useDefaultTags({location: 'browser'})
const point1 = new Point('temperature')
.tag('example', 'index.html')
.floatField('value', 24)
writeApi.writePoint(point1)
console.log(` ${point1}`)
// flush pending writes and close writeApi
writeApi
.close()
.then(() => {
console.log('WRITE FINISHED')
})
```
## Query data from InfluxDB with JavaScript
Use the Javascript library to query data from InfluxDB.
1. Use the `getQueryApi` method of the `InfluxDB` client to create a new **query client**. Provide your InfluxDB `org`.
```js
const queryApi = influxDB.getQueryApi(org)
```
2. Create a Flux query (including your `bucket` parameter).
```js
const fluxQuery =
`from(bucket:"<my-bucket>")
|> range(start: 0)
|> filter(fn: (r) => r._measurement == "temperature")`
```
The **query client** sends the Flux query to InfluxDB and returns line table metadata and rows.
3. Use the `next` method to iterate over the rows.
```js
queryApi.queryRows(fluxQuery, {
next(row: string[], tableMeta: FluxTableMetaData) {
const o = tableMeta.toObject(row)
// console.log(JSON.stringify(o, null, 2))
console.log(
`${o._time} ${o._measurement} in '${o.location}' (${o.example}): ${o._field}=${o._value}`
)
}
})
```
### Complete example query script
```js
// Perform the query and receive line table metadata and rows
// https://v2.docs.influxdata.com/v2.0/reference/syntax/annotated-csv/
queryApi.queryRows(fluxQuery, {
next(row: string[], tableMeta: FluxTableMetaData) {
const o = tableMeta.toObject(row)
// console.log(JSON.stringify(o, null, 2))
console.log(
`${o._time} ${o._measurement} in '${o.location}' (${o.example}): ${o._field}=${o._value}`
)
},
error(error: Error) {
console.error(error)
console.log('\nFinished ERROR')
},
complete() {
console.log('\nFinished SUCCESS')
},
})
```
For more information, see the [JavaScript client README on GitHub](https://github.com/influxdata/influxdb-client-js).
{{< duplicate-oss >}}

View File

@ -14,158 +14,4 @@ aliases:
weight: 201
---
Use the [InfluxDB Python client library](https://github.com/influxdata/influxdb-client-python) to integrate InfluxDB into Python scripts and applications.
This guide presumes some familiarity with Python and InfluxDB.
If just getting started, see [Get started with InfluxDB](/influxdb/cloud/get-started/).
## Before you begin
1. Install the InfluxDB Python library:
```sh
pip install influxdb-client
```
2. Visit the URL of your InfluxDB Cloud UI.
## Write data to InfluxDB with Python
We are going to write some data in [line protocol](/influxdb/cloud/reference/syntax/line-protocol/) using the Python library.
1. In your Python program, import the InfluxDB client library and use it to write data to InfluxDB.
```python
import influxdb_client
from influxdb_client.client.write_api import SYNCHRONOUS
```
2. Define a few variables with the name of your [bucket](/influxdb/cloud/organizations/buckets/), [organization](/influxdb/cloud/organizations/), and [token](/influxdb/cloud/security/tokens/).
```python
bucket = "<my-bucket>"
org = "<my-org>"
token = "<my-token>"
# Store the URL of your InfluxDB instance
url="https://cloud2.influxdata.com"
```
3. Instantiate the client. The `InfluxDBClient` object takes three named parameters: `url`, `org`, and `token`. Pass in the named parameters.
```python
client = influxdb_client.InfluxDBClient(
url=url,
token=token,
org=org
)
```
The `InfluxDBClient` object has a `write_api` method used for configuration.
4. Instantiate a **write client** using the `client` object and the `write_api` method. Use the `write_api` method to configure the writer object.
```python
write_api = client.write_api(write_options=SYNCHRONOUS)
```
5. Create a [point](/influxdb/cloud/reference/glossary/#point) object and write it to InfluxDB using the `write` method of the API writer object. The write method requires three parameters: `bucket`, `org`, and `record`.
```python
p = influxdb_client.Point("my_measurement").tag("location", "Prague").field("temperature", 25.3)
write_api.write(bucket=bucket, org=org, record=p)
```
### Complete example write script
```python
import influxdb_client
from influxdb_client.client.write_api import SYNCHRONOUS
bucket = "<my-bucket>"
org = "<my-org>"
token = "<my-token>"
# Store the URL of your InfluxDB instance
url="https://cloud2.influxdata.com"
client = influxdb_client.InfluxDBClient(
url=url,
token=token,
org=org
)
write_api = client.write_api(write_options=SYNCHRONOUS)
p = influxdb_client.Point("my_measurement").tag("location", "Prague").field("temperature", 25.3)
write_api.write(bucket=bucket, org=org, record=p)
```
## Query data from InfluxDB with Python
1. Instantiate the **query client**.
```python
query_api = client.query_api()
```
2. Create a Flux query.
```python
query = 'from(bucket:"my-bucket")\
|> range(start: -10m)\
|> filter(fn:(r) => r._measurement == "my_measurement")\
|> filter(fn: (r) => r.location == "Prague")\
|> filter(fn:(r) => r._field == "temperature" )'
```
The query client sends the Flux query to InfluxDB and returns a Flux object with a table structure.
3. Pass the `query()` method two named parameters: `org` and `query`.
```python
result = client.query_api().query(org=org, query=query)
```
4. Iterate through the tables and records in the Flux object.
- Use the `get_value()` method to return values.
- Use the `get_field()` method to return fields.
```python
results = []
for table in result:
for record in table.records:
results.append((record.get_field(), record.get_value()))
print(results)
# Output: [(temperature, 25.3)]
```
**The Flux object provides the following methods for accessing your data:**
- `get_measurement()`: Returns the measurement name of the record.
- `get_field()`: Returns the field name.
- `get_value()`: Returns the actual field value.
- `values()`: Returns a map of column values.
- `values.get("<your tag>")`: Returns a value from the record for given column.
- `get_time()`: Returns the time of the record.
- `get_start()`: Returns the inclusive lower time bound of all records in the current table.
- `get_stop()`: Returns the exclusive upper time bound of all records in the current table.
### Complete example query script
```python
query_api = client.query_api()
query = 'from(bucket:"my-bucket")\
|> range(start: -10m)\
|> filter(fn:(r) => r._measurement == "my_measurement")\
|> filter(fn: (r) => r.location == "Prague")\
|> filter(fn:(r) => r._field == "temperature" )'
result = client.query_api().query(org=org, query=query)
results = []
for table in result:
for record in table.records:
results.append((record.get_field(), record.get_value()))
print(results)
# Output: [(temperature, 25.3)]
```
For more information, see the [Python client README on GitHub](https://github.com/influxdata/influxdb-client-python).
{{< duplicate-oss >}}

View File

@ -0,0 +1,15 @@
---
title: Use the Flux VS Code extension
seotitle: Use the Flux Visual Studio Code extension
description: >
The [Flux Visual Studio Code (VS Code) extension](https://marketplace.visualstudio.com/items?itemName=influxdata.flux)
provides Flux syntax highlighting, autocompletion, and a direct InfluxDB Cloud server
integration that lets you run Flux scripts natively and show results in VS Code.
weight: 103
menu:
influxdb_cloud:
name: Use the Flux VS Code extension
parent: Tools & integrations
---
{{< duplicate-oss >}}

View File

@ -7,7 +7,7 @@ description: >
menu:
influxdb_cloud:
parent: Tools & integrations
weight: 102
weight: 104
influxdb/cloud/tags: [google]
---

View File

@ -0,0 +1,16 @@
---
title: Use Kapacitor with InfluxDB Cloud
description: >
[Kapacitor](/kapacitor/) is a data processing framework that makes it easy to
create alerts, run ETL (Extract, Transform and Load) jobs and detect anomalies.
Use Kapacitor with **InfluxDB Cloud**.
menu:
influxdb_cloud:
name: Use Kapacitor
parent: Tools & integrations
weight: 102
related:
- /{{< latest "kapacitor" >}}/
---
{{< duplicate-oss >}}

View File

@ -72,7 +72,7 @@ cpu = from(bucket: "example-bucket")
r.cpu == "cpu-total"
)
// Scale CPU usage
|> map(fn: (r) => ({
|> map(fn: (r) => ({ r with
_value: r._value + 60.0,
_time: r._time
})

View File

@ -5,7 +5,7 @@ description: >
InfluxDB identifies unique data points by their measurement, tag set, and timestamp.
This article discusses methods for preserving data from two points with a common
measurement, tag set, and timestamp but a different field set.
weight: 202
weight: 204
menu:
influxdb_cloud:
name: Handle duplicate points

View File

@ -2,7 +2,7 @@
title: Optimize writes to InfluxDB
description: >
Simple tips to optimize performance and system overhead when writing data to InfluxDB.
weight: 202
weight: 203
menu:
influxdb_cloud:
parent: write-best-practices

View File

@ -0,0 +1,12 @@
---
title: Resolve high series cardinality
description: >
Reduce high series cardinality in InfluxDB. If reads and writes to InfluxDB have started to slow down, you may have high series cardinality. Find the source of high cardinality and adjust your schema to resolve high cardinality issues.
menu:
influxdb_cloud:
name: Resolve high cardinality
weight: 202
parent: write-best-practices
---
{{< duplicate-oss >}}

View File

@ -0,0 +1,12 @@
---
title: InfluxDB schema design
description: >
Improve InfluxDB schema design and data layout. Store unique values in fields and other tips to make your data more performant.
menu:
influxdb_cloud:
name: Schema design
weight: 201
parent: write-best-practices
---
{{< duplicate-oss >}}

View File

@ -0,0 +1,94 @@
---
title: Delete data
list_title: Delete data
description: >
Delete data in the InfluxDB CLI and API.
menu:
influxdb_cloud:
name: Delete data
parent: Write data
weight: 106
influxdb/cloud/tags: [delete]
related:
- /influxdb/v2.0/reference/syntax/delete-predicate/
- /influxdb/v2.0/reference/cli/influx/delete/
---
<!--
## Delete data in the InfluxDB UI
Delete data from buckets you've created. You cannot delete data from system buckets.
### Delete data from buckets
1. Click **Load Data** in the navigation bar.
{{< nav-icon "load data" >}}
2. Select **Buckets**.
3. Next to the bucket with data you want to delete, click **Delete Data by Filter**.
4. In the **Delete Data** window that appears:
- Select a **Target Bucket** to delete data from.
- Enter a **Time Range** to delete data from.
- Click **+ Add Filter** to filter by tag key and value pair.
- Select **I understand that this cannot be undone**.
5. Click **Confirm Delete** to delete the selected data.
### Delete data from the Data Explorer
1. Click the **Data Explorer** icon in the sidebar.
{{< nav-icon "data-explorer" >}}
2. Click **Delete Data** in the top navigation bar.
3. In the **Delete Data** window that appears:
- Select a **Target Bucket** to delete data from.
- Enter a **Time Range** to delete data from.
- Click **+ Add Filter** to filter by tag key-value pairs.
- Select **I understand that this cannot be undone**.
4. Click **Confirm Delete** to delete the selected data.
-->
Use the `influx` CLI or the InfluxDB API [`/delete`](/influxdb/v2.0/api/#/paths/~1delete/post) endpoint to delete data.
## Delete data using the influx CLI
{{% note %}}
If you haven't already, download and set up the [`influx` CLI](/influxdb/cloud/get-started/#optional-download-install-and-use-the-influx-cli). Following these setup instructions creates a configuration profile that stores your credentials, including your organization and token.
{{% /note %}}
1. Use the [`influx delete` command](/influxdb/v2.0/reference/cli/influx/delete/) to delete points from InfluxDB.
2. If you set up a configuration profile with your organization and token, specify the bucket (`-b`) to delete from. Otherwise, specify your organization (`-o`), bucket (`-b`), and authentication token (`-t`) with write permissions.
3. Define the time range to delete data from with the `--start` and `--stop` flags.
4. (Optional) Specify which points to delete using the predicate parameter and [delete predicate syntax](/influxdb/v2.0/reference/syntax/delete-predicate/).
#### Example
```sh
influx delete --bucket my-bucket \
--start '1970-01-01T00:00:00.00Z' \
--stop '2020-01-01T00:00:00.00Z'
```
## Delete data using the API
1. Use the InfluxDB API `/delete` endpoint to delete points from InfluxDB.
2. Include your organization and bucket as query parameters in the request URL.
3. Use the `Authorization` header to provide your InfluxDB authentication token with write permissions.
4. In your request payload, define the time range to delete data from with `start` and `stop`.
5. (Optional) Specify which points to delete using the `predicate` parameter and [delete predicate syntax](/influxdb/v2.0/reference/syntax/delete-predicate/).
#### Example
```sh
curl --request POST \
https://cloud2.influxdata.com/api/v2/delete?org=<org-name>&bucket=<bucket-name> \
--header 'Authorization: Token <INFLUXDB_AUTH_TOKEN>' \
--header 'Content-Type: application/json' \
--data '{
"predicate": "_measurement=\"example-measurement\" and _field=\"example-field\"",
"start": "2020-08-16T08:00:00Z",
"stop": "2020-08-17T08:00:00Z"
}'
```
_For more information, see the [`/delete` API documentation](/influxdb/v2.0/api/#/paths/~1delete/post)._

View File

@ -8,8 +8,6 @@ menu:
influxdb_cloud:
name: Write CSV data
parent: Developer tools
aliases:
- /influxdb/cloud/write-data/csv/
weight: 204
related:
- /influxdb/cloud/reference/syntax/line-protocol/

View File

@ -327,7 +327,7 @@ See [Create a database with CREATE DATABASE](/influxdb/v1.8/query_language/manag
The `ALTER RETENTION POLICY` query takes the following form, where you must declare at least one of the retention policy attributes `DURATION`, `REPLICATION`, `SHARD DURATION`, or `DEFAULT`:
```sql
ALTER RETENTION POLICY <retention_policy_name> ON <database_name> DURATION <duration> REPLICATION <n> SHARD DURATION <duration> DEFAULT
ALTER RETENTION POLICY <retention_policy_name> ON <database_name> [DURATION <duration>] [REPLICATION <n>] [SHARD DURATION <duration>] [DEFAULT]
```
{{% warn %}} Replication factors do not serve a purpose with single node instances.

View File

@ -0,0 +1,88 @@
---
title: Use the Flux VS Code extension
seotitle: Use the Flux Visual Studio Code extension
description: >
The [Flux Visual Studio Code (VS Code) extension](https://marketplace.visualstudio.com/items?itemName=influxdata.flux)
provides Flux syntax highlighting, autocompletion, and a direct InfluxDB OSS server
integration that lets you run Flux scripts natively and show results in VS Code.
weight: 60
menu:
influxdb_1_8:
name: Flux VS Code extension
parent: Tools
v2: /influxdb/v2.0/tools/flux-vscode/
---
The [Flux Visual Studio Code (VS Code) extension](https://marketplace.visualstudio.com/items?itemName=influxdata.flux)
provides Flux syntax highlighting, autocompletion, and a direct InfluxDB server
integration that lets you run Flux scripts natively and show results in VS Code.
{{% note %}}
#### Enable Flux in InfluxDB 1.8
To use the Flux VS Code extension with InfluxDB 1.8, ensure Flux is enabled in
your InfluxDB configuration file.
For more information, see [Enable Flux](/influxdb/v1.8/flux/installation/).
{{% /note %}}
##### On this page
- [Install the Flux VS Code extension](#install-the-flux-vs-code-extension)
- [Connect to InfluxDB 1.8](#connect-to-influxdb-18)
- [Query InfluxDB from VS Code](#query-influxdb-from-vs-code)
- [Explore your schema](#explore-your-schema)
- [Debug Flux queries](#debug-flux-queries)
- [Upgrade the Flux extension](#upgrade-the-flux-extension)
- [Flux extension commands](#flux-extension-commands)
## Install the Flux VS Code extension
The Flux VS Code extension is available in the **Visual Studio Marketplace**.
For information about installing extensions from the Visual Studio marketplace,
see the [Extension Marketplace documentation](https://code.visualstudio.com/docs/editor/extension-gallery).
## Connect to InfluxDB 1.8
To create an InfluxDB connection in VS Code:
1. Open the **VS Code Command Palette** ({{< keybind mac="⇧⌘P" other="Ctrl+Shift+P" >}}).
2. Run `influxdb.addConnection`.
3. Provide the required connection credentials:
- **Type:** type of InfluxDB data source. Select **InfluxDB v1**.
- **Name:** unique identifier for your InfluxDB connection.
- **Hostname and Port:** InfluxDB host and port.
4. Click **Test** to test the connection.
5. Once tested successfully, click **Save**.
## Query InfluxDB from VS Code
1. Write your Flux query in a new VS Code file (a minimal example query follows these steps).
2. Save your Flux script with the `.flux` extension or set the
[VS Code Language Mode](https://code.visualstudio.com/docs/languages/overview#_changing-the-language-for-the-selected-file) to **Flux**.
3. Execute the query with the `influxdb.runQuery` command or {{< keybind mac="⌃⌥E" other="Ctrl+Alt+E" >}}.
4. Query results appear in a new tab. If query results do not appear, see [Debug Flux queries](#debug-flux-queries).
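For illustration only, a minimal Flux query you might save as a `.flux` file for this workflow. The bucket and measurement names below are placeholders:

```js
// example.flux
from(bucket: "example-bucket")
  |> range(start: -15m)
  |> filter(fn: (r) => r._measurement == "cpu")
  |> mean()
```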
## Explore your schema
After you've configured an InfluxDB connection, VS Code provides an overview of buckets,
measurements, and tags in your InfluxDB organization.
Use the **InfluxDB** pane in VS Code to explore your schema.
{{< img-hd src="/img/influxdb/1-8-tools-vsflux-explore-schema.png" alt="Explore your InfluxDB schema in VS Code" />}}
## Debug Flux queries
To view errors returned from Flux script executions, click the **Errors and Warnings**
icons in the bottom left of your VS Code window, and then select the **Output** tab in the debugging pane.
{{< img-hd src="/img/influxdb/2-0-tools-vsflux-errors-warnings.png" alt="VS Code errors and warnings"/>}}
## Upgrade the Flux extension
VS Code auto-updates extensions by default, but you can disable auto-update.
If you disable auto-update, [manually update your VS Code Flux extension](https://code.visualstudio.com/docs/editor/extension-gallery#_update-an-extension-manually).
After updating the extension, reload your VS Code window ({{< keybind mac="⇧⌘P" other="Ctrl+Shift+P" >}},
and then `Reload Window`) to initialize the updated extensions.
## Flux extension commands
| Command | Description | Keyboard shortcut | Menu context |
|:------- |:----------- |:-----------------: | ------------: |
| `influxdb.refresh` | Refresh | | |
| `influxdb.addConnection` | Add Connection | | view/title |
| `influxdb.runQuery` | Run Query | {{< keybind mac="⌃⌥E" other="Ctrl+Alt+E" >}} | editor/context |
| `influxdb.removeConnection` | Remove Connection | | view/item/context |
| `influxdb.switchConnection` | Switch Connection | | |
| `influxdb.editConnection` | Edit Connection | | view/item/context |

View File

@ -1,15 +0,0 @@
---
title: Change your password
seotitle: Change your password in InfluxDB Cloud
description: Change your password in InfluxDB Cloud.
menu:
influxdb_2_0:
name: Change your password
parent: Account management
identifier: change_password_cloud
weight: 105
---
To change or reset your InfluxDB Cloud password, use the **Forgot Password** button on the [login page](https://cloud2.influxdata.com/login).
If you are logged in, log out and then click the **Forgot Password** button.
In the **InfluxCloud: Password Change Requested** email, click the link to choose a new password.

View File

@ -19,19 +19,11 @@ InfluxDB copies all data and metadata to a set of files stored in a specified di
on your local filesystem.
{{% warn %}}
#### InfluxDB 2.0 (release candidate)
The `influx backup` command is not compatible with InfluxDB 2.0 (release candidate).
To back up data,
1. Stop `influxd`.
2. Manually copy the InfluxDB data directories:
```
cp -r ~/.influxdbv2 ~/.influxdbv2_bak
```
For more information, see [Upgrade to InfluxDB OSS 2.0](/influxdb/v2.0/upgrade/).
#### InfluxDB 1.x/2.0 Compatibility
The `influx backup` command is not compatible with versions of InfluxDB prior to 2.0.0.
In addition, the `backup` and `restore` commands do not function across 1.x and 2.x releases.
Use the `influx upgrade` command instead.
For more information, see [Upgrade to InfluxDB OSS 2.0](/influxdb/v2.0/upgrade/v1-to-v2/).
{{% /warn %}}
{{% cloud %}}

View File

@ -8,6 +8,7 @@ weight: 2
influxdb/v2.0/tags: [get-started, install]
aliases:
- /influxdb/v2.0/introduction/get-started/
- /influxdb/v2.0/introduction/getting-started/
---
The InfluxDB 2.0 time series platform is purpose-built to collect, store,
@ -31,7 +32,7 @@ _See [Differences between InfluxDB Cloud and InfluxDB OSS](#differences-between-
Download InfluxDB v2.0 for macOS.
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb-2.0.1_darwin_amd64.tar.gz" download>InfluxDB v2.0 (macOS)</a>
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb-2.0.2_darwin_amd64.tar.gz" download>InfluxDB v2.0 (macOS)</a>
### (Optional) Verify the authenticity of downloaded binary
@ -49,13 +50,13 @@ If `gpg` is not available, see the [GnuPG homepage](https://gnupg.org/download/)
For example:
```
wget https://dl.influxdata.com/influxdb/releases/influxdb-2.0.1_darwin_amd64.tar.gz.asc
wget https://dl.influxdata.com/influxdb/releases/influxdb-2.0.2_darwin_amd64.tar.gz.asc
```
3. Verify the signature with `gpg --verify`:
```
gpg --verify influxdb-2.0.1_darwin_amd64.tar.gz.asc influxdb-2.0.1_darwin_amd64.tar.gz
gpg --verify influxdb-2.0.2_darwin_amd64.tar.gz.asc influxdb-2.0.2_darwin_amd64.tar.gz
```
The output from this command should include the following:
@ -72,7 +73,7 @@ or run the following command in a macOS command prompt application such
```sh
# Unpackage contents to the current working directory
tar zxvf ~/Downloads/influxdb-2.0.1_darwin_amd64.tar.gz
tar zxvf ~/Downloads/influxdb-2.0.2_darwin_amd64.tar.gz
```
#### (Optional) Place the binaries in your $PATH
@ -82,7 +83,7 @@ prefix the executables with `./` to run them in place.
```sh
# (Optional) Copy the influx and influxd binary to your $PATH
sudo cp influxdb-2.0.1_darwin_amd64/{influx,influxd} /usr/local/bin/
sudo cp influxdb-2.0.2_darwin_amd64/{influx,influxd} /usr/local/bin/
```
{{% note %}}
@ -151,7 +152,7 @@ influxd --reporting-disabled
Download InfluxDB v2.0 for Linux.
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb-2.0.1_linux_amd64.tar.gz" download >InfluxDB v2.0 (amd64)</a>
<a class="btn download" href="https://dl.influxdata.com/influxdb/releases/influxdb-2.0.2_linux_amd64.tar.gz" download >InfluxDB v2.0 (amd64)</a>
### (Optional) Verify the authenticity of downloaded binary
@ -169,13 +170,13 @@ If `gpg` is not available, see the [GnuPG homepage](https://gnupg.org/download/)
For example:
```
wget https://dl.influxdata.com/influxdb/releases/influxdb-2.0.1-linux_amd64.tar.gz.asc
wget https://dl.influxdata.com/influxdb/releases/influxdb-2.0.2_linux_amd64.tar.gz.asc
```
3. Verify the signature with `gpg --verify`:
```
gpg --verify influxdb-2.0.1-linux_amd64.tar.gz.asc influxdb-2.0.1-linux_amd64.tar.gz
gpg --verify influxdb-2.0.2_linux_amd64.tar.gz.asc influxdb-2.0.2_linux_amd64.tar.gz
```
The output from this command should include the following:
@ -192,10 +193,10 @@ _**Note:** The following commands are examples. Adjust the file names, paths, an
```sh
# Unpackage contents to the current working directory
tar xvzf path/to/influxdb-2.0.1_linux_amd64.tar.gz
tar xvzf path/to/influxdb-2.0.2_linux_amd64.tar.gz
# Copy the influx and influxd binary to your $PATH
sudo cp influxdb-2.0.1_linux_amd64/{influx,influxd} /usr/local/bin/
sudo cp influxdb-2.0.2_linux_amd64/{influx,influxd} /usr/local/bin/
```
{{% note %}}
@ -247,20 +248,12 @@ influxd --reporting-disabled
{{% tab-content %}}
### Download and run InfluxDB v2.0
{{% note %}}
#### Upgrading from InfluxDB 1.x
We are working on the upgrade process to ensure a smooth upgrade from InfluxDB 1.x to InfluxDB 2.0 on Docker.
If you're upgrading from InfluxDB 1.x on Docker, we recommend waiting to upgrade until we finalize an updated Docker release given the current upgrade process is undefined.
{{% /note %}}
Use `docker run` to download and run the InfluxDB v2.0 Docker image.
Expose port `8086`, which InfluxDB uses for client-server communication over
the [InfluxDB HTTP API](/influxdb/v2.0/reference/api/).
```sh
docker run --name influxdb -p 8086:8086 quay.io/influxdb/influxdb:v2.0.1
docker run --name influxdb -p 8086:8086 quay.io/influxdb/influxdb:v2.0.2
```
_To run InfluxDB in [detached mode](https://docs.docker.com/engine/reference/run/#detached-vs-foreground), include the `-d` flag in the `docker run` command._
@ -275,7 +268,7 @@ To opt-out of sending telemetry data back to InfluxData, include the
`--reporting-disabled` flag when starting the InfluxDB container.
```bash
docker run -p 8086:8086 quay.io/influxdb/influxdb:v2.0.1 --reporting-disabled
docker run -p 8086:8086 quay.io/influxdb/influxdb:2.0.0-rc --reporting-disabled
```
{{% /note %}}

View File

@ -54,7 +54,7 @@ It does the following:
import "strings"
import "regexp"
import "influxdata/influxdb/monitor"
import "influxdata/influxdb/v1"
import "influxdata/influxdb/schema"
option task = {name: "Failed Tasks Check", every: 1h, offset: 4m}
@ -84,7 +84,7 @@ messageFn = (r) =>
("The task: ${r.taskID} - ${r.name} has a status of ${r.status}")
task_data
|> v1["fieldsAsCols"]()
|> schema["fieldsAsCols"]()
|> monitor["check"](
data: check,
messageFn: messageFn,

View File

@ -18,7 +18,7 @@ weight: 202
A list of buckets with their retention policies and IDs appears.
2. Click a bucket to open it in the **Data Explorer**.
3. Click the bucket ID to copy it to the clipboard.
3. Click the **bucket ID** to copy it to the clipboard.
## View buckets using the influx CLI

View File

@ -1,7 +1,7 @@
---
title: Query in Data Explorer
description: >
Query your data in the InfluxDB user interface (UI) Data Explorer.
Query InfluxDB using the InfluxDB user interface (UI) Data Explorer. Discover how to query data in InfluxDB 2.0 using the InfluxDB UI.
aliases:
- /influxdb/v2.0/visualize-data/explore-metrics/
weight: 201

View File

@ -1,6 +1,6 @@
---
title: Query in the Flux REPL
description: Use the Flux REPL to query InfluxDB data.
description: Query InfluxDB using the Flux REPL. Discover how to query data in InfluxDB 2.0 using the Flux REPL.
weight: 203
menu:
influxdb_2_0:

View File

@ -1,6 +1,6 @@
---
title: Query with the InfluxDB API
description: Use the InfluxDB API to query InfluxDB data.
description: Query InfluxDB with the InfluxDB API. Discover how to query data in InfluxDB 2.0 using the InfluxDB API.
weight: 202
menu:
influxdb_2_0:

View File

@ -1,6 +1,6 @@
---
title: Use the `influx query` command
description: Use the influx CLI to query InfluxDB data.
description: Query InfluxDB using the influx CLI. Discover how to query data in InfluxDB 2.0 using `influx query`.
weight: 204
menu:
influxdb_2_0:

View File

@ -27,6 +27,11 @@ to group data into tracks or routes.
- [Group data by area](#group-data-by-area)
- [Group data into tracks or routes](#group-data-by-track-or-route)
{{% note %}}
For example results, use the [bird migration sample data](/influxdb/v2.0/reference/sample-data/#bird-migration-sample-data)
to populate the `sampleGeoData` variable in the queries below.
{{% /note %}}
### Group data by area
Use the [`geo.groupByArea()` function](/influxdb/v2.0/reference/flux/stdlib/experimental/geo/groupbyarea/)
to group geo-temporal data points by geographic area.
@ -69,6 +74,6 @@ sampleGeoData
|> geo.filterRows(region: {lat: 30.04, lon: 31.23, radius: 200.0})
|> geo.asTracks(
groupBy: ["id"],
sortBy: ["_time"]
orderBy: ["_time"]
)
```

View File

@ -15,84 +15,183 @@ related:
- /influxdb/v2.0/reference/api/influxdb-1x/dbrp
---
In InfluxDB 1.x, data is stored in [databases](/influxdb/v1.8/concepts/glossary/#database) and [retention policies](/influxdb/v1.8/concepts/glossary/#retention-policy-rp). In InfluxDB Cloud and InfluxDB OSS 2.0, data is stored in [buckets](/influxdb/v2.0/reference/glossary/#bucket). Because InfluxQL uses the 1.x data model, before querying in InfluxQL, a bucket must be mapped to a database and retention policy.
In InfluxDB 1.x, data is stored in [databases](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#database)
and [retention policies](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#retention-policy-rp).
In InfluxDB OSS 2.0, data is stored in [buckets](/influxdb/v2.0/reference/glossary/#bucket).
Because InfluxQL uses the 1.x data model, before querying in InfluxQL, a bucket must be mapped to a database and retention policy (DBRP).
**Complete the following steps:**
1. [Verify buckets have a mapping](#verify-buckets-have-a-mapping).
{{% note %}}
If data is written into a bucket using the `/write` 1.x compatibility API, the bucket automatically has a mapping. For more information, see [Database and retention policy mapping](/influxdb/v2.0/reference/api/influxdb-1x/dbrp/).
If you're not sure how data was written into a bucket, we recommend verifying the bucket has a mapping.
{{% /note %}}
2. [Map unmapped buckets](#map-unmapped-buckets).
3. [Query a mapped bucket with InfluxQL](#query-a-mapped-bucket-with-influxql).
## Verify buckets have a mapping
Verify the buckets that you want to query are mapped to a database and retention policy using the [`GET /dbrps` API request](/influxdb/v2.0/api/#operation/GetDBRPs) (see CURL example below). **Include the following in your request**:
{{% note %}}
When [upgrading from InfluxDB 1.x to 2.0](/influxdb/v2.0/upgrade/v1-to-v2/), InfluxDB
automatically creates buckets for each database and retention policy combination
and DBRP mappings for those buckets.
For more information, see [Database and retention policy mapping](/influxdb/v2.0/reference/api/influxdb-1x/dbrp/).
If you're not sure how data was written into a bucket, we recommend verifying the bucket has a mapping.
{{% /note %}}
- `orgID`(**required**). If this is the only parameter included in the request, a list of all database retention policy mappings for the specified organization is returned.
- To find a specific bucket (`bucketID`), database (`database`), retention policy (`rp`), or mapping ID (`id`), include the query parameter in your request.
Use the [`influx` CLI](/influxdb/v2.0/reference/cli/influx/) or the [InfluxDB API](/influxdb/v2.0/reference/api/)
to verify the buckets you want to query are mapped to a database and retention policy.
{{< tabs-wrapper >}}
{{% tabs %}}
[influx CLI](#)
[InfluxDB API](#)
{{% /tabs %}}
{{% tab-content %}}
Use the [`influx v1 dbrp list` command](/influxdb/v2.0/reference/cli/influx/v1/dbrp/list/) to list DBRP mappings.
{{% note %}}
The examples below assume that your organization and authentication token are
provided by the active [InfluxDB connection configuration](/influxdb/v2.0/reference/cli/influx/config/) in the `influx` CLI.
If not, include your organization (`--org`) and authentication token (`--token`) with each command.
{{% /note %}}
##### View all DBRP mappings
```sh
influx v1 dbrp list
```
##### Filter DBRP mappings by database
```sh
influx v1 dbrp list --db example-db
```
##### Filter DBRP mappings by bucket ID
```sh
influx v1 dbrp list --bucket-id 00oxo0oXx000x0Xo
```
{{% /tab-content %}}
{{% tab-content %}}
Use the [`/api/v2/dbrps` API endpoint](/influxdb/v2.0/api/#operation/GetDBRPs) to list DBRP mappings.
Include the following:
- **Request method:** `GET`
- **Headers:**
- **Authorization:** `Token` schema with your InfluxDB [authentication token](/influxdb/v2.0/security/tokens/)
- **Query parameters:**
{{< req type="key" >}}
- {{< req "\*" >}} **orgID:** [organization ID](/influxdb/v2.0/organizations/view-orgs/#view-your-organization-id)
- **bucketID:** [bucket ID](/influxdb/v2.0/organizations/buckets/view-buckets/) _(to list DBRP mappings for a specific bucket)_
- **database:** database name _(to list DBRP mappings with a specific database name)_
- **rp:** retention policy name _(to list DBRP mappings with a specific retention policy name)_
- **id:** DBRP mapping ID _(to list a specific DBRP mapping)_
<!-- -->
##### View all DBRP mappings
```sh
curl --request GET \
http://localhost:8086/api/v2/dbrps?orgID=example-org-id \
--header "Authorization: Token YourAuthToken" \
--header "Content-type: application/json"
http://localhost:8086/api/v2/dbrps?orgID=00oxo0oXx000x0Xo \
--header "Authorization: Token YourAuthToken"
```
##### Filter DBRP mappings by database
```sh
curl --request GET \
http://localhost:8086/api/v2/dbrps?orgID=example-org-id&db=example-db \
--header "Authorization: Token YourAuthToken" \
--header "Content-type: application/json"
http://localhost:8086/api/v2/dbrps?orgID=00oxo0oXx000x0Xo&db=example-db \
--header "Authorization: Token YourAuthToken"
```
If you **do not find a mapping ID (`id`) for a bucket**, complete the next procedure to map the unmapped bucket.
##### Filter DBRP mappings by bucket ID
```sh
curl --request GET \
https://cloud2.influxdata.com/api/v2/dbrps?orgID=00oxo0oXx000x0Xo&bucketID=00oxo0oXx000x0Xo \
--header "Authorization: Token YourAuthToken"
```
{{% /tab-content %}}
{{% /tabs-wrapper %}}
If you **do not find a DBRP mapping for a bucket**, complete the next procedure to map the unmapped bucket.
_For more information on the DBRP mapping API, see the [`/api/v2/dbrps` endpoint documentation](/influxdb/v2.0/api/#tag/DBRPs)._
## Map unmapped buckets
Use the [`influx` CLI](/influxdb/v2.0/reference/cli/influx/) or the [InfluxDB API](/influxdb/v2.0/reference/api/)
to manually create DBRP mappings for unmapped buckets.
To map an unmapped bucket to a database and retention policy, use the [`POST /dbrps` API request](/influxdb/v2.0/api/#operation/PostDBRP) (see CURL example below).
{{< tabs-wrapper >}}
{{% tabs %}}
[influx CLI](#)
[InfluxDB API](#)
{{% /tabs %}}
{{% tab-content %}}
You must include an **authorization token** with [basic or token authentication](/influxdb/v2.0/reference/api/influxdb-1x/#authentication) in your request header and the following **required parameters** in your request body:
Use the [`influx v1 dbrp create` command](/influxdb/v2.0/reference/cli/influx/v1/dbrp/create/)
to map an unmapped bucket to a database and retention policy.
Include the following:
- organization (`organization` or `organization_id`)
- target bucket (`bucket_id`)
- database and retention policy to map to bucket (`database` and `retention_policy`)
{{< req type="key" >}}
- {{< req "\*" >}} **database name** to map
- {{< req "\*" >}} **retention policy** name to map
- {{< req "\*" >}} [Bucket ID](/influxdb/v2.0/organizations/buckets/view-buckets/#view-buckets-in-the-influxdb-ui) to map to
- **Default flag** to set the provided retention policy as the default retention policy for the database
```sh
influx v1 dbrp create \
--db example-db \
--rp example-rp \
--bucket-id 00oxo0oXx000x0Xo \
--default
```
{{% /tab-content %}}
{{% tab-content %}}
Use the [`/api/v2/dbrps` API endpoint](/influxdb/v2.0/api/#operation/PostDBRP) to create a new DBRP mapping.
Include the following:
- **Request method:** `POST`
- **Headers:**
- **Authorization:** `Token` schema with your InfluxDB [authentication token](/influxdb/v2.0/security/tokens/)
- **Content-type:** `application/json`
- **Request body:** JSON object with the following fields:
{{< req type="key" >}}
- {{< req "\*" >}} **bucketID:** [bucket ID](/influxdb/v2.0/organizations/buckets/view-buckets/)
- {{< req "\*" >}} **database:** database name
- **default:** set the provided retention policy as the default retention policy for the database
- {{< req "\*" >}} **org** or **orgID:** organization name or [organization ID](/influxdb/v2.0/organizations/view-orgs/#view-your-organization-id)
- {{< req "\*" >}} **retention_policy:** retention policy name
<!-- -->
```sh
curl --request POST http://localhost:8086/api/v2/dbrps \
--header "Authorization: Token YourAuthToken" \
--header 'Content-type: application/json' \
--data '{
"bucketID": "12ab34cd56ef",
"database": "example-db",
"default": true,
"org": "example-org",
"orgID": "example-org-id",
"retention_policy": "example-rp",
"bucketID": "00oxo0oXx000x0Xo",
"database": "example-db",
"default": true,
"orgID": "00oxo0oXx000x0Xo",
"retention_policy": "example-rp"
}'
```
{{% /tab-content %}}
{{< /tabs-wrapper >}}
After you've verified the bucket is mapped, query the bucket using the `query` 1.x compatibility endpoint.
## Query a mapped bucket with InfluxQL
The [InfluxDB 1.x compatibility API](/influxdb/v2.0/reference/api/influxdb-1x/) supports
all InfluxDB 1.x client libraries and integrations in InfluxDB Cloud and InfluxDB OSS 2.0.
all InfluxDB 1.x client libraries and integrations in InfluxDB OSS 2.0.
To query a mapped bucket with InfluxQL, use the `/query` 1.x compatibility endpoint (see CURL example below), and include the following in your request:
To query a mapped bucket with InfluxQL, use the [`/query` 1.x compatibility endpoint](/influxdb/v2.0/reference/api/influxdb-1x/query/).
Include the following in your request:
- InfluxDB [authentication token](/influxdb/v2.0/security/tokens/)
_(See [compatibility API authentication](/influxdb/v2.0/reference/api/influxdb-1x/#authentication))_
- **db query parameter**: 1.x database to query
- **rp query parameter**: 1.x retention policy to query; if no retention policy is specified, InfluxDB uses the default retention policy for the specified database.
- **q query parameter**: InfluxQL query
- **Request method:** `GET`
- **Headers:**
- **Authorization:** _See [compatibility API authentication](/influxdb/v2.0/reference/api/influxdb-1x/#authentication)_
- **Query parameters:**
- **db**: 1.x database to query
- **rp**: 1.x retention policy to query _(if no retention policy is specified, InfluxDB uses the default retention policy for the specified database)_
- **q**: InfluxQL query
{{% note %}}
**URL-encode** the InfluxQL query to ensure it's formatted correctly when submitted to InfluxDB.
@ -109,8 +208,8 @@ To return results as **CSV**, include the `Accept: application/csv` header.
## InfluxQL support
InfluxDB Cloud and InfluxDB OSS 2.0 support InfluxQL **read-only** queries. See supported and unsupported queries below.
To learn more about InfluxQL, see [Influx Query Language (InfluxQL)](/influxdb/v1.8/query_language/).
InfluxDB OSS 2.0 supports InfluxQL **read-only** queries. See supported and unsupported queries below.
To learn more about InfluxQL, see [Influx Query Language (InfluxQL)](/{{< latest "influxdb" "v1" >}}/query_language/).
{{< flex >}}
{{< flex-content >}}
@ -137,10 +236,10 @@ To learn more about InfluxQL, see [Influx Query Language (InfluxQL)](/influxdb/v
- `SELECT INTO`
- `ALTER`
- `CREATE`
- `DROP` (see above)
- `DROP` _(limited support)_
- `GRANT`
- `KILL`
- `REVOKE`
{{% /warn %}}
{{< /flex-content >}}
{{< /flex >}}
{{< /flex >}}

View File

@ -16,15 +16,16 @@ related:
The InfluxDB 1.x data model includes [databases](/influxdb/v1.8/concepts/glossary/#database)
and [retention policies](/influxdb/v1.8/concepts/glossary/#retention-policy-rp).
InfluxDB Cloud and InfluxDB OSS 2.0 replace both with [buckets](/influxdb/v2.0/reference/glossary/#bucket).
To support InfluxDB 1.x query and write patterns in InfluxDB Cloud and InfluxDB OSS 2.0, databases and retention
InfluxDB OSS 2.0 replaces both with [buckets](/influxdb/v2.0/reference/glossary/#bucket).
To support InfluxDB 1.x query and write patterns in InfluxDB OSS 2.0, databases and retention
policies are mapped to buckets using the **database and retention policy (DBRP) mapping service**.
The DBRP mapping service uses the **database** and **retention policy** specified in
[1.x compatibility API](/influxdb/v2.0/reference/api/influxdb-1x/) requests to route operations to a bucket.
{{% note %}}
To query data in InfluxQL that was written using the 2.x `/write` API,
you must **manually create a DBRP mapping** to map a bucket to a database and retention policy.
For more information, see [Map unmapped buckets](/influxdb/v2.0/query-data/influxql/#map-unmapped-buckets).
{{% /note %}}
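
For example, a minimal sketch of creating such a mapping with the `influx v1 dbrp create` command (the bucket ID, database, and retention policy names are placeholders):

```sh
# Map bucket 12ab34cd56ef to the 1.x database "example-db" and retention
# policy "example-rp", and make it the default mapping for that database.
influx v1 dbrp create \
  --bucket-id 12ab34cd56ef \
  --db example-db \
  --rp example-rp \
  --default
```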
@ -36,25 +37,13 @@ the default retention policy for the specified database.
### When writing data
When writing data using the
[`/write` compatibility endpoint](/influxdb/v2.0/reference/api/influxdb-1x/write/),
the DBRP mapping service checks for a bucket mapped to the database and retention policy:
- If a mapped bucket is found, data is written to the bucket.
- If no mapped bucket is found, InfluxDB returns an error.
  See how to [Map unmapped buckets](/influxdb/v2.0/query-data/influxql/#map-unmapped-buckets).
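
For example, the following sketch writes a point through the `/write` compatibility endpoint (host, token, database, and retention policy are placeholders); the DBRP mapping service routes it to the bucket mapped to `example-db`/`example-rp`:

```sh
# 1.x-style write; the db and rp query parameters select the mapped bucket.
curl --request POST "http://localhost:8086/write?db=example-db&rp=example-rp" \
  --header "Authorization: Token YOUR_API_TOKEN" \
  --data-binary "example-measurement,host=host1 field1=1.2"
```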
### When querying data

@ -11,7 +11,7 @@ weight: 301
influxdb/v2.0/tags: [influxql, query]
list_code_example: |
<pre>
<span class="api get">GET</span> https://cloud2.influxdata.com/query
<span class="api get">GET</span> http://localhost:8086/query
</pre>
related:
- /influxdb/v2.0/query-data/influxql
@ -21,7 +21,7 @@ The `/query` 1.x compatibility endpoint queries InfluxDB Cloud and InfluxDB OSS
Use the `GET` request method to query data from the `/query` endpoint.
<pre>
<span class="api get">GET</span> https://cloud2.influxdata.com/query
<span class="api get">GET</span> http://localhost:8086/query
</pre>
The `/query` compatibility endpoint uses the **database** and **retention policy**
@ -45,7 +45,7 @@ _For more information, see [Authentication](/influxdb/v2.0/reference/api/influxd
{{% /note %}}
### db
<span class="req">Required</span> The **database** to query data from.
({{< req >}}) The **database** to query data from.
This is mapped to an InfluxDB [bucket](/influxdb/v2.0/reference/glossary/#bucket).
_See [Database and retention policy mapping](/influxdb/v2.0/reference/api/influxdb-1x/dbrp/)._
@ -55,7 +55,7 @@ This is mapped to an InfluxDB [bucket](/influxdb/v2.0/reference/glossary/#bucket
_See [Database and retention policy mapping](/influxdb/v2.0/reference/api/influxdb-1x/dbrp/)._
### q
<span class="req">Required</span> The **InfluxQL** query to execute.
({{< req >}}) The **InfluxQL** query to execute.
To execute multiple queries, delimit queries with a semicolon (`;`).
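
For example, a sketch of sending two semicolon-delimited queries in one request (host, token, and database are placeholders); `--data-urlencode` handles URL-encoding the InfluxQL for you:

```sh
curl --get http://localhost:8086/query \
  --header "Authorization: Token YOUR_API_TOKEN" \
  --data-urlencode "db=example-db" \
  --data-urlencode "q=SELECT * FROM m1 WHERE time > now() - 1h; SELECT * FROM m2 WHERE time > now() - 1h"
```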
### epoch

@ -12,7 +12,7 @@ weight: 301
influxdb/v2.0/tags: [write]
list_code_example: |
<pre>
<span class="api post">POST</span> https://cloud2.influxdata.com/write
<span class="api post">POST</span> http://localhost:8086/write
</pre>
related:
- /influxdb/v2.0/reference/syntax/line-protocol
@ -24,7 +24,7 @@ Use the `POST` request method to write [line protocol](/influxdb/v2.0/reference/
to the `/write` endpoint.
<pre>
<span class="api post">POST</span> https://cloud2.influxdata.com/write
<span class="api post">POST</span> http://localhost:8086/write
</pre>
## Authentication
@ -40,7 +40,7 @@ encode the line protocol.
## Query string parameters
### db
<span class="req">Required</span> The **database** to write data to.
({{< req >}}) The **database** to write data to.
This is mapped to an InfluxDB [bucket](/influxdb/v2.0/reference/glossary/#bucket).
_See [Database and retention policy mapping](/influxdb/v2.0/reference/api/influxdb-1x/dbrp/)._
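
For example, a sketch of a write that specifies only `db` (host, token, and database are placeholders); with no `rp` parameter, the default retention policy mapping for the database is used:

```sh
curl --request POST "http://localhost:8086/write?db=example-db" \
  --header "Authorization: Token YOUR_API_TOKEN" \
  --data-binary "example-measurement,host=host1 field1=1.2"
```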

@ -22,6 +22,7 @@ influx v1 [command]
| Subcommand | Description |
|:-----------------------------------------------------|:----------------------------------------------|
| [auth](/influxdb/v2.0/reference/cli/influx/v1/auth/) | Authorization management commands for v1 APIs |
| [dbrp](/influxdb/v2.0/reference/cli/influx/v1/dbrp/) | Database retention policy mapping management commands for v1 APIs |
## Flags
| Flag | | Description |

@ -30,6 +30,7 @@ influx v1 auth create [flags]
| | `--no-password` | Don't prompt for a password. (You must use the `v1 auth set-password` command before using the token.) | | |
| `-o` | `--org` | Organization name | string | `$INFLUX_ORG` |
| | `--org-id` | Organization ID | string | `$INFLUX_ORG_ID` |
| | `--password` | Password to set on the authorization | | |
| | `--read-bucket` | Bucket ID to assign read permissions to | | |
| | `--skip-verify` | Skip TLS certificate verification | | |
| `-t` | `--token` | Authentication token | string | `$INFLUX_TOKEN` |
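
For example, a minimal sketch of creating a v1 authorization with read access to a single bucket (the username, password, and bucket ID are placeholders; `--username` is part of the command's full flag set, not shown in this hunk):

```sh
influx v1 auth create \
  --username example-user \
  --password EXAMPLE_PASSWORD \
  --read-bucket 00oxo0oXx000x0Xo
```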

@ -25,6 +25,7 @@ influx v1 auth set-password [flags]
| `-h` | `--help` | Help for the `set-password` command | | |
| | `--host` | HTTP address of InfluxDB | string | `$INFLUX_HOST` |
| `-i` | `--id` | Authorization ID | string | |
| | `--password` | Password to set on the authorization | string | |
| | `--skip-verify` | Skip TLS certificate verification | | |
| `-t` | `--token` | Authentication token | string | `$INFLUX_TOKEN` |
| | `--username` | Authorization username | string | `$INFLUX_USERNAME` |
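
For example, a minimal sketch that sets a password on an existing v1 authorization (the authorization ID and password are placeholders):

```sh
influx v1 auth set-password \
  --id 0Xx0oox00XXoxxoo \
  --password EXAMPLE_PASSWORD
```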

@ -0,0 +1,33 @@
---
title: influx v1 dbrp
description: >
The `influx v1 dbrp` subcommands provide database retention policy (DBRP) mapping management for the InfluxDB 1.x compatibility API.
menu:
influxdb_2_0_ref:
name: influx v1 dbrp
parent: influx v1
weight: 101
influxdb/v2.0/tags: [DBRP]
---
The `influx v1 dbrp` subcommands provide database retention policy (DBRP) mapping management for the [InfluxDB 1.x compatibility API](/influxdb/v2.0/reference/api/influxdb-1x/).
## Usage
```
influx v1 dbrp [flags]
influx v1 dbrp [command]
```
## Commands
| Command | Description |
|:------------------------------------------------------------- |:--------------------- |
| [create](/influxdb/v2.0/reference/cli/influx/v1/dbrp/create/) | Create a DBRP mapping |
| [delete](/influxdb/v2.0/reference/cli/influx/v1/dbrp/delete/) | Delete a DBRP mapping |
| [list](/influxdb/v2.0/reference/cli/influx/v1/dbrp/list/) | List DBRP mappings |
| [update](/influxdb/v2.0/reference/cli/influx/v1/dbrp/update/) | Update a DBRP mapping |
## Flags
| Flag | | Description |
|:-----|:---------|:--------------------------------|
| `-h` | `--help` | Help for the `v1 dbrp` command |

@ -0,0 +1,50 @@
---
title: influx v1 dbrp create
description: >
The `influx v1 dbrp create` command creates a DBRP mapping in the InfluxDB 1.x compatibility API.
menu:
influxdb_2_0_ref:
name: influx v1 dbrp create
parent: influx v1 dbrp
weight: 101
influxdb/v2.0/tags: [DBRP]
---
The `influx v1 dbrp create` command creates a DBRP mapping with the [InfluxDB 1.x compatibility API](/influxdb/v2.0/reference/api/influxdb-1x/).
## Usage
```
influx v1 dbrp create [flags]
```
## Flags
| Flag | | Description | Input type | {{< cli/mapped >}} |
|:------|:------------------|:-------------------------------------------------------------------------|:----------:|:------------------------|
| `-c` | `--active-config` | Config name to use for command | string | `$INFLUX_ACTIVE_CONFIG` |
| | `--bucket-id` | Bucket ID to map to | | |
| | `--configs-path` | Path to the influx CLI configurations (default: `~/.influxdbv2/configs`) | string | `$INFLUX_CONFIGS_PATH` |
| | `--db` | InfluxDB v1 database to map from | | |
| | `--default` | Set DBRP mapping's retention policy as default | | |
| `-h` | `--help` | Help for the `create` command | | |
| | `--hide-headers` | Hide table headers (default: `false`) | | `$INFLUX_HIDE_HEADERS` |
| | `--host` | HTTP address of InfluxDB | string | `$INFLUX_HOST` |
| | `--json` | Output data as JSON (default: `false`) | | `$INFLUX_OUTPUT_JSON` |
| `-o` | `--org` | Organization name | string | `$INFLUX_ORG` |
| | `--org-id` | Organization ID | string | `$INFLUX_ORG_ID` |
| | `--rp` | InfluxDB v1 retention policy to map from | | |
| | `--skip-verify` | Skip TLS certificate verification | | |
| `-t` | `--token` | Authentication token | string | `$INFLUX_TOKEN` |
## Examples
##### Create a DBRP mapping
```sh
influx v1 dbrp create \
--bucket-id 12ab34cd56ef \
--db example-db \
--rp example-rp \
--org example-org \
--token $INFLUX_TOKEN \
--default
```

@ -0,0 +1,33 @@
---
title: influx v1 dbrp delete
description: >
The `influx v1 dbrp delete` command deletes a DBRP mapping in the InfluxDB 1.x compatibility API.
menu:
influxdb_2_0_ref:
name: influx v1 dbrp delete
parent: influx v1 dbrp
weight: 101
influxdb/v2.0/tags: [DBRP]
---
The `influx v1 dbrp delete` command deletes a DBRP mapping in the [InfluxDB 1.x compatibility API](/influxdb/v2.0/reference/api/influxdb-1x/).
## Usage
```
influx v1 dbrp delete [flags]
```
## Flags
| Flag | | Description | Input type | {{< cli/mapped >}} |
|------|-------------------|--------------------------------------------------------------------------|------------|-------------------------|
| `-c` | `--active-config` | Config name to use for command | string | `$INFLUX_ACTIVE_CONFIG` |
| | `--configs-path` | Path to the influx CLI configurations (default: `~/.influxdbv2/configs`) | string | `$INFLUX_CONFIGS_PATH` |
| `-h` | `--help` | Help for the `delete` command | | |
| | `--hide-headers` | Hide the table headers (default: `false`) | | `$INFLUX_HIDE_HEADERS` |
| | `--host` | HTTP address of InfluxDB | string | `$INFLUX_HOST` |
| | `--id` | ({{< req >}}) DBRP ID | string | |
| | `--json` | Output data as JSON (default: `false`) | | `$INFLUX_OUTPUT_JSON` |
| `-o` | `--org` | Organization name | string | `$INFLUX_ORG` |
| | `--org-id` | Organization ID | string | `$INFLUX_ORG_ID` |
| | `--skip-verify` | Skip TLS certificate verification | | |
| `-t` | `--token` | Authentication token | string | `$INFLUX_TOKEN` |
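## Examples
##### Delete a DBRP mapping by ID
The following is a minimal sketch; the DBRP ID is a placeholder.
```sh
influx v1 dbrp delete --id 12ab34cd56ef78
```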

@ -0,0 +1,62 @@
---
title: influx v1 dbrp list
description: >
The `influx v1 dbrp list` command lists and searches DBRP mappings in the InfluxDB 1.x compatibility API.
menu:
influxdb_2_0_ref:
name: influx v1 dbrp list
parent: influx v1 dbrp
weight: 101
influxdb/v2.0/tags: [dbrp]
---
The `influx v1 dbrp list` command lists and searches DBRP mappings in the [InfluxDB 1.x compatibility API](/influxdb/v2.0/reference/api/influxdb-1x/).
## Usage
```
influx v1 dbrp list [flags]
```
## Flags
| Flag | | Description | Input type | {{< cli/mapped >}} |
|------|-------------------|--------------------------------------------------------------------------|------------|-------------------------|
| `-c` | `--active-config` | Config name to use for command | string | `$INFLUX_ACTIVE_CONFIG` |
| | `--bucket-id` | Bucket ID | | |
| | `--configs-path` | Path to the influx CLI configurations (default: `~/.influxdbv2/configs`) | string | `$INFLUX_CONFIGS_PATH` |
| | `--db` | Filter DBRP mappings by database | | |
| | `--default` | Limit results to default mapping | | |
| `-h` | `--help` | Help for the `list` command | | |
| | `--hide-headers` | Hide the table headers (default: `false`) | | `$INFLUX_HIDE_HEADERS` |
| | `--host` | HTTP address of InfluxDB | string | `$INFLUX_HOST` |
| | `--id` | Limit results to a specified mapping | string | |
| | `--json` | Output data as JSON (default: `false`) | | `$INFLUX_OUTPUT_JSON` |
| `-o` | `--org` | Organization name | string | `$INFLUX_ORG` |
| | `--org-id` | Organization ID | string | `$INFLUX_ORG_ID` |
| | `--rp` | Filter DBRP mappings by InfluxDB v1 retention policy | string | |
| | `--skip-verify` | Skip TLS certificate verification | | |
| `-t` | `--token` | Authentication token | string | `$INFLUX_TOKEN` |
## Examples
##### List all DBRP mappings in your organization
```sh
influx v1 dbrp list
```
##### List DBRP mappings for specific buckets
```sh
influx v1 dbrp list \
--bucket-id 12ab34cd56ef78 \
--bucket-id 09zy87xw65vu43
```
##### List DBRP mappings with a specific database
```sh
influx v1 dbrp list --db example-db
```
##### List DBRP mappings with a specific retention policy
```sh
influx v1 dbrp list --rp example-rp
```

@ -0,0 +1,51 @@
---
title: influx v1 dbrp update
description: >
The `influx v1 dbrp update` command updates a DBRP mapping in the InfluxDB 1.x compatibility API.
menu:
influxdb_2_0_ref:
name: influx v1 dbrp update
parent: influx v1 dbrp
weight: 101
influxdb/v2.0/tags: [DBRP]
---
The `influx v1 dbrp update` command updates a DBRP mapping in the [InfluxDB 1.x compatibility API](/influxdb/v2.0/reference/api/influxdb-1x/).
## Usage
```
influx v1 dbrp update [flags]
```
## Flags
| Flag | | Description | Input type | {{< cli/mapped >}} |
|:-----|:------------------|:-------------------------------------------------------------------------|:----------:|:------------------------|
| `-c` | `--active-config` | Config name to use for command | string | `$INFLUX_ACTIVE_CONFIG` |
| | `--configs-path` | Path to the influx CLI configurations (default: `~/.influxdbv2/configs`) | string | `$INFLUX_CONFIGS_PATH` |
| | `--default` | Set DBRP mapping's retention policy as default | | |
| `-h` | `--help` | Help for the `update` command | | |
| | `--hide-headers` | Hide the table headers (default: `false`) | | `$INFLUX_HIDE_HEADERS` |
| | `--host` | HTTP address of InfluxDB | string | `$INFLUX_HOST` |
| | `--id` | ({{< req >}}) DBRP ID | string | |
| | `--json` | Output data as JSON (default: `false`) | | `$INFLUX_OUTPUT_JSON` |
| `-o` | `--org` | Organization name | string | `$INFLUX_ORG` |
| | `--org-id` | Organization ID | string | `$INFLUX_ORG_ID` |
| `-r` | `--rp` | InfluxDB v1 retention policy to map from | | |
| | `--skip-verify` | Skip TLS certificate verification | | |
| `-t` | `--token` | Authentication token | string | `$INFLUX_TOKEN` |
## Examples
##### Set a DBRP mapping as default
```sh
influx v1 dbrp update \
--id 12ab34cd56ef78 \
--default
```
##### Update the retention policy of a DBRP mapping
```sh
influx v1 dbrp update \
--id 12ab34cd56ef78 \
--rp new-rp-name
```

@ -2,7 +2,7 @@
title: influx write
description: >
The `influx write` command writes data to InfluxDB via stdin or from a specified file.
Write data using line protocol, annotated CSV, or extended annotated CSV.
menu:
influxdb_2_0_ref:
name: influx write
@ -11,12 +11,17 @@ weight: 101
influxdb/v2.0/tags: [write]
related:
- /influxdb/v2.0/write-data/
- /influxdb/v2.0/write-data/developer-tools/csv/
- /influxdb/v2.0/reference/syntax/line-protocol/
- /influxdb/v2.0/reference/syntax/annotated-csv/
- /influxdb/v2.0/reference/syntax/annotated-csv/extended/
---
The `influx write` command writes data to InfluxDB via stdin or from a specified file.
Write data using [line protocol](/influxdb/v2.0/reference/syntax/line-protocol),
[annotated CSV](/influxdb/v2.0/reference/syntax/annotated-csv), or
[extended annotated CSV](/influxdb/v2.0/reference/syntax/annotated-csv/extended/).
If you write CSV data, CSV annotations determine how the data translates into line protocol.
## Usage
```
@ -54,3 +59,138 @@ influx write [command]
| | `--skip-verify` | Skip TLS certificate verification | | |
| `-t` | `--token` | Authentication token | string | `$INFLUX_TOKEN` |
| `-u` | `--url` | URL to import data from | string | |
## Examples
- [Write line protocol](#line-protocol)
- [via stdin](#write-line-protocol-via-stdin)
- [from a file](#write-line-protocol-from-a-file)
- [from multiple files](#write-line-protocol-from-multiple-files)
- [from a URL](#write-line-protocol-from-a-url)
- [from multiple URLs](#write-line-protocol-from-multiple-urls)
- [from multiple sources](#write-line-protocol-from-multiple-sources)
- [Write CSV data](#csv)
- [via stdin](#write-annotated-csv-data-via-stdin)
- [from a file](#write-annotated-csv-data-from-a-file)
- [from multiple files](#write-annotated-csv-data-from-multiple-files)
- [from a URL](#write-annotated-csv-data-from-a-url)
- [from multiple URLs](#write-annotated-csv-data-from-multiple-urls)
- [from multiple sources](#write-annotated-csv-data-from-multiple-sources)
- [and prepend annotation headers](#prepend-csv-data-with-annotation-headers)
### Line protocol
##### Write line protocol via stdin
```sh
influx write --bucket example-bucket "
m,host=host1 field1=1.2
m,host=host2 field1=2.4
m,host=host1 field2=5i
m,host=host2 field2=3i
"
```
##### Write line protocol from a file
```sh
influx write \
--bucket example-bucket \
--file path/to/line-protocol.txt
```
##### Write line protocol from multiple files
```sh
influx write \
--bucket example-bucket \
--file path/to/line-protocol-1.txt \
--file path/to/line-protocol-2.txt
```
##### Write line protocol from a URL
```sh
influx write \
--bucket example-bucket \
--url https://example.com/line-protocol.txt
```
##### Write line protocol from multiple URLs
```sh
influx write \
--bucket example-bucket \
--url https://example.com/line-protocol-1.txt \
--url https://example.com/line-protocol-2.txt
```
##### Write line protocol from multiple sources
```sh
influx write \
--bucket example-bucket \
--file path/to/line-protocol-1.txt \
--url https://example.com/line-protocol-2.txt
```
---
### CSV
##### Write annotated CSV data via stdin
```sh
influx write \
--bucket example-bucket \
--format csv \
"#datatype measurement,tag,tag,field,field,ignored,time
m,cpu,host,time_steal,usage_user,nothing,time
cpu,cpu1,host1,0,2.7,a,1482669077000000000
cpu,cpu1,host2,0,2.2,b,1482669087000000000
"
```
##### Write annotated CSV data from a file
```sh
influx write \
--bucket example-bucket \
--file path/to/data.csv
```
##### Write annotated CSV data from multiple files
```sh
influx write \
--bucket example-bucket \
--file path/to/data-1.csv \
--file path/to/data-2.csv
```
##### Write annotated CSV data from a URL
```sh
influx write \
--bucket example-bucket \
--url https://example.com/data.csv
```
##### Write annotated CSV data from multiple URLs
```sh
influx write \
--bucket example-bucket \
--url https://example.com/data-1.csv \
--url https://example.com/data-2.csv
```
##### Write annotated CSV data from multiple sources
```sh
influx write \
--bucket example-bucket \
--file path/to/data-1.csv \
--url https://example.com/data-2.csv
```
##### Prepend CSV data with annotation headers
```sh
influx write \
--bucket example-bucket \
--header "#constant measurement,birds" \
--header "#datatype dataTime:2006-01-02,long,tag" \
--file path/to/data.csv
```

@ -9,13 +9,20 @@ menu:
parent: influx write
weight: 101
influxdb/v2.0/tags: [write]
related:
- /influxdb/v2.0/write-data/
- /influxdb/v2.0/write-data/developer-tools/csv/
- /influxdb/v2.0/reference/syntax/line-protocol/
- /influxdb/v2.0/reference/syntax/annotated-csv/
- /influxdb/v2.0/reference/syntax/annotated-csv/extended/
---
The `influx write dryrun` command prints write output to stdout instead of writing
to InfluxDB. Use this command to test writing data.
Supports [line protocol](/influxdb/v2.0/reference/syntax/line-protocol),
[annotated CSV](/influxdb/v2.0/reference/syntax/annotated-csv), and
[extended annotated CSV](/influxdb/v2.0/reference/syntax/annotated-csv/extended).
Output is always **line protocol**.
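
For example, a minimal sketch that previews the line protocol generated from an annotated CSV file without writing it (the bucket name and file path are placeholders):

```sh
influx write dryrun \
  --bucket example-bucket \
  --file path/to/data.csv
```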
## Usage

@ -17,19 +17,20 @@ influxd inspect [subcommand]
```
## Subcommands
| Subcommand | Description |
|:---------- |:----------- |
| [export-index](/influxdb/v2.0/reference/cli/influxd/inspect/export-index/) | Export TSI index data |
<!-- | [build-tsi](/influxdb/v2.0/reference/cli/influxd/inspect/build-tsi/) | Rebuild the TSI index and series file | -->
<!-- | [compact-series-file](/influxdb/v2.0/reference/cli/influxd/inspect/compact-series-file) | Compact the series file | -->
<!-- | [dump-tsi](/influxdb/v2.0/reference/cli/influxd/inspect/dump-tsi/) | Output low level TSI information | -->
<!-- | [dumpwal](/influxdb/v2.0/reference/cli/influxd/inspect/dumpwal/) | Output TSM data from WAL files | -->
<!-- | [export-blocks](/influxdb/v2.0/reference/cli/influxd/inspect/export-blocks/) | Export block data | -->
<!-- | [report-tsi](/influxdb/v2.0/reference/cli/influxd/inspect/report-tsi/) | Report the cardinality of TSI files | -->
<!-- | [report-tsm](/influxdb/v2.0/reference/cli/influxd/inspect/report-tsm/) | Run TSM report | -->
<!-- | [verify-seriesfile](/influxdb/v2.0/reference/cli/influxd/inspect/verify-seriesfile/) | Verify the integrity of series files | -->
<!-- | [verify-tsm](/influxdb/v2.0/reference/cli/influxd/inspect/verify-tsm/) | Check the consistency of TSM files | -->
<!-- | [verify-wal](/influxdb/v2.0/reference/cli/influxd/inspect/verify-wal/) | Check for corrupt WAL files | -->
## Flags
| Flag | | Description |

@ -8,7 +8,7 @@ menu:
influxdb_2_0_ref:
parent: influxd inspect
weight: 301
draft: true
---
The `influxd inspect build-tsi` command rebuilds the TSI index and, if necessary,

@ -8,7 +8,7 @@ menu:
influxdb_2_0_ref:
parent: influxd inspect
weight: 301
draft: true
---
The `influxd inspect compact-series-file` command compacts the [series file](/influxdb/v2.0/reference/glossary/#series-file)

@ -7,7 +7,7 @@ menu:
influxdb_2_0_ref:
parent: influxd inspect
weight: 301
draft: true
---
The `influxd inspect dump-tsi` command outputs low-level information about

@ -7,7 +7,7 @@ menu:
influxdb_2_0_ref:
parent: influxd inspect
weight: 301
draft: true
---
The `influxd inspect dumpwal` command outputs data from Write Ahead Log (WAL) files.

@ -8,7 +8,7 @@ menu:
influxdb_2_0_ref:
parent: influxd inspect
weight: 301
draft: true
---
The `influxd inspect export-blocks` command exports all blocks in one or more

@ -8,7 +8,7 @@ menu:
influxdb_2_0_ref:
parent: influxd inspect
weight: 301
draft: true
---
The `influxd inspect report-tsi` command analyzes Time Series Index (TSI) files

@ -9,7 +9,7 @@ menu:
influxdb_2_0_ref:
parent: influxd inspect
weight: 301
draft: true
---
The `influxd inspect report-tsm` command analyzes Time-Structured Merge Tree (TSM)

@ -7,7 +7,7 @@ menu:
influxdb_2_0_ref:
parent: influxd inspect
weight: 301
draft: true
---
The `influxd inspect verify-seriesfile` command verifies the integrity of series files.

@ -8,7 +8,7 @@ menu:
influxdb_2_0_ref:
parent: influxd inspect
weight: 301
draft: true
---
The `influxd inspect verify-tsm` command analyzes a set of TSM files for inconsistencies

Some files were not shown because too many files have changed in this diff.