diff --git a/api-docs/openapi/content/v2.5/info.yml b/api-docs/openapi/content/v2.5/info.yml index 090a4057b..e6e2b3637 100644 --- a/api-docs/openapi/content/v2.5/info.yml +++ b/api-docs/openapi/content/v2.5/info.yml @@ -4,7 +4,7 @@ description: | The InfluxDB v2 API provides a programmatic interface for all interactions with InfluxDB. Access the InfluxDB API using the `/api/v2/` endpoint. This documentation is generated from the - [InfluxDB OpenAPI specification](https://github.com/influxdata/openapi/blob/influxdb-oss-v2.5.0/contracts/ref/oss.yml). + [InfluxDB OpenAPI specification](https://github.com/influxdata/openapi/blob/influxdb-oss-v2.6.0/contracts/ref/oss.yml). license: name: MIT url: 'https://opensource.org/licenses/MIT' diff --git a/api-docs/v2.5/ref.yml b/api-docs/v2.5/ref.yml index 6c9934d3c..f2b385098 100644 --- a/api-docs/v2.5/ref.yml +++ b/api-docs/v2.5/ref.yml @@ -274,14 +274,14 @@ components: org: description: | The organization name. - Specifies the [organization](/influxdb/v2.5/reference/glossary/#organization) + Specifies the [organization](/influxdb/v2.6/reference/glossary/#organization) that the token is scoped to. readOnly: true type: string orgID: description: | The organization ID. - Specifies the [organization](/influxdb/v2.5/reference/glossary/#organization) that the authorization is scoped to. + Specifies the [organization](/influxdb/v2.6/reference/glossary/#organization) that the authorization is scoped to. type: string permissions: description: | @@ -295,7 +295,7 @@ components: description: | The API token. The token value is unique to the authorization. - [API tokens](/influxdb/v2.5/reference/glossary/#token) are + [API tokens](/influxdb/v2.6/reference/glossary/#token) are used to authenticate and authorize InfluxDB API requests and `influx` CLI commands--after receiving the request, InfluxDB checks that the token is valid and that the `permissions` allow the requested action(s). 
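The `token` description in the hunk above says InfluxDB authenticates each API request by checking the token and its `permissions`. As a minimal sketch of what a client does with such a token (stdlib only; the host URL and token value are placeholders, not from this spec):

```python
from urllib.request import Request

# Placeholder values -- substitute your own host and API token.
INFLUX_URL = "http://localhost:8086"
INFLUX_TOKEN = "INFLUX_API_TOKEN"

# Every /api/v2/ request carries an `Authorization: Token` header;
# InfluxDB checks that the token is valid and that its permissions
# allow the requested action(s).
req = Request(
    f"{INFLUX_URL}/api/v2/buckets",
    headers={"Authorization": f"Token {INFLUX_TOKEN}"},
)

print(req.get_header("Authorization"))  # Token INFLUX_API_TOKEN
```

The request object is only constructed here, not sent; sending it requires a running InfluxDB instance.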
@@ -308,13 +308,13 @@ components: user: description: | The user name. - Specifies the [user](/influxdb/v2.5/reference/glossary/#user) that owns the authorization. + Specifies the [user](/influxdb/v2.6/reference/glossary/#user) that owns the authorization. If the authorization is _scoped_ to a user, the user; otherwise, the creator of the authorization. readOnly: true type: string userID: - description: The user ID. Specifies the [user](/influxdb/v2.5/reference/glossary/#user) that owns the authorization. If _scoped_, the user that the authorization is scoped to; otherwise, the creator of the authorization. + description: The user ID. Specifies the [user](/influxdb/v2.6/reference/glossary/#user) that owns the authorization. If _scoped_, the user that the authorization is scoped to; otherwise, the creator of the authorization. readOnly: true type: string type: object @@ -6713,7 +6713,7 @@ info: The InfluxDB v2 API provides a programmatic interface for all interactions with InfluxDB. Access the InfluxDB API using the `/api/v2/` endpoint. This documentation is generated from the - [InfluxDB OpenAPI specification](https://github.com/influxdata/openapi/blob/influxdb-oss-v2.5.0/contracts/ref/oss.yml). + [InfluxDB OpenAPI specification](https://github.com/influxdata/openapi/blob/influxdb-oss-v2.6.0/contracts/ref/oss.yml). license: name: MIT url: https://opensource.org/licenses/MIT @@ -10978,7 +10978,7 @@ paths: #### Related guides - - [Manage users](/influxdb/v2.5/users/) + - [Manage users](/influxdb/v2.6/users/) example: influxdb-oss-session=19aaaZZZGOvP2GGryXVT2qYftlFKu3bIopurM6AGFow1yF1abhtOlbHfsc-d8gozZFC_6WxmlQIAwLMW5xs523w== in: cookie name: influxdb-oss-session @@ -18995,28 +18995,28 @@ paths: - $ref: '#/components/parameters/TraceSpan' - description: | A user ID. - Only returns legacy authorizations scoped to the specified [user](/influxdb/v2.5/reference/glossary/#user). 
+ Only returns legacy authorizations scoped to the specified [user](/influxdb/v2.6/reference/glossary/#user). in: query name: userID schema: type: string - description: | A user name. - Only returns legacy authorizations scoped to the specified [user](/influxdb/v2.5/reference/glossary/#user). + Only returns legacy authorizations scoped to the specified [user](/influxdb/v2.6/reference/glossary/#user). in: query name: user schema: type: string - description: | An organization ID. - Only returns legacy authorizations that belong to the specified [organization](/influxdb/v2.5/reference/glossary/#organization). + Only returns legacy authorizations that belong to the specified [organization](/influxdb/v2.6/reference/glossary/#organization). in: query name: orgID schema: type: string - description: | An organization name. - Only returns legacy authorizations that belong to the specified [organization](/influxdb/v2.5/reference/glossary/#organization). + Only returns legacy authorizations that belong to the specified [organization](/influxdb/v2.6/reference/glossary/#organization). in: query name: org schema: @@ -19242,7 +19242,7 @@ paths: description: | Media type that the client can understand. - **Note**: With `application/csv`, query results include [**unix timestamps**](/influxdb/v2.5/reference/glossary/#unix-timestamp) instead of [RFC3339 timestamps](/influxdb/v2.5/reference/glossary/#rfc3339-timestamp). + **Note**: With `application/csv`, query results include [**unix timestamps**](/influxdb/v2.6/reference/glossary/#unix-timestamp) instead of [RFC3339 timestamps](/influxdb/v2.6/reference/glossary/#rfc3339-timestamp). enum: - application/json - application/csv @@ -19277,8 +19277,8 @@ paths: type: string - description: | The database to query data from. - This is mapped to an InfluxDB [bucket](/influxdb/v2.5/reference/glossary/#bucket). - For more information, see [Database and retention policy mapping](/influxdb/v2.5/api/influxdb-1x/dbrp/). 
+ This is mapped to an InfluxDB [bucket](/influxdb/v2.6/reference/glossary/#bucket). + For more information, see [Database and retention policy mapping](/influxdb/v2.6/api/influxdb-1x/dbrp/). in: query name: db required: true @@ -19286,8 +19286,8 @@ paths: type: string - description: | The retention policy to query data from. - This is mapped to an InfluxDB [bucket](/influxdb/v2.5/reference/glossary/#bucket). - For more information, see [Database and retention policy mapping](/influxdb/v2.5/api/influxdb-1x/dbrp/). + This is mapped to an InfluxDB [bucket](/influxdb/v2.6/reference/glossary/#bucket). + For more information, see [Database and retention policy mapping](/influxdb/v2.6/api/influxdb-1x/dbrp/). in: query name: rp schema: @@ -19300,8 +19300,8 @@ paths: type: string - description: | A unix timestamp precision. - Formats timestamps as [unix (epoch) timestamps](/influxdb/v2.5/reference/glossary/#unix-timestamp) the specified precision - instead of [RFC3339 timestamps](/influxdb/v2.5/reference/glossary/#rfc3339-timestamp) with nanosecond precision. + Formats timestamps as [unix (epoch) timestamps](/influxdb/v2.6/reference/glossary/#unix-timestamp) the specified precision + instead of [RFC3339 timestamps](/influxdb/v2.6/reference/glossary/#rfc3339-timestamp) with nanosecond precision. in: query name: epoch schema: @@ -19353,9 +19353,9 @@ paths: description: | #### InfluxDB Cloud: - returns this error if a **read** or **write** request exceeds your - plan's [adjustable service quotas](/influxdb/v2.5/account-management/limits/#adjustable-service-quotas) + plan's [adjustable service quotas](/influxdb/v2.6/account-management/limits/#adjustable-service-quotas) or if a **delete** request exceeds the maximum - [global limit](/influxdb/v2.5/account-management/limits/#global-limits) + [global limit](/influxdb/v2.6/account-management/limits/#global-limits) - returns `Retry-After` header that describes when to try the write again. 
#### InfluxDB OSS: diff --git a/api-docs/v2.6/ref.yml b/api-docs/v2.6/ref.yml new file mode 100644 index 000000000..b7683bc64 --- /dev/null +++ b/api-docs/v2.6/ref.yml @@ -0,0 +1,20143 @@ +components: + examples: + AuthorizationPostRequest: + description: Creates an authorization. + summary: An authorization for a resource type + value: + description: iot_users read buckets + orgID: INFLUX_ORG_ID + permissions: + - action: read + resource: + type: buckets + AuthorizationWithResourcePostRequest: + description: Creates an authorization for access to a specific resource. + summary: An authorization for a resource + value: + description: iot_users read buckets + orgID: INFLUX_ORG_ID + permissions: + - action: read + resource: + id: INFLUX_BUCKET_ID + type: buckets + AuthorizationWithUserPostRequest: + description: Creates an authorization scoped to a specific user. + summary: An authorization scoped to a user + value: + description: iot_user write to bucket + orgID: INFLUX_ORG_ID + permissions: + - action: write + resource: + id: INFLUX_BUCKET_ID + type: buckets + userID: INFLUX_USER_ID + TaskWithFluxRequest: + description: Sets the `flux` property with Flux task options and a query. + summary: A task with Flux + value: + description: This task contains Flux that configures the task schedule and downsamples CPU data every hour. + flux: "option task = {name: \"CPU Total 1 Hour New\", every: 1h}from(bucket: \"telegraf\")|> range(start: -1h)|> filter(fn: (r) => (r._measurement == \"cpu\"))|> filter(fn: (r) =>\n\t\t(r._field == \"usage_system\"))|> filter(fn: (r) => (r.cpu == \"cpu-total\"))|> aggregateWindow(every: 1h, fn: max)|> to(bucket: \"cpu_usage_user_total_1h\", org: \"INFLUX_ORG\")" + status: active + parameters: + After: + description: | + A resource ID to seek from. + Returns records created after the specified record; + results don't include the specified record. + + Use `after` instead of the `offset` parameter. 
+ For more information about pagination parameters, see [Pagination](/influxdb/latest/api/#tag/Pagination). + in: query + name: after + required: false + schema: + type: string + Descending: + in: query + name: descending + required: false + schema: + default: false + type: boolean + Limit: + description: | + Limits the number of records returned. Default is `20`. + in: query + name: limit + required: false + schema: + default: 20 + maximum: 100 + minimum: 1 + type: integer + Offset: + description: | + The offset for pagination. + The number of records to skip. + + For more information about pagination parameters, see [Pagination](/influxdb/latest/api/#tag/Pagination). + in: query + name: offset + required: false + schema: + minimum: 0 + type: integer + SortBy: + in: query + name: sortBy + required: false + schema: + type: string + TraceSpan: + description: OpenTracing span context + example: + baggage: + key: value + span_id: '1' + trace_id: '1' + in: header + name: Zap-Trace-Span + required: false + schema: + type: string + responses: + AuthorizationError: + content: + application/json: + examples: + tokenNotAuthorized: + summary: Token is not authorized to access a resource + value: + code: unauthorized + message: unauthorized access + schema: + properties: + code: + description: | + The HTTP status code description. Default is `unauthorized`. + enum: + - unauthorized + readOnly: true + type: string + message: + description: A human-readable message that may contain detail about the error. + readOnly: true + type: string + description: | + Unauthorized. The error may indicate one of the following: + + * The `Authorization: Token` header is missing or malformed. + * The API token value is missing from the header. + * The token doesn't have sufficient permissions to write to this organization and bucket. 
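The `AuthorizationError` response defined above returns a body with a `code` (default `unauthorized`) and a human-readable `message`. A sketch of client-side handling (the payload literal mirrors the `tokenNotAuthorized` example; the helper name is my own, not part of the spec):

```python
import json

# Example 401 payload, mirroring the tokenNotAuthorized example above.
raw_body = '{"code": "unauthorized", "message": "unauthorized access"}'

def describe_auth_error(body: str) -> str:
    """Summarize an InfluxDB authorization error response body."""
    err = json.loads(body)
    # `code` defaults to "unauthorized"; `message` may carry detail
    # about which of the possible causes applied.
    return f"{err.get('code', 'unauthorized')}: {err.get('message', '')}"

print(describe_auth_error(raw_body))  # unauthorized: unauthorized access
```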
+ BadRequestError: + content: + application/json: + examples: + orgProvidedNotFound: + summary: The org or orgID passed doesn't own the token passed in the header + value: + code: invalid + message: 'failed to decode request body: organization not found' + schema: + $ref: '#/components/schemas/Error' + description: | + Bad request. + The response body contains detail about the error. + + #### InfluxDB OSS + + - Returns this error if an incorrect value is passed in the `org` parameter or `orgID` parameter. + GeneralServerError: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Non 2XX error response from server. + InternalServerError: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: | + Internal server error. + The server encountered an unexpected situation. + ResourceNotFoundError: + content: + application/json: + examples: + bucket-not-found: + summary: Bucket name not found + value: + code: not found + message: bucket "air_sensor" not found + org-not-found: + summary: Organization name not found + value: + code: not found + message: organization name "my-org" not found + orgID-not-found: + summary: Organization ID not found + value: + code: not found + message: organization not found + schema: + $ref: '#/components/schemas/Error' + description: | + Not found. + A requested resource was not found. + The response body contains the requested resource type and the name value + (if you passed it)--for example: + + - `"organization name \"my-org\" not found"` + - `"organization not found"`: indicates you passed an ID that did not match + an organization. + ServerError: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Non 2XX error response from server. 
+ schemas: + ASTResponse: + description: Contains the AST for the supplied Flux query + properties: + ast: + $ref: '#/components/schemas/Package' + type: object + AddResourceMemberRequestBody: + properties: + id: + description: | + The ID of the user to add to the resource. + type: string + name: + description: | + The name of the user to add to the resource. + type: string + required: + - id + type: object + AnalyzeQueryResponse: + properties: + errors: + items: + properties: + character: + type: integer + column: + type: integer + line: + type: integer + message: + type: string + type: object + type: array + type: object + ArrayExpression: + description: Used to create and directly specify the elements of an array object + properties: + elements: + description: Elements of the array + items: + $ref: '#/components/schemas/Expression' + type: array + type: + $ref: '#/components/schemas/NodeType' + type: object + Authorization: + allOf: + - $ref: '#/components/schemas/AuthorizationUpdateRequest' + - properties: + createdAt: + format: date-time + readOnly: true + type: string + id: + description: The authorization ID. + readOnly: true + type: string + links: + example: + self: /api/v2/authorizations/1 + user: /api/v2/users/12 + properties: + self: + $ref: '#/components/schemas/Link' + readOnly: true + user: + $ref: '#/components/schemas/Link' + readOnly: true + readOnly: true + type: object + org: + description: | + The organization name. + Specifies the [organization](/influxdb/v2.6/reference/glossary/#organization) + that the token is scoped to. + readOnly: true + type: string + orgID: + description: | + The organization ID. + Specifies the [organization](/influxdb/v2.6/reference/glossary/#organization) that the authorization is scoped to. + type: string + permissions: + description: | + The list of permissions. + An authorization must have at least one permission. 
+ items: + $ref: '#/components/schemas/Permission' + minItems: 1 + type: array + token: + description: | + The API token. + The token value is unique to the authorization. + [API tokens](/influxdb/v2.6/reference/glossary/#token) are + used to authenticate and authorize InfluxDB API requests and `influx` + CLI commands--after receiving the request, InfluxDB checks that the + token is valid and that the `permissions` allow the requested action(s). + readOnly: true + type: string + updatedAt: + format: date-time + readOnly: true + type: string + user: + description: | + The user name. + Specifies the [user](/influxdb/v2.6/reference/glossary/#user) that owns the authorization. + If the authorization is _scoped_ to a user, the user; + otherwise, the creator of the authorization. + readOnly: true + type: string + userID: + description: The user ID. Specifies the [user](/influxdb/v2.6/reference/glossary/#user) that owns the authorization. If _scoped_, the user that the authorization is scoped to; otherwise, the creator of the authorization. + readOnly: true + type: string + type: object + required: + - orgID + - permissions + AuthorizationPostRequest: + allOf: + - $ref: '#/components/schemas/AuthorizationUpdateRequest' + - properties: + orgID: + description: | + An organization ID. + Specifies the organization that owns the authorization. + type: string + permissions: + description: | + A list of permissions for an authorization. + In the list, provide at least one `permission` object. + + In a `permission`, the `resource.type` property grants access to all + resources of the specified type. + To grant access to only a specific resource, specify the + `resource.id` property. + items: + $ref: '#/components/schemas/Permission' + minItems: 1 + type: array + userID: + description: | + A user ID. + Specifies the user that the authorization is scoped to. 
+ + When a user authenticates with username and password, + InfluxDB generates a _user session_ with all the permissions + specified by all the user's authorizations. + type: string + type: object + required: + - orgID + - permissions + AuthorizationUpdateRequest: + properties: + description: + description: A description of the token. + type: string + status: + default: active + description: Status of the token. If `inactive`, InfluxDB rejects requests that use the token. + enum: + - active + - inactive + type: string + Authorizations: + properties: + authorizations: + items: + $ref: '#/components/schemas/Authorization' + type: array + links: + $ref: '#/components/schemas/Links' + readOnly: true + type: object + Axes: + description: The viewport for a View's visualizations + properties: + x: + $ref: '#/components/schemas/Axis' + 'y': + $ref: '#/components/schemas/Axis' + required: + - x + - 'y' + type: object + Axis: + description: Axis used in a visualization. + properties: + base: + description: Radix for formatting axis values. + enum: + - '' + - '2' + - '10' + type: string + bounds: + description: The extents of the axis in the form [lower, upper]. Clients determine whether bounds are inclusive or exclusive of their limits. + items: + type: string + maxItems: 2 + minItems: 0 + type: array + label: + description: Description of the axis. + type: string + prefix: + description: Label prefix for formatting axis values. + type: string + scale: + $ref: '#/components/schemas/AxisScale' + suffix: + description: Label suffix for formatting axis values. + type: string + type: object + AxisScale: + description: 'Scale is the axis formatting scale. 
Supported: "log", "linear"' + enum: + - log + - linear + type: string + BadStatement: + description: A placeholder for statements for which no correct statement nodes can be created + properties: + text: + description: Raw source text + type: string + type: + $ref: '#/components/schemas/NodeType' + type: object + BandViewProperties: + properties: + adaptiveZoomHide: + type: boolean + axes: + $ref: '#/components/schemas/Axes' + colors: + description: Colors define color encoding of data into a visualization + items: + $ref: '#/components/schemas/DashboardColor' + type: array + generateXAxisTicks: + items: + type: string + type: array + generateYAxisTicks: + items: + type: string + type: array + geom: + $ref: '#/components/schemas/XYGeom' + hoverDimension: + enum: + - auto + - x + - 'y' + - xy + type: string + legendColorizeRows: + type: boolean + legendHide: + type: boolean + legendOpacity: + format: float + type: number + legendOrientationThreshold: + type: integer + lowerColumn: + type: string + mainColumn: + type: string + note: + type: string + queries: + items: + $ref: '#/components/schemas/DashboardQuery' + type: array + shape: + enum: + - chronograf-v2 + type: string + showNoteWhenEmpty: + description: If true, will display note when empty + type: boolean + staticLegend: + $ref: '#/components/schemas/StaticLegend' + timeFormat: + type: string + type: + enum: + - band + type: string + upperColumn: + type: string + xColumn: + type: string + xTickStart: + format: float + type: number + xTickStep: + format: float + type: number + xTotalTicks: + type: integer + yColumn: + type: string + yTickStart: + format: float + type: number + yTickStep: + format: float + type: number + yTotalTicks: + type: integer + required: + - type + - geom + - queries + - shape + - axes + - colors + - note + - showNoteWhenEmpty + type: object + BinaryExpression: + description: uses binary operators to act on two operands in an expression + properties: + left: + $ref: 
'#/components/schemas/Expression' + operator: + type: string + right: + $ref: '#/components/schemas/Expression' + type: + $ref: '#/components/schemas/NodeType' + type: object + Block: + description: A set of statements + properties: + body: + description: Block body + items: + $ref: '#/components/schemas/Statement' + type: array + type: + $ref: '#/components/schemas/NodeType' + type: object + BooleanLiteral: + description: Represents boolean values + properties: + type: + $ref: '#/components/schemas/NodeType' + value: + type: boolean + type: object + Bucket: + properties: + createdAt: + format: date-time + readOnly: true + type: string + description: + type: string + id: + readOnly: true + type: string + labels: + $ref: '#/components/schemas/Labels' + links: + example: + labels: /api/v2/buckets/1/labels + members: /api/v2/buckets/1/members + org: /api/v2/orgs/2 + owners: /api/v2/buckets/1/owners + self: /api/v2/buckets/1 + write: /api/v2/write?org=2&bucket=1 + properties: + labels: + $ref: '#/components/schemas/Link' + description: The URL to retrieve labels for this bucket. + members: + $ref: '#/components/schemas/Link' + description: The URL to retrieve members that can read this bucket. + org: + $ref: '#/components/schemas/Link' + description: The URL to retrieve parent organization for this bucket. + owners: + $ref: '#/components/schemas/Link' + description: The URL to retrieve owners that can read and write to this bucket. + self: + $ref: '#/components/schemas/Link' + description: The URL for this bucket. + write: + $ref: '#/components/schemas/Link' + description: The URL to write line protocol to this bucket. 
+ readOnly: true + type: object + name: + type: string + orgID: + type: string + retentionRules: + $ref: '#/components/schemas/RetentionRules' + rp: + type: string + schemaType: + $ref: '#/components/schemas/SchemaType' + default: implicit + type: + default: user + enum: + - user + - system + readOnly: true + type: string + updatedAt: + format: date-time + readOnly: true + type: string + required: + - name + - retentionRules + BucketMetadataManifest: + properties: + bucketID: + type: string + bucketName: + type: string + defaultRetentionPolicy: + type: string + description: + type: string + organizationID: + type: string + organizationName: + type: string + retentionPolicies: + $ref: '#/components/schemas/RetentionPolicyManifests' + required: + - organizationID + - organizationName + - bucketID + - bucketName + - defaultRetentionPolicy + - retentionPolicies + type: object + BucketMetadataManifests: + items: + $ref: '#/components/schemas/BucketMetadataManifest' + type: array + BucketShardMapping: + properties: + newId: + format: int64 + type: integer + oldId: + format: int64 + type: integer + required: + - oldId + - newId + type: object + BucketShardMappings: + items: + $ref: '#/components/schemas/BucketShardMapping' + type: array + Buckets: + properties: + buckets: + items: + $ref: '#/components/schemas/Bucket' + type: array + links: + $ref: '#/components/schemas/Links' + readOnly: true + type: object + BuilderAggregateFunctionType: + enum: + - filter + - group + type: string + BuilderConfig: + properties: + aggregateWindow: + properties: + fillValues: + type: boolean + period: + type: string + type: object + buckets: + items: + type: string + type: array + functions: + items: + $ref: '#/components/schemas/BuilderFunctionsType' + type: array + tags: + items: + $ref: '#/components/schemas/BuilderTagsType' + type: array + type: object + BuilderFunctionsType: + properties: + name: + type: string + type: object + BuilderTagsType: + properties: + aggregateFunctionType: 
+ $ref: '#/components/schemas/BuilderAggregateFunctionType' + key: + type: string + values: + items: + type: string + type: array + type: object + BuiltinStatement: + description: Declares a builtin identifier and its type + properties: + id: + $ref: '#/components/schemas/Identifier' + type: + $ref: '#/components/schemas/NodeType' + type: object + CallExpression: + description: Represents a function call + properties: + arguments: + description: Function arguments + items: + $ref: '#/components/schemas/Expression' + type: array + callee: + $ref: '#/components/schemas/Expression' + type: + $ref: '#/components/schemas/NodeType' + type: object + Cell: + properties: + h: + format: int32 + type: integer + id: + type: string + links: + properties: + self: + type: string + view: + type: string + type: object + viewID: + description: The reference to a view from the views API. + type: string + w: + format: int32 + type: integer + x: + format: int32 + type: integer + 'y': + format: int32 + type: integer + type: object + CellUpdate: + properties: + h: + format: int32 + type: integer + w: + format: int32 + type: integer + x: + format: int32 + type: integer + 'y': + format: int32 + type: integer + type: object + CellWithViewProperties: + allOf: + - $ref: '#/components/schemas/Cell' + - properties: + name: + type: string + properties: + $ref: '#/components/schemas/ViewProperties' + type: object + type: object + Cells: + items: + $ref: '#/components/schemas/Cell' + type: array + CellsWithViewProperties: + items: + $ref: '#/components/schemas/CellWithViewProperties' + type: array + Check: + allOf: + - $ref: '#/components/schemas/CheckDiscriminator' + CheckBase: + properties: + createdAt: + format: date-time + readOnly: true + type: string + description: + description: An optional description of the check. 
+ type: string + id: + readOnly: true + type: string + labels: + $ref: '#/components/schemas/Labels' + lastRunError: + readOnly: true + type: string + lastRunStatus: + enum: + - failed + - success + - canceled + readOnly: true + type: string + latestCompleted: + description: A timestamp ([RFC3339 date/time format](/influxdb/latest/reference/glossary/#rfc3339-timestamp)) of the latest scheduled and completed run. + format: date-time + readOnly: true + type: string + links: + example: + labels: /api/v2/checks/1/labels + members: /api/v2/checks/1/members + owners: /api/v2/checks/1/owners + query: /api/v2/checks/1/query + self: /api/v2/checks/1 + properties: + labels: + $ref: '#/components/schemas/Link' + description: The URL to retrieve labels for this check. + members: + $ref: '#/components/schemas/Link' + description: The URL to retrieve members for this check. + owners: + $ref: '#/components/schemas/Link' + description: The URL to retrieve owners for this check. + query: + $ref: '#/components/schemas/Link' + description: The URL to retrieve the Flux script for this check. + self: + $ref: '#/components/schemas/Link' + description: The URL for this check. + readOnly: true + type: object + name: + type: string + orgID: + description: The ID of the organization that owns this check. + type: string + ownerID: + description: The ID of creator used to create this check. + readOnly: true + type: string + query: + $ref: '#/components/schemas/DashboardQuery' + status: + $ref: '#/components/schemas/TaskStatusType' + taskID: + description: The ID of the task associated with this check. 
+ type: string + updatedAt: + format: date-time + readOnly: true + type: string + required: + - name + - orgID + - query + CheckDiscriminator: + discriminator: + mapping: + custom: '#/components/schemas/CustomCheck' + deadman: '#/components/schemas/DeadmanCheck' + threshold: '#/components/schemas/ThresholdCheck' + propertyName: type + oneOf: + - $ref: '#/components/schemas/DeadmanCheck' + - $ref: '#/components/schemas/ThresholdCheck' + - $ref: '#/components/schemas/CustomCheck' + CheckPatch: + properties: + description: + type: string + name: + type: string + status: + enum: + - active + - inactive + type: string + type: object + CheckStatusLevel: + description: The state to record if check matches a criteria. + enum: + - UNKNOWN + - OK + - INFO + - CRIT + - WARN + type: string + CheckViewProperties: + properties: + adaptiveZoomHide: + type: boolean + check: + $ref: '#/components/schemas/Check' + checkID: + type: string + colors: + description: Colors define color encoding of data into a visualization + items: + $ref: '#/components/schemas/DashboardColor' + type: array + legendColorizeRows: + type: boolean + legendHide: + type: boolean + legendOpacity: + format: float + type: number + legendOrientationThreshold: + type: integer + queries: + items: + $ref: '#/components/schemas/DashboardQuery' + type: array + shape: + enum: + - chronograf-v2 + type: string + type: + enum: + - check + type: string + required: + - type + - shape + - checkID + - queries + - colors + type: object + Checks: + properties: + checks: + items: + $ref: '#/components/schemas/Check' + type: array + links: + $ref: '#/components/schemas/Links' + ColorMapping: + additionalProperties: + type: string + description: A color mapping is an object that maps time series data to a UI color scheme to allow the UI to render graphs consistent colors across reloads. 
+ example: + configcat_deployments-autopromotionblocker: '#663cd0' + measurement_birdmigration_europe: '#663cd0' + series_id_1: '#edf529' + series_id_2: '#edf529' + type: object + ConditionalExpression: + description: Selects one of two expressions, `Alternate` or `Consequent`, depending on a third boolean expression, `Test` + properties: + alternate: + $ref: '#/components/schemas/Expression' + consequent: + $ref: '#/components/schemas/Expression' + test: + $ref: '#/components/schemas/Expression' + type: + $ref: '#/components/schemas/NodeType' + type: object + Config: + properties: + config: + type: object + type: object + ConstantVariableProperties: + properties: + type: + enum: + - constant + type: string + values: + items: + type: string + type: array + CreateCell: + properties: + h: + format: int32 + type: integer + name: + type: string + usingView: + description: Makes a copy of the provided view. + type: string + w: + format: int32 + type: integer + x: + format: int32 + type: integer + 'y': + format: int32 + type: integer + type: object + CreateDashboardRequest: + properties: + description: + description: The user-facing description of the dashboard. + type: string + name: + description: The user-facing name of the dashboard. + type: string + orgID: + description: The ID of the organization that owns the dashboard. + type: string + required: + - orgID + - name + CustomCheck: + allOf: + - $ref: '#/components/schemas/CheckBase' + - properties: + type: + enum: + - custom + type: string + required: + - type + type: object + DBRP: + properties: + bucketID: + description: | + A bucket ID. + Identifies the bucket used as the target for the translation. + type: string + database: + description: | + A database name. + Identifies the InfluxDB v1 database. + type: string + default: + description: | + If set to `true`, this DBRP mapping is the default retention policy + for the database (specified by the `database` property's value). 
+ type: boolean + id: + description: | + The resource ID that InfluxDB uses to uniquely identify the database retention policy (DBRP) mapping. + readOnly: true + type: string + links: + $ref: '#/components/schemas/Links' + orgID: + description: | + An organization ID. + Identifies the [organization](/influxdb/latest/reference/glossary/#organization) that owns the mapping. + type: string + retention_policy: + description: | + A [retention policy](/influxdb/v1.8/concepts/glossary/#retention-policy-rp) name. + Identifies the InfluxDB v1 retention policy mapping. + type: string + virtual: + description: Indicates an autogenerated, virtual mapping based on the bucket name. Currently only available in OSS. + type: boolean + required: + - id + - orgID + - bucketID + - database + - retention_policy + - default + type: object + DBRPCreate: + properties: + bucketID: + description: | + A bucket ID. + Identifies the bucket used as the target for the translation. + type: string + database: + description: | + A database name. + Identifies the InfluxDB v1 database. + type: string + default: + description: | + Set to `true` to use this DBRP mapping as the default retention policy + for the database (specified by the `database` property's value). + type: boolean + org: + description: | + An organization name. + Identifies the [organization](/influxdb/latest/reference/glossary/#organization) that owns the mapping. + type: string + orgID: + description: | + An organization ID. + Identifies the [organization](/influxdb/latest/reference/glossary/#organization) that owns the mapping. + type: string + retention_policy: + description: | + A [retention policy](/influxdb/v1.8/concepts/glossary/#retention-policy-rp) name. + Identifies the InfluxDB v1 retention policy mapping. 
+ type: string + required: + - bucketID + - database + - retention_policy + type: object + DBRPGet: + properties: + content: + $ref: '#/components/schemas/DBRP' + required: true + type: object + DBRPUpdate: + properties: + default: + description: | + Set to `true` to use this DBRP mapping as the default retention policy + for the database (specified by the `database` property's value). + To remove the default mapping, set to `false`. + type: boolean + retention_policy: + description: | + A [retention policy](/influxdb/v1.8/concepts/glossary/#retention-policy-rp) name. + Identifies the InfluxDB v1 retention policy mapping. + type: string + DBRPs: + properties: + content: + items: + $ref: '#/components/schemas/DBRP' + type: array + Dashboard: + allOf: + - $ref: '#/components/schemas/CreateDashboardRequest' + - properties: + cells: + $ref: '#/components/schemas/Cells' + id: + readOnly: true + type: string + labels: + $ref: '#/components/schemas/Labels' + links: + example: + cells: /api/v2/dashboards/1/cells + labels: /api/v2/dashboards/1/labels + members: /api/v2/dashboards/1/members + org: /api/v2/labels/1 + owners: /api/v2/dashboards/1/owners + self: /api/v2/dashboards/1 + properties: + cells: + $ref: '#/components/schemas/Link' + labels: + $ref: '#/components/schemas/Link' + members: + $ref: '#/components/schemas/Link' + org: + $ref: '#/components/schemas/Link' + owners: + $ref: '#/components/schemas/Link' + self: + $ref: '#/components/schemas/Link' + type: object + meta: + properties: + createdAt: + format: date-time + type: string + updatedAt: + format: date-time + type: string + type: object + type: object + type: object + DashboardColor: + description: Defines an encoding of data value into color space. + properties: + hex: + description: The hex number of the color + maxLength: 7 + minLength: 7 + type: string + id: + description: The unique ID of the view color. + type: string + name: + description: The user-facing name of the hex color. 
+ type: string + type: + description: Type is how the color is used. + enum: + - min + - max + - threshold + - scale + - text + - background + type: string + value: + description: The data value mapped to this color. + format: float + type: number + required: + - id + - type + - hex + - name + - value + type: object + DashboardQuery: + properties: + builderConfig: + $ref: '#/components/schemas/BuilderConfig' + editMode: + $ref: '#/components/schemas/QueryEditMode' + name: + type: string + text: + description: The text of the Flux query. + type: string + type: object + DashboardWithViewProperties: + allOf: + - $ref: '#/components/schemas/CreateDashboardRequest' + - properties: + cells: + $ref: '#/components/schemas/CellsWithViewProperties' + id: + readOnly: true + type: string + labels: + $ref: '#/components/schemas/Labels' + links: + example: + cells: /api/v2/dashboards/1/cells + labels: /api/v2/dashboards/1/labels + members: /api/v2/dashboards/1/members + org: /api/v2/labels/1 + owners: /api/v2/dashboards/1/owners + self: /api/v2/dashboards/1 + properties: + cells: + $ref: '#/components/schemas/Link' + labels: + $ref: '#/components/schemas/Link' + members: + $ref: '#/components/schemas/Link' + org: + $ref: '#/components/schemas/Link' + owners: + $ref: '#/components/schemas/Link' + self: + $ref: '#/components/schemas/Link' + type: object + meta: + properties: + createdAt: + format: date-time + type: string + updatedAt: + format: date-time + type: string + type: object + type: object + type: object + Dashboards: + properties: + dashboards: + items: + $ref: '#/components/schemas/Dashboard' + type: array + links: + $ref: '#/components/schemas/Links' + type: object + DateTimeLiteral: + description: Represents an instant in time with nanosecond precision in [RFC3339Nano date/time format](/influxdb/latest/reference/glossary/#rfc3339nano-timestamp). 
+ properties: + type: + $ref: '#/components/schemas/NodeType' + value: + format: date-time + type: string + type: object + DeadmanCheck: + allOf: + - $ref: '#/components/schemas/CheckBase' + - properties: + every: + description: Check repetition interval. + type: string + level: + $ref: '#/components/schemas/CheckStatusLevel' + offset: + description: Duration to delay after the schedule, before executing check. + type: string + reportZero: + description: If only zero values reported since time, trigger an alert + type: boolean + staleTime: + description: String duration for time that a series is considered stale and should not trigger deadman. + type: string + statusMessageTemplate: + description: The template used to generate and write a status message. + type: string + tags: + description: List of tags to write to each status. + items: + properties: + key: + type: string + value: + type: string + type: object + type: array + timeSince: + description: String duration before deadman triggers. + type: string + type: + enum: + - deadman + type: string + required: + - type + type: object + DecimalPlaces: + description: Indicates whether decimal places should be enforced, and how many digits it should show. + properties: + digits: + description: The number of digits after decimal to display + format: int32 + type: integer + isEnforced: + description: Indicates whether decimal point setting should be enforced + type: boolean + type: object + DeletePredicateRequest: + description: The delete predicate request. + properties: + predicate: + description: | + An expression in [delete predicate syntax](/influxdb/latest/reference/syntax/delete-predicate/). + example: tag1="value1" and (tag2="value2" and tag3!="value3") + type: string + start: + description: | + A timestamp ([RFC3339 date/time format](/influxdb/latest/reference/glossary/#rfc3339-timestamp)). + The earliest time to delete from. 
+ format: date-time + type: string + stop: + description: | + A timestamp ([RFC3339 date/time format](/influxdb/latest/reference/glossary/#rfc3339-timestamp)). + The latest time to delete from. + format: date-time + type: string + required: + - start + - stop + type: object + Dialect: + description: | + Options for tabular data output. + Default output is [annotated CSV](/influxdb/latest/reference/syntax/annotated-csv/#csv-response-format) with headers. + + For more information about tabular data **dialect**, + see [W3 metadata vocabulary for tabular data](https://www.w3.org/TR/2015/REC-tabular-metadata-20151217/#dialect-descriptions). + properties: + annotations: + description: | + Annotation rows to include in the results. + An _annotation_ is metadata associated with an object (column) in the data model. + + #### Related guides + + - See [Annotated CSV annotations](/influxdb/latest/reference/syntax/annotated-csv/#annotations) for examples and more information. + + For more information about **annotations** in tabular data, + see [W3 metadata vocabulary for tabular data](https://www.w3.org/TR/2015/REC-tabular-data-model-20151217/#columns). + items: + enum: + - group + - datatype + - default + type: string + type: array + uniqueItems: true + commentPrefix: + default: '#' + description: The character prefixed to comment strings. Default is a number sign (`#`). + maxLength: 1 + minLength: 0 + type: string + dateTimeFormat: + default: RFC3339 + description: | + The format for timestamps in results. + Default is [`RFC3339` date/time format](/influxdb/latest/reference/glossary/#rfc3339-timestamp). + To include nanoseconds in timestamps, use `RFC3339Nano`. 
+ + #### Example formatted date/time values + + | Format | Value | + |:------------|:----------------------------| + | `RFC3339` | `"2006-01-02T15:04:05Z07:00"` | + | `RFC3339Nano` | `"2006-01-02T15:04:05.999999999Z07:00"` | + enum: + - RFC3339 + - RFC3339Nano + type: string + delimiter: + default: ',' + description: The separator used between cells. Default is a comma (`,`). + maxLength: 1 + minLength: 1 + type: string + header: + default: true + description: If true, the results contain a header row. + type: boolean + type: object + DictExpression: + description: Used to create and directly specify the elements of a dictionary + properties: + elements: + description: Elements of the dictionary + items: + $ref: '#/components/schemas/DictItem' + type: array + type: + $ref: '#/components/schemas/NodeType' + type: object + DictItem: + description: A key-value pair in a dictionary. + properties: + key: + $ref: '#/components/schemas/Expression' + type: + $ref: '#/components/schemas/NodeType' + val: + $ref: '#/components/schemas/Expression' + type: object + Duration: + description: A pair consisting of length of time and the unit of time measured. It is the atomic unit from which all duration literals are composed. + properties: + magnitude: + type: integer + type: + $ref: '#/components/schemas/NodeType' + unit: + type: string + type: object + DurationLiteral: + description: Represents the elapsed time between two instants as an int64 nanosecond count with syntax of golang's time.Duration + properties: + type: + $ref: '#/components/schemas/NodeType' + values: + description: Duration values + items: + $ref: '#/components/schemas/Duration' + type: array + type: object + Error: + properties: + code: + $ref: '#/components/schemas/ErrorCode' + description: code is the machine-readable error code. 
+ enum: + - internal error + - not found + - conflict + - invalid + - unprocessable entity + - empty value + - unavailable + - forbidden + - too many requests + - unauthorized + - method not allowed + - request too large + - unsupported media type + readOnly: true + type: string + err: + description: Stack of errors that occurred during processing of the request. Useful for debugging. + readOnly: true + type: string + message: + description: Human-readable message. + readOnly: true + type: string + op: + description: Describes the logical code operation when the error occurred. Useful for debugging. + readOnly: true + type: string + required: + - code + ErrorCode: + description: code is the machine-readable error code. + enum: + - internal error + - not found + - conflict + - invalid + - unprocessable entity + - empty value + - unavailable + - forbidden + - too many requests + - unauthorized + - method not allowed + - request too large + - unsupported media type + readOnly: true + type: string + Expression: + oneOf: + - $ref: '#/components/schemas/ArrayExpression' + - $ref: '#/components/schemas/DictExpression' + - $ref: '#/components/schemas/FunctionExpression' + - $ref: '#/components/schemas/BinaryExpression' + - $ref: '#/components/schemas/CallExpression' + - $ref: '#/components/schemas/ConditionalExpression' + - $ref: '#/components/schemas/LogicalExpression' + - $ref: '#/components/schemas/MemberExpression' + - $ref: '#/components/schemas/IndexExpression' + - $ref: '#/components/schemas/ObjectExpression' + - $ref: '#/components/schemas/ParenExpression' + - $ref: '#/components/schemas/PipeExpression' + - $ref: '#/components/schemas/UnaryExpression' + - $ref: '#/components/schemas/BooleanLiteral' + - $ref: '#/components/schemas/DateTimeLiteral' + - $ref: '#/components/schemas/DurationLiteral' + - $ref: '#/components/schemas/FloatLiteral' + - $ref: '#/components/schemas/IntegerLiteral' + - $ref: '#/components/schemas/PipeLiteral' + - $ref: 
'#/components/schemas/RegexpLiteral' + - $ref: '#/components/schemas/StringLiteral' + - $ref: '#/components/schemas/UnsignedIntegerLiteral' + - $ref: '#/components/schemas/Identifier' + ExpressionStatement: + description: May consist of an expression that doesn't return a value and is executed solely for its side-effects + properties: + expression: + $ref: '#/components/schemas/Expression' + type: + $ref: '#/components/schemas/NodeType' + type: object + Field: + properties: + alias: + description: Alias overrides the field name in the returned response. Applies only if type is `func` + type: string + args: + description: Args are the arguments to the function + items: + $ref: '#/components/schemas/Field' + type: array + type: + description: '`type` describes the field type. `func` is a function. `field` is a field reference.' + enum: + - func + - field + - integer + - number + - regex + - wildcard + type: string + value: + description: value is the value of the field. Meaning of the value is implied by the `type` key + type: string + type: object + File: + description: Represents a source from a single file + properties: + body: + description: List of Flux statements + items: + $ref: '#/components/schemas/Statement' + type: array + imports: + description: A list of package imports + items: + $ref: '#/components/schemas/ImportDeclaration' + type: array + name: + description: The name of the file. + type: string + package: + $ref: '#/components/schemas/PackageClause' + type: + $ref: '#/components/schemas/NodeType' + type: object + Flags: + additionalProperties: true + type: object + FloatLiteral: + description: Represents floating point numbers according to the double representations defined by the IEEE-754-1985 + properties: + type: + $ref: '#/components/schemas/NodeType' + value: + type: number + type: object + FluxResponse: + description: Rendered flux that backs the check or notification. 
+ properties: + flux: + type: string + FluxSuggestion: + properties: + name: + type: string + params: + additionalProperties: + type: string + type: object + type: object + FluxSuggestions: + properties: + funcs: + items: + $ref: '#/components/schemas/FluxSuggestion' + type: array + type: object + FunctionExpression: + description: Function expression + properties: + body: + $ref: '#/components/schemas/Node' + params: + description: Function parameters + items: + $ref: '#/components/schemas/Property' + type: array + type: + $ref: '#/components/schemas/NodeType' + type: object + GaugeViewProperties: + properties: + colors: + description: Colors define color encoding of data into a visualization + items: + $ref: '#/components/schemas/DashboardColor' + type: array + decimalPlaces: + $ref: '#/components/schemas/DecimalPlaces' + note: + type: string + prefix: + type: string + queries: + items: + $ref: '#/components/schemas/DashboardQuery' + type: array + shape: + enum: + - chronograf-v2 + type: string + showNoteWhenEmpty: + description: If true, will display note when empty + type: boolean + suffix: + type: string + tickPrefix: + type: string + tickSuffix: + type: string + type: + enum: + - gauge + type: string + required: + - type + - queries + - colors + - shape + - note + - showNoteWhenEmpty + - prefix + - tickPrefix + - suffix + - tickSuffix + - decimalPlaces + type: object + GeoCircleViewLayer: + allOf: + - $ref: '#/components/schemas/GeoViewLayerProperties' + - properties: + colorDimension: + $ref: '#/components/schemas/Axis' + colorField: + description: Circle color field + type: string + colors: + description: Colors define color encoding of data into a visualization + items: + $ref: '#/components/schemas/DashboardColor' + type: array + interpolateColors: + description: Interpolate circle color based on displayed value + type: boolean + radius: + description: Maximum radius size in pixels + type: integer + radiusDimension: + $ref: '#/components/schemas/Axis' + 
radiusField: + description: Radius field + type: string + required: + - radiusField + - radiusDimension + - colorField + - colorDimension + - colors + type: object + GeoHeatMapViewLayer: + allOf: + - $ref: '#/components/schemas/GeoViewLayerProperties' + - properties: + blur: + description: Blur for heatmap points + type: integer + colors: + description: Colors define color encoding of data into a visualization + items: + $ref: '#/components/schemas/DashboardColor' + type: array + intensityDimension: + $ref: '#/components/schemas/Axis' + intensityField: + description: Intensity field + type: string + radius: + description: Radius size in pixels + type: integer + required: + - intensityField + - intensityDimension + - radius + - blur + - colors + type: object + GeoPointMapViewLayer: + allOf: + - $ref: '#/components/schemas/GeoViewLayerProperties' + - properties: + colorDimension: + $ref: '#/components/schemas/Axis' + colorField: + description: Marker color field + type: string + colors: + description: Colors define color encoding of data into a visualization + items: + $ref: '#/components/schemas/DashboardColor' + type: array + isClustered: + description: Cluster close markers together + type: boolean + tooltipColumns: + description: An array for which columns to display in tooltip + items: + type: string + type: array + required: + - colorField + - colorDimension + - colors + type: object + GeoTrackMapViewLayer: + allOf: + - $ref: '#/components/schemas/GeoViewLayerProperties' + - required: + - trackWidth + - speed + - randomColors + - trackPointVisualization + type: object + properties: + colors: + description: Colors define color encoding of data into a visualization + items: + $ref: '#/components/schemas/DashboardColor' + type: array + randomColors: + description: Assign different colors to different tracks + type: boolean + speed: + description: Speed of the track animation + type: integer + trackWidth: + description: Width of the track + type: integer + 
GeoViewLayer: + oneOf: + - $ref: '#/components/schemas/GeoCircleViewLayer' + - $ref: '#/components/schemas/GeoHeatMapViewLayer' + - $ref: '#/components/schemas/GeoPointMapViewLayer' + - $ref: '#/components/schemas/GeoTrackMapViewLayer' + type: object + GeoViewLayerProperties: + properties: + type: + enum: + - heatmap + - circleMap + - pointMap + - trackMap + type: string + required: + - type + type: object + GeoViewProperties: + properties: + allowPanAndZoom: + default: true + description: If true, map zoom and pan controls are enabled on the dashboard view + type: boolean + center: + description: Coordinates of the center of the map + properties: + lat: + description: Latitude of the center of the map + format: double + type: number + lon: + description: Longitude of the center of the map + format: double + type: number + required: + - lat + - lon + type: object + colors: + description: Colors define color encoding of data into a visualization + items: + $ref: '#/components/schemas/DashboardColor' + type: array + detectCoordinateFields: + default: true + description: If true, search results are automatically regrouped so that lon, lat, and value are treated as columns + type: boolean + latLonColumns: + $ref: '#/components/schemas/LatLonColumns' + layers: + description: List of individual layers shown in the map + items: + $ref: '#/components/schemas/GeoViewLayer' + type: array + mapStyle: + description: Define map type - regular, satellite etc. 
+ type: string + note: + type: string + queries: + items: + $ref: '#/components/schemas/DashboardQuery' + type: array + s2Column: + description: String to define the column + type: string + shape: + enum: + - chronograf-v2 + type: string + showNoteWhenEmpty: + description: If true, will display note when empty + type: boolean + type: + enum: + - geo + type: string + useS2CellID: + description: If true, S2 column is used to calculate lat/lon + type: boolean + zoom: + description: Zoom level used for initial display of the map + format: double + maximum: 28 + minimum: 1 + type: number + required: + - type + - shape + - queries + - note + - showNoteWhenEmpty + - center + - zoom + - allowPanAndZoom + - detectCoordinateFields + - layers + type: object + GreaterThreshold: + allOf: + - $ref: '#/components/schemas/ThresholdBase' + - properties: + type: + enum: + - greater + type: string + value: + format: float + type: number + required: + - type + - value + type: object + HTTPNotificationEndpoint: + allOf: + - $ref: '#/components/schemas/NotificationEndpointBase' + - properties: + authMethod: + enum: + - none + - basic + - bearer + type: string + contentTemplate: + type: string + headers: + additionalProperties: + type: string + description: Customized headers. 
+ type: object + method: + enum: + - POST + - GET + - PUT + type: string + password: + type: string + token: + type: string + url: + type: string + username: + type: string + required: + - url + - authMethod + - method + type: object + type: object + HTTPNotificationRule: + allOf: + - $ref: '#/components/schemas/NotificationRuleBase' + - $ref: '#/components/schemas/HTTPNotificationRuleBase' + HTTPNotificationRuleBase: + properties: + type: + enum: + - http + type: string + url: + type: string + required: + - type + type: object + HealthCheck: + properties: + checks: + items: + $ref: '#/components/schemas/HealthCheck' + type: array + commit: + type: string + message: + type: string + name: + type: string + status: + enum: + - pass + - fail + type: string + version: + type: string + required: + - name + - status + type: object + HeatmapViewProperties: + properties: + adaptiveZoomHide: + type: boolean + binSize: + type: number + colors: + description: Colors define color encoding of data into a visualization + items: + type: string + type: array + generateXAxisTicks: + items: + type: string + type: array + generateYAxisTicks: + items: + type: string + type: array + legendColorizeRows: + type: boolean + legendHide: + type: boolean + legendOpacity: + format: float + type: number + legendOrientationThreshold: + type: integer + note: + type: string + queries: + items: + $ref: '#/components/schemas/DashboardQuery' + type: array + shape: + enum: + - chronograf-v2 + type: string + showNoteWhenEmpty: + description: If true, will display note when empty + type: boolean + timeFormat: + type: string + type: + enum: + - heatmap + type: string + xAxisLabel: + type: string + xColumn: + type: string + xDomain: + items: + type: number + maxItems: 2 + type: array + xPrefix: + type: string + xSuffix: + type: string + xTickStart: + format: float + type: number + xTickStep: + format: float + type: number + xTotalTicks: + type: integer + yAxisLabel: + type: string + yColumn: + type: 
string + yDomain: + items: + type: number + maxItems: 2 + type: array + yPrefix: + type: string + ySuffix: + type: string + yTickStart: + format: float + type: number + yTickStep: + format: float + type: number + yTotalTicks: + type: integer + required: + - type + - queries + - colors + - shape + - note + - showNoteWhenEmpty + - xColumn + - yColumn + - xDomain + - yDomain + - xAxisLabel + - yAxisLabel + - xPrefix + - yPrefix + - xSuffix + - ySuffix + - binSize + type: object + HistogramViewProperties: + properties: + binCount: + type: integer + colors: + description: Colors define color encoding of data into a visualization + items: + $ref: '#/components/schemas/DashboardColor' + type: array + fillColumns: + items: + type: string + type: array + legendColorizeRows: + type: boolean + legendHide: + type: boolean + legendOpacity: + format: float + type: number + legendOrientationThreshold: + type: integer + note: + type: string + position: + enum: + - overlaid + - stacked + type: string + queries: + items: + $ref: '#/components/schemas/DashboardQuery' + type: array + shape: + enum: + - chronograf-v2 + type: string + showNoteWhenEmpty: + description: If true, will display note when empty + type: boolean + type: + enum: + - histogram + type: string + xAxisLabel: + type: string + xColumn: + type: string + xDomain: + items: + format: float + type: number + type: array + required: + - type + - queries + - colors + - shape + - note + - showNoteWhenEmpty + - xColumn + - fillColumns + - xDomain + - xAxisLabel + - position + - binCount + type: object + Identifier: + description: A valid Flux identifier + properties: + name: + type: string + type: + $ref: '#/components/schemas/NodeType' + type: object + ImportDeclaration: + description: Declares a package import + properties: + as: + $ref: '#/components/schemas/Identifier' + path: + $ref: '#/components/schemas/StringLiteral' + type: + $ref: '#/components/schemas/NodeType' + type: object + IndexExpression: + description: 
Represents indexing into an array + properties: + array: + $ref: '#/components/schemas/Expression' + index: + $ref: '#/components/schemas/Expression' + type: + $ref: '#/components/schemas/NodeType' + type: object + InfluxqlCsvResponse: + description: CSV Response to InfluxQL Query + example: | + name,tags,time,test_field,test_tag test_measurement,,1603740794286107366,1,tag_value test_measurement,,1603740870053205649,2,tag_value test_measurement,,1603741221085428881,3,tag_value + type: string + InfluxqlJsonResponse: + description: JSON Response to InfluxQL Query + properties: + results: + items: + properties: + error: + type: string + series: + items: + properties: + columns: + items: + type: string + type: array + name: + type: string + partial: + type: boolean + tags: + additionalProperties: + type: string + type: object + values: + items: + items: {} + type: array + type: array + type: object + type: array + statement_id: + type: integer + type: object + type: array + type: object + IntegerLiteral: + description: Represents integer numbers + properties: + type: + $ref: '#/components/schemas/NodeType' + value: + type: string + type: object + IsOnboarding: + properties: + allowed: + description: | + If `true`, the InfluxDB instance hasn't had initial setup; + `false` otherwise. + type: boolean + type: object + Label: + properties: + id: + readOnly: true + type: string + name: + type: string + orgID: + readOnly: true + type: string + properties: + additionalProperties: + type: string + description: | + Key-value pairs associated with this label. + To remove a property, send an update with an empty value (`""`) for the key. + example: + color: ffb3b3 + description: this is a description + type: object + type: object + LabelCreateRequest: + properties: + name: + type: string + orgID: + type: string + properties: + additionalProperties: + type: string + description: | + Key-value pairs associated with this label. 
+ + To remove a property, send an update with an empty value (`""`) for the key. + example: + color: ffb3b3 + description: this is a description + type: object + required: + - orgID + - name + type: object + LabelMapping: + description: A _label mapping_ contains a `label` ID to attach to a resource. + properties: + labelID: + description: | + A label ID. + Specifies the label to attach. + type: string + required: + - labelID + type: object + LabelResponse: + properties: + label: + $ref: '#/components/schemas/Label' + links: + $ref: '#/components/schemas/Links' + type: object + LabelUpdate: + properties: + name: + type: string + properties: + additionalProperties: + description: | + Key-value pairs associated with this label. + + To remove a property, send an update with an empty value (`""`) for the key. + type: string + example: + color: ffb3b3 + description: this is a description + type: object + type: object + Labels: + items: + $ref: '#/components/schemas/Label' + type: array + LabelsResponse: + properties: + labels: + $ref: '#/components/schemas/Labels' + links: + $ref: '#/components/schemas/Links' + type: object + LanguageRequest: + description: Flux query to be analyzed. + properties: + query: + description: | + The Flux query script to be analyzed. 
+ type: string + required: + - query + type: object + LatLonColumn: + description: Object type for key and column definitions + properties: + column: + description: Column to look up Lat/Lon + type: string + key: + description: Key to determine whether the column is tag/field + type: string + required: + - key + - column + type: object + LatLonColumns: + description: Object type to define lat/lon columns + properties: + lat: + $ref: '#/components/schemas/LatLonColumn' + lon: + $ref: '#/components/schemas/LatLonColumn' + required: + - lat + - lon + type: object + LegacyAuthorizationPostRequest: + allOf: + - $ref: '#/components/schemas/AuthorizationUpdateRequest' + - properties: + orgID: + description: The organization ID. Identifies the organization that the authorization is scoped to. + type: string + permissions: + description: | + The list of permissions that provide `read` and `write` access to organization resources. + An authorization must contain at least one permission. + items: + $ref: '#/components/schemas/Permission' + minItems: 1 + type: array + token: + description: The name that you provide for the authorization. + type: string + userID: + description: The user ID. Identifies the user that the authorization is scoped to. 
+ type: string + type: object + required: + - orgID + - permissions + LesserThreshold: + allOf: + - $ref: '#/components/schemas/ThresholdBase' + - properties: + type: + enum: + - lesser + type: string + value: + format: float + type: number + required: + - type + - value + type: object + LinePlusSingleStatProperties: + properties: + adaptiveZoomHide: + type: boolean + axes: + $ref: '#/components/schemas/Axes' + colors: + description: Colors define color encoding of data into a visualization + items: + $ref: '#/components/schemas/DashboardColor' + type: array + decimalPlaces: + $ref: '#/components/schemas/DecimalPlaces' + generateXAxisTicks: + items: + type: string + type: array + generateYAxisTicks: + items: + type: string + type: array + hoverDimension: + enum: + - auto + - x + - 'y' + - xy + type: string + legendColorizeRows: + type: boolean + legendHide: + type: boolean + legendOpacity: + format: float + type: number + legendOrientationThreshold: + type: integer + note: + type: string + position: + enum: + - overlaid + - stacked + type: string + prefix: + type: string + queries: + items: + $ref: '#/components/schemas/DashboardQuery' + type: array + shadeBelow: + type: boolean + shape: + enum: + - chronograf-v2 + type: string + showNoteWhenEmpty: + description: If true, will display note when empty + type: boolean + staticLegend: + $ref: '#/components/schemas/StaticLegend' + suffix: + type: string + timeFormat: + type: string + type: + enum: + - line-plus-single-stat + type: string + xColumn: + type: string + xTickStart: + format: float + type: number + xTickStep: + format: float + type: number + xTotalTicks: + type: integer + yColumn: + type: string + yTickStart: + format: float + type: number + yTickStep: + format: float + type: number + yTotalTicks: + type: integer + required: + - type + - queries + - shape + - axes + - colors + - note + - showNoteWhenEmpty + - prefix + - suffix + - decimalPlaces + - position + type: object + LineProtocolError: + properties: + 
code: + description: Code is the machine-readable error code. + enum: + - internal error + - not found + - conflict + - invalid + - empty value + - unavailable + readOnly: true + type: string + err: + description: Stack of errors that occurred during processing of the request. Useful for debugging. + readOnly: true + type: string + line: + description: First line in the request body that contains malformed data. + format: int32 + readOnly: true + type: integer + message: + description: Human-readable message. + readOnly: true + type: string + op: + description: Describes the logical code operation when the error occurred. Useful for debugging. + readOnly: true + type: string + required: + - code + LineProtocolLengthError: + properties: + code: + description: Code is the machine-readable error code. + enum: + - invalid + readOnly: true + type: string + message: + description: Human-readable message. + readOnly: true + type: string + required: + - code + - message + Link: + description: URI of resource. + format: uri + readOnly: true + type: string + Links: + description: | + URI pointers for additional paged results. + properties: + next: + $ref: '#/components/schemas/Link' + prev: + $ref: '#/components/schemas/Link' + self: + $ref: '#/components/schemas/Link' + required: + - self + type: object + LogEvent: + properties: + message: + description: A description of the event that occurred. + example: Halt and catch fire + readOnly: true + type: string + runID: + description: The ID of the task run that generated the event. + readOnly: true + type: string + time: + description: The time ([RFC3339Nano date/time format](/influxdb/latest/reference/glossary/#rfc3339nano-timestamp)) that the event occurred. 
+ example: 2006-01-02T15:04:05.999999999Z07:00 + format: date-time + readOnly: true + type: string + type: object + LogicalExpression: + description: Represents the rule conditions that collectively evaluate to either true or false + properties: + left: + $ref: '#/components/schemas/Expression' + operator: + type: string + right: + $ref: '#/components/schemas/Expression' + type: + $ref: '#/components/schemas/NodeType' + type: object + Logs: + properties: + events: + items: + $ref: '#/components/schemas/LogEvent' + readOnly: true + type: array + type: object + MapVariableProperties: + properties: + type: + enum: + - map + type: string + values: + additionalProperties: + type: string + type: object + MarkdownViewProperties: + properties: + note: + type: string + shape: + enum: + - chronograf-v2 + type: string + type: + enum: + - markdown + type: string + required: + - type + - shape + - note + type: object + MemberAssignment: + description: Object property assignment + properties: + init: + $ref: '#/components/schemas/Expression' + member: + $ref: '#/components/schemas/MemberExpression' + type: + $ref: '#/components/schemas/NodeType' + type: object + MemberExpression: + description: Represents accessing a property of an object + properties: + object: + $ref: '#/components/schemas/Expression' + property: + $ref: '#/components/schemas/PropertyKey' + type: + $ref: '#/components/schemas/NodeType' + type: object + MetadataBackup: + properties: + buckets: + $ref: '#/components/schemas/BucketMetadataManifests' + kv: + format: binary + type: string + sql: + format: binary + type: string + required: + - kv + - sql + - buckets + type: object + MosaicViewProperties: + properties: + colors: + description: Colors define color encoding of data into a visualization + items: + type: string + type: array + fillColumns: + items: + type: string + type: array + generateXAxisTicks: + items: + type: string + type: array + hoverDimension: + enum: + - auto + - x + - 'y' + - xy + type: 
string + legendColorizeRows: + type: boolean + legendHide: + type: boolean + legendOpacity: + format: float + type: number + legendOrientationThreshold: + type: integer + note: + type: string + queries: + items: + $ref: '#/components/schemas/DashboardQuery' + type: array + shape: + enum: + - chronograf-v2 + type: string + showNoteWhenEmpty: + description: If true, will display note when empty + type: boolean + timeFormat: + type: string + type: + enum: + - mosaic + type: string + xAxisLabel: + type: string + xColumn: + type: string + xDomain: + items: + type: number + maxItems: 2 + type: array + xPrefix: + type: string + xSuffix: + type: string + xTickStart: + format: float + type: number + xTickStep: + format: float + type: number + xTotalTicks: + type: integer + yAxisLabel: + type: string + yDomain: + items: + type: number + maxItems: 2 + type: array + yLabelColumnSeparator: + type: string + yLabelColumns: + items: + type: string + type: array + yPrefix: + type: string + ySeriesColumns: + items: + type: string + type: array + ySuffix: + type: string + required: + - type + - queries + - colors + - shape + - note + - showNoteWhenEmpty + - xColumn + - ySeriesColumns + - fillColumns + - xDomain + - yDomain + - xAxisLabel + - yAxisLabel + - xPrefix + - yPrefix + - xSuffix + - ySuffix + type: object + Node: + oneOf: + - $ref: '#/components/schemas/Expression' + - $ref: '#/components/schemas/Block' + NodeType: + description: Type of AST node + type: string + NotificationEndpoint: + allOf: + - $ref: '#/components/schemas/NotificationEndpointDiscriminator' + NotificationEndpointBase: + properties: + createdAt: + format: date-time + readOnly: true + type: string + description: + description: An optional description of the notification endpoint. 
+ type: string + id: + type: string + labels: + $ref: '#/components/schemas/Labels' + links: + example: + labels: /api/v2/notificationEndpoints/1/labels + members: /api/v2/notificationEndpoints/1/members + owners: /api/v2/notificationEndpoints/1/owners + self: /api/v2/notificationEndpoints/1 + properties: + labels: + $ref: '#/components/schemas/Link' + description: The URL to retrieve labels for this endpoint. + members: + $ref: '#/components/schemas/Link' + description: The URL to retrieve members for this endpoint. + owners: + $ref: '#/components/schemas/Link' + description: The URL to retrieve owners for this endpoint. + self: + $ref: '#/components/schemas/Link' + description: The URL for this endpoint. + readOnly: true + type: object + name: + type: string + orgID: + type: string + status: + default: active + description: The status of the endpoint. + enum: + - active + - inactive + type: string + type: + $ref: '#/components/schemas/NotificationEndpointType' + updatedAt: + format: date-time + readOnly: true + type: string + userID: + type: string + required: + - type + - name + type: object + NotificationEndpointDiscriminator: + discriminator: + mapping: + http: '#/components/schemas/HTTPNotificationEndpoint' + pagerduty: '#/components/schemas/PagerDutyNotificationEndpoint' + slack: '#/components/schemas/SlackNotificationEndpoint' + telegram: '#/components/schemas/TelegramNotificationEndpoint' + propertyName: type + oneOf: + - $ref: '#/components/schemas/SlackNotificationEndpoint' + - $ref: '#/components/schemas/PagerDutyNotificationEndpoint' + - $ref: '#/components/schemas/HTTPNotificationEndpoint' + - $ref: '#/components/schemas/TelegramNotificationEndpoint' + NotificationEndpointType: + enum: + - slack + - pagerduty + - http + - telegram + type: string + NotificationEndpointUpdate: + properties: + description: + type: string + name: + type: string + status: + enum: + - active + - inactive + type: string + type: object + NotificationEndpoints: + properties: + 
links: + $ref: '#/components/schemas/Links' + notificationEndpoints: + items: + $ref: '#/components/schemas/NotificationEndpoint' + type: array + NotificationRule: + allOf: + - $ref: '#/components/schemas/NotificationRuleDiscriminator' + NotificationRuleBase: + properties: + createdAt: + format: date-time + readOnly: true + type: string + description: + description: An optional description of the notification rule. + type: string + endpointID: + type: string + every: + description: The notification repetition interval. + type: string + id: + readOnly: true + type: string + labels: + $ref: '#/components/schemas/Labels' + lastRunError: + readOnly: true + type: string + lastRunStatus: + enum: + - failed + - success + - canceled + readOnly: true + type: string + latestCompleted: + description: A timestamp ([RFC3339 date/time format](/influxdb/latest/reference/glossary/#rfc3339-timestamp)) of the latest scheduled and completed run. + format: date-time + readOnly: true + type: string + limit: + description: Don't notify me more than `limit` times every `limitEvery` seconds. If set, `limitEvery` cannot be empty. + type: integer + limitEvery: + description: Don't notify me more than `limit` times every `limitEvery` seconds. If set, `limit` cannot be empty. + type: integer + links: + example: + labels: /api/v2/notificationRules/1/labels + members: /api/v2/notificationRules/1/members + owners: /api/v2/notificationRules/1/owners + query: /api/v2/notificationRules/1/query + self: /api/v2/notificationRules/1 + properties: + labels: + $ref: '#/components/schemas/Link' + description: The URL to retrieve labels for this notification rule. + members: + $ref: '#/components/schemas/Link' + description: The URL to retrieve members for this notification rule. + owners: + $ref: '#/components/schemas/Link' + description: The URL to retrieve owners for this notification rule. + query: + $ref: '#/components/schemas/Link' + description: The URL to retrieve the Flux script for this notification rule. 
+ self: + $ref: '#/components/schemas/Link' + description: The URL for this endpoint. + readOnly: true + type: object + name: + description: Human-readable name describing the notification rule. + type: string + offset: + description: Duration to delay after the schedule, before executing check. + type: string + orgID: + description: The ID of the organization that owns this notification rule. + type: string + ownerID: + description: The ID of creator used to create this notification rule. + readOnly: true + type: string + runbookLink: + type: string + sleepUntil: + type: string + status: + $ref: '#/components/schemas/TaskStatusType' + statusRules: + description: List of status rules the notification rule attempts to match. + items: + $ref: '#/components/schemas/StatusRule' + minItems: 1 + type: array + tagRules: + description: List of tag rules the notification rule attempts to match. + items: + $ref: '#/components/schemas/TagRule' + type: array + taskID: + description: The ID of the task associated with this notification rule. 
+ type: string + updatedAt: + format: date-time + readOnly: true + type: string + required: + - orgID + - status + - name + - statusRules + - endpointID + type: object + NotificationRuleDiscriminator: + discriminator: + mapping: + http: '#/components/schemas/HTTPNotificationRule' + pagerduty: '#/components/schemas/PagerDutyNotificationRule' + slack: '#/components/schemas/SlackNotificationRule' + smtp: '#/components/schemas/SMTPNotificationRule' + telegram: '#/components/schemas/TelegramNotificationRule' + propertyName: type + oneOf: + - $ref: '#/components/schemas/SlackNotificationRule' + - $ref: '#/components/schemas/SMTPNotificationRule' + - $ref: '#/components/schemas/PagerDutyNotificationRule' + - $ref: '#/components/schemas/HTTPNotificationRule' + - $ref: '#/components/schemas/TelegramNotificationRule' + NotificationRuleUpdate: + properties: + description: + type: string + name: + type: string + status: + enum: + - active + - inactive + type: string + type: object + NotificationRules: + properties: + links: + $ref: '#/components/schemas/Links' + notificationRules: + items: + $ref: '#/components/schemas/NotificationRule' + type: array + ObjectExpression: + description: Allows the declaration of an anonymous object within a declaration + properties: + properties: + description: Object properties + items: + $ref: '#/components/schemas/Property' + type: array + type: + $ref: '#/components/schemas/NodeType' + type: object + OnboardingRequest: + properties: + bucket: + type: string + org: + type: string + password: + type: string + retentionPeriodHrs: + deprecated: true + description: | + Retention period *in nanoseconds* for the new bucket. This key's name has been misleading since OSS 2.0 GA, please transition to use `retentionPeriodSeconds` + type: integer + retentionPeriodSeconds: + format: int64 + type: integer + token: + description: | + Authentication token to set on the initial user. If not specified, the server will generate a token. 
+ type: string + username: + type: string + required: + - username + - org + - bucket + type: object + OnboardingResponse: + properties: + auth: + $ref: '#/components/schemas/Authorization' + bucket: + $ref: '#/components/schemas/Bucket' + org: + $ref: '#/components/schemas/Organization' + user: + $ref: '#/components/schemas/UserResponse' + type: object + OptionStatement: + description: A single variable declaration + properties: + assignment: + oneOf: + - $ref: '#/components/schemas/VariableAssignment' + - $ref: '#/components/schemas/MemberAssignment' + type: + $ref: '#/components/schemas/NodeType' + type: object + Organization: + properties: + createdAt: + format: date-time + readOnly: true + type: string + description: + type: string + id: + readOnly: true + type: string + links: + example: + buckets: /api/v2/buckets?org=myorg + dashboards: /api/v2/dashboards?org=myorg + labels: /api/v2/orgs/1/labels + members: /api/v2/orgs/1/members + owners: /api/v2/orgs/1/owners + secrets: /api/v2/orgs/1/secrets + self: /api/v2/orgs/1 + tasks: /api/v2/tasks?org=myorg + properties: + buckets: + $ref: '#/components/schemas/Link' + dashboards: + $ref: '#/components/schemas/Link' + labels: + $ref: '#/components/schemas/Link' + members: + $ref: '#/components/schemas/Link' + owners: + $ref: '#/components/schemas/Link' + secrets: + $ref: '#/components/schemas/Link' + self: + $ref: '#/components/schemas/Link' + tasks: + $ref: '#/components/schemas/Link' + readOnly: true + type: object + name: + type: string + status: + default: active + description: If inactive the organization is inactive. + enum: + - active + - inactive + type: string + updatedAt: + format: date-time + readOnly: true + type: string + required: + - name + Organizations: + properties: + links: + $ref: '#/components/schemas/Links' + orgs: + items: + $ref: '#/components/schemas/Organization' + type: array + type: object + Package: + description: Represents a complete package source tree. 
+ properties: + files: + description: Package files + items: + $ref: '#/components/schemas/File' + type: array + package: + description: Package name + type: string + path: + description: Package import path + type: string + type: + $ref: '#/components/schemas/NodeType' + type: object + PackageClause: + description: Defines a package identifier + properties: + name: + $ref: '#/components/schemas/Identifier' + type: + $ref: '#/components/schemas/NodeType' + type: object + PagerDutyNotificationEndpoint: + allOf: + - $ref: '#/components/schemas/NotificationEndpointBase' + - properties: + clientURL: + type: string + routingKey: + type: string + required: + - routingKey + type: object + type: object + PagerDutyNotificationRule: + allOf: + - $ref: '#/components/schemas/NotificationRuleBase' + - $ref: '#/components/schemas/PagerDutyNotificationRuleBase' + PagerDutyNotificationRuleBase: + properties: + messageTemplate: + type: string + type: + enum: + - pagerduty + type: string + required: + - type + - messageTemplate + type: object + ParenExpression: + description: Represents an expression wrapped in parenthesis + properties: + expression: + $ref: '#/components/schemas/Expression' + type: + $ref: '#/components/schemas/NodeType' + type: object + PasswordResetBody: + properties: + password: + type: string + required: + - password + PatchBucketRequest: + description: | + An object that contains updated bucket properties to apply. + properties: + description: + description: | + A description of the bucket. + type: string + name: + description: | + The name of the bucket. + type: string + retentionRules: + $ref: '#/components/schemas/PatchRetentionRules' + type: object + PatchOrganizationRequest: + description: | + An object that contains updated organization properties to apply. + properties: + description: + description: | + The description of the organization. + type: string + name: + description: | + The name of the organization. 
+ type: string + type: object + PatchRetentionRule: + properties: + everySeconds: + default: 2592000 + description: | + The number of seconds to keep data. + Default duration is `2592000` (30 days). + `0` represents infinite retention. + example: 86400 + format: int64 + minimum: 0 + type: integer + shardGroupDurationSeconds: + description: | + The [shard group duration](/influxdb/latest/reference/glossary/#shard). + The number of seconds that each shard group covers. + + #### InfluxDB Cloud + + - Doesn't use `shardGroupDurationSeconds`. + + #### InfluxDB OSS + + - Default value depends on the [bucket retention period](/influxdb/latest/reference/internals/shards/#shard-group-duration). + + #### Related guides + + - InfluxDB [shards and shard groups](/influxdb/latest/reference/internals/shards/) + format: int64 + type: integer + type: + default: expire + enum: + - expire + type: string + required: + - everySeconds + type: object + PatchRetentionRules: + description: Updates to rules to expire or retain data. No rules means no updates. + items: + $ref: '#/components/schemas/PatchRetentionRule' + type: array + Permission: + properties: + action: + enum: + - read + - write + type: string + resource: + $ref: '#/components/schemas/Resource' + properties: + id: + description: | + A resource ID. + Identifies a specific resource. + type: string + name: + description: | + The name of the resource. + _Note: not all resource types have a `name` property_. + type: string + org: + description: | + An organization name. + The organization that owns the resource. + type: string + orgID: + description: | + An organization ID. + Identifies the organization that owns the resource. + type: string + type: + description: | + A resource type. + Identifies the API resource's type (or _kind_). 
+ enum: + - authorizations + - buckets + - dashboards + - orgs + - tasks + - telegrafs + - users + - variables + - secrets + - labels + - views + - documents + - notificationRules + - notificationEndpoints + - checks + - dbrp + - annotations + - sources + - scrapers + - notebooks + - remotes + - replications + - instance + - flows + - functions + - subscriptions + type: string + required: + - type + type: object + required: + - action + - resource + PipeExpression: + description: Call expression with pipe argument + properties: + argument: + $ref: '#/components/schemas/Expression' + call: + $ref: '#/components/schemas/CallExpression' + type: + $ref: '#/components/schemas/NodeType' + type: object + PipeLiteral: + description: Represents a specialized literal value, indicating the left hand value of a pipe expression + properties: + type: + $ref: '#/components/schemas/NodeType' + type: object + PostBucketRequest: + properties: + description: + description: | + A description of the bucket. + type: string + name: + description: | + The bucket name. + type: string + orgID: + description: | + The organization ID. + Specifies the organization that owns the bucket. + type: string + retentionRules: + $ref: '#/components/schemas/RetentionRules' + rp: + default: '0' + description: | + The retention policy for the bucket. + For InfluxDB 1.x, specifies the duration of time that each data point + in the retention policy persists. + + If you need compatibility with InfluxDB 1.x, specify a value for the `rp` property; + otherwise, see the `retentionRules` property. + + [Retention policy](/influxdb/v1.8/concepts/glossary/#retention-policy-rp) + is an InfluxDB 1.x concept. + The InfluxDB 2.x and Cloud equivalent is + [retention period](/influxdb/latest/reference/glossary/#retention-period). + The InfluxDB `/api/v2` API uses `RetentionRules` to configure the retention period. 
+ type: string + schemaType: + $ref: '#/components/schemas/SchemaType' + default: implicit + description: | + The schema Type. Default is `implicit`. + + #### InfluxDB Cloud + + - Use `explicit` to enforce column names, tags, fields, and data types for + your data. + + #### InfluxDB OSS + + - Doesn't support `explicit` bucket schemas. + required: + - orgID + - name + PostCheck: + allOf: + - $ref: '#/components/schemas/CheckDiscriminator' + PostNotificationEndpoint: + allOf: + - $ref: '#/components/schemas/NotificationEndpointDiscriminator' + PostNotificationRule: + allOf: + - $ref: '#/components/schemas/NotificationRuleDiscriminator' + PostOrganizationRequest: + properties: + description: + description: | + The description of the organization. + type: string + name: + description: | + The name of the organization. + type: string + required: + - name + type: object + Property: + description: The value associated with a key + properties: + key: + $ref: '#/components/schemas/PropertyKey' + type: + $ref: '#/components/schemas/NodeType' + value: + $ref: '#/components/schemas/Expression' + type: object + PropertyKey: + oneOf: + - $ref: '#/components/schemas/Identifier' + - $ref: '#/components/schemas/StringLiteral' + Query: + description: Query InfluxDB with the Flux language + properties: + dialect: + $ref: '#/components/schemas/Dialect' + extern: + $ref: '#/components/schemas/File' + now: + description: | + Specifies the time that should be reported as `now` in the query. + Default is the server `now` time. + format: date-time + type: string + params: + additionalProperties: true + description: | + Key-value pairs passed as parameters during query execution. 
+ + To use parameters in your query, pass a _`query`_ with `params` references (in dot notation)--for example: + + ```json + query: "from(bucket: params.mybucket)\ + |> range(start: params.rangeStart) |> limit(n:1)" + ``` + + and pass _`params`_ with the key-value pairs--for example: + + ```json + params: { + "mybucket": "environment", + "rangeStart": "-30d" + } + ``` + + During query execution, InfluxDB passes _`params`_ to your script and substitutes the values. + + #### Limitations + + - If you use _`params`_, you can't use _`extern`_. + type: object + query: + description: The query script to execute. + type: string + type: + description: The type of query. Must be "flux". + enum: + - flux + type: string + required: + - query + type: object + QueryEditMode: + enum: + - builder + - advanced + type: string + QueryVariableProperties: + properties: + type: + enum: + - query + type: string + values: + properties: + language: + type: string + query: + type: string + type: object + RangeThreshold: + allOf: + - $ref: '#/components/schemas/ThresholdBase' + - properties: + max: + format: float + type: number + min: + format: float + type: number + type: + enum: + - range + type: string + within: + type: boolean + required: + - type + - min + - max + - within + type: object + Ready: + properties: + started: + example: '2019-03-13T10:09:33.891196-04:00' + format: date-time + type: string + status: + enum: + - ready + type: string + up: + example: 14m45.911966424s + type: string + type: object + RegexpLiteral: + description: Expressions begin and end with `/` and are regular expressions with syntax accepted by RE2 + properties: + type: + $ref: '#/components/schemas/NodeType' + value: + type: string + type: object + RemoteConnection: + properties: + allowInsecureTLS: + default: false + type: boolean + description: + type: string + id: + type: string + name: + type: string + orgID: + type: string + remoteOrgID: + type: string + remoteURL: + format: uri + type: string + 
required: + - id + - name + - orgID + - remoteURL + - allowInsecureTLS + type: object + RemoteConnectionCreationRequest: + properties: + allowInsecureTLS: + default: false + type: boolean + description: + type: string + name: + type: string + orgID: + type: string + remoteAPIToken: + type: string + remoteOrgID: + type: string + remoteURL: + format: uri + type: string + required: + - name + - orgID + - remoteURL + - remoteAPIToken + - allowInsecureTLS + type: object + RemoteConnectionUpdateRequest: + properties: + allowInsecureTLS: + default: false + type: boolean + description: + type: string + name: + type: string + remoteAPIToken: + type: string + remoteOrgID: + type: string + remoteURL: + format: uri + type: string + type: object + RemoteConnections: + properties: + remotes: + items: + $ref: '#/components/schemas/RemoteConnection' + type: array + type: object + RenamableField: + description: Describes a field that can be renamed and made visible or invisible. + properties: + displayName: + description: The name that a field is renamed to by the user. + type: string + internalName: + description: The calculated name of a field. + readOnly: true + type: string + visible: + description: Indicates whether this field should be visible on the table. 
+ type: boolean + type: object + Replication: + properties: + currentQueueSizeBytes: + format: int64 + type: integer + description: + type: string + dropNonRetryableData: + type: boolean + id: + type: string + latestErrorMessage: + type: string + latestResponseCode: + type: integer + localBucketID: + type: string + maxQueueSizeBytes: + format: int64 + type: integer + name: + type: string + orgID: + type: string + remoteBucketID: + type: string + remoteBucketName: + type: string + remoteID: + type: string + required: + - id + - name + - remoteID + - orgID + - localBucketID + - maxQueueSizeBytes + - currentQueueSizeBytes + type: object + ReplicationCreationRequest: + properties: + description: + type: string + dropNonRetryableData: + default: false + type: boolean + localBucketID: + type: string + maxAgeSeconds: + default: 604800 + format: int64 + minimum: 0 + type: integer + maxQueueSizeBytes: + default: 67108860 + format: int64 + minimum: 33554430 + type: integer + name: + type: string + orgID: + type: string + remoteBucketID: + type: string + remoteBucketName: + type: string + remoteID: + type: string + required: + - name + - orgID + - remoteID + - localBucketID + - maxQueueSizeBytes + - maxAgeSeconds + type: object + ReplicationUpdateRequest: + properties: + description: + type: string + dropNonRetryableData: + type: boolean + maxAgeSeconds: + format: int64 + minimum: 0 + type: integer + maxQueueSizeBytes: + format: int64 + minimum: 33554430 + type: integer + name: + type: string + remoteBucketID: + type: string + remoteBucketName: + type: string + remoteID: + type: string + type: object + Replications: + properties: + replications: + items: + $ref: '#/components/schemas/Replication' + type: array + type: object + Resource: + properties: + id: + description: | + A resource ID. + Identifies a specific resource. + type: string + name: + description: | + The name of the resource. + _Note: not all resource types have a `name` property_. 
+ type: string + org: + description: | + An organization name. + The organization that owns the resource. + type: string + orgID: + description: | + An organization ID. + Identifies the organization that owns the resource. + type: string + type: + description: | + A resource type. + Identifies the API resource's type (or _kind_). + enum: + - authorizations + - buckets + - dashboards + - orgs + - tasks + - telegrafs + - users + - variables + - secrets + - labels + - views + - documents + - notificationRules + - notificationEndpoints + - checks + - dbrp + - annotations + - sources + - scrapers + - notebooks + - remotes + - replications + - instance + - flows + - functions + - subscriptions + type: string + required: + - type + type: object + ResourceMember: + allOf: + - $ref: '#/components/schemas/UserResponse' + - properties: + role: + default: member + enum: + - member + type: string + type: object + ResourceMembers: + properties: + links: + properties: + self: + format: uri + type: string + type: object + users: + items: + $ref: '#/components/schemas/ResourceMember' + type: array + type: object + ResourceOwner: + allOf: + - $ref: '#/components/schemas/UserResponse' + - properties: + role: + default: owner + enum: + - owner + type: string + type: object + ResourceOwners: + properties: + links: + properties: + self: + format: uri + type: string + type: object + users: + items: + $ref: '#/components/schemas/ResourceOwner' + type: array + type: object + RestoredBucketMappings: + properties: + id: + description: New ID of the restored bucket + type: string + name: + type: string + shardMappings: + $ref: '#/components/schemas/BucketShardMappings' + required: + - id + - name + - shardMappings + type: object + RetentionPolicyManifest: + properties: + duration: + format: int64 + type: integer + name: + type: string + replicaN: + type: integer + shardGroupDuration: + format: int64 + type: integer + shardGroups: + $ref: '#/components/schemas/ShardGroupManifests' + 
subscriptions: + $ref: '#/components/schemas/SubscriptionManifests' + required: + - name + - replicaN + - duration + - shardGroupDuration + - shardGroups + - subscriptions + type: object + RetentionPolicyManifests: + items: + $ref: '#/components/schemas/RetentionPolicyManifest' + type: array + RetentionRule: + properties: + everySeconds: + default: 2592000 + description: | + The duration in seconds for how long data will be kept in the database. + The default duration is 2592000 (30 days). + 0 represents infinite retention. + example: 86400 + format: int64 + minimum: 0 + type: integer + shardGroupDurationSeconds: + description: | + The shard group duration. + The duration or interval (in seconds) that each shard group covers. + + #### InfluxDB Cloud + + - Does not use `shardGroupDurationSeconds`. + + #### InfluxDB OSS + + - Default value depends on the + [bucket retention period](/influxdb/latest/reference/internals/shards/#shard-group-duration). + format: int64 + type: integer + type: + default: expire + enum: + - expire + type: string + required: + - everySeconds + type: object + RetentionRules: + description: | + Retention rules to expire or retain data. + The InfluxDB `/api/v2` API uses `RetentionRules` to configure the [retention period](/influxdb/latest/reference/glossary/#retention-period). + + #### InfluxDB Cloud + + - `retentionRules` is required. + + #### InfluxDB OSS + + - `retentionRules` isn't required. 
+ items: + $ref: '#/components/schemas/RetentionRule' + type: array + ReturnStatement: + description: Defines an expression to return + properties: + argument: + $ref: '#/components/schemas/Expression' + type: + $ref: '#/components/schemas/NodeType' + type: object + Routes: + properties: + authorizations: + format: uri + type: string + buckets: + format: uri + type: string + dashboards: + format: uri + type: string + external: + properties: + statusFeed: + format: uri + type: string + type: object + flags: + format: uri + type: string + me: + format: uri + type: string + orgs: + format: uri + type: string + query: + properties: + analyze: + format: uri + type: string + ast: + format: uri + type: string + self: + format: uri + type: string + suggestions: + format: uri + type: string + type: object + setup: + format: uri + type: string + signin: + format: uri + type: string + signout: + format: uri + type: string + sources: + format: uri + type: string + system: + properties: + debug: + format: uri + type: string + health: + format: uri + type: string + metrics: + format: uri + type: string + type: object + tasks: + format: uri + type: string + telegrafs: + format: uri + type: string + users: + format: uri + type: string + variables: + format: uri + type: string + write: + format: uri + type: string + RuleStatusLevel: + description: The state to record if check matches a criteria. + enum: + - UNKNOWN + - OK + - INFO + - CRIT + - WARN + - ANY + type: string + Run: + properties: + finishedAt: + description: The time ([RFC3339Nano date/time format](https://go.dev/src/time/format.go)) the run finished executing. 
+ example: 2006-01-02T15:04:05.999999999Z07:00 + format: date-time + readOnly: true + type: string + flux: + description: Flux used for the task + readOnly: true + type: string + id: + readOnly: true + type: string + links: + example: + retry: /api/v2/tasks/1/runs/1/retry + self: /api/v2/tasks/1/runs/1 + task: /api/v2/tasks/1 + properties: + retry: + format: uri + type: string + self: + format: uri + type: string + task: + format: uri + type: string + readOnly: true + type: object + log: + description: An array of logs associated with the run. + items: + $ref: '#/components/schemas/LogEvent' + readOnly: true + type: array + requestedAt: + description: The time ([RFC3339Nano date/time format](/influxdb/latest/reference/glossary/#rfc3339nano-timestamp)) the run was manually requested. + example: 2006-01-02T15:04:05.999999999Z07:00 + format: date-time + readOnly: true + type: string + scheduledFor: + description: The time [RFC3339 date/time format](/influxdb/latest/reference/glossary/#rfc3339-timestamp) used for the run's `now` option. + format: date-time + type: string + startedAt: + description: The time ([RFC3339Nano date/time format](https://go.dev/src/time/format.go)) the run started executing. + example: 2006-01-02T15:04:05.999999999Z07:00 + format: date-time + readOnly: true + type: string + status: + enum: + - scheduled + - started + - failed + - success + - canceled + readOnly: true + type: string + taskID: + readOnly: true + type: string + RunManually: + properties: + scheduledFor: + description: | + The time [RFC3339 date/time format](/influxdb/latest/reference/glossary/#rfc3339-timestamp) + used for the run's `now` option. + Default is the server _now_ time. 
+ format: date-time + nullable: true + type: string + Runs: + properties: + links: + $ref: '#/components/schemas/Links' + runs: + items: + $ref: '#/components/schemas/Run' + type: array + type: object + SMTPNotificationRule: + allOf: + - $ref: '#/components/schemas/NotificationRuleBase' + - $ref: '#/components/schemas/SMTPNotificationRuleBase' + SMTPNotificationRuleBase: + properties: + bodyTemplate: + type: string + subjectTemplate: + type: string + to: + type: string + type: + enum: + - smtp + type: string + required: + - type + - subjectTemplate + - to + type: object + ScatterViewProperties: + properties: + adaptiveZoomHide: + type: boolean + colors: + description: Colors define color encoding of data into a visualization + items: + type: string + type: array + fillColumns: + items: + type: string + type: array + generateXAxisTicks: + items: + type: string + type: array + generateYAxisTicks: + items: + type: string + type: array + legendColorizeRows: + type: boolean + legendHide: + type: boolean + legendOpacity: + format: float + type: number + legendOrientationThreshold: + type: integer + note: + type: string + queries: + items: + $ref: '#/components/schemas/DashboardQuery' + type: array + shape: + enum: + - chronograf-v2 + type: string + showNoteWhenEmpty: + description: If true, will display note when empty + type: boolean + symbolColumns: + items: + type: string + type: array + timeFormat: + type: string + type: + enum: + - scatter + type: string + xAxisLabel: + type: string + xColumn: + type: string + xDomain: + items: + type: number + maxItems: 2 + type: array + xPrefix: + type: string + xSuffix: + type: string + xTickStart: + format: float + type: number + xTickStep: + format: float + type: number + xTotalTicks: + type: integer + yAxisLabel: + type: string + yColumn: + type: string + yDomain: + items: + type: number + maxItems: 2 + type: array + yPrefix: + type: string + ySuffix: + type: string + yTickStart: + format: float + type: number + yTickStep: + 
format: float + type: number + yTotalTicks: + type: integer + required: + - type + - queries + - colors + - shape + - note + - showNoteWhenEmpty + - xColumn + - yColumn + - fillColumns + - symbolColumns + - xDomain + - yDomain + - xAxisLabel + - yAxisLabel + - xPrefix + - yPrefix + - xSuffix + - ySuffix + type: object + SchemaType: + enum: + - implicit + - explicit + type: string + ScraperTargetRequest: + properties: + allowInsecure: + default: false + description: Skip TLS verification on endpoint. + type: boolean + bucketID: + description: The ID of the bucket to write to. + type: string + name: + description: The name of the scraper target. + type: string + orgID: + description: The organization ID. + type: string + type: + description: The type of the metrics to be parsed. + enum: + - prometheus + type: string + url: + description: The URL of the metrics endpoint. + example: http://localhost:9090/metrics + type: string + type: object + ScraperTargetResponse: + allOf: + - $ref: '#/components/schemas/ScraperTargetRequest' + - properties: + bucket: + description: The bucket name. + type: string + id: + readOnly: true + type: string + links: + example: + bucket: /api/v2/buckets/1 + members: /api/v2/scrapers/1/members + organization: /api/v2/orgs/1 + owners: /api/v2/scrapers/1/owners + self: /api/v2/scrapers/1 + properties: + bucket: + $ref: '#/components/schemas/Link' + members: + $ref: '#/components/schemas/Link' + organization: + $ref: '#/components/schemas/Link' + owners: + $ref: '#/components/schemas/Link' + self: + $ref: '#/components/schemas/Link' + readOnly: true + type: object + org: + description: The name of the organization. 
+ type: string + type: object + type: object + ScraperTargetResponses: + properties: + configurations: + items: + $ref: '#/components/schemas/ScraperTargetResponse' + type: array + type: object + SecretKeys: + properties: + secrets: + items: + type: string + type: array + type: object + SecretKeysResponse: + allOf: + - $ref: '#/components/schemas/SecretKeys' + - properties: + links: + properties: + org: + type: string + self: + type: string + readOnly: true + type: object + type: object + Secrets: + additionalProperties: + type: string + example: + apikey: abc123xyz + ShardGroupManifest: + properties: + deletedAt: + format: date-time + type: string + endTime: + format: date-time + type: string + id: + format: int64 + type: integer + shards: + $ref: '#/components/schemas/ShardManifests' + startTime: + format: date-time + type: string + truncatedAt: + format: date-time + type: string + required: + - id + - startTime + - endTime + - shards + type: object + ShardGroupManifests: + items: + $ref: '#/components/schemas/ShardGroupManifest' + type: array + ShardManifest: + properties: + id: + format: int64 + type: integer + shardOwners: + $ref: '#/components/schemas/ShardOwners' + required: + - id + - shardOwners + type: object + ShardManifests: + items: + $ref: '#/components/schemas/ShardManifest' + type: array + ShardOwner: + properties: + nodeID: + description: The ID of the node that owns the shard. 
+ format: int64 + type: integer + required: + - nodeID + type: object + ShardOwners: + items: + $ref: '#/components/schemas/ShardOwner' + type: array + SimpleTableViewProperties: + properties: + note: + type: string + queries: + items: + $ref: '#/components/schemas/DashboardQuery' + type: array + shape: + enum: + - chronograf-v2 + type: string + showAll: + type: boolean + showNoteWhenEmpty: + description: If true, will display note when empty + type: boolean + type: + enum: + - simple-table + type: string + required: + - type + - showAll + - queries + - shape + - note + - showNoteWhenEmpty + type: object + SingleStatViewProperties: + properties: + colors: + description: Colors define color encoding of data into a visualization + items: + $ref: '#/components/schemas/DashboardColor' + type: array + decimalPlaces: + $ref: '#/components/schemas/DecimalPlaces' + note: + type: string + prefix: + type: string + queries: + items: + $ref: '#/components/schemas/DashboardQuery' + type: array + shape: + enum: + - chronograf-v2 + type: string + showNoteWhenEmpty: + description: If true, will display note when empty + type: boolean + staticLegend: + $ref: '#/components/schemas/StaticLegend' + suffix: + type: string + tickPrefix: + type: string + tickSuffix: + type: string + type: + enum: + - single-stat + type: string + required: + - type + - queries + - colors + - shape + - note + - showNoteWhenEmpty + - prefix + - tickPrefix + - suffix + - tickSuffix + - decimalPlaces + type: object + SlackNotificationEndpoint: + allOf: + - $ref: '#/components/schemas/NotificationEndpointBase' + - properties: + token: + description: Specifies the API token string. Specify either `URL` or `Token`. + type: string + url: + description: Specifies the URL of the Slack endpoint. Specify either `URL` or `Token`. 
+ type: string + type: object + type: object + SlackNotificationRule: + allOf: + - $ref: '#/components/schemas/NotificationRuleBase' + - $ref: '#/components/schemas/SlackNotificationRuleBase' + SlackNotificationRuleBase: + properties: + channel: + type: string + messageTemplate: + type: string + type: + enum: + - slack + type: string + required: + - type + - messageTemplate + type: object + Source: + properties: + default: + type: boolean + defaultRP: + type: string + id: + type: string + insecureSkipVerify: + type: boolean + languages: + items: + enum: + - flux + - influxql + type: string + readOnly: true + type: array + links: + properties: + buckets: + type: string + health: + type: string + query: + type: string + self: + type: string + type: object + metaUrl: + format: uri + type: string + name: + type: string + orgID: + type: string + password: + type: string + sharedSecret: + type: string + telegraf: + type: string + token: + type: string + type: + enum: + - v1 + - v2 + - self + type: string + url: + format: uri + type: string + username: + type: string + type: object + Sources: + properties: + links: + properties: + self: + format: uri + type: string + type: object + sources: + items: + $ref: '#/components/schemas/Source' + type: array + type: object + Stack: + properties: + createdAt: + format: date-time + readOnly: true + type: string + events: + items: + properties: + description: + type: string + eventType: + type: string + name: + type: string + resources: + items: + properties: + apiVersion: + type: string + associations: + items: + properties: + kind: + $ref: '#/components/schemas/TemplateKind' + metaName: + type: string + type: object + type: array + kind: + $ref: '#/components/schemas/TemplateKind' + links: + properties: + self: + type: string + type: object + resourceID: + type: string + templateMetaName: + type: string + type: object + type: array + sources: + items: + type: string + type: array + updatedAt: + format: date-time + readOnly: true + 
type: string + urls: + items: + type: string + type: array + type: object + type: array + id: + type: string + orgID: + type: string + type: object + Statement: + oneOf: + - $ref: '#/components/schemas/BadStatement' + - $ref: '#/components/schemas/VariableAssignment' + - $ref: '#/components/schemas/MemberAssignment' + - $ref: '#/components/schemas/ExpressionStatement' + - $ref: '#/components/schemas/ReturnStatement' + - $ref: '#/components/schemas/OptionStatement' + - $ref: '#/components/schemas/BuiltinStatement' + - $ref: '#/components/schemas/TestStatement' + StaticLegend: + description: StaticLegend represents the options specific to the static legend + properties: + colorizeRows: + type: boolean + heightRatio: + format: float + type: number + opacity: + format: float + type: number + orientationThreshold: + type: integer + show: + type: boolean + valueAxis: + type: string + widthRatio: + format: float + type: number + type: object + StatusRule: + properties: + count: + type: integer + currentLevel: + $ref: '#/components/schemas/RuleStatusLevel' + period: + type: string + previousLevel: + $ref: '#/components/schemas/RuleStatusLevel' + type: object + StringLiteral: + description: Expressions begin and end with double quote marks + properties: + type: + $ref: '#/components/schemas/NodeType' + value: + type: string + type: object + SubscriptionManifest: + properties: + destinations: + items: + type: string + type: array + mode: + type: string + name: + type: string + required: + - name + - mode + - destinations + type: object + SubscriptionManifests: + items: + $ref: '#/components/schemas/SubscriptionManifest' + type: array + TableViewProperties: + properties: + colors: + description: Colors define color encoding of data into a visualization + items: + $ref: '#/components/schemas/DashboardColor' + type: array + decimalPlaces: + $ref: '#/components/schemas/DecimalPlaces' + fieldOptions: + description: fieldOptions represent the fields retrieved by the query with 
customization options + items: + $ref: '#/components/schemas/RenamableField' + type: array + note: + type: string + queries: + items: + $ref: '#/components/schemas/DashboardQuery' + type: array + shape: + enum: + - chronograf-v2 + type: string + showNoteWhenEmpty: + description: If true, will display note when empty + type: boolean + tableOptions: + properties: + fixFirstColumn: + description: fixFirstColumn indicates whether the first column of the table should be locked + type: boolean + sortBy: + $ref: '#/components/schemas/RenamableField' + verticalTimeAxis: + description: verticalTimeAxis describes the orientation of the table by indicating whether the time axis will be displayed vertically + type: boolean + wrapping: + description: Wrapping describes the text wrapping style to be used in table views + enum: + - truncate + - wrap + - single-line + type: string + type: object + timeFormat: + description: timeFormat describes the display format for time values according to moment.js date formatting + type: string + type: + enum: + - table + type: string + required: + - type + - queries + - colors + - shape + - note + - showNoteWhenEmpty + - tableOptions + - fieldOptions + - timeFormat + - decimalPlaces + type: object + TagRule: + properties: + key: + type: string + operator: + enum: + - equal + - notequal + - equalregex + - notequalregex + type: string + value: + type: string + type: object + Task: + properties: + authorizationID: + description: | + An authorization ID. + Identifies the authorization used when the task communicates with the query engine. + + To find an authorization ID, use the + [`GET /api/v2/authorizations` endpoint](#operation/GetAuthorizations) to + list authorizations. 
+ type: string + createdAt: + format: date-time + readOnly: true + type: string + cron: + $ref: '#/components/schemas/TaskCron' + description: + $ref: '#/components/schemas/TaskDescription' + every: + $ref: '#/components/schemas/TaskEvery' + flux: + $ref: '#/components/schemas/TaskFlux' + id: + description: | + The resource ID that InfluxDB uses to uniquely identify the task. + readOnly: true + type: string + labels: + $ref: '#/components/schemas/Labels' + lastRunError: + readOnly: true + type: string + lastRunStatus: + enum: + - failed + - success + - canceled + readOnly: true + type: string + latestCompleted: + description: A timestamp ([RFC3339 date/time format](/influxdb/latest/reference/glossary/#rfc3339-timestamp)) of the latest scheduled and completed run. + format: date-time + readOnly: true + type: string + links: + example: + labels: /api/v2/tasks/1/labels + logs: /api/v2/tasks/1/logs + members: /api/v2/tasks/1/members + owners: /api/v2/tasks/1/owners + runs: /api/v2/tasks/1/runs + self: /api/v2/tasks/1 + properties: + labels: + $ref: '#/components/schemas/Link' + logs: + $ref: '#/components/schemas/Link' + members: + $ref: '#/components/schemas/Link' + owners: + $ref: '#/components/schemas/Link' + runs: + $ref: '#/components/schemas/Link' + self: + $ref: '#/components/schemas/Link' + readOnly: true + type: object + name: + $ref: '#/components/schemas/TaskName' + offset: + $ref: '#/components/schemas/TaskOffset' + org: + $ref: '#/components/schemas/TaskOrg' + orgID: + $ref: '#/components/schemas/TaskOrgID' + ownerID: + description: | + A user ID. + Identifies the [user](/influxdb/latest/reference/glossary/#user) that owns the task. + + To find a user ID, use the + [`GET /api/v2/users` endpoint](#operation/GetUsers) to + list users. 
+ type: string + status: + $ref: '#/components/schemas/TaskStatusType' + updatedAt: + format: date-time + readOnly: true + type: string + required: + - id + - name + - orgID + - flux + type: object + TaskCreateRequest: + properties: + description: + $ref: '#/components/schemas/TaskDescription' + flux: + $ref: '#/components/schemas/TaskFlux' + name: + $ref: '#/components/schemas/TaskName' + org: + $ref: '#/components/schemas/TaskOrg' + orgID: + $ref: '#/components/schemas/TaskOrgID' + status: + $ref: '#/components/schemas/TaskStatusType' + required: + - flux + type: object + TaskCron: + description: A [Cron expression](https://en.wikipedia.org/wiki/Cron#Overview) that defines the schedule on which the task runs. InfluxDB uses the system time when evaluating Cron expressions. + format: cron + type: string + TaskDescription: + description: A description of the task. + type: string + TaskEvery: + description: The interval ([duration literal](/influxdb/latest/reference/glossary/#rfc3339-timestamp)) at which the task runs. `every` also determines when the task first runs, depending on the specified time. + format: duration + type: string + TaskFlux: + description: | + Flux with [task configuration options](/influxdb/latest/process-data/task-options/) + and the script for the task to run. + + #### Related guides + + - [Task configuration options](/influxdb/latest/process-data/task-options/) + format: Flux + type: string + TaskName: + description: The name of the task. + type: string + TaskOffset: + description: A [duration](https://docs.influxdata.com/flux/v0.x/spec/lexical-elements/#duration-literals) to delay execution of the task after the scheduled time has elapsed. `0` removes the offset. + format: duration + type: string + TaskOrg: + description: | + An organization name. + Identifies the [organization](/influxdb/latest/reference/glossary/#organization) that owns the task. + type: string + TaskOrgID: + description: | + An organization ID. 
+ Identifies the [organization](/influxdb/latest/reference/glossary/#organization) that owns the task. + type: string + TaskStatusType: + description: | + `inactive` cancels scheduled runs and prevents manual runs of the task. + enum: + - active + - inactive + type: string + TaskUpdateRequest: + properties: + cron: + $ref: '#/components/schemas/TaskCron' + description: + $ref: '#/components/schemas/TaskDescription' + every: + $ref: '#/components/schemas/TaskEvery' + flux: + $ref: '#/components/schemas/TaskFlux' + name: + $ref: '#/components/schemas/TaskName' + offset: + $ref: '#/components/schemas/TaskOffset' + status: + $ref: '#/components/schemas/TaskStatusType' + type: object + Tasks: + properties: + links: + $ref: '#/components/schemas/Links' + readOnly: true + tasks: + items: + $ref: '#/components/schemas/Task' + type: array + type: object + Telegraf: + allOf: + - $ref: '#/components/schemas/TelegrafRequest' + - properties: + id: + readOnly: true + type: string + labels: + $ref: '#/components/schemas/Labels' + readOnly: true + links: + example: + labels: /api/v2/telegrafs/1/labels + members: /api/v2/telegrafs/1/members + owners: /api/v2/telegrafs/1/owners + self: /api/v2/telegrafs/1 + properties: + labels: + $ref: '#/components/schemas/Link' + members: + $ref: '#/components/schemas/Link' + owners: + $ref: '#/components/schemas/Link' + self: + $ref: '#/components/schemas/Link' + readOnly: true + type: object + type: object + type: object + TelegrafPlugin: + properties: + config: + type: string + description: + type: string + name: + type: string + type: + type: string + type: object + TelegrafPluginRequest: + properties: + config: + type: string + description: + type: string + metadata: + properties: + buckets: + items: + type: string + type: array + type: object + name: + type: string + orgID: + type: string + plugins: + items: + properties: + alias: + type: string + config: + type: string + description: + type: string + name: + type: string + type: + type:
string + type: object + type: array + type: object + TelegrafPlugins: + properties: + os: + type: string + plugins: + items: + $ref: '#/components/schemas/TelegrafPlugin' + type: array + version: + type: string + type: object + TelegrafRequest: + properties: + config: + type: string + description: + type: string + metadata: + properties: + buckets: + items: + type: string + type: array + type: object + name: + type: string + orgID: + type: string + type: object + Telegrafs: + properties: + configurations: + items: + $ref: '#/components/schemas/Telegraf' + type: array + type: object + TelegramNotificationEndpoint: + allOf: + - $ref: '#/components/schemas/NotificationEndpointBase' + - properties: + channel: + description: The ID of the telegram channel; a chat_id in https://core.telegram.org/bots/api#sendmessage . + type: string + token: + description: Specifies the Telegram bot token. See https://core.telegram.org/bots#creating-a-new-bot . + type: string + required: + - token + - channel + type: object + type: object + TelegramNotificationRule: + allOf: + - $ref: '#/components/schemas/NotificationRuleBase' + - $ref: '#/components/schemas/TelegramNotificationRuleBase' + TelegramNotificationRuleBase: + properties: + disableWebPagePreview: + description: Disables preview of web links in the sent messages when "true". Defaults to "false". + type: boolean + messageTemplate: + description: The message template as a flux interpolated string. + type: string + parseMode: + description: Parse mode of the message text per https://core.telegram.org/bots/api#formatting-options. Defaults to "MarkdownV2". + enum: + - MarkdownV2 + - HTML + - Markdown + type: string + type: + description: The discriminator between other types of notification rules is "telegram". + enum: + - telegram + type: string + required: + - type + - messageTemplate + - channel + type: object + Template: + items: + description: | + A template entry. + Defines an InfluxDB resource in a template. 
+ properties: + apiVersion: + example: influxdata.com/v2alpha1 + type: string + kind: + $ref: '#/components/schemas/TemplateKind' + metadata: + description: | + Metadata properties used for the resource when the template is applied. + properties: + name: + type: string + type: object + spec: + description: | + Configuration properties used for the resource when the template is applied. + Key-value pairs map to the specification for the resource. + + The following code samples show `spec` configurations for template resources: + + - A bucket: + + ```json + { "spec": { + "name": "iot_center", + "retentionRules": [{ + "everySeconds": 2.592e+06, + "type": "expire" + }] + } + } + ``` + + - A variable: + + ```json + { "spec": { + "language": "flux", + "name": "Node_Service", + "query": "import \"influxdata/influxdb/v1\"\r\nv1.tagValues(bucket: \"iot_center\", + tag: \"service\")", + "type": "query" + } + } + ``` + type: object + type: object + type: array + TemplateApply: + properties: + actions: + description: | + A list of `action` objects. + Actions let you customize how InfluxDB applies templates in the request. + + You can use the following actions to prevent creating or updating resources: + + - A `skipKind` action skips template resources of a specified `kind`. + - A `skipResource` action skips template resources with a specified `metadata.name` + and `kind`. + items: + oneOf: + - properties: + action: + enum: + - skipKind + type: string + properties: + properties: + kind: + $ref: '#/components/schemas/TemplateKind' + required: + - kind + type: object + type: object + - properties: + action: + enum: + - skipResource + type: string + properties: + properties: + kind: + $ref: '#/components/schemas/TemplateKind' + resourceTemplateName: + type: string + required: + - kind + - resourceTemplateName + type: object + type: object + type: array + dryRun: + description: | + Only applies a dry run of the templates passed in the request. 
+ + - Validates the template and generates a resource diff and summary. + - Doesn't install templates or make changes to the InfluxDB instance. + type: boolean + envRefs: + additionalProperties: + oneOf: + - type: string + - type: integer + - type: number + - type: boolean + description: | + An object with key-value pairs that map to **environment references** in templates. + + Environment references in templates are `envRef` objects with an `envRef.key` + property. + To substitute a custom environment reference value when applying templates, + pass `envRefs` with the `envRef.key` and the value. + + When you apply a template, InfluxDB replaces `envRef` objects in the template + with the values that you provide in the `envRefs` parameter. + For more examples, see how to [define environment references](/influxdb/latest/influxdb-templates/use/#define-environment-references). + + The following template fields may use environment references: + + - `metadata.name` + - `spec.endpointName` + - `spec.associations.name` + + For more information about including environment references in template fields, see how to + [include user-definable resource names](/influxdb/latest/influxdb-templates/create/#include-user-definable-resource-names). + type: object + orgID: + description: | + Organization ID. + InfluxDB applies templates to this organization. + The organization owns all resources created by the template. + + To find your organization, see how to + [view organizations](/influxdb/latest/organizations/view-orgs/). + type: string + remotes: + description: | + A list of URLs for template files. + + To apply a template manifest file located at a URL, pass `remotes` + with an array that contains the URL. + items: + properties: + contentType: + type: string + url: + type: string + required: + - url + type: object + type: array + secrets: + additionalProperties: + type: string + description: | + An object with key-value pairs that map to **secrets** in queries. 
+ + Queries may reference secrets stored in InfluxDB--for example, + the following Flux script retrieves `POSTGRES_USERNAME` and `POSTGRES_PASSWORD` + secrets and then uses them to connect to a PostgreSQL database: + + ```js + import "sql" + import "influxdata/influxdb/secrets" + + username = secrets.get(key: "POSTGRES_USERNAME") + password = secrets.get(key: "POSTGRES_PASSWORD") + + sql.from( + driverName: "postgres", + dataSourceName: "postgresql://${username}:${password}@localhost:5432", + query: "SELECT * FROM example_table", + ) + ``` + + To define secret values in your `/api/v2/templates/apply` request, + pass the `secrets` parameter with key-value pairs--for example: + + ```json + { + ... + "secrets": { + "POSTGRES_USERNAME": "pguser", + "POSTGRES_PASSWORD": "foo" + } + ... + } + ``` + + InfluxDB stores the key-value pairs as secrets that you can access with `secrets.get()`. + Once stored, you can't view secret values in InfluxDB. + + #### Related guides + + - [How to pass secrets when installing a template](/influxdb/latest/influxdb-templates/use/#pass-secrets-when-installing-a-template) + type: object + stackID: + description: | + ID of the stack to update. + + To apply templates to an existing stack in the organization, use the `stackID` parameter. + If you apply templates without providing a stack ID, + InfluxDB initializes a new stack with all new resources. + + To find a stack ID, use the InfluxDB [`/api/v2/stacks` API endpoint](#operation/ListStacks) to list stacks. + + #### Related guides + + - [Stacks](/influxdb/latest/influxdb-templates/stacks/) + - [View stacks](/influxdb/latest/influxdb-templates/stacks/view/) + type: string + template: + description: | + A template object to apply. + A template object has a `contents` property + with an array of InfluxDB resource configurations. + + Pass `template` to apply only one template object. + If you use `template`, you can't use the `templates` parameter. 
+ If you want to apply multiple template objects, use `templates` instead. + properties: + contentType: + type: string + contents: + $ref: '#/components/schemas/Template' + sources: + items: + type: string + type: array + type: object + templates: + description: | + A list of template objects to apply. + A template object has a `contents` property + with an array of InfluxDB resource configurations. + + Use the `templates` parameter to apply multiple template objects. + If you use `templates`, you can't use the `template` parameter. + items: + properties: + contentType: + type: string + contents: + $ref: '#/components/schemas/Template' + sources: + items: + type: string + type: array + type: object + type: array + type: object + TemplateChart: + properties: + height: + type: integer + properties: + $ref: '#/components/schemas/ViewProperties' + width: + type: integer + xPos: + type: integer + yPos: + type: integer + type: object + TemplateEnvReferences: + items: + properties: + defaultValue: + description: Default value provided for the reference when no value is provided + nullable: true + oneOf: + - type: string + - type: integer + - type: number + - type: boolean + envRefKey: + description: Key identified as an environment reference in the template + type: string + resourceField: + description: Field that the environment reference corresponds to + type: string + value: + description: Value provided to fulfill the reference + nullable: true + oneOf: + - type: string + - type: integer + - type: number + - type: boolean + required: + - resourceField + - envRefKey + type: object + type: array + TemplateExportByID: + properties: + orgIDs: + items: + properties: + orgID: + type: string + resourceFilters: + properties: + byLabel: + items: + type: string + type: array + byResourceKind: + items: + $ref: '#/components/schemas/TemplateKind' + type: array + type: object + type: object + type: array + resources: + items: + properties: + id: + type:
string + kind: + $ref: '#/components/schemas/TemplateKind' + name: + description: If defined along with `id`, `name` is used for the resource exported by ID. If defined independently, only resources strictly matching `name` are exported. + type: string + required: + - id + - kind + type: object + type: array + stackID: + type: string + type: object + TemplateExportByName: + properties: + orgIDs: + items: + properties: + orgID: + type: string + resourceFilters: + properties: + byLabel: + items: + type: string + type: array + byResourceKind: + items: + $ref: '#/components/schemas/TemplateKind' + type: array + type: object + type: object + type: array + resources: + items: + properties: + kind: + $ref: '#/components/schemas/TemplateKind' + name: + type: string + required: + - name + - kind + type: object + type: array + stackID: + type: string + type: object + TemplateKind: + enum: + - Bucket + - Check + - CheckDeadman + - CheckThreshold + - Dashboard + - Label + - NotificationEndpoint + - NotificationEndpointHTTP + - NotificationEndpointPagerDuty + - NotificationEndpointSlack + - NotificationRule + - Task + - Telegraf + - Variable + type: string + TemplateSummary: + properties: + diff: + properties: + buckets: + items: + properties: + id: + type: string + kind: + $ref: '#/components/schemas/TemplateKind' + new: + properties: + description: + type: string + name: + type: string + retentionRules: + $ref: '#/components/schemas/RetentionRules' + type: object + old: + properties: + description: + type: string + name: + type: string + retentionRules: + $ref: '#/components/schemas/RetentionRules' + type: object + stateStatus: + type: string + templateMetaName: + type: string + type: object + type: array + checks: + items: + properties: + id: + type: string + kind: + $ref: '#/components/schemas/TemplateKind' + new: + $ref: '#/components/schemas/CheckDiscriminator' + old: + $ref: '#/components/schemas/CheckDiscriminator' + stateStatus: + type: string + templateMetaName: + type: string + type:
object + type: array + dashboards: + items: + properties: + id: + type: string + kind: + $ref: '#/components/schemas/TemplateKind' + new: + properties: + charts: + items: + $ref: '#/components/schemas/TemplateChart' + type: array + description: + type: string + name: + type: string + type: object + old: + properties: + charts: + items: + $ref: '#/components/schemas/TemplateChart' + type: array + description: + type: string + name: + type: string + type: object + stateStatus: + type: string + templateMetaName: + type: string + type: object + type: array + labelMappings: + items: + properties: + labelID: + type: string + labelName: + type: string + labelTemplateMetaName: + type: string + resourceID: + type: string + resourceName: + type: string + resourceTemplateMetaName: + type: string + resourceType: + type: string + status: + type: string + type: object + type: array + labels: + items: + properties: + id: + type: string + kind: + $ref: '#/components/schemas/TemplateKind' + new: + properties: + color: + type: string + description: + type: string + name: + type: string + type: object + old: + properties: + color: + type: string + description: + type: string + name: + type: string + type: object + stateStatus: + type: string + templateMetaName: + type: string + type: object + type: array + notificationEndpoints: + items: + properties: + id: + type: string + kind: + $ref: '#/components/schemas/TemplateKind' + new: + $ref: '#/components/schemas/NotificationEndpointDiscriminator' + old: + $ref: '#/components/schemas/NotificationEndpointDiscriminator' + stateStatus: + type: string + templateMetaName: + type: string + type: object + type: array + notificationRules: + items: + properties: + id: + type: string + kind: + $ref: '#/components/schemas/TemplateKind' + new: + properties: + description: + type: string + endpointID: + type: string + endpointName: + type: string + endpointType: + type: string + every: + type: string + messageTemplate: + type: string + name: + type: 
string + offset: + type: string + status: + type: string + statusRules: + items: + properties: + currentLevel: + type: string + previousLevel: + type: string + type: object + type: array + tagRules: + items: + properties: + key: + type: string + operator: + type: string + value: + type: string + type: object + type: array + type: object + old: + properties: + description: + type: string + endpointID: + type: string + endpointName: + type: string + endpointType: + type: string + every: + type: string + messageTemplate: + type: string + name: + type: string + offset: + type: string + status: + type: string + statusRules: + items: + properties: + currentLevel: + type: string + previousLevel: + type: string + type: object + type: array + tagRules: + items: + properties: + key: + type: string + operator: + type: string + value: + type: string + type: object + type: array + type: object + stateStatus: + type: string + templateMetaName: + type: string + type: object + type: array + tasks: + items: + properties: + id: + type: string + kind: + $ref: '#/components/schemas/TemplateKind' + new: + properties: + cron: + type: string + description: + type: string + every: + type: string + name: + type: string + offset: + type: string + query: + type: string + status: + type: string + type: object + old: + properties: + cron: + type: string + description: + type: string + every: + type: string + name: + type: string + offset: + type: string + query: + type: string + status: + type: string + type: object + stateStatus: + type: string + templateMetaName: + type: string + type: object + type: array + telegrafConfigs: + items: + properties: + id: + type: string + kind: + $ref: '#/components/schemas/TemplateKind' + new: + $ref: '#/components/schemas/TelegrafRequest' + old: + $ref: '#/components/schemas/TelegrafRequest' + stateStatus: + type: string + templateMetaName: + type: string + type: object + type: array + variables: + items: + properties: + id: + type: string + kind: + $ref: 
'#/components/schemas/TemplateKind' + new: + properties: + args: + $ref: '#/components/schemas/VariableProperties' + description: + type: string + name: + type: string + type: object + old: + properties: + args: + $ref: '#/components/schemas/VariableProperties' + description: + type: string + name: + type: string + type: object + stateStatus: + type: string + templateMetaName: + type: string + type: object + type: array + type: object + errors: + items: + properties: + fields: + items: + type: string + type: array + indexes: + items: + type: integer + type: array + kind: + $ref: '#/components/schemas/TemplateKind' + reason: + type: string + type: object + type: array + sources: + items: + type: string + type: array + stackID: + type: string + summary: + properties: + buckets: + items: + properties: + description: + type: string + envReferences: + $ref: '#/components/schemas/TemplateEnvReferences' + id: + type: string + kind: + $ref: '#/components/schemas/TemplateKind' + labelAssociations: + items: + $ref: '#/components/schemas/TemplateSummaryLabel' + type: array + name: + type: string + orgID: + type: string + retentionPeriod: + type: integer + templateMetaName: + type: string + type: object + type: array + checks: + items: + allOf: + - $ref: '#/components/schemas/CheckDiscriminator' + - properties: + envReferences: + $ref: '#/components/schemas/TemplateEnvReferences' + kind: + $ref: '#/components/schemas/TemplateKind' + labelAssociations: + items: + $ref: '#/components/schemas/TemplateSummaryLabel' + type: array + templateMetaName: + type: string + type: object + type: array + dashboards: + items: + properties: + charts: + items: + $ref: '#/components/schemas/TemplateChart' + type: array + description: + type: string + envReferences: + $ref: '#/components/schemas/TemplateEnvReferences' + id: + type: string + kind: + $ref: '#/components/schemas/TemplateKind' + labelAssociations: + items: + $ref: '#/components/schemas/TemplateSummaryLabel' + type: array + name: + 
type: string + orgID: + type: string + templateMetaName: + type: string + type: object + type: array + labelMappings: + items: + properties: + labelID: + type: string + labelName: + type: string + labelTemplateMetaName: + type: string + resourceID: + type: string + resourceName: + type: string + resourceTemplateMetaName: + type: string + resourceType: + type: string + status: + type: string + type: object + type: array + labels: + items: + $ref: '#/components/schemas/TemplateSummaryLabel' + type: array + missingEnvRefs: + items: + type: string + type: array + missingSecrets: + items: + type: string + type: array + notificationEndpoints: + items: + allOf: + - $ref: '#/components/schemas/NotificationEndpointDiscriminator' + - properties: + envReferences: + $ref: '#/components/schemas/TemplateEnvReferences' + kind: + $ref: '#/components/schemas/TemplateKind' + labelAssociations: + items: + $ref: '#/components/schemas/TemplateSummaryLabel' + type: array + templateMetaName: + type: string + type: object + type: array + notificationRules: + items: + properties: + description: + type: string + endpointID: + type: string + endpointTemplateMetaName: + type: string + endpointType: + type: string + envReferences: + $ref: '#/components/schemas/TemplateEnvReferences' + every: + type: string + kind: + $ref: '#/components/schemas/TemplateKind' + labelAssociations: + items: + $ref: '#/components/schemas/TemplateSummaryLabel' + type: array + messageTemplate: + type: string + name: + type: string + offset: + type: string + status: + type: string + statusRules: + items: + properties: + currentLevel: + type: string + previousLevel: + type: string + type: object + type: array + tagRules: + items: + properties: + key: + type: string + operator: + type: string + value: + type: string + type: object + type: array + templateMetaName: + type: string + type: object + type: array + tasks: + items: + properties: + cron: + type: string + description: + type: string + envReferences: + $ref: 
'#/components/schemas/TemplateEnvReferences' + every: + type: string + id: + type: string + kind: + $ref: '#/components/schemas/TemplateKind' + name: + type: string + offset: + type: string + query: + type: string + status: + type: string + templateMetaName: + type: string + type: object + type: array + telegrafConfigs: + items: + allOf: + - $ref: '#/components/schemas/TelegrafRequest' + - properties: + envReferences: + $ref: '#/components/schemas/TemplateEnvReferences' + kind: + $ref: '#/components/schemas/TemplateKind' + labelAssociations: + items: + $ref: '#/components/schemas/TemplateSummaryLabel' + type: array + templateMetaName: + type: string + type: object + type: array + variables: + items: + properties: + arguments: + $ref: '#/components/schemas/VariableProperties' + description: + type: string + envReferences: + $ref: '#/components/schemas/TemplateEnvReferences' + id: + type: string + kind: + $ref: '#/components/schemas/TemplateKind' + labelAssociations: + items: + $ref: '#/components/schemas/TemplateSummaryLabel' + type: array + name: + type: string + orgID: + type: string + templateMetaName: + type: string + type: object + type: array + type: object + type: object + TemplateSummaryLabel: + properties: + envReferences: + $ref: '#/components/schemas/TemplateEnvReferences' + id: + type: string + kind: + $ref: '#/components/schemas/TemplateKind' + name: + type: string + orgID: + type: string + properties: + properties: + color: + type: string + description: + type: string + type: object + templateMetaName: + type: string + type: object + TestStatement: + description: Declares a Flux test case + properties: + assignment: + $ref: '#/components/schemas/VariableAssignment' + type: + $ref: '#/components/schemas/NodeType' + type: object + Threshold: + discriminator: + mapping: + greater: '#/components/schemas/GreaterThreshold' + lesser: '#/components/schemas/LesserThreshold' + range: '#/components/schemas/RangeThreshold' + propertyName: type + oneOf: + - $ref: 
'#/components/schemas/GreaterThreshold' + - $ref: '#/components/schemas/LesserThreshold' + - $ref: '#/components/schemas/RangeThreshold' + ThresholdBase: + properties: + allValues: + description: If true, only alert if all values meet threshold. + type: boolean + level: + $ref: '#/components/schemas/CheckStatusLevel' + ThresholdCheck: + allOf: + - $ref: '#/components/schemas/CheckBase' + - properties: + every: + description: Check repetition interval. + type: string + offset: + description: Duration to delay after the schedule, before executing check. + type: string + statusMessageTemplate: + description: The template used to generate and write a status message. + type: string + tags: + description: List of tags to write to each status. + items: + properties: + key: + type: string + value: + type: string + type: object + type: array + thresholds: + items: + $ref: '#/components/schemas/Threshold' + type: array + type: + enum: + - threshold + type: string + required: + - type + type: object + Token: + properties: + token: + type: string + type: object + UnaryExpression: + description: Uses operators to act on a single operand in an expression + properties: + argument: + $ref: '#/components/schemas/Expression' + operator: + type: string + type: + $ref: '#/components/schemas/NodeType' + type: object + UnsignedIntegerLiteral: + description: Represents integer numbers + properties: + type: + $ref: '#/components/schemas/NodeType' + value: + type: string + type: object + User: + properties: + id: + description: The user ID. + readOnly: true + type: string + name: + description: The user name. + type: string + status: + default: active + description: | + If `inactive`, the user is inactive. + Default is `active`. + enum: + - active + - inactive + type: string + required: + - name + UserResponse: + properties: + id: + description: | + The user ID. 
+ readOnly: true + type: string + links: + example: + self: /api/v2/users/1 + properties: + self: + format: uri + type: string + readOnly: true + type: object + name: + description: | + The user name. + type: string + status: + default: active + description: | + The status of a user. + An inactive user can't read or write resources. + enum: + - active + - inactive + type: string + required: + - name + Users: + properties: + links: + properties: + self: + format: uri + type: string + type: object + users: + items: + $ref: '#/components/schemas/UserResponse' + type: array + type: object + Variable: + properties: + arguments: + $ref: '#/components/schemas/VariableProperties' + createdAt: + format: date-time + type: string + description: + type: string + id: + readOnly: true + type: string + labels: + $ref: '#/components/schemas/Labels' + links: + properties: + labels: + format: uri + type: string + org: + format: uri + type: string + self: + format: uri + type: string + readOnly: true + type: object + name: + type: string + orgID: + type: string + selected: + items: + type: string + type: array + updatedAt: + format: date-time + type: string + required: + - name + - orgID + - arguments + type: object + VariableAssignment: + description: Represents the declaration of a variable + properties: + id: + $ref: '#/components/schemas/Identifier' + init: + $ref: '#/components/schemas/Expression' + type: + $ref: '#/components/schemas/NodeType' + type: object + VariableProperties: + oneOf: + - $ref: '#/components/schemas/QueryVariableProperties' + - $ref: '#/components/schemas/ConstantVariableProperties' + - $ref: '#/components/schemas/MapVariableProperties' + type: object + Variables: + example: + variables: + - arguments: + type: constant + values: + - howdy + - hello + - hi + - yo + - oy + id: '1221432' + name: ':ok:' + selected: + - hello + - arguments: + type: map + values: + a: fdjaklfdjkldsfjlkjdsa + b: dfaksjfkljekfajekdljfas + c: fdjksajfdkfeawfeea + id: '1221432' + 
name: ':ok:' + selected: + - c + - arguments: + language: flux + query: 'from(bucket: "foo") |> showMeasurements()' + type: query + id: '1221432' + name: ':ok:' + selected: + - host + properties: + variables: + items: + $ref: '#/components/schemas/Variable' + type: array + type: object + View: + properties: + id: + readOnly: true + type: string + links: + properties: + self: + type: string + readOnly: true + type: object + name: + type: string + properties: + $ref: '#/components/schemas/ViewProperties' + required: + - name + - properties + ViewProperties: + oneOf: + - $ref: '#/components/schemas/LinePlusSingleStatProperties' + - $ref: '#/components/schemas/XYViewProperties' + - $ref: '#/components/schemas/SingleStatViewProperties' + - $ref: '#/components/schemas/HistogramViewProperties' + - $ref: '#/components/schemas/GaugeViewProperties' + - $ref: '#/components/schemas/TableViewProperties' + - $ref: '#/components/schemas/SimpleTableViewProperties' + - $ref: '#/components/schemas/MarkdownViewProperties' + - $ref: '#/components/schemas/CheckViewProperties' + - $ref: '#/components/schemas/ScatterViewProperties' + - $ref: '#/components/schemas/HeatmapViewProperties' + - $ref: '#/components/schemas/MosaicViewProperties' + - $ref: '#/components/schemas/BandViewProperties' + - $ref: '#/components/schemas/GeoViewProperties' + Views: + properties: + links: + properties: + self: + type: string + type: object + views: + items: + $ref: '#/components/schemas/View' + type: array + type: object + WritePrecision: + enum: + - ms + - s + - us + - ns + type: string + XYGeom: + enum: + - line + - step + - stacked + - bar + - monotoneX + - stepBefore + - stepAfter + type: string + XYViewProperties: + properties: + adaptiveZoomHide: + type: boolean + axes: + $ref: '#/components/schemas/Axes' + colorMapping: + $ref: '#/components/schemas/ColorMapping' + description: An object that contains information about the color mapping + colors: + description: Colors define color encoding of data 
into a visualization + items: + $ref: '#/components/schemas/DashboardColor' + type: array + generateXAxisTicks: + items: + type: string + type: array + generateYAxisTicks: + items: + type: string + type: array + geom: + $ref: '#/components/schemas/XYGeom' + hoverDimension: + enum: + - auto + - x + - 'y' + - xy + type: string + legendColorizeRows: + type: boolean + legendHide: + type: boolean + legendOpacity: + format: float + type: number + legendOrientationThreshold: + type: integer + note: + type: string + position: + enum: + - overlaid + - stacked + type: string + queries: + items: + $ref: '#/components/schemas/DashboardQuery' + type: array + shadeBelow: + type: boolean + shape: + enum: + - chronograf-v2 + type: string + showNoteWhenEmpty: + description: If true, will display note when empty + type: boolean + staticLegend: + $ref: '#/components/schemas/StaticLegend' + timeFormat: + type: string + type: + enum: + - xy + type: string + xColumn: + type: string + xTickStart: + format: float + type: number + xTickStep: + format: float + type: number + xTotalTicks: + type: integer + yColumn: + type: string + yTickStart: + format: float + type: number + yTickStep: + format: float + type: number + yTotalTicks: + type: integer + required: + - type + - geom + - queries + - shape + - axes + - colors + - note + - showNoteWhenEmpty + - position + type: object + securitySchemes: + BasicAuthentication: + description: | + ### Basic authentication scheme + + Use the HTTP Basic authentication scheme for InfluxDB `/api/v2` API operations that support it: + + ### Syntax + + `Authorization: Basic BASE64_ENCODED_CREDENTIALS` + + To construct the `BASE64_ENCODED_CREDENTIALS`, combine the username and + the password with a colon (`USERNAME:PASSWORD`), and then encode the + resulting string in [base64](https://developer.mozilla.org/en-US/docs/Glossary/Base64). + Many HTTP clients encode the credentials for you before sending the + request. 
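If your client doesn't encode credentials for you, the header value is simple to build yourself. The following Python sketch (not part of the spec; `USERNAME` and `PASSWORD` are placeholders) produces the same value as the cURL and JavaScript examples in this section:

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    # Join the credentials with a colon, then Base64-encode the result.
    credentials = f"{username}:{password}".encode("utf-8")
    return "Basic " + base64.b64encode(credentials).decode("ascii")

print(basic_auth_header("USERNAME", "PASSWORD"))
```

Send the returned value as the `Authorization` header.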
+ + _**Warning**: Base64-encoding can easily be reversed to obtain the original + username and password. It is used to keep the data intact and does not provide + security. You should always use HTTPS when authenticating or sending a request with + sensitive information._ + + ### Examples + + In the examples, replace the following: + + - **`USERNAME`**: InfluxDB username + - **`PASSWORD`**: InfluxDB [API token](/influxdb/latest/reference/glossary/#token) + - **`INFLUX_URL`**: your InfluxDB URL + + #### Encode credentials with cURL + + The following example shows how to use cURL to send an API request that uses Basic authentication. + With the `--user` option, cURL encodes the credentials and passes them + in the `Authorization: Basic` header. + + ```sh + curl --get "INFLUX_URL/api/v2/signin" \ + --user "USERNAME":"PASSWORD" + ``` + + #### Encode credentials with Flux + + The Flux [`http.basicAuth()` function](https://docs.influxdata.com/flux/v0.x/stdlib/http/basicauth/) returns a Base64-encoded + basic authentication header using a specified username and password combination. + + #### Encode credentials with JavaScript + + The following example shows how to use the JavaScript `btoa()` function + to create a Base64-encoded string: + + ```js + btoa('USERNAME:PASSWORD') + ``` + + The output is the following: + + ```js + 'VVNFUk5BTUU6UEFTU1dPUkQ=' + ``` + + Once you have the Base64-encoded credentials, you can pass them in the + `Authorization` header--for example: + + ```sh + curl --get "INFLUX_URL/api/v2/signin" \ + --header "Authorization: Basic VVNFUk5BTUU6UEFTU1dPUkQ=" + ``` + + To learn more about HTTP authentication, see + [Mozilla Developer Network (MDN) Web Docs, HTTP authentication](https://developer.mozilla.org/en-US/docs/Web/HTTP/Authentication). + scheme: basic + type: http + TokenAuthentication: + description: | + Use the [Token authentication](#section/Authentication/TokenAuthentication) + scheme to authenticate to the InfluxDB API.
+ + In your API requests, send an `Authorization` header. + For the header value, provide the word `Token` followed by a space and an InfluxDB API token. + The word `Token` is case-sensitive. + + ### Syntax + + `Authorization: Token INFLUX_API_TOKEN` + + ### Example + + #### Use Token authentication with cURL + + The following example shows how to use cURL to send an API request that uses Token authentication: + + ```sh + curl --request GET "INFLUX_URL/api/v2/buckets" \ + --header "Authorization: Token INFLUX_API_TOKEN" + ``` + + Replace the following: + + - *`INFLUX_URL`*: your InfluxDB URL + - *`INFLUX_API_TOKEN`*: your [InfluxDB API token](/influxdb/latest/reference/glossary/#token) + + ### Related endpoints + + - [`/authorizations` endpoints](#tag/Authorizations-(API-tokens)) + + ### Related guides + + - [Authorize API requests](/influxdb/latest/api-guide/api_intro/#authentication) + - [Manage API tokens](/influxdb/latest/security/tokens/) + in: header + name: Authorization + type: apiKey +info: + title: Complete InfluxDB OSS API +openapi: 3.0.0 +paths: + /api/v2: + get: + description: | + Retrieves all the top level routes for the InfluxDB API. + + #### Limitations + + - Only returns top level routes--for example, the response contains + `"tasks":"/api/v2/tasks"`, and doesn't contain resource-specific routes + for tasks (`/api/v2/tasks/TASK_ID/...`). + operationId: GetRoutes + parameters: + - $ref: '#/components/parameters/TraceSpan' + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Routes' + description: | + Success. + The response body contains key-value pairs with the resource name and + top-level route. + summary: List all top level routes + tags: + - Routes + - System information endpoints + /api/v2/authorizations: + get: + description: | + Lists authorizations. + + To limit which authorizations are returned, pass query parameters in your request. 
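As a sketch of how a client might assemble such a filtered request, here is a small helper that builds (but doesn't send) the request using only Python's standard library; the base URL, token, and organization ID are placeholder values:

```python
from urllib.parse import urlencode
from urllib.request import Request

def list_authorizations(base_url: str, token: str, **filters: str) -> Request:
    # Optional filters such as userID, user, orgID, or org narrow the results.
    query = urlencode(filters)
    url = f"{base_url}/api/v2/authorizations" + (f"?{query}" if query else "")
    # Token authentication: the word "Token" in the header is case-sensitive.
    return Request(url, headers={"Authorization": f"Token {token}"})

req = list_authorizations("http://localhost:8086", "INFLUX_API_TOKEN", orgID="INFLUX_ORG_ID")
```

Passing the resulting `Request` to `urllib.request.urlopen()` would perform the call.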
+ If no query parameters are passed, InfluxDB returns all authorizations. + + #### InfluxDB Cloud + + - InfluxDB Cloud doesn't expose [API token](/influxdb/latest/reference/glossary/#token) + values in `GET /api/v2/authorizations` responses; + returns `token: redacted` for all authorizations. + + #### Required permissions + + To retrieve an authorization, the request must use an API token that has the + following permissions: + + - `read-authorizations` + - `read-user` for the user that the authorization is scoped to + + #### Related guides + + - [View tokens](/influxdb/latest/security/tokens/view-tokens/) + operationId: GetAuthorizations + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + A user ID. + Only returns authorizations scoped to the specified [user](/influxdb/latest/reference/glossary/#user). + in: query + name: userID + schema: + type: string + - description: | + A user name. + Only returns authorizations scoped to the specified [user](/influxdb/latest/reference/glossary/#user). + in: query + name: user + schema: + type: string + - description: An organization ID. Only returns authorizations that belong to the specified [organization](/influxdb/latest/reference/glossary/#organization). + in: query + name: orgID + schema: + type: string + - description: | + An organization name. + Only returns authorizations that belong to the specified [organization](/influxdb/latest/reference/glossary/#organization). + in: query + name: org + schema: + type: string + - description: | + An API [token](/influxdb/latest/reference/glossary/#token) value. + Specifies an authorization by its `token` property value + and returns the authorization. + + #### InfluxDB OSS + + - Doesn't support this parameter. InfluxDB OSS ignores the `token=` parameter, + applies other parameters, and then returns the result. + + #### Limitations + + - The parameter is non-repeatable. If you specify more than one, + only the first one is used. 
If a resource with the specified + property value doesn't exist, then the response body contains an empty list. + in: query + name: token + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Authorizations' + description: | + Success. The response body contains a list of authorizations. + + If the response body is missing authorizations that you expect, check that the API + token used in the request has `read-user` permission for the users (`userID` property value) + in those authorizations. + + #### InfluxDB OSS + + - **Warning**: The response body contains authorizations with their + [API token](/influxdb/latest/reference/glossary/#token) values in clear text. + - If the request uses an _[operator token](/influxdb/latest/security/tokens/#operator-token)_, + InfluxDB OSS returns authorizations for all organizations in the instance. + '400': + $ref: '#/components/responses/GeneralServerError' + description: Invalid request + '401': + $ref: '#/components/responses/AuthorizationError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + $ref: '#/components/responses/GeneralServerError' + description: Unexpected error + summary: List authorizations + tags: + - Authorizations (API tokens) + - Security and access endpoints + post: + description: | + Creates an authorization and returns the authorization with the + generated API [token](/influxdb/latest/reference/glossary/#token). + + Use this endpoint to create an authorization, which generates an API token + with permissions to `read` or `write` to a specific resource or `type` of resource. + The API token is the authorization's `token` property value. + + To follow best practices for secure API token generation and retrieval, + InfluxDB enforces access restrictions on API tokens. + + - InfluxDB allows access to the API token value immediately after the authorization is created. 
+ - You can’t change access (read/write) permissions for an API token after it’s created. + - Tokens stop working when the user who created the token is deleted. + + We recommend the following for managing your tokens: + + - Create a generic user to create and manage tokens for writing data. + - Store your tokens in a secure password vault for future access. + + #### Required permissions + + - `write-authorizations` + - `write-user` for the user that the authorization is scoped to + + #### Related guides + + - [Create a token](/influxdb/latest/security/tokens/create-token/) + operationId: PostAuthorizations + parameters: + - $ref: '#/components/parameters/TraceSpan' + requestBody: + content: + application/json: + examples: + AuthorizationPostRequest: + $ref: '#/components/examples/AuthorizationPostRequest' + AuthorizationWithResourcePostRequest: + $ref: '#/components/examples/AuthorizationWithResourcePostRequest' + AuthorizationWithUserPostRequest: + $ref: '#/components/examples/AuthorizationWithUserPostRequest' + schema: + $ref: '#/components/schemas/AuthorizationPostRequest' + description: The authorization to create. + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/Authorization' + description: | + Success. The authorization is created. The response body contains the + authorization. + '400': + $ref: '#/components/responses/GeneralServerError' + description: Invalid request + '401': + $ref: '#/components/responses/AuthorizationError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + $ref: '#/components/responses/GeneralServerError' + description: Unexpected error + summary: Create an authorization + tags: + - Authorizations (API tokens) + - Security and access endpoints + /api/v2/authorizations/{authID}: + delete: + description: | + Deletes an authorization. + + Use the endpoint to delete an API token. 
+ + If you want to disable an API token instead of deleting it, + [update the authorization's status to `inactive`](#operation/PatchAuthorizationsID). + operationId: DeleteAuthorizationsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: An authorization ID. Specifies the authorization to delete. + in: path + name: authID + required: true + schema: + type: string + responses: + '204': + description: Success. The authorization is deleted. + '400': + content: + application/json: + examples: + notFound: + summary: | + The specified resource ID is invalid. + value: + code: invalid + message: id must have a length of 16 bytes + schema: + $ref: '#/components/schemas/Error' + description: | + Bad request. + '401': + $ref: '#/components/responses/AuthorizationError' + '404': + content: + application/json: + examples: + notFound: + summary: | + The requested authorization doesn't exist. + value: + code: not found + message: authorization not found + schema: + $ref: '#/components/schemas/Error' + description: | + Not found. + '500': + $ref: '#/components/responses/InternalServerError' + default: + $ref: '#/components/responses/GeneralServerError' + description: Unexpected error + summary: Delete an authorization + tags: + - Authorizations (API tokens) + - Security and access endpoints + get: + description: | + Retrieves an authorization. + + Use this endpoint to retrieve information about an API token, including + the token's permissions and the user that the token is scoped to. + + #### InfluxDB OSS + + - InfluxDB OSS returns + [API token](/influxdb/latest/reference/glossary/#token) values in authorizations. + - If the request uses an _[operator token](/influxdb/latest/security/tokens/#operator-token)_, + InfluxDB OSS returns authorizations for all organizations in the instance.
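Across the `/api/v2/authorizations/{authID}` operations the request pattern is the same; for example, a sketch of the `PATCH` request that deactivates a token, built with Python's standard library (URL, token, and authorization ID are all placeholders):

```python
import json
from urllib.request import Request

def set_authorization_status(base_url: str, token: str, auth_id: str, status: str) -> Request:
    # PATCH /api/v2/authorizations/{authID} with {"status": "inactive"}
    # disables a token without deleting it; "active" re-enables it.
    body = json.dumps({"status": status}).encode("utf-8")
    return Request(
        f"{base_url}/api/v2/authorizations/{auth_id}",
        data=body,
        method="PATCH",
        headers={
            "Authorization": f"Token {token}",
            "Content-Type": "application/json",
        },
    )

req = set_authorization_status("http://localhost:8086", "INFLUX_API_TOKEN", "0123456789abcdef", "inactive")
```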
+ + #### Related guides + + - [View tokens](/influxdb/latest/security/tokens/view-tokens/) + externalDocs: + description: View tokens + url: https://docs.influxdata.com/influxdb/latest/security/tokens/view-tokens/ + operationId: GetAuthorizationsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: An authorization ID. Specifies the authorization to retrieve. + in: path + name: authID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Authorization' + description: Success. The response body contains the authorization. + '400': + content: + application/json: + examples: + notFound: + summary: | + The specified resource ID is invalid. + value: + code: invalid + message: id must have a length of 16 bytes + schema: + $ref: '#/components/schemas/Error' + description: | + Bad request. + '401': + $ref: '#/components/responses/AuthorizationError' + '404': + content: + application/json: + examples: + notFound: + summary: | + The requested authorization doesn't exist. + value: + code: not found + message: authorization not found + schema: + $ref: '#/components/schemas/Error' + description: | + Not found. + '500': + $ref: '#/components/responses/InternalServerError' + default: + $ref: '#/components/responses/GeneralServerError' + description: Unexpected error + summary: Retrieve an authorization + tags: + - Authorizations (API tokens) + - Security and access endpoints + patch: + description: | + Updates an authorization. + + Use this endpoint to set an API token's status to be _active_ or _inactive_. + InfluxDB rejects requests that use inactive API tokens. + operationId: PatchAuthorizationsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: An authorization ID. Specifies the authorization to update. 
+ in: path + name: authID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/AuthorizationUpdateRequest' + description: In the request body, provide the authorization properties to update. + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Authorization' + description: Success. The response body contains the updated authorization. + default: + $ref: '#/components/responses/GeneralServerError' + description: Unexpected error + summary: Update an API token to be active or inactive + tags: + - Authorizations (API tokens) + - Security and access endpoints + /api/v2/backup/kv: + get: + deprecated: true + description: | + Retrieves a snapshot of metadata stored in the server's embedded KV store. + InfluxDB versions greater than 2.1.x don't include metadata stored in embedded SQL; + avoid using this endpoint with versions greater than 2.1.x. + operationId: GetBackupKV + parameters: + - $ref: '#/components/parameters/TraceSpan' + responses: + '200': + content: + application/octet-stream: + schema: + format: binary + type: string + description: Success. The response contains a snapshot of KV metadata. + default: + $ref: '#/components/responses/GeneralServerError' + description: Unexpected error + summary: Download snapshot of metadata stored in the server's embedded KV store. Don't use with InfluxDB versions greater than InfluxDB 2.1.x. + tags: + - Backup + /api/v2/backup/metadata: + get: + operationId: GetBackupMetadata + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: Indicates the content encoding (usually a compression algorithm) that the client can understand. + in: header + name: Accept-Encoding + schema: + default: identity + description: The content coding. Use `gzip` for compressed data or `identity` for unmodified, uncompressed data. 
+ enum: + - gzip + - identity + type: string + responses: + '200': + content: + multipart/mixed: + schema: + $ref: '#/components/schemas/MetadataBackup' + description: Snapshot of metadata + headers: + Content-Encoding: + description: Lists any encodings (usually compression algorithms) that have been applied to the response payload. + schema: + default: identity + description: | + The content coding: `gzip` for compressed data or `identity` for unmodified, uncompressed data. + enum: + - gzip + - identity + type: string + default: + $ref: '#/components/responses/GeneralServerError' + description: Unexpected error + summary: Download snapshot of all metadata in the server + tags: + - Backup + /api/v2/backup/shards/{shardID}: + get: + operationId: GetBackupShardId + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: Indicates the content encoding (usually a compression algorithm) that the client can understand. + in: header + name: Accept-Encoding + schema: + default: identity + description: The content coding. Use `gzip` for compressed data or `identity` for unmodified, uncompressed data. + enum: + - gzip + - identity + type: string + - description: The shard ID. + in: path + name: shardID + required: true + schema: + format: int64 + type: integer + - description: The earliest time [RFC3339 date/time format](/influxdb/latest/reference/glossary/#rfc3339-timestamp) to include in the snapshot. + examples: + RFC3339: + summary: RFC3339 date/time format + value: 2006-01-02T15:04:05Z07:00 + in: query + name: since + schema: + format: date-time + type: string + responses: + '200': + content: + application/octet-stream: + schema: + format: binary + type: string + description: TSM snapshot. + headers: + Content-Encoding: + description: Lists any encodings (usually compression algorithms) that have been applied to the response payload. 
+ schema: + default: identity + description: | + The content coding: `gzip` for compressed data or `identity` for unmodified, uncompressed data. + enum: + - gzip + - identity + type: string + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Shard not found. + default: + $ref: '#/components/responses/GeneralServerError' + description: Unexpected error + summary: Download snapshot of all TSM data in a shard + tags: + - Backup + /api/v2/buckets: + get: + description: | + Lists [buckets](/influxdb/latest/reference/glossary/#bucket). + + InfluxDB retrieves buckets owned by the + [organization](/influxdb/latest/reference/glossary/#organization) + associated with the authorization + ([API token](/influxdb/latest/reference/glossary/#token)). + To limit which buckets are returned, pass query parameters in your request. + If no query parameters are passed, InfluxDB returns all buckets up to the + default `limit`. + + #### InfluxDB OSS + + - If you use an _[operator token](/influxdb/latest/security/tokens/#operator-token)_ + to authenticate your request, InfluxDB retrieves resources for _all + organizations_ in the instance. + To retrieve resources for only a specific organization, use the + `org` parameter or the `orgID` parameter to specify the organization. + + #### Required permissions + + | Action | Permission required | + |:--------------------------|:--------------------| + | Retrieve _user buckets_ | `read-buckets` | + | Retrieve [_system buckets_](/influxdb/latest/reference/internals/system-buckets/) | `read-orgs` | + + #### Related Guides + + - [Manage buckets](/influxdb/latest/organizations/buckets/) + operationId: GetBuckets + parameters: + - $ref: '#/components/parameters/TraceSpan' + - $ref: '#/components/parameters/Offset' + - $ref: '#/components/parameters/Limit' + - $ref: '#/components/parameters/After' + - description: | + An organization name. 
+
+            #### InfluxDB Cloud
+
+            - Doesn't use the `org` parameter or `orgID` parameter.
+            - Lists buckets for the organization associated with the authorization (API token).
+
+            #### InfluxDB OSS
+
+            - Lists buckets for the specified organization.
+          in: query
+          name: org
+          schema:
+            type: string
+        - description: |
+            An organization ID.
+
+            #### InfluxDB Cloud
+
+            - Doesn't use the `org` parameter or `orgID` parameter.
+            - Lists buckets for the organization associated with the authorization (API token).
+
+            #### InfluxDB OSS
+
+            - Requires either the `org` parameter or `orgID` parameter.
+            - Lists buckets for the specified organization.
+          in: query
+          name: orgID
+          schema:
+            type: string
+        - description: |
+            A bucket name.
+            Only returns buckets with the specified name.
+          in: query
+          name: name
+          schema:
+            type: string
+        - description: |
+            A bucket ID.
+            Only returns the bucket with the specified ID.
+          in: query
+          name: id
+          schema:
+            type: string
+      responses:
+        '200':
+          content:
+            application/json:
+              examples:
+                successResponse:
+                  value:
+                    buckets:
+                      - createdAt: '2022-03-15T17:22:33.72617939Z'
+                        description: System bucket for monitoring logs
+                        id: 77ca9dace40a9bfc
+                        labels: []
+                        links:
+                          labels: /api/v2/buckets/77ca9dace40a9bfc/labels
+                          members: /api/v2/buckets/77ca9dace40a9bfc/members
+                          org: /api/v2/orgs/INFLUX_ORG_ID
+                          owners: /api/v2/buckets/77ca9dace40a9bfc/owners
+                          self: /api/v2/buckets/77ca9dace40a9bfc
+                          write: /api/v2/write?org=INFLUX_ORG_ID&bucket=77ca9dace40a9bfc
+                        name: _monitoring
+                        orgID: INFLUX_ORG_ID
+                        retentionRules:
+                          - everySeconds: 604800
+                            type: expire
+                        schemaType: implicit
+                        type: system
+                        updatedAt: '2022-03-15T17:22:33.726179487Z'
+                    links:
+                      self: /api/v2/buckets?descending=false&limit=20&name=_monitoring&offset=0&orgID=INFLUX_ORG_ID
+              schema:
+                $ref: '#/components/schemas/Buckets'
+          description: |
+            Success.
+            The response body contains a list of `buckets`.
+ '401': + $ref: '#/components/responses/AuthorizationError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List buckets + tags: + - Buckets + x-codeSamples: + - label: 'cURL: filter buckets by name' + lang: Shell + source: | + curl --request GET "http://localhost:8086/api/v2/buckets?name=_monitoring" \ + --header "Authorization: Token INFLUX_TOKEN" \ + --header "Accept: application/json" \ + --header "Content-Type: application/json" + post: + description: | + Creates a [bucket](/influxdb/latest/reference/glossary/#bucket) + and returns the bucket resource. + The default data + [retention period](/influxdb/latest/reference/glossary/#retention-period) + is 30 days. + + #### InfluxDB OSS + + - A single InfluxDB OSS instance supports active writes or queries for + approximately 20 buckets across all organizations at a given time. + Reading or writing to more than 20 buckets at a time can adversely affect + performance. + + #### Limitations + + - InfluxDB Cloud Free Plan allows users to create up to two buckets. + Exceeding the bucket quota will result in an HTTP `403` status code. + For additional information regarding InfluxDB Cloud offerings, see + [InfluxDB Cloud Pricing](https://www.influxdata.com/influxdb-cloud-pricing/). + + #### Related Guides + + - [Create a bucket](/influxdb/latest/organizations/buckets/create-bucket/) + - [Create bucket CLI reference](/influxdb/latest/reference/cli/influx/bucket/create) + operationId: PostBuckets + parameters: + - $ref: '#/components/parameters/TraceSpan' + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/PostBucketRequest' + description: The bucket to create. 
+ required: true + responses: + '201': + content: + application/json: + examples: + successResponse: + value: + createdAt: '2022-08-03T23:04:41.073704121Z' + description: A bucket holding air sensor data + id: 37407e232b3911d8 + labels: [] + links: + labels: /api/v2/buckets/37407e232b3911d8/labels + members: /api/v2/buckets/37407e232b3911d8/members + org: /api/v2/orgs/INFLUX_ORG_ID + owners: /api/v2/buckets/37407e232b3911d8/owners + self: /api/v2/buckets/37407e232b3911d8 + write: /api/v2/write?org=INFLUX_ORG_ID&bucket=37407e232b3911d8 + name: air_sensor + orgID: INFLUX_ORG_ID + retentionRules: + - everySeconds: 2592000 + type: expire + schemaType: implicit + type: user + updatedAt: '2022-08-03T23:04:41.073704228Z' + schema: + $ref: '#/components/schemas/Bucket' + description: | + Success. + The bucket is created. + '400': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: | + Bad request. + '401': + $ref: '#/components/responses/AuthorizationError' + '403': + content: + application/json: + examples: + quotaExceeded: + summary: Bucket quota exceeded + value: + code: forbidden + message: creating bucket would exceed quota + schema: + $ref: '#/components/schemas/Error' + description: | + Forbidden. + The bucket quota is exceeded. + headers: + X-Platform-Error-Code: + description: | + The reason for the error. + schema: + example: forbidden + type: string + '422': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: | + Unprocessable Entity. + The request body failed validation. 
+        '500':
+          $ref: '#/components/responses/InternalServerError'
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Create a bucket
+      tags:
+        - Buckets
+      x-codeSamples:
+        - label: 'cURL: create a bucket with retention period'
+          lang: Shell
+          source: |
+            curl --request POST "http://localhost:8086/api/v2/buckets" \
+              --header "Authorization: Token INFLUX_TOKEN" \
+              --header "Accept: application/json" \
+              --header "Content-Type: application/json" \
+              --data '{
+                "name": "air_sensor",
+                "description": "A bucket holding air sensor data",
+                "orgID": "INFLUX_ORG_ID",
+                "retentionRules": [
+                  {
+                    "type": "expire",
+                    "everySeconds": 2592000
+                  }
+                ]
+              }'
+  /api/v2/buckets/{bucketID}:
+    delete:
+      description: |
+        Deletes a bucket and all associated records.
+
+        #### InfluxDB Cloud
+
+        - Does the following when you send a delete request:
+
+          1. Validates the request and queues the delete.
+          2. Returns an HTTP `204` status code if queued; _error_ otherwise.
+          3. Handles the delete asynchronously.
+
+        #### InfluxDB OSS
+
+        - Validates the request, handles the delete synchronously,
+          and then responds with success or failure.
+
+        #### Limitations
+
+        - Only one bucket can be deleted per request.
+
+        #### Related Guides
+
+        - [Delete a bucket](/influxdb/latest/organizations/buckets/delete-bucket/#delete-a-bucket-in-the-influxdb-ui)
+      operationId: DeleteBucketsID
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: |
+            Bucket ID.
+            The ID of the bucket to delete.
+          in: path
+          name: bucketID
+          required: true
+          schema:
+            type: string
+      responses:
+        '204':
+          description: |
+            Success.
+
+            #### InfluxDB Cloud
+            - The bucket is queued for deletion.
+
+            #### InfluxDB OSS
+            - The bucket is deleted.
+        '400':
+          content:
+            application/json:
+              examples:
+                invalidID:
+                  summary: |
+                    Invalid ID.
+ value: + code: invalid + message: id must have a length of 16 bytes + schema: + $ref: '#/components/schemas/Error' + description: | + Bad Request. + '401': + $ref: '#/components/responses/AuthorizationError' + '404': + content: + application/json: + examples: + notFound: + summary: | + The requested bucket was not found. + value: + code: not found + message: bucket not found + schema: + $ref: '#/components/schemas/Error' + description: | + Not found. + Bucket not found. + '500': + $ref: '#/components/responses/InternalServerError' + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Delete a bucket + tags: + - Buckets + x-codeSamples: + - label: cURL + lang: Shell + source: | + curl --request DELETE "http://localhost:8086/api/v2/buckets/BUCKET_ID" \ + --header "Authorization: Token INFLUX_TOKEN" \ + --header 'Accept: application/json' + get: + description: | + Retrieves a bucket. + + Use this endpoint to retrieve information for a specific bucket. + operationId: GetBucketsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + The ID of the bucket to retrieve. 
+ in: path + name: bucketID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + examples: + successResponse: + value: + createdAt: '2022-08-03T23:04:41.073704121Z' + description: bucket for air sensor data + id: 37407e232b3911d8 + labels: [] + links: + labels: /api/v2/buckets/37407e232b3911d8/labels + members: /api/v2/buckets/37407e232b3911d8/members + org: /api/v2/orgs/INFLUX_ORG_ID + owners: /api/v2/buckets/37407e232b3911d8/owners + self: /api/v2/buckets/37407e232b3911d8 + write: /api/v2/write?org=INFLUX_ORG_ID&bucket=37407e232b3911d8 + name: air-sensor + orgID: bea7ea952287f70d + retentionRules: + - everySeconds: 2592000 + type: expire + schemaType: implicit + type: user + updatedAt: '2022-08-03T23:04:41.073704228Z' + schema: + $ref: '#/components/schemas/Bucket' + description: | + Success. + The response body contains the bucket information. + '401': + $ref: '#/components/responses/AuthorizationError' + '404': + content: + application/json: + examples: + notFound: + summary: | + The requested bucket wasn't found. + value: + code: not found + message: bucket not found + schema: + $ref: '#/components/schemas/Error' + description: | + Not found. + Bucket not found. + '500': + $ref: '#/components/responses/InternalServerError' + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Retrieve a bucket + tags: + - Buckets + patch: + description: | + Updates a bucket. + + Use this endpoint to update properties + (`name`, `description`, and `retentionRules`) of a bucket. + + #### InfluxDB Cloud + + - Requires the `retentionRules` property in the request body. If you don't + provide `retentionRules`, InfluxDB responds with an HTTP `403` status code. + + #### InfluxDB OSS + + - Doesn't require `retentionRules`. 
+ + #### Related Guides + + - [Update a bucket](/influxdb/latest/organizations/buckets/update-bucket/) + operationId: PatchBucketsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The bucket ID. + in: path + name: bucketID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/PatchBucketRequest' + description: The bucket update to apply. + required: true + responses: + '200': + content: + application/json: + examples: + successResponse: + value: + createdAt: '2022-08-03T23:04:41.073704121Z' + description: bucket holding air sensor data + id: 37407e232b3911d8 + labels: [] + links: + labels: /api/v2/buckets/37407e232b3911d8/labels + members: /api/v2/buckets/37407e232b3911d8/members + org: /api/v2/orgs/INFLUX_ORG_ID + owners: /api/v2/buckets/37407e232b3911d8/owners + self: /api/v2/buckets/37407e232b3911d8 + write: /api/v2/write?org=INFLUX_ORG_ID&bucket=37407e232b3911d8 + name: air_sensor + orgID: INFLUX_ORG_ID + retentionRules: + - everySeconds: 2592000 + type: expire + schemaType: implicit + type: user + updatedAt: '2022-08-07T22:49:49.422962913Z' + schema: + $ref: '#/components/schemas/Bucket' + description: An updated bucket + '400': + content: + application/json: + examples: + invalidJSONStringValue: + description: | + If the request body contains invalid JSON, InfluxDB returns `invalid` + with detail about the problem. + summary: Invalid JSON + value: + code: invalid + message: 'invalid json: invalid character ''\'''' looking for beginning of value' + schema: + $ref: '#/components/schemas/Error' + description: | + Bad Request. + '401': + $ref: '#/components/responses/AuthorizationError' + '403': + content: + application/json: + examples: + invalidRetention: + summary: | + The retention policy provided exceeds the max retention for the + organization. 
+                  value:
+                    code: forbidden
+                    message: provided retention exceeds orgs maximum retention duration
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: |
+            Forbidden.
+        '404':
+          content:
+            application/json:
+              examples:
+                notFound:
+                  summary: |
+                    The requested bucket wasn't found.
+                  value:
+                    code: not found
+                    message: bucket not found
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: |
+            Not found.
+            Bucket not found.
+        '500':
+          $ref: '#/components/responses/InternalServerError'
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Update a bucket
+      tags:
+        - Buckets
+      x-codeSamples:
+        - label: cURL
+          lang: Shell
+          source: |
+            curl --request PATCH "http://localhost:8086/api/v2/buckets/BUCKET_ID" \
+              --header "Authorization: Token INFLUX_TOKEN" \
+              --header "Accept: application/json" \
+              --header "Content-Type: application/json" \
+              --data '{
+                "name": "air_sensor",
+                "description": "bucket holding air sensor data",
+                "retentionRules": [
+                  {
+                    "type": "expire",
+                    "everySeconds": 2592000
+                  }
+                ]
+              }'
+  /api/v2/buckets/{bucketID}/labels:
+    get:
+      description: |
+        Lists all labels for a bucket.
+
+        Labels are objects that contain `labelID`, `name`, `description`, and `color`
+        key-value pairs. They may be used for grouping and filtering InfluxDB
+        resources.
+        Labels are also capable of grouping across different resources--for example,
+        you can apply a label named `air_sensor` to a bucket and a task to quickly
+        organize resources.
+
+        #### Related guides
+
+        - Use the [`/api/v2/labels` InfluxDB API endpoint](#tag/Labels) to retrieve and manage labels.
+        - [Manage labels in the InfluxDB UI](/influxdb/latest/visualize-data/labels/)
+      operationId: GetBucketsIDLabels
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: |
+            The ID of the bucket to retrieve labels for.
+          in: path
+          name: bucketID
+          required: true
+          schema:
+            type: string
+      responses:
+        '200':
+          content:
+            application/json:
+              examples:
+                successResponse:
+                  value:
+                    labels:
+                      - id: 09cbd068e7ebb000
+                        name: production_buckets
+                        orgID: INFLUX_ORG_ID
+                    links:
+                      self: /api/v2/labels
+              schema:
+                $ref: '#/components/schemas/LabelsResponse'
+          description: |
+            Success.
+            The response body contains a list of all labels for the bucket.
+        '400':
+          $ref: '#/components/responses/BadRequestError'
+        '401':
+          $ref: '#/components/responses/AuthorizationError'
+        '404':
+          $ref: '#/components/responses/ResourceNotFoundError'
+        '500':
+          $ref: '#/components/responses/InternalServerError'
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: List all labels for a bucket
+      tags:
+        - Buckets
+    post:
+      description: |
+        Adds a label to a bucket and returns the new label information.
+
+        Labels are objects that contain `labelID`, `name`, `description`, and `color`
+        key-value pairs. They may be used for grouping and filtering across one or
+        more kinds of **resources**--for example, you can apply a label named
+        `air_sensor` to a bucket and a task to quickly organize resources.
+
+        #### Limitations
+
+        - Before adding a label to a bucket, you must create the label if you
+          haven't already. To create a label with the InfluxDB API, send a `POST`
+          request to the [`/api/v2/labels` endpoint](#operation/PostLabels).
+
+        #### Related guides
+
+        - Use the [`/api/v2/labels` InfluxDB API endpoint](#tag/Labels) to retrieve and manage labels.
+        - [Manage labels in the InfluxDB UI](/influxdb/latest/visualize-data/labels/)
+      operationId: PostBucketsIDLabels
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: |
+            Bucket ID.
+            The ID of the bucket to label.
+ in: path + name: bucketID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/LabelMapping' + description: An object that contains a _`labelID`_ to add to the bucket. + required: true + responses: + '201': + content: + application/json: + examples: + successResponse: + value: + label: + id: 09cbd068e7ebb000 + name: production_buckets + orgID: INFLUX_ORG_ID + links: + self: /api/v2/labels + schema: + $ref: '#/components/schemas/LabelResponse' + description: | + Success. + The response body contains the label information. + '400': + $ref: '#/components/responses/BadRequestError' + examples: + invalidRequest: + summary: The `labelID` is missing from the request body. + value: + code: invalid + message: label id is required + '401': + $ref: '#/components/responses/AuthorizationError' + '404': + $ref: '#/components/responses/ResourceNotFoundError' + '422': + content: + application/json: + examples: + conflictingResource: + summary: | + Label already exists on the resource. + value: + code: conflict + message: Cannot add label, label already exists on resource + schema: + $ref: '#/components/schemas/Error' + description: | + Unprocessable entity. + Label already exists on the resource. 
+        '500':
+          $ref: '#/components/responses/InternalServerError'
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Add a label to a bucket
+      tags:
+        - Buckets
+      x-codeSamples:
+        - label: cURL
+          lang: Shell
+          source: |
+            curl --request POST "http://localhost:8086/api/v2/buckets/BUCKET_ID/labels" \
+              --header "Authorization: Token INFLUX_TOKEN" \
+              --header "Accept: application/json" \
+              --header "Content-Type: application/json" \
+              --data '{
+                "labelID": "09cbd068e7ebb000"
+              }'
+  /api/v2/buckets/{bucketID}/labels/{labelID}:
+    delete:
+      operationId: DeleteBucketsIDLabelsID
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: The bucket ID.
+          in: path
+          name: bucketID
+          required: true
+          schema:
+            type: string
+        - description: The ID of the label to delete.
+          in: path
+          name: labelID
+          required: true
+          schema:
+            type: string
+      responses:
+        '204':
+          description: Delete has been accepted
+        '404':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Bucket not found
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Delete a label from a bucket
+      tags:
+        - Buckets
+  /api/v2/buckets/{bucketID}/members:
+    get:
+      description: |
+        Lists all users for a bucket.
+
+        InfluxDB [users](/influxdb/latest/reference/glossary/#user) have
+        permission to access InfluxDB.
+
+        [Members](/influxdb/latest/reference/glossary/#member) are users in
+        an organization with access to the specified resource.
+
+        Use this endpoint to retrieve all users with access to a bucket.
+
+        #### Related guides
+
+        - [Manage users](/influxdb/latest/users/)
+        - [Manage members](/influxdb/latest/organizations/members/)
+      operationId: GetBucketsIDMembers
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: |
+            The ID of the bucket to retrieve users for.
+          in: path
+          name: bucketID
+          required: true
+          schema:
+            type: string
+      responses:
+        '200':
+          content:
+            application/json:
+              examples:
+                successResponse:
+                  value:
+                    links:
+                      self: /api/v2/buckets/37407e232b3911d8/members
+                    users:
+                      - id: 791df274afd48a83
+                        links:
+                          self: /api/v2/users/791df274afd48a83
+                        name: example_user_1
+                        role: member
+                        status: active
+                      - id: 09cfb87051cbe000
+                        links:
+                          self: /api/v2/users/09cfb87051cbe000
+                        name: example_user_2
+                        role: owner
+                        status: active
+              schema:
+                $ref: '#/components/schemas/ResourceMembers'
+          description: |
+            Success.
+            The response body contains a list of all users for the bucket.
+        '400':
+          $ref: '#/components/responses/BadRequestError'
+        '401':
+          $ref: '#/components/responses/AuthorizationError'
+        '404':
+          $ref: '#/components/responses/ResourceNotFoundError'
+        '500':
+          $ref: '#/components/responses/InternalServerError'
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: List all users with member privileges for a bucket
+      tags:
+        - Buckets
+    post:
+      description: |
+        Adds a user to a bucket and returns the new user information.
+
+        InfluxDB [users](/influxdb/latest/reference/glossary/#user) have
+        permission to access InfluxDB.
+
+        [Members](/influxdb/latest/reference/glossary/#member) are users in
+        an organization.
+
+        Use this endpoint to give a user member privileges to a bucket.
+
+        #### Related guides
+
+        - [Manage users](/influxdb/latest/users/)
+        - [Manage members](/influxdb/latest/organizations/members/)
+      operationId: PostBucketsIDMembers
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: |
+            The ID of the bucket to add a user to.
+          in: path
+          name: bucketID
+          required: true
+          schema:
+            type: string
+      requestBody:
+        content:
+          application/json:
+            schema:
+              $ref: '#/components/schemas/AddResourceMemberRequestBody'
+        description: A user to add as a member to the bucket.
+        required: true
+      responses:
+        '201':
+          content:
+            application/json:
+              examples:
+                successResponse:
+                  value:
+                    id: 09cfb87051cbe000
+                    links:
+                      self: /api/v2/users/09cfb87051cbe000
+                    name: example_user_1
+                    role: member
+                    status: active
+              schema:
+                $ref: '#/components/schemas/ResourceMember'
+          description: |
+            Success.
+            The response body contains the user information.
+        '400':
+          $ref: '#/components/responses/BadRequestError'
+          examples:
+            invalidRequest:
+              summary: The user `id` is missing from the request body.
+              value:
+                code: invalid
+                message: user id missing or invalid
+        '401':
+          $ref: '#/components/responses/AuthorizationError'
+        '404':
+          $ref: '#/components/responses/ResourceNotFoundError'
+        '500':
+          $ref: '#/components/responses/InternalServerError'
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Add a member to a bucket
+      tags:
+        - Buckets
+      x-codeSamples:
+        - label: cURL
+          lang: Shell
+          source: |
+            curl --request POST "http://localhost:8086/api/v2/buckets/BUCKET_ID/members" \
+              --header "Authorization: Token INFLUX_API_TOKEN" \
+              --header "Accept: application/json" \
+              --header "Content-Type: application/json" \
+              --data '{
+                "id": "09cfb87051cbe000"
+              }'
+  /api/v2/buckets/{bucketID}/members/{userID}:
+    delete:
+      description: |
+        Removes a member from a bucket.
+
+        Use this endpoint to remove a user's member privileges from a bucket. This
+        removes the user's `read` and `write` permissions for the bucket.
+
+        #### Related guides
+
+        - [Manage users](/influxdb/latest/users/)
+        - [Manage members](/influxdb/latest/organizations/members/)
+      operationId: DeleteBucketsIDMembersID
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: |
+            The ID of the user to remove.
+          in: path
+          name: userID
+          required: true
+          schema:
+            type: string
+        - description: |
+            The ID of the bucket to remove a user from.
+ in: path + name: bucketID + required: true + schema: + type: string + responses: + '204': + description: | + Success. + The user is no longer a member of the bucket. + '401': + $ref: '#/components/responses/AuthorizationError' + '404': + $ref: '#/components/responses/ResourceNotFoundError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Remove a member from a bucket + tags: + - Buckets + /api/v2/buckets/{bucketID}/owners: + get: + description: | + Lists all [owners](/influxdb/latest/reference/glossary/#owner) + of a bucket. + + Bucket owners have permission to delete buckets and remove user and member + permissions from the bucket. + + #### InfluxDB Cloud + + - Doesn't use `owner` and `member` roles. + Use [`/api/v2/authorizations`](#tag/Authorizations-(API-tokens)) to assign user permissions. + + #### Limitations + + - Owner permissions are separate from API token permissions. + - Owner permissions are used in the context of the InfluxDB UI. + + #### Required permissions + + - `read-orgs INFLUX_ORG_ID` + + *`INFLUX_ORG_ID`* is the ID of the organization that you want to retrieve a + list of owners for. + + #### Related endpoints + + - [Authorizations](#tag/Authorizations-(API-tokens)) + + #### Related guides + + - [Manage users](/influxdb/latest/users/) + operationId: GetBucketsIDOwners + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + The ID of the bucket to retrieve owners for. 
+ in: path + name: bucketID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + examples: + successResponse: + value: + links: + self: /api/v2/buckets/BUCKET_ID/owners + users: + - id: d88d182d91b0950f + links: + self: /api/v2/users/d88d182d91b0950f + name: example-owner + role: owner + status: active + schema: + $ref: '#/components/schemas/ResourceOwners' + description: | + Success. + The response body contains a list of all owners for the bucket. + '400': + $ref: '#/components/responses/BadRequestError' + '401': + $ref: '#/components/responses/AuthorizationError' + '404': + $ref: '#/components/responses/ResourceNotFoundError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all owners of a bucket + tags: + - Buckets + post: + description: | + Adds an owner to a bucket and returns the [owners](/influxdb/latest/reference/glossary/#owner) + with role and user detail. + + Use this endpoint to create a _resource owner_ for the bucket. + Bucket owners have permission to delete buckets and remove user and member + permissions from the bucket. + + #### InfluxDB Cloud + + - Doesn't use `owner` and `member` roles. + Use [`/api/v2/authorizations`](#tag/Authorizations-(API-tokens)) to assign user permissions. + + #### Limitations + + - Owner permissions are separate from API token permissions. + - Owner permissions are used in the context of the InfluxDB UI. + + #### Required permissions + + - `write-orgs INFLUX_ORG_ID` + *`INFLUX_ORG_ID`* is the ID of the organization that you want to add + an owner for. 
+
+        #### Related endpoints
+
+        - [Authorizations](#tag/Authorizations-(API-tokens))
+
+        #### Related guides
+
+        - [Manage users](/influxdb/latest/users/)
+      operationId: PostBucketsIDOwners
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: |
+            The ID of the bucket to add an owner for.
+          in: path
+          name: bucketID
+          required: true
+          schema:
+            type: string
+      requestBody:
+        content:
+          application/json:
+            examples:
+              successResponse:
+                value:
+                  id: d88d182d91b0950f
+                  links:
+                    self: /api/v2/users/d88d182d91b0950f
+                  name: example-user
+                  role: owner
+                  status: active
+            schema:
+              $ref: '#/components/schemas/AddResourceMemberRequestBody'
+        description: A user to add as an owner for the bucket.
+        required: true
+      responses:
+        '201':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/ResourceOwner'
+          description: |
+            Created.
+            The bucket `owner` role is assigned to the user.
+            The response body contains the resource owner with
+            role and user detail.
+        '400':
+          $ref: '#/components/responses/BadRequestError'
+          examples:
+            invalidRequest:
+              summary: The user `id` is missing from the request body.
+              value:
+                code: invalid
+                message: user id missing or invalid
+        '401':
+          $ref: '#/components/responses/AuthorizationError'
+        '404':
+          $ref: '#/components/responses/ResourceNotFoundError'
+        '500':
+          $ref: '#/components/responses/InternalServerError'
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Add an owner to a bucket
+      tags:
+        - Buckets
+      x-codeSamples:
+        - label: cURL
+          lang: Shell
+          source: |
+            curl --request POST "http://localhost:8086/api/v2/buckets/BUCKET_ID/owners" \
+              --header "Authorization: Token INFLUX_API_TOKEN" \
+              --header "Accept: application/json" \
+              --header "Content-Type: application/json" \
+              --data '{
+                "id": "09cfb87051cbe000"
+              }'
+  /api/v2/buckets/{bucketID}/owners/{userID}:
+    delete:
+      description: |
+        Removes an owner from a bucket.
+ + Use this endpoint to remove a user's `owner` role for a bucket. + + #### InfluxDB Cloud + + - Doesn't use `owner` and `member` roles. + Use [`/api/v2/authorizations`](#tag/Authorizations-(API-tokens)) to assign user permissions. + + #### Limitations + + - Owner permissions are separate from API token permissions. + - Owner permissions are used in the context of the InfluxDB UI. + + #### Required permissions + + - `write-orgs INFLUX_ORG_ID` + + *`INFLUX_ORG_ID`* is the ID of the organization that you want to remove an owner + from. + + #### Related endpoints + + - [Authorizations](#tag/Authorizations-(API-tokens)) + + #### Related guides + + - [Manage users](/influxdb/latest/users/) + operationId: DeleteBucketsIDOwnersID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + The ID of the owner to remove. + in: path + name: userID + required: true + schema: + type: string + - description: | + The ID of the bucket to remove an owner from. + in: path + name: bucketID + required: true + schema: + type: string + responses: + '204': + description: | + Success. + The user is no longer an owner of the bucket. + '401': + $ref: '#/components/responses/AuthorizationError' + '404': + $ref: '#/components/responses/ResourceNotFoundError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Remove an owner from a bucket + tags: + - Buckets + /api/v2/checks: + get: + operationId: GetChecks + parameters: + - $ref: '#/components/parameters/TraceSpan' + - $ref: '#/components/parameters/Offset' + - $ref: '#/components/parameters/Limit' + - description: Only show checks that belong to a specific organization ID. 
+ in: query + name: orgID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Checks' + description: A list of checks + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all checks + tags: + - Checks + post: + operationId: CreateCheck + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/PostCheck' + description: Check to create + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/Check' + description: Check created + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add new check + tags: + - Checks + /api/v2/checks/{checkID}: + delete: + operationId: DeleteChecksID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The check ID. + in: path + name: checkID + required: true + schema: + type: string + responses: + '204': + description: Delete has been accepted + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: The check was not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Delete a check + tags: + - Checks + get: + operationId: GetChecksID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The check ID. 
+ in: path + name: checkID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Check' + description: The check requested + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Retrieve a check + tags: + - Checks + patch: + operationId: PatchChecksID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The check ID. + in: path + name: checkID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/CheckPatch' + description: Check update to apply + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Check' + description: An updated check + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: The check was not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Update a check + tags: + - Checks + put: + operationId: PutChecksID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The check ID. 
+ in: path + name: checkID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/Check' + description: Check update to apply + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Check' + description: An updated check + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: The check was not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Update a check + tags: + - Checks + /api/v2/checks/{checkID}/labels: + get: + operationId: GetChecksIDLabels + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The check ID. + in: path + name: checkID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelsResponse' + description: A list of all labels for a check + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all labels for a check + tags: + - Checks + post: + operationId: PostChecksIDLabels + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The check ID. 
+ in: path + name: checkID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/LabelMapping' + description: Label to add + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelResponse' + description: The label was added to the check + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add a label to a check + tags: + - Checks + /api/v2/checks/{checkID}/labels/{labelID}: + delete: + operationId: DeleteChecksIDLabelsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The check ID. + in: path + name: checkID + required: true + schema: + type: string + - description: The ID of the label to delete. + in: path + name: labelID + required: true + schema: + type: string + responses: + '204': + description: Delete has been accepted + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Check or label not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Delete label from a check + tags: + - Checks + /api/v2/checks/{checkID}/query: + get: + operationId: GetChecksIDQuery + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The check ID. 
+ in: path + name: checkID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/FluxResponse' + description: The check query requested + '400': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Invalid request + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Check not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Retrieve a check query + tags: + - Checks + /api/v2/config: + get: + description: | + Returns the active runtime configuration of the InfluxDB instance. + + In InfluxDB v2.2+, use this endpoint to view your active runtime configuration, + including flags and environment variables. + + #### Related guides + + - [View your runtime server configuration](/influxdb/latest/reference/config-options/#view-your-runtime-server-configuration) + operationId: GetConfig + parameters: + - $ref: '#/components/parameters/TraceSpan' + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Config' + description: | + Success. + The response body contains the active runtime configuration of the InfluxDB instance. + '401': + $ref: '#/components/responses/GeneralServerError' + default: + $ref: '#/components/responses/GeneralServerError' + summary: Retrieve runtime configuration + tags: + - Config + - System information endpoints + /api/v2/dashboards: + get: + description: | + Lists [dashboards](/influxdb/latest/reference/glossary/#dashboard). + + #### Related guides + + - [Manage dashboards](/influxdb/latest/visualize-data/dashboards/). 
+ operationId: GetDashboards + parameters: + - $ref: '#/components/parameters/TraceSpan' + - $ref: '#/components/parameters/Offset' + - $ref: '#/components/parameters/Limit' + - $ref: '#/components/parameters/Descending' + - description: A user ID. Only returns [dashboards](/influxdb/latest/reference/glossary/#dashboard) where the specified user has the `owner` role. + in: query + name: owner + schema: + type: string + - description: The column to sort by. + in: query + name: sortBy + schema: + enum: + - ID + - CreatedAt + - UpdatedAt + type: string + - description: | + A list of dashboard IDs. + Returns only the specified [dashboards](/influxdb/latest/reference/glossary/#dashboard). + If you specify `id` and `owner`, only `id` is used. + in: query + name: id + schema: + items: + type: string + type: array + - description: | + An organization ID. + Only returns [dashboards](/influxdb/latest/reference/glossary/#dashboard) that belong to the specified + [organization](/influxdb/latest/reference/glossary/#organization). + in: query + name: orgID + schema: + type: string + - description: | + An organization name. + Only returns [dashboards](/influxdb/latest/reference/glossary/#dashboard) that belong to the specified + [organization](/influxdb/latest/reference/glossary/#organization). + in: query + name: org + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Dashboards' + description: Success. The response body contains dashboards. 
+ default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List dashboards + tags: + - Dashboards + post: + operationId: PostDashboards + parameters: + - $ref: '#/components/parameters/TraceSpan' + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/CreateDashboardRequest' + description: Dashboard to create + required: true + responses: + '201': + content: + application/json: + schema: + oneOf: + - $ref: '#/components/schemas/Dashboard' + - $ref: '#/components/schemas/DashboardWithViewProperties' + description: Success. The dashboard is created. + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Create a dashboard + tags: + - Dashboards + /api/v2/dashboards/{dashboardID}: + delete: + operationId: DeleteDashboardsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the dashboard to delete. + in: path + name: dashboardID + required: true + schema: + type: string + responses: + '204': + description: Delete has been accepted + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Dashboard not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Delete a dashboard + tags: + - Dashboards + get: + operationId: GetDashboardsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the dashboard to retrieve. + in: path + name: dashboardID + required: true + schema: + type: string + - description: If `properties`, includes the cell view properties in the response.
+ in: query + name: include + required: false + schema: + enum: + - properties + type: string + responses: + '200': + content: + application/json: + schema: + oneOf: + - $ref: '#/components/schemas/Dashboard' + - $ref: '#/components/schemas/DashboardWithViewProperties' + description: Retrieve a single dashboard + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Dashboard not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Retrieve a dashboard + tags: + - Dashboards + patch: + operationId: PatchDashboardsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the dashboard to update. + in: path + name: dashboardID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + properties: + cells: + $ref: '#/components/schemas/CellWithViewProperties' + description: optional, when provided will replace all existing cells with the cells provided + description: + description: optional, when provided will replace the description + type: string + name: + description: optional, when provided will replace the name + type: string + title: PatchDashboardRequest + type: object + description: Patching of a dashboard + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Dashboard' + description: Updated dashboard + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Dashboard not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Update a dashboard + tags: + - Dashboards + /api/v2/dashboards/{dashboardID}/cells: + post: + operationId: PostDashboardsIDCells + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the dashboard to update. 
+ in: path + name: dashboardID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/CreateCell' + description: Cell that will be added + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/Cell' + description: Cell successfully added + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Dashboard not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Create a dashboard cell + tags: + - Cells + - Dashboards + put: + description: Replaces all cells in a dashboard. This is used primarily to update the positional information of all cells. + operationId: PutDashboardsIDCells + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the dashboard to update. + in: path + name: dashboardID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/Cells' + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/Dashboard' + description: Replaced dashboard cells + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Dashboard not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Replace cells in a dashboard + tags: + - Cells + - Dashboards + /api/v2/dashboards/{dashboardID}/cells/{cellID}: + delete: + operationId: DeleteDashboardsIDCellsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the dashboard to delete a cell from. + in: path + name: dashboardID + required: true + schema: + type: string + - description: The ID of the cell to delete.
+ in: path + name: cellID + required: true + schema: + type: string + responses: + '204': + description: Cell successfully deleted + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Cell or dashboard not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Delete a dashboard cell + tags: + - Cells + - Dashboards + patch: + description: Updates the non-positional information related to a cell. Updates to a single cell's positional data could cause grid conflicts. + operationId: PatchDashboardsIDCellsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the dashboard to update. + in: path + name: dashboardID + required: true + schema: + type: string + - description: The ID of the cell to update. + in: path + name: cellID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/CellUpdate' + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Cell' + description: Updated dashboard cell + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Cell or dashboard not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Update the non-positional information related to a cell + tags: + - Cells + - Dashboards + /api/v2/dashboards/{dashboardID}/cells/{cellID}/view: + get: + operationId: GetDashboardsIDCellsIDView + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The dashboard ID. + in: path + name: dashboardID + required: true + schema: + type: string + - description: The cell ID.
+ in: path + name: cellID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/View' + description: A dashboard cells view + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Cell or dashboard not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Retrieve the view for a cell + tags: + - Cells + - Dashboards + - Views + patch: + operationId: PatchDashboardsIDCellsIDView + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the dashboard to update. + in: path + name: dashboardID + required: true + schema: + type: string + - description: The ID of the cell to update. + in: path + name: cellID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/View' + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/View' + description: Updated cell view + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Cell or dashboard not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Update the view for a cell + tags: + - Cells + - Dashboards + - Views + /api/v2/dashboards/{dashboardID}/labels: + get: + operationId: GetDashboardsIDLabels + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The dashboard ID. 
+ in: path + name: dashboardID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelsResponse' + description: A list of all labels for a dashboard + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all labels for a dashboard + tags: + - Dashboards + post: + operationId: PostDashboardsIDLabels + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The dashboard ID. + in: path + name: dashboardID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/LabelMapping' + description: Label to add + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelResponse' + description: The label added to the dashboard + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add a label to a dashboard + tags: + - Dashboards + /api/v2/dashboards/{dashboardID}/labels/{labelID}: + delete: + operationId: DeleteDashboardsIDLabelsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The dashboard ID. + in: path + name: dashboardID + required: true + schema: + type: string + - description: The ID of the label to delete. 
+ in: path + name: labelID + required: true + schema: + type: string + responses: + '204': + description: Delete has been accepted + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Dashboard not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Delete a label from a dashboard + tags: + - Dashboards + /api/v2/dashboards/{dashboardID}/members: + get: + operationId: GetDashboardsIDMembers + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The dashboard ID. + in: path + name: dashboardID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/ResourceMembers' + description: A list of users who have member privileges for a dashboard + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all dashboard members + tags: + - Dashboards + post: + operationId: PostDashboardsIDMembers + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The dashboard ID. + in: path + name: dashboardID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/AddResourceMemberRequestBody' + description: User to add as member + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/ResourceMember' + description: Added to dashboard members + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add a member to a dashboard + tags: + - Dashboards + /api/v2/dashboards/{dashboardID}/members/{userID}: + delete: + operationId: DeleteDashboardsIDMembersID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the member to remove. 
+ in: path + name: userID + required: true + schema: + type: string + - description: The dashboard ID. + in: path + name: dashboardID + required: true + schema: + type: string + responses: + '204': + description: Member removed + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Remove a member from a dashboard + tags: + - Dashboards + /api/v2/dashboards/{dashboardID}/owners: + get: + operationId: GetDashboardsIDOwners + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The dashboard ID. + in: path + name: dashboardID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/ResourceOwners' + description: A list of users who have owner privileges for a dashboard + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all dashboard owners + tags: + - Dashboards + post: + operationId: PostDashboardsIDOwners + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The dashboard ID. + in: path + name: dashboardID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/AddResourceMemberRequestBody' + description: User to add as owner + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/ResourceOwner' + description: Added to dashboard owners + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add an owner to a dashboard + tags: + - Dashboards + /api/v2/dashboards/{dashboardID}/owners/{userID}: + delete: + operationId: DeleteDashboardsIDOwnersID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the owner to remove. 
+ in: path + name: userID + required: true + schema: + type: string + - description: The dashboard ID. + in: path + name: dashboardID + required: true + schema: + type: string + responses: + '204': + description: Owner removed + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Remove an owner from a dashboard + tags: + - Dashboards + /api/v2/dbrps: + get: + description: | + Lists database retention policy (DBRP) mappings. + + #### Related guide + + - [Database and retention policy mapping](/influxdb/latest/reference/api/influxdb-1x/dbrp/) + operationId: GetDBRPs + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + An organization ID. + Only returns DBRP mappings for the specified organization. + in: query + name: orgID + schema: + type: string + - description: | + An organization name. + Only returns DBRP mappings for the specified organization. + in: query + name: org + schema: + type: string + - description: | + A DBRP mapping ID. + Only returns the specified DBRP mapping. + in: query + name: id + schema: + type: string + - description: | + A bucket ID. + Only returns DBRP mappings that belong to the specified bucket. + in: query + name: bucketID + schema: + type: string + - description: Specifies filtering on default + in: query + name: default + schema: + type: boolean + - description: | + A database. + Only returns DBRP mappings that belong to the 1.x database. + in: query + name: db + schema: + type: string + - description: | + A [retention policy](/influxdb/v1.8/concepts/glossary/#retention-policy-rp). + Specifies the 1.x retention policy to filter on.
+ in: query + name: rp + schema: + type: string + responses: + '200': + content: + application/json: + examples: + successResponse: + value: + content: + - bucketID: 4d4d9d5b61dee751 + database: example_database_1 + default: true + id: 0a3cbb5dd526a000 + orgID: bea7ea952287f70d + retention_policy: autogen + - bucketID: 4d4d9d5b61dee751 + database: example_database_2 + default: false + id: 0a3cbcde20e38000 + orgID: bea7ea952287f70d + retention_policy: example_retention_policy + schema: + $ref: '#/components/schemas/DBRPs' + description: Success. The response body contains a list of database retention policy mappings. + '400': + content: + application/json: + examples: + invalidRequest: + description: | + The query parameters contain invalid values. + value: + code: invalid + message: invalid ID + schema: + $ref: '#/components/schemas/Error' + description: Bad request. The request has one or more invalid parameters. + '401': + $ref: '#/components/responses/AuthorizationError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List database retention policy mappings + tags: + - DBRPs + post: + description: | + Creates a database retention policy (DBRP) mapping and returns the mapping. + + Use this endpoint to add InfluxDB 1.x API compatibility to your + InfluxDB Cloud or InfluxDB OSS 2.x buckets. Your buckets must contain a + DBRP mapping in order to query and write using the InfluxDB 1.x API. + + #### Related guide + + - [Database and retention policy mapping](/influxdb/latest/reference/api/influxdb-1x/dbrp/) + operationId: PostDBRP + parameters: + - $ref: '#/components/parameters/TraceSpan' + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/DBRPCreate' + description: | + The database retention policy mapping to add.
+ + Note that _`retention_policy`_ is a required parameter in the request body. + The value of _`retention_policy`_ can be any arbitrary `string` name or + value, with the default value commonly set as `autogen`. + The value of _`retention_policy`_ isn't a [retention policy](/influxdb/latest/reference/glossary/#retention-policy-rp). + required: true + responses: + '201': + content: + application/json: + examples: + successResponse: + value: + bucketID: 4d4d9d5b61dee751 + database: example_database + default: true + id: 0a3cbb5dd526a000 + orgID: bea7ea952287f70d + retention_policy: autogen + schema: + $ref: '#/components/schemas/DBRP' + description: Created. The response body contains the database retention policy mapping. + '400': + content: + application/json: + examples: + invalidRequest: + description: | + The query parameters contain invalid values. + value: + code: invalid + message: invalid ID + schema: + $ref: '#/components/schemas/Error' + description: Bad request. The mapping in the request has one or more invalid IDs.
+ '401': + $ref: '#/components/responses/AuthorizationError' + '404': + $ref: '#/components/responses/ResourceNotFoundError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add a database retention policy mapping + tags: + - DBRPs + x-codeSamples: + - label: 'cURL: create a database retention policy mapping' + lang: Shell + source: | + curl --request POST \ + "http://localhost:8086/api/v2/dbrps/" \ + --header 'Content-type: application/json' \ + --header "Authorization: Token INFLUXDB_TOKEN" \ + --data-binary @- << EOF + { + "bucketID": "INFLUXDB_BUCKET_ID", + "orgID": "INFLUXDB_ORG_ID", + "database": "database_name", + "default": true, + "retention_policy": "example_retention_policy_name" + } + EOF + /api/v2/dbrps/{dbrpID}: + delete: + description: | + Deletes the specified database retention policy (DBRP) mapping. + + #### Related guide + + - [Database and retention policy mapping](/influxdb/latest/reference/api/influxdb-1x/dbrp/) + operationId: DeleteDBRPID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + An organization ID. + Specifies the organization that owns the DBRP mapping. + in: query + name: orgID + schema: + type: string + - description: | + An organization name. + Specifies the organization that owns the DBRP mapping. + in: query + name: org + schema: + type: string + - description: | + A DBRP mapping ID. + Specifies the DBRP mapping to delete. + in: path + name: dbrpID + required: true + schema: + type: string + responses: + '204': + description: Success. The delete is accepted. + '400': + content: + application/json: + examples: + invalidRequest: + description: | + The query parameters contain invalid values. + value: + code: invalid + message: invalid ID + schema: + $ref: '#/components/schemas/Error' + description: Bad Request.
Query parameters contain invalid values. + '401': + $ref: '#/components/responses/AuthorizationError' + '404': + $ref: '#/components/responses/ResourceNotFoundError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Delete a database retention policy + tags: + - DBRPs + get: + description: | + Retrieves the specified retention policy (DBRP) mapping. + + #### Related guide + + - [Database and retention policy mapping](/influxdb/latest/reference/api/influxdb-1x/dbrp/) + operationId: GetDBRPsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + An organization ID. + Specifies the organization that owns the DBRP mapping. + in: query + name: orgID + schema: + type: string + - description: | + An organization name. + Specifies the organization that owns the DBRP mapping. + in: query + name: org + schema: + type: string + - description: | + A DBRP mapping ID. + Specifies the DBRP mapping. + in: path + name: dbrpID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + examples: + successResponse: + value: + content: + bucketID: 4d4d9d5b61dee751 + database: example_database_1 + default: true + id: 0a3cbb5dd526a000 + orgID: bea7ea952287f70d + retention_policy: autogen + schema: + $ref: '#/components/schemas/DBRPGet' + description: Success. The response body contains the DBRP mapping. + '400': + content: + application/json: + examples: + invalidRequest: + description: | + The query parameters contain invalid values. + value: + code: invalid + message: invalid ID + schema: + $ref: '#/components/schemas/Error' + description: Bad Request. Query parameters contain invalid values. 
+ '401': + $ref: '#/components/responses/AuthorizationError' + '404': + $ref: '#/components/responses/ResourceNotFoundError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Retrieve a database retention policy mapping + tags: + - DBRPs + patch: + operationId: PatchDBRPID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + An organization ID. + Specifies the organization that owns the DBRP mapping. + in: query + name: orgID + schema: + type: string + - description: | + An organization name. + Specifies the organization that owns the DBRP mapping. + in: query + name: org + schema: + type: string + - description: | + A DBRP mapping ID. + Specifies the DBRP mapping. + in: path + name: dbrpID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/DBRPUpdate' + description: | + Updates the database retention policy (DBRP) mapping and returns the mapping. + + Use this endpoint to modify the _retention policy_ (`retention_policy` property) of a DBRP mapping. + + #### Related guide + + - [Database and retention policy mapping](/influxdb/latest/reference/api/influxdb-1x/dbrp/) + required: true + responses: + '200': + content: + application/json: + examples: + successResponse: + value: + content: + bucketID: 4d4d9d5b61dee751 + database: example_database + default: false + id: 0a3cbb5dd526a000 + orgID: bea7ea952287f70d + retention_policy: example_retention_policy + schema: + $ref: '#/components/schemas/DBRPGet' + description: An updated mapping + '400': + content: + application/json: + examples: + invalidRequest: + description: | + The query parameters contain invalid values. + value: + code: invalid + message: invalid ID + schema: + $ref: '#/components/schemas/Error' + description: Bad Request. 
Query parameters contain invalid values. + '401': + $ref: '#/components/responses/AuthorizationError' + '404': + $ref: '#/components/responses/ResourceNotFoundError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Update a database retention policy mapping + tags: + - DBRPs + x-codeSamples: + - label: 'cURL: Update a DBRP mapping' + lang: Shell + source: | + curl --request PATCH \ + "http://localhost:8086/api/v2/dbrp/DBRP_ID" \ + --header 'Content-type: application/json' \ + --header "Authorization: Token INFLUX_API_TOKEN" \ + --data-binary @- << EOF + { + "default": true, + "retention_policy": "example_retention_policy_name" + } + EOF + /debug/pprof/all: + get: + description: | + Collects samples and returns reports for the following [Go runtime profiles](https://pkg.go.dev/runtime/pprof): + + - **allocs**: All past memory allocations + - **block**: Stack traces that led to blocking on synchronization primitives + - **cpu**: (Optional) Program counters sampled from the executing stack. + Include by passing the `cpu` query parameter with a [duration](/influxdb/latest/reference/glossary/#duration) value. + Equivalent to the report from [`GET /debug/pprof/profile?seconds=NUMBER_OF_SECONDS`](#operation/GetDebugPprofProfile). + - **goroutine**: All current goroutines + - **heap**: Memory allocations for live objects + - **mutex**: Holders of contended mutexes + - **threadcreate**: Stack traces that led to the creation of new OS threads + operationId: GetDebugPprofAllProfiles + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + Collects and returns CPU profiling data for the specified [duration](/influxdb/latest/reference/glossary/#duration). 
+ in: query + name: cpu + schema: + externalDocs: + description: InfluxDB duration + url: https://docs.influxdata.com/influxdb/latest/reference/glossary/#duration + format: duration + type: string + responses: + '200': + content: + application/octet-stream: + schema: + description: | + GZIP compressed TAR file (`.tar.gz`) that contains + [Go runtime profile](https://pkg.go.dev/runtime/pprof) reports. + externalDocs: + description: Golang pprof package + url: https://pkg.go.dev/net/http/pprof + format: binary + type: string + description: | + [Go runtime profile](https://pkg.go.dev/runtime/pprof) reports. + default: + $ref: '#/components/responses/GeneralServerError' + description: Unexpected error + servers: [] + summary: Retrieve all runtime profiles + tags: + - Debug + - System information endpoints + x-codeSamples: + - label: 'Shell: Get all profiles' + lang: Shell + source: | + # Download and extract a `tar.gz` of all profiles after 10 seconds of CPU sampling. + + curl "http://localhost:8086/debug/pprof/all?cpu=10s" | tar -xz + + # x profiles/cpu.pb.gz + # x profiles/goroutine.pb.gz + # x profiles/block.pb.gz + # x profiles/mutex.pb.gz + # x profiles/heap.pb.gz + # x profiles/allocs.pb.gz + # x profiles/threadcreate.pb.gz + + # Analyze a profile. + + go tool pprof profiles/heap.pb.gz + - label: 'Shell: Get all profiles except CPU' + lang: Shell + source: | + # Download and extract a `tar.gz` of all profiles except CPU. + + curl http://localhost:8086/debug/pprof/all | tar -xz + + # x profiles/goroutine.pb.gz + # x profiles/block.pb.gz + # x profiles/mutex.pb.gz + # x profiles/heap.pb.gz + # x profiles/allocs.pb.gz + # x profiles/threadcreate.pb.gz + + # Analyze a profile. + + go tool pprof profiles/heap.pb.gz + /debug/pprof/allocs: + get: + description: | + Returns a [Go runtime profile](https://pkg.go.dev/runtime/pprof) report of + all past memory allocations. 
+ **allocs** is the same as the **heap** profile, + but changes the default [pprof](https://pkg.go.dev/runtime/pprof) + display to __-alloc_space__, + the total number of bytes allocated since the program began (including garbage-collected bytes). + operationId: GetDebugPprofAllocs + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + - `0`: (Default) Return the report as a gzip-compressed protocol buffer. + - `1`: Return a response body with the report formatted as human-readable text. + The report contains comments that translate addresses to function names and line numbers for debugging. + + `debug=1` is mutually exclusive with the `seconds` query parameter. + in: query + name: debug + schema: + enum: + - 0 + - 1 + format: int64 + type: integer + - description: | + Number of seconds to collect statistics. + + `seconds` is mutually exclusive with `debug=1`. + in: query + name: seconds + schema: + format: int64 + type: string + responses: + '200': + content: + application/octet-stream: + schema: + description: | + [Go runtime profile](https://pkg.go.dev/runtime/pprof) report in protocol buffer format. + externalDocs: + description: Golang pprof package + url: https://pkg.go.dev/net/http/pprof + format: binary + type: string + text/plain: + schema: + description: | + Response body contains a report formatted in plain text. + The report contains comments that translate addresses to + function names and line numbers for debugging. + externalDocs: + description: Golang pprof package + url: https://pkg.go.dev/net/http/pprof + format: Go runtime profile + type: string + description: | + [Go runtime profile](https://pkg.go.dev/runtime/pprof) report compatible + with [pprof](https://github.com/google/pprof) analysis and visualization tools. + If debug is enabled (`?debug=1`), response body contains a human-readable profile. 
+ default: + $ref: '#/components/responses/GeneralServerError' + description: Unexpected error + servers: [] + summary: Retrieve the memory allocations runtime profile + tags: + - Debug + - System information endpoints + x-codeSamples: + - label: 'Shell: go tool pprof' + lang: Shell + source: | + # Analyze the profile in interactive mode. + + go tool pprof http://localhost:8086/debug/pprof/allocs + + # `pprof` returns the following prompt: + # Entering interactive mode (type "help" for commands, "o" for options) + # (pprof) + + # At the prompt, get the top N memory allocations. + + (pprof) top10 + /debug/pprof/block: + get: + description: | + Collects samples and returns a [Go runtime profile](https://pkg.go.dev/runtime/pprof) + report of stack traces that led to blocking on synchronization primitives. + operationId: GetDebugPprofBlock + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + - `0`: (Default) Return the report as a gzip-compressed protocol buffer. + - `1`: Return a response body with the report formatted as human-readable text. + The report contains comments that translate addresses to function names and line numbers for debugging. + + `debug=1` is mutually exclusive with the `seconds` query parameter. + in: query + name: debug + schema: + enum: + - 0 + - 1 + format: int64 + type: integer + - description: | + Number of seconds to collect statistics. + + `seconds` is mutually exclusive with `debug=1`. + in: query + name: seconds + schema: + format: int64 + type: string + responses: + '200': + content: + application/octet-stream: + schema: + description: | + [Go runtime profile](https://pkg.go.dev/runtime/pprof) report in protocol buffer format. + externalDocs: + description: Golang pprof package + url: https://pkg.go.dev/net/http/pprof + format: binary + type: string + text/plain: + schema: + description: | + Response body contains a report formatted in plain text. 
+ The report contains comments that translate addresses to + function names and line numbers for debugging. + externalDocs: + description: Golang pprof package + url: https://pkg.go.dev/net/http/pprof + format: Go runtime profile + type: string + description: | + [Go runtime profile](https://pkg.go.dev/runtime/pprof) report compatible + with [pprof](https://github.com/google/pprof) analysis and visualization tools. + If debug is enabled (`?debug=1`), response body contains a human-readable profile. + default: + $ref: '#/components/responses/GeneralServerError' + description: Unexpected error + servers: [] + summary: Retrieve the block runtime profile + tags: + - Debug + - System information endpoints + x-codeSamples: + - label: 'Shell: go tool pprof' + lang: Shell + source: | + # Analyze the profile in interactive mode. + + go tool pprof http://localhost:8086/debug/pprof/block + + # `pprof` returns the following prompt: + # Entering interactive mode (type "help" for commands, "o" for options) + # (pprof) + + # At the prompt, get the top N entries. + + (pprof) top10 + /debug/pprof/cmdline: + get: + description: | + Returns the command line that invoked InfluxDB. + operationId: GetDebugPprofCmdline + parameters: + - $ref: '#/components/parameters/TraceSpan' + responses: + '200': + content: + text/plain: + schema: + format: Command line + type: string + description: Command line invocation. + default: + $ref: '#/components/responses/GeneralServerError' + description: Unexpected error + servers: [] + summary: Retrieve the command line invocation + tags: + - Debug + - System information endpoints + /debug/pprof/goroutine: + get: + description: | + Collects statistics and returns a [Go runtime profile](https://pkg.go.dev/runtime/pprof) + report of all current goroutines. + operationId: GetDebugPprofGoroutine + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + - `0`: (Default) Return the report as a gzip-compressed protocol buffer. 
+ - `1`: Return a response body with the report formatted as + human-readable text with comments that translate addresses to + function names and line numbers for debugging. + + `debug=1` is mutually exclusive with the `seconds` query parameter. + in: query + name: debug + schema: + enum: + - 0 + - 1 + format: int64 + type: integer + - description: | + Number of seconds to collect statistics. + + `seconds` is mutually exclusive with `debug=1`. + in: query + name: seconds + schema: + format: int64 + type: string + responses: + '200': + content: + application/octet-stream: + schema: + description: | + [Go runtime profile](https://pkg.go.dev/runtime/pprof) report in protocol buffer format. + externalDocs: + description: Golang pprof package + url: https://pkg.go.dev/net/http/pprof + format: binary + type: string + text/plain: + schema: + description: | + Response body contains a report formatted in plain text. + The report contains comments that translate addresses to + function names and line numbers for debugging. + externalDocs: + description: Golang pprof package + url: https://pkg.go.dev/net/http/pprof + format: Go runtime profile + type: string + description: | + [Go runtime profile](https://pkg.go.dev/runtime/pprof) report compatible + with [pprof](https://github.com/google/pprof) analysis and visualization tools. + If debug is enabled (`?debug=1`), response body contains a human-readable profile. + default: + $ref: '#/components/responses/GeneralServerError' + description: Unexpected error + servers: [] + summary: Retrieve the goroutines runtime profile + tags: + - Debug + - System information endpoints + x-codeSamples: + - label: 'Shell: go tool pprof' + lang: Shell + source: | + # Analyze the profile in interactive mode. + + go tool pprof http://localhost:8086/debug/pprof/goroutine + + # `pprof` returns the following prompt: + # Entering interactive mode (type "help" for commands, "o" for options) + # (pprof) + + # At the prompt, get the top N entries. 
+ + (pprof) top10 + /debug/pprof/heap: + get: + description: | + Collects statistics and returns a [Go runtime profile](https://pkg.go.dev/runtime/pprof) + report of memory allocations for live objects. + + To run **garbage collection** before sampling, + pass the `gc` query parameter with a value of `1`. + operationId: GetDebugPprofHeap + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + - `0`: (Default) Return the report as a gzip-compressed protocol buffer. + - `1`: Return a response body with the report formatted as human-readable text. + The report contains comments that translate addresses to function names and line numbers for debugging. + + `debug=1` is mutually exclusive with the `seconds` query parameter. + in: query + name: debug + schema: + enum: + - 0 + - 1 + format: int64 + type: integer + - description: | + Number of seconds to collect statistics. + + `seconds` is mutually exclusive with `debug=1`. + in: query + name: seconds + schema: + format: int64 + type: string + - description: | + - `0`: (Default) Don't force garbage collection before sampling. + - `1`: Force garbage collection before sampling. + in: query + name: gc + schema: + enum: + - 0 + - 1 + format: int64 + type: integer + responses: + '200': + content: + application/octet-stream: + schema: + description: | + [Go runtime profile](https://pkg.go.dev/runtime/pprof) report in protocol buffer format.
+ externalDocs: + description: Golang pprof package + url: https://pkg.go.dev/net/http/pprof + format: binary + type: string + text/plain: + examples: + profileDebugResponse: + summary: Profile in plain text + value: "heap profile: 12431: 137356528 [149885081: 846795139976] @ heap/8192\n23: 17711104 [46: 35422208] @ 0x4c6df65 0x4ce03ec 0x4cdf3c5 0x4c6f4db 0x4c9edbc 0x4bdefb3 0x4bf822a 0x567d158 0x567ced9 0x406c0a1\n#\t0x4c6df64\tgithub.com/influxdata/influxdb/v2/tsdb/engine/tsm1.(*entry).add+0x1a4\t\t\t\t\t/Users/me/github/influxdb/tsdb/engine/tsm1/cache.go:97\n#\t0x4ce03eb\tgithub.com/influxdata/influxdb/v2/tsdb/engine/tsm1.(*partition).write+0x2ab\t\t\t\t/Users/me/github/influxdb/tsdb/engine/tsm1/ring.go:229\n#\t0x4cdf3c4\tgithub.com/influxdata/influxdb/v2/tsdb/engine/tsm1.(*ring).write+0xa4\t\t\t\t\t/Users/me/github/influxdb/tsdb/engine/tsm1/ring.go:95\n#\t0x4c6f4da\tgithub.com/influxdata/influxdb/v2/tsdb/engine/tsm1.(*Cache).WriteMulti+0x31a\t\t\t\t/Users/me/github/influxdb/tsdb/engine/tsm1/cache.go:343\n" + schema: + description: | + Response body contains a report formatted in plain text. + The report contains comments that translate addresses to + function names and line numbers for debugging. + externalDocs: + description: Golang pprof package + url: https://pkg.go.dev/net/http/pprof + format: Go runtime profile + type: string + description: | + [Go runtime profile](https://pkg.go.dev/runtime/pprof) report compatible + with [pprof](https://github.com/google/pprof) analysis and visualization tools. + If debug is enabled (`?debug=1`), response body contains a human-readable profile. + default: + $ref: '#/components/responses/GeneralServerError' + description: Unexpected error + servers: [] + summary: Retrieve the heap runtime profile + tags: + - Debug + - System information endpoints + x-codeSamples: + - label: 'Shell: go tool pprof' + lang: Shell + source: | + # Analyze the profile in interactive mode. 
+ + go tool pprof http://localhost:8086/debug/pprof/heap + + # `pprof` returns the following prompt: + # Entering interactive mode (type "help" for commands, "o" for options) + # (pprof) + + # At the prompt, get the top N memory-intensive nodes. + + (pprof) top10 + + # pprof displays the list: + # Showing nodes accounting for 142.46MB, 85.43% of 166.75MB total + # Dropped 895 nodes (cum <= 0.83MB) + # Showing top 10 nodes out of 143 + /debug/pprof/mutex: + get: + description: | + Collects statistics and returns a [Go runtime profile](https://pkg.go.dev/runtime/pprof) report of + lock contentions. + The profile contains stack traces of holders of contended mutual exclusions (mutexes). + operationId: GetDebugPprofMutex + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + - `0`: (Default) Return the report as a gzip-compressed protocol buffer. + - `1`: Return a response body with the report formatted as human-readable text. + The report contains comments that translate addresses to function names and line numbers for debugging. + + `debug=1` is mutually exclusive with the `seconds` query parameter. + in: query + name: debug + schema: + enum: + - 0 + - 1 + format: int64 + type: integer + - description: | + Number of seconds to collect statistics. + + `seconds` is mutually exclusive with `debug=1`. + in: query + name: seconds + schema: + format: int64 + type: string + responses: + '200': + content: + application/octet-stream: + schema: + description: | + [Go runtime profile](https://pkg.go.dev/runtime/pprof) report in protocol buffer format. + externalDocs: + description: Golang pprof package + url: https://pkg.go.dev/net/http/pprof + format: binary + type: string + text/plain: + schema: + description: | + Response body contains a report formatted in plain text. + The report contains comments that translate addresses to + function names and line numbers for debugging. 
+ externalDocs: + description: Golang pprof package + url: https://pkg.go.dev/net/http/pprof + format: Go runtime profile + type: string + description: | + [Go runtime profile](https://pkg.go.dev/runtime/pprof) report compatible + with [pprof](https://github.com/google/pprof) analysis and visualization tools. + If debug is enabled (`?debug=1`), response body contains a human-readable profile. + default: + $ref: '#/components/responses/GeneralServerError' + description: Unexpected error + servers: [] + summary: Retrieve the mutual exclusion (mutex) runtime profile + tags: + - Debug + - System information endpoints + x-codeSamples: + - label: 'Shell: go tool pprof' + lang: Shell + source: | + # Analyze the profile in interactive mode. + + go tool pprof http://localhost:8086/debug/pprof/mutex + + # `pprof` returns the following prompt: + # Entering interactive mode (type "help" for commands, "o" for options) + # (pprof) + + # At the prompt, get the top N entries. + + (pprof) top10 + /debug/pprof/profile: + get: + description: | + Collects statistics and returns a [Go runtime profile](https://pkg.go.dev/runtime/pprof) + report of program counters on the executing stack. + operationId: GetDebugPprofProfile + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: Number of seconds to collect profile data. Default is `30` seconds. + in: query + name: seconds + schema: + format: int64 + type: string + responses: + '200': + content: + application/octet-stream: + schema: + description: | + [Go runtime profile](https://pkg.go.dev/runtime/pprof) report in protocol buffer format. + externalDocs: + description: Golang pprof package + url: https://pkg.go.dev/net/http/pprof + format: binary + type: string + description: | + [Go runtime profile](https://pkg.go.dev/runtime/pprof) report compatible + with [pprof](https://github.com/google/pprof) analysis and visualization tools. 
+ default: + $ref: '#/components/responses/GeneralServerError' + description: Unexpected error + servers: [] + summary: Retrieve the CPU runtime profile + tags: + - Debug + - System information endpoints + x-codeSamples: + - label: 'Shell: go tool pprof' + lang: Shell + source: | + # Download the profile report. + + curl http://localhost:8086/debug/pprof/profile -o cpu + + # Analyze the profile in interactive mode. + + go tool pprof ./cpu + + # At the prompt, get the top N functions most often running + # or waiting during the sample period. + + (pprof) top10 + /debug/pprof/threadcreate: + get: + description: | + Collects statistics and returns a [Go runtime profile](https://pkg.go.dev/runtime/pprof) + report of stack traces that led to the creation of new OS threads. + operationId: GetDebugPprofThreadCreate + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + - `0`: (Default) Return the report as a gzip-compressed protocol buffer. + - `1`: Return a response body with the report formatted as human-readable text. + The report contains comments that translate addresses to function names and line numbers for debugging. + + `debug=1` is mutually exclusive with the `seconds` query parameter. + in: query + name: debug + schema: + enum: + - 0 + - 1 + format: int64 + type: integer + - description: | + Number of seconds to collect statistics. + + `seconds` is mutually exclusive with `debug=1`. + in: query + name: seconds + schema: + format: int64 + type: string + responses: + '200': + content: + application/octet-stream: + schema: + description: | + [Go runtime profile](https://pkg.go.dev/runtime/pprof) report in protocol buffer format. 
+ externalDocs: + description: Golang pprof package + url: https://pkg.go.dev/net/http/pprof + format: binary + type: string + text/plain: + examples: + profileDebugResponse: + summary: Profile in plain text + value: "threadcreate profile: total 26\n25 @\n#\t0x0\n\n1 @ 0x403dda8 0x403e54b 0x403e810 0x403a90c 0x406c0a1\n#\t0x403dda7\truntime.allocm+0xc7\t\t\t/Users/me/.gvm/gos/go1.17/src/runtime/proc.go:1877\n#\t0x403e54a\truntime.newm+0x2a\t\t\t/Users/me/.gvm/gos/go1.17/src/runtime/proc.go:2201\n#\t0x403e80f\truntime.startTemplateThread+0x8f\t/Users/me/.gvm/gos/go1.17/src/runtime/proc.go:2271\n#\t0x403a90b\truntime.main+0x1cb\t\t\t/Users/me/.gvm/gos/go1.17/src/runtime/proc.go:234\n" + schema: + description: | + Response body contains a report formatted in plain text. + The report contains comments that translate addresses to + function names and line numbers for debugging. + externalDocs: + description: Golang pprof package + url: https://pkg.go.dev/net/http/pprof + format: Go runtime profile + type: string + description: | + [Go runtime profile](https://pkg.go.dev/runtime/pprof) report compatible + with [pprof](https://github.com/google/pprof) analysis and visualization tools. + If debug is enabled (`?debug=1`), response body contains a human-readable profile. + default: + $ref: '#/components/responses/GeneralServerError' + description: Unexpected error + servers: [] + summary: Retrieve the threadcreate runtime profile + tags: + - Debug + - System information endpoints + x-codeSamples: + - label: 'Shell: go tool pprof' + lang: Shell + source: | + # Analyze the profile in interactive mode. + + go tool pprof http://localhost:8086/debug/pprof/threadcreate + + # `pprof` returns the following prompt: + # Entering interactive mode (type "help" for commands, "o" for options) + # (pprof) + + # At the prompt, get the top N entries. 
+ + (pprof) top10 + /debug/pprof/trace: + get: + description: | + Collects profile data and returns trace execution events for the current program. + operationId: GetDebugPprofTrace + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: Number of seconds to collect profile data. + in: query + name: seconds + schema: + format: int64 + type: string + responses: + '200': + content: + application/octet-stream: + schema: + externalDocs: + description: Golang trace package + url: https://pkg.go.dev/runtime/trace + format: binary + type: string + description: | + [Trace file](https://pkg.go.dev/runtime/trace) compatible + with the [Golang `trace` command](https://pkg.go.dev/cmd/trace). + default: + $ref: '#/components/responses/GeneralServerError' + description: Unexpected error + servers: [] + summary: Retrieve the runtime execution trace + tags: + - Debug + - System information endpoints + x-codeSamples: + - label: 'Shell: go tool trace' + lang: Shell + source: | + # Download the trace file. + + curl http://localhost:8086/debug/pprof/trace -o trace + + # Analyze the trace. + + go tool trace ./trace + /api/v2/delete: + post: + description: | + Deletes data from a bucket. + + Use this endpoint to delete points from a bucket in a specified time range. + + #### InfluxDB Cloud + + - Does the following when you send a delete request: + + 1. Validates the request and queues the delete. + 2. If queued, responds with _success_ (HTTP `2xx` status code); _error_ otherwise. + 3. Handles the delete asynchronously and reaches eventual consistency. + + To ensure that InfluxDB Cloud handles writes and deletes in the order you request them, + wait for a success response (HTTP `2xx` status code) before you send the next request. + + Because writes and deletes are asynchronous, your change might not yet be readable + when you receive the response. 
+ + #### InfluxDB OSS + + - Validates the request, handles the delete synchronously, + and then responds with success or failure. + + #### Required permissions + + - `write-buckets` or `write-bucket BUCKET_ID`. + + *`BUCKET_ID`* is the ID of the destination bucket. + + #### Rate limits (with InfluxDB Cloud) + + `write` rate limits apply. + For more information, see [limits and adjustable quotas](/influxdb/cloud/account-management/limits/). + + #### Related guides + + - [Delete data](/influxdb/latest/write-data/delete-data/) + - Learn how to use [delete predicate syntax](/influxdb/latest/reference/syntax/delete-predicate/). + - Learn how InfluxDB handles [deleted tags](https://docs.influxdata.com/flux/v0.x/stdlib/influxdata/influxdb/schema/measurementtagkeys/) + and [deleted fields](https://docs.influxdata.com/flux/v0.x/stdlib/influxdata/influxdb/schema/measurementfieldkeys/). + operationId: PostDelete + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + An organization name or ID. + + #### InfluxDB Cloud + + - Doesn't use the `org` parameter or `orgID` parameter. + - Deletes data from the bucket in the organization + associated with the authorization (API token). + + #### InfluxDB OSS + + - Requires either the `org` parameter or the `orgID` parameter. + - Deletes data from the bucket in the specified organization. + - If you pass both `orgID` and `org`, they must both be valid. + in: query + name: org + schema: + description: The organization name or ID. + type: string + - description: | + A bucket name or ID. + Specifies the bucket to delete data from. + If you pass both `bucket` and `bucketID`, `bucketID` takes precedence. + in: query + name: bucket + schema: + description: The bucket name or ID. + type: string + - description: | + An organization ID. + + #### InfluxDB Cloud + + - Doesn't use the `org` parameter or `orgID` parameter. + - Deletes data from the bucket in the organization + associated with the authorization (API token). 
+ + #### InfluxDB OSS + + - Requires either the `org` parameter or the `orgID` parameter. + - Deletes data from the bucket in the specified organization. + - If you pass both `orgID` and `org`, they must both be valid. + in: query + name: orgID + schema: + description: The organization ID. + type: string + - description: | + A bucket ID. + Specifies the bucket to delete data from. + If you pass both `bucket` and `bucketID`, `bucketID` takes precedence. + in: query + name: bucketID + schema: + description: The bucket ID. + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/DeletePredicateRequest' + description: | + Time range parameters and an optional **delete predicate expression**. + + To select points to delete within the specified time range, pass a + **delete predicate expression** in the `predicate` property of the request body. + If you don't pass a `predicate`, InfluxDB deletes all data with timestamps + in the specified time range. + + #### Related guides + + - [Delete data](/influxdb/latest/write-data/delete-data/) + - Learn how to use [delete predicate syntax](/influxdb/latest/reference/syntax/delete-predicate/). + required: true + responses: + '204': + description: | + Success. + + #### InfluxDB Cloud + + - Validated and queued the request. + - Handles the delete asynchronously - the deletion might not have completed yet. + + An HTTP `2xx` status code acknowledges that the write or delete is queued. + To ensure that InfluxDB Cloud handles writes and deletes in the order you request them, + wait for a response before you send the next request. + + Because writes are asynchronous, data might not yet be written + when you receive the response. + + #### InfluxDB OSS + + - Deleted the data. 
+ '400': + content: + application/json: + examples: + orgNotFound: + summary: Organization not found + value: + code: invalid + message: 'failed to decode request body: organization not found' + schema: + $ref: '#/components/schemas/Error' + description: | + Bad request. + The response body contains detail about the error. + + #### InfluxDB OSS + + - Returns this error if the `org` parameter or `orgID` parameter doesn't match an organization. + '401': + $ref: '#/components/responses/AuthorizationError' + '404': + $ref: '#/components/responses/ResourceNotFoundError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + $ref: '#/components/responses/GeneralServerError' + summary: Delete data + tags: + - Data I/O endpoints + - Delete + x-codeSamples: + - label: cURL + lang: Shell + source: | + curl --request POST "INFLUX_URL/api/v2/delete?org=INFLUX_ORG&bucket=INFLUX_BUCKET" \ + --header 'Authorization: Token INFLUX_API_TOKEN' \ + --header 'Content-Type: application/json' \ + --data '{ + "start": "2020-03-01T00:00:00Z", + "stop": "2020-11-14T00:00:00Z", + "predicate": "tag1=\"value1\" and (tag2=\"value2\" and tag3!=\"value3\")" + }' + /api/v2/flags: + get: + description: | + Retrieves the feature flag key-value pairs configured for the InfluxDB + instance. + _Feature flags_ are configuration options used to develop and test + experimental InfluxDB features and are intended for internal use only. + + This endpoint represents the first step in the following three-step process + to configure feature flags: + + 1. Use [token authentication](#section/Authentication/TokenAuthentication) or a [user session](#tag/Signin) with this endpoint to retrieve + feature flags and their values. + 2. Follow the instructions to [enable, disable, or override values for feature flags](/influxdb/latest/reference/config-options/#feature-flags). + 3. 
**Optional**: To confirm that your change is applied, do one of the following: + + - Send a request to this endpoint to retrieve the current feature flag values. + - Send a request to the [`GET /api/v2/config` endpoint](#operation/GetConfig) to retrieve the + current runtime server configuration. + + #### Related guides + + - [InfluxDB configuration options](/influxdb/latest/reference/config-options/) + operationId: GetFlags + parameters: + - $ref: '#/components/parameters/TraceSpan' + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Flags' + description: Success. The response body contains feature flags. + '401': + $ref: '#/components/responses/AuthorizationError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Retrieve feature flags + tags: + - Config + /health: + get: + description: Returns the health of the instance. + operationId: GetHealth + parameters: + - $ref: '#/components/parameters/TraceSpan' + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/HealthCheck' + description: | + The instance is healthy. + The response body contains the health check items and status. + '503': + content: + application/json: + schema: + $ref: '#/components/schemas/HealthCheck' + description: The instance is unhealthy. + default: + $ref: '#/components/responses/GeneralServerError' + description: Unexpected error + servers: [] + summary: Retrieve the health of the instance + tags: + - Health + - System information endpoints + /api/v2/labels: + get: + operationId: GetLabels + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The organization ID. + in: query + name: orgID + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelsResponse' + description: Success. 
The response body contains a list of labels. + '500': + $ref: '#/components/responses/InternalServerError' + default: + $ref: '#/components/responses/GeneralServerError' + summary: List all labels + tags: + - Labels + post: + operationId: PostLabels + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/LabelCreateRequest' + description: The label to create. + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelResponse' + description: Success. The label was created. + '500': + $ref: '#/components/responses/InternalServerError' + default: + $ref: '#/components/responses/GeneralServerError' + summary: Create a label + tags: + - Labels + /api/v2/labels/{labelID}: + delete: + operationId: DeleteLabelsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the label to delete. + in: path + name: labelID + required: true + schema: + type: string + responses: + '204': + description: Success. The delete was accepted. + '401': + $ref: '#/components/responses/AuthorizationError' + '404': + $ref: '#/components/responses/ResourceNotFoundError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + $ref: '#/components/responses/GeneralServerError' + summary: Delete a label + tags: + - Labels + get: + operationId: GetLabelsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the label to retrieve. + in: path + name: labelID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelResponse' + description: Success. The response body contains the label. 
+ '500': + $ref: '#/components/responses/InternalServerError' + default: + $ref: '#/components/responses/GeneralServerError' + summary: Retrieve a label + tags: + - Labels + patch: + operationId: PatchLabelsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the label to update. + in: path + name: labelID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/LabelUpdate' + description: A label update. + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelResponse' + description: Success. The response body contains the updated label. + '401': + $ref: '#/components/responses/AuthorizationError' + '404': + $ref: '#/components/responses/ResourceNotFoundError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + $ref: '#/components/responses/GeneralServerError' + summary: Update a label + tags: + - Labels + /api/v2/maps/mapToken: + get: + operationId: getMapboxToken + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Token' + description: Temporary token for Mapbox. + '401': + $ref: '#/components/responses/ServerError' + '500': + $ref: '#/components/responses/ServerError' + default: + $ref: '#/components/responses/ServerError' + summary: Get a mapbox token + /api/v2/me: + get: + operationId: GetMe + parameters: + - $ref: '#/components/parameters/TraceSpan' + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/UserResponse' + description: Success. The response body contains the currently authenticated user. 
+ '401': + $ref: '#/components/responses/AuthorizationError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + $ref: '#/components/responses/GeneralServerError' + summary: Retrieve the currently authenticated user + tags: + - Users + /api/v2/me/password: + put: + description: | + Updates the password for the signed-in [user](/influxdb/latest/reference/glossary/#user). + + This endpoint represents the third step in the following three-step process to let a + user with a user session update their password: + 1. Pass the user's [Basic authentication credentials](#section/Authentication/BasicAuthentication) to the `POST /api/v2/signin` + endpoint to create a user session and generate a session cookie. + 2. From the response in the first step, extract the session cookie (`Set-Cookie`) header. + 3. Pass the following in a request to the `PUT /api/v2/me/password` endpoint: + - The `Set-Cookie` header from the second step + - The `Authorization Basic` header with the user's _Basic authentication_ credentials + - `{"password": "NEW_PASSWORD"}` in the request body + + #### InfluxDB Cloud + + - Doesn't let you manage user passwords through the API. + Use the InfluxDB Cloud user interface (UI) to update your password. + + #### Related endpoints + + - [Signin](#tag/Signin) + - [Signout](#tag/Signout) + - [Users](#tag/Users) + + #### Related guides + + - [InfluxDB Cloud - Change your password](/influxdb/cloud/account-management/change-password/) + - [InfluxDB OSS - Change your password](/influxdb/latest/users/change-password/) + operationId: PutMePassword + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + The user session cookie for the + [user](/influxdb/latest/reference/glossary/#user) + signed in with [Basic authentication credentials](#section/Authentication/BasicAuthentication). 
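The three-step flow described above can be sketched with cURL. This is a minimal sketch, not an official sample: `USERNAME`, `PASSWORD`, and `NEW_PASSWORD` are placeholders, and the host assumes a default local InfluxDB OSS instance.

```shell
# Steps 1-2: sign in with Basic authentication credentials and store
# the session cookie that InfluxDB returns in the `Set-Cookie` header.
curl --request POST "http://localhost:8086/api/v2/signin" \
  --user "USERNAME:PASSWORD" \
  --cookie-jar ./influxdb-session.txt

# Step 3: pass the session cookie and the Basic authentication
# credentials with the new password in the request body.
curl --request PUT "http://localhost:8086/api/v2/me/password" \
  --user "USERNAME:PASSWORD" \
  --cookie ./influxdb-session.txt \
  --header "Content-Type: application/json" \
  --data '{"password": "NEW_PASSWORD"}'
```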
+ + #### Related guides + + - [Manage users](/influxdb/v2.6/users/) + example: influxdb-oss-session=19aaaZZZGOvP2GGryXVT2qYftlFKu3bIopurM6AGFow1yF1abhtOlbHfsc-d8gozZFC_6WxmlQIAwLMW5xs523w== + in: cookie + name: influxdb-oss-session + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/PasswordResetBody' + description: The new password. + required: true + responses: + '204': + description: Success. The password is updated. + '400': + description: | + Bad request. + + #### InfluxDB Cloud + + - Doesn't let you manage user passwords through the API; always responds with this status. + + #### InfluxDB OSS + + - Doesn't understand a value passed in the request. + '401': + $ref: '#/components/responses/AuthorizationError' + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unsuccessful authentication + security: + - BasicAuthentication: [] + summary: Update a password + tags: + - Users + /metrics: + get: + description: | + Returns metrics about the workload performance of an InfluxDB instance. + + Use this endpoint to get performance, resource, and usage metrics. + + #### Related guides + + - For the list of metrics categories, see [InfluxDB OSS metrics](/influxdb/latest/reference/internals/metrics/). + - Learn how to use InfluxDB to [scrape Prometheus metrics](/influxdb/latest/write-data/developer-tools/scrape-prometheus-metrics/). + - Learn how InfluxDB [parses the Prometheus exposition format](/influxdb/latest/reference/prometheus-metrics/). + operationId: GetMetrics + parameters: + - $ref: '#/components/parameters/TraceSpan' + responses: + '200': + content: + text/plain: + examples: + expositionResponse: + summary: Metrics in plain text + value: | + # HELP go_threads Number of OS threads created. 
+ # TYPE go_threads gauge + go_threads 19 + # HELP http_api_request_duration_seconds Time taken to respond to HTTP request + # TYPE http_api_request_duration_seconds histogram + http_api_request_duration_seconds_bucket{handler="platform",method="GET",path="/:fallback_path",response_code="200",status="2XX",user_agent="curl",le="0.005"} 4 + http_api_request_duration_seconds_bucket{handler="platform",method="GET",path="/:fallback_path",response_code="200",status="2XX",user_agent="curl",le="0.01"} 4 + http_api_request_duration_seconds_bucket{handler="platform",method="GET",path="/:fallback_path",response_code="200",status="2XX",user_agent="curl",le="0.025"} 5 + schema: + externalDocs: + description: Prometheus exposition formats + url: https://prometheus.io/docs/instrumenting/exposition_formats + format: Prometheus text-based exposition + type: string + description: | + Success. The response body contains metrics in + [Prometheus plain-text exposition format](https://prometheus.io/docs/instrumenting/exposition_formats). + Metrics contain a name, an optional set of key-value pairs, and a value. + + The following descriptors precede each metric: + + - `HELP`: description of the metric + - `TYPE`: [Prometheus metric type](https://prometheus.io/docs/concepts/metric_types/) (`counter`, `gauge`, `histogram`, or `summary`) + default: + $ref: '#/components/responses/GeneralServerError' + description: Unexpected error + servers: [] + summary: Retrieve workload performance metrics + tags: + - Metrics + - System information endpoints + /api/v2/notificationEndpoints: + get: + operationId: GetNotificationEndpoints + parameters: + - $ref: '#/components/parameters/TraceSpan' + - $ref: '#/components/parameters/Offset' + - $ref: '#/components/parameters/Limit' + - description: Only show notification endpoints that belong to a specific organization ID.
+ in: query + name: orgID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/NotificationEndpoints' + description: A list of notification endpoints + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all notification endpoints + tags: + - NotificationEndpoints + post: + operationId: CreateNotificationEndpoint + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/PostNotificationEndpoint' + description: Notification endpoint to create + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/NotificationEndpoint' + description: Notification endpoint created + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add a notification endpoint + tags: + - NotificationEndpoints + /api/v2/notificationEndpoints/{endpointID}: + delete: + operationId: DeleteNotificationEndpointsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The notification endpoint ID. + in: path + name: endpointID + required: true + schema: + type: string + responses: + '204': + description: Delete has been accepted + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: The endpoint was not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Delete a notification endpoint + tags: + - NotificationEndpoints + get: + operationId: GetNotificationEndpointsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The notification endpoint ID. 
+ in: path + name: endpointID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/NotificationEndpoint' + description: The notification endpoint requested + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Retrieve a notification endpoint + tags: + - NotificationEndpoints + patch: + operationId: PatchNotificationEndpointsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The notification endpoint ID. + in: path + name: endpointID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/NotificationEndpointUpdate' + description: Check update to apply + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/NotificationEndpoint' + description: An updated notification endpoint + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: The notification endpoint was not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Update a notification endpoint + tags: + - NotificationEndpoints + put: + operationId: PutNotificationEndpointsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The notification endpoint ID. 
+ in: path + name: endpointID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/NotificationEndpoint' + description: A new notification endpoint to replace the existing endpoint with + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/NotificationEndpoint' + description: An updated notification endpoint + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: The notification endpoint was not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Update a notification endpoint + tags: + - NotificationEndpoints + /api/v2/notificationEndpoints/{endpointID}/labels: + get: + operationId: GetNotificationEndpointsIDLabels + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The notification endpoint ID. + in: path + name: endpointID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelsResponse' + description: A list of all labels for a notification endpoint + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all labels for a notification endpoint + tags: + - NotificationEndpoints + post: + operationId: PostNotificationEndpointIDLabels + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The notification endpoint ID. 
+ in: path + name: endpointID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/LabelMapping' + description: Label to add + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelResponse' + description: The label was added to the notification endpoint + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add a label to a notification endpoint + tags: + - NotificationEndpoints + /api/v2/notificationEndpoints/{endpointID}/labels/{labelID}: + delete: + operationId: DeleteNotificationEndpointsIDLabelsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The notification endpoint ID. + in: path + name: endpointID + required: true + schema: + type: string + - description: The ID of the label to delete. + in: path + name: labelID + required: true + schema: + type: string + responses: + '204': + description: Delete has been accepted + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Endpoint or label not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Delete a label from a notification endpoint + tags: + - NotificationEndpoints + /api/v2/notificationRules: + get: + operationId: GetNotificationRules + parameters: + - $ref: '#/components/parameters/TraceSpan' + - $ref: '#/components/parameters/Offset' + - $ref: '#/components/parameters/Limit' + - description: Only show notification rules that belong to a specific organization ID. + in: query + name: orgID + required: true + schema: + type: string + - description: Only show notification rules that belong to a specific check ID.
+ in: query + name: checkID + schema: + type: string + - description: Only return notification rules that "would match" statuses which contain the tag key value pairs provided. + in: query + name: tag + schema: + example: env:prod + pattern: ^[a-zA-Z0-9_]+:[a-zA-Z0-9_]+$ + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/NotificationRules' + description: A list of notification rules + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all notification rules + tags: + - NotificationRules + post: + operationId: CreateNotificationRule + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/PostNotificationRule' + description: Notification rule to create + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/NotificationRule' + description: Notification rule created + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add a notification rule + tags: + - NotificationRules + /api/v2/notificationRules/{ruleID}: + delete: + operationId: DeleteNotificationRulesID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The notification rule ID. + in: path + name: ruleID + required: true + schema: + type: string + responses: + '204': + description: Delete has been accepted + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: The check was not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Delete a notification rule + tags: + - NotificationRules + get: + operationId: GetNotificationRulesID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The notification rule ID. 
+ in: path + name: ruleID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/NotificationRule' + description: The notification rule requested + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Retrieve a notification rule + tags: + - NotificationRules + patch: + operationId: PatchNotificationRulesID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The notification rule ID. + in: path + name: ruleID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/NotificationRuleUpdate' + description: Notification rule update to apply + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/NotificationRule' + description: An updated notification rule + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: The notification rule was not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Update a notification rule + tags: + - NotificationRules + put: + operationId: PutNotificationRulesID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The notification rule ID. 
+ in: path + name: ruleID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/NotificationRule' + description: Notification rule update to apply + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/NotificationRule' + description: An updated notification rule + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: The notification rule was not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Update a notification rule + tags: + - NotificationRules + /api/v2/notificationRules/{ruleID}/labels: + get: + operationId: GetNotificationRulesIDLabels + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The notification rule ID. + in: path + name: ruleID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelsResponse' + description: A list of all labels for a notification rule + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all labels for a notification rule + tags: + - NotificationRules + post: + operationId: PostNotificationRuleIDLabels + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The notification rule ID. 
+ in: path + name: ruleID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/LabelMapping' + description: Label to add + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelResponse' + description: The label was added to the notification rule + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add a label to a notification rule + tags: + - NotificationRules + /api/v2/notificationRules/{ruleID}/labels/{labelID}: + delete: + operationId: DeleteNotificationRulesIDLabelsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The notification rule ID. + in: path + name: ruleID + required: true + schema: + type: string + - description: The ID of the label to delete. + in: path + name: labelID + required: true + schema: + type: string + responses: + '204': + description: Delete has been accepted + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Rule or label not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Delete label from a notification rule + tags: + - NotificationRules + /api/v2/notificationRules/{ruleID}/query: + get: + operationId: GetNotificationRulesIDQuery + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The notification rule ID. 
+ in: path + name: ruleID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/FluxResponse' + description: The notification rule query requested + '400': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Invalid request + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Notification rule not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Retrieve a notification rule query + tags: + - Rules + /api/v2/orgs: + get: + description: | + Lists [organizations](/influxdb/latest/reference/glossary/#organization). + + To limit which organizations are returned, pass query parameters in your request. + If no query parameters are passed, InfluxDB returns all organizations up to the default `limit`. + + #### InfluxDB Cloud + + - Only returns the organization that owns the token passed in the request. + + #### Related guides + + - [View organizations](/influxdb/latest/organizations/view-orgs/) + operationId: GetOrgs + parameters: + - $ref: '#/components/parameters/TraceSpan' + - $ref: '#/components/parameters/Offset' + - $ref: '#/components/parameters/Limit' + - $ref: '#/components/parameters/Descending' + - description: | + An organization name. + Only returns the specified organization. + in: query + name: org + schema: + type: string + - description: | + An organization ID. + Only returns the specified organization. + in: query + name: orgID + schema: + type: string + - description: | + A user ID. + Only returns organizations where the specified user is a member or owner.
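For example, the following cURL sketch (with placeholder `INFLUX_API_TOKEN` and `USER_ID` values, against a default local instance) lists only the organizations that a given user belongs to:

```shell
# List organizations where the specified user is a member or owner.
curl --request GET "http://localhost:8086/api/v2/orgs?userID=USER_ID" \
  --header "Authorization: Token INFLUX_API_TOKEN" \
  --header "Accept: application/json"
```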
+ in: query + name: userID + schema: + type: string + responses: + '200': + content: + application/json: + examples: + successResponse: + value: + links: + self: /api/v2/orgs + orgs: + - createdAt: '2022-07-17T23:00:30.778487Z' + description: Example InfluxDB organization + id: INFLUX_ORG_ID + links: + buckets: /api/v2/buckets?org=INFLUX_ORG + dashboards: /api/v2/dashboards?org=INFLUX_ORG + labels: /api/v2/orgs/INFLUX_ORG_ID/labels + logs: /api/v2/orgs/INFLUX_ORG_ID/logs + members: /api/v2/orgs/INFLUX_ORG_ID/members + owners: /api/v2/orgs/INFLUX_ORG_ID/owners + secrets: /api/v2/orgs/INFLUX_ORG_ID/secrets + self: /api/v2/orgs/INFLUX_ORG_ID + tasks: /api/v2/tasks?org=InfluxData + name: INFLUX_ORG + updatedAt: '2022-07-17T23:00:30.778487Z' + schema: + $ref: '#/components/schemas/Organizations' + description: Success. The response body contains a list of organizations. + '400': + $ref: '#/components/responses/BadRequestError' + '401': + $ref: '#/components/responses/AuthorizationError' + '404': + $ref: '#/components/responses/ResourceNotFoundError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + $ref: '#/components/responses/GeneralServerError' + summary: List organizations + tags: + - Organizations + - Security and access endpoints + post: + description: | + Creates an [organization](/influxdb/latest/reference/glossary/#organization) + and returns the newly created organization. + + #### InfluxDB Cloud + + - Doesn't allow you to use this endpoint to create organizations. + + #### Related guides + + - [Manage organizations](/influxdb/latest/organizations) + operationId: PostOrgs + parameters: + - $ref: '#/components/parameters/TraceSpan' + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/PostOrganizationRequest' + description: The organization to create. 
+ required: true + responses: + '201': + content: + application/json: + examples: + successResponse: + value: + createdAt: '2022-08-24T23:05:52.881317Z' + description: '' + id: INFLUX_ORG_ID + links: + buckets: /api/v2/buckets?org=INFLUX_ORG + dashboards: /api/v2/dashboards?org=INFLUX_ORG + labels: /api/v2/orgs/INFLUX_ORG_ID/labels + logs: /api/v2/orgs/INFLUX_ORG_ID/logs + members: /api/v2/orgs/INFLUX_ORG_ID/members + owners: /api/v2/orgs/INFLUX_ORG_ID/owners + secrets: /api/v2/orgs/INFLUX_ORG_ID/secrets + self: /api/v2/orgs/INFLUX_ORG_ID + tasks: /api/v2/tasks?org=INFLUX_ORG + name: INFLUX_ORG + updatedAt: '2022-08-24T23:05:52.881318Z' + schema: + $ref: '#/components/schemas/Organization' + description: Created. The response body contains the organization information. + '400': + $ref: '#/components/responses/BadRequestError' + examples: + invalidRequest: + summary: The `name` field is missing from the request body. + value: + code: invalid + message: org name is empty + '401': + $ref: '#/components/responses/AuthorizationError' + '404': + $ref: '#/components/responses/ResourceNotFoundError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Create an organization + tags: + - Organizations + x-codeSamples: + - label: cURL + lang: Shell + source: | + curl --request POST "http://localhost:8086/api/v2/orgs" \ + --header "Authorization: Token INFLUX_API_TOKEN" \ + --header "Accept: application/json" \ + --header "Content-Type: application/json" \ + --data '{ + "name": "INFLUX_ORG", + "description": "Example InfluxDB organization" + }' + /api/v2/orgs/{orgID}: + delete: + description: | + Deletes an organization. + + Deleting an organization from InfluxDB Cloud can't be undone. + Once deleted, all data associated with the organization is removed. + + #### InfluxDB Cloud + + - Does the following when you send a delete request: + + 1.
Validates the request and queues the delete. + 2. Returns an HTTP `204` status code if queued; _error_ otherwise. + 3. Handles the delete asynchronously. + + #### InfluxDB OSS + + - Validates the request, handles the delete synchronously, + and then responds with success or failure. + + #### Limitations + + - Only one organization can be deleted per request. + + #### Related guides + + - [Delete organizations](/influxdb/latest/organizations/delete-orgs/) + operationId: DeleteOrgsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + The ID of the organization to delete. + in: path + name: orgID + required: true + schema: + type: string + responses: + '204': + description: | + Success. + + #### InfluxDB Cloud + - The organization is queued for deletion. + + #### InfluxDB OSS + - The organization is deleted. + '400': + $ref: '#/components/responses/BadRequestError' + '401': + $ref: '#/components/responses/AuthorizationError' + '404': + content: + application/json: + examples: + notFound: + summary: | + The requested organization was not found. + value: + code: not found + message: org not found + schema: + $ref: '#/components/schemas/Error' + description: | + Not found. + InfluxDB can't find the organization. + '500': + $ref: '#/components/responses/InternalServerError' + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Delete an organization + tags: + - Organizations + get: + description: | + Retrieves an organization. + + Use this endpoint to retrieve information for a specific organization. + + #### Related guides + + - [View organizations](/influxdb/latest/organizations/view-orgs/) + operationId: GetOrgsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + The ID of the organization to retrieve. 
+ in: path + name: orgID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Organization' + description: | + Success. + The response body contains the organization information. + '401': + $ref: '#/components/responses/AuthorizationError' + '404': + content: + application/json: + examples: + notFound: + summary: | + The requested organization wasn't found. + value: + code: not found + message: organization not found + schema: + $ref: '#/components/schemas/Error' + description: | + Not found. + Organization not found. + '500': + $ref: '#/components/responses/InternalServerError' + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Retrieve an organization + tags: + - Organizations + - Security and access endpoints + patch: + description: | + Updates an organization. + + Use this endpoint to update properties + (`name`, `description`) of an organization. + + Updating an organization’s name affects all resources that reference the + organization by name, including the following: + + - Queries + - Dashboards + - Tasks + - Telegraf configurations + - Templates + + If you change an organization name, be sure to update the organization name + in these resources as well. + + #### Related Guides + + - [Update an organization](/influxdb/latest/organizations/update-org/) + operationId: PatchOrgsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + The ID of the organization to update. + in: path + name: orgID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/PatchOrganizationRequest' + description: The organization update to apply. + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Organization' + description: Success. 
The response body contains the updated organization. + '400': + $ref: '#/components/responses/BadRequestError' + '401': + $ref: '#/components/responses/AuthorizationError' + '404': + $ref: '#/components/responses/ResourceNotFoundError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Update an organization + tags: + - Organizations + /api/v2/orgs/{orgID}/members: + get: + description: | + Lists all users that belong to an organization. + + InfluxDB [users](/influxdb/latest/reference/glossary/#user) have + permission to access InfluxDB. + + [Members](/influxdb/latest/reference/glossary/#member) are users + within the organization. + + #### InfluxDB Cloud + + - Doesn't use `owner` and `member` roles. + Use [`/api/v2/authorizations`](#tag/Authorizations-(API-tokens)) to assign user permissions. + + #### Limitations + + - Member permissions are separate from API token permissions. + - Member permissions are used in the context of the InfluxDB UI. + + #### Required permissions + + - `read-orgs INFLUX_ORG_ID` + + *`INFLUX_ORG_ID`* is the ID of the organization that you want to retrieve + members for. + + #### Related guides + + - [Manage users](/influxdb/latest/users/) + - [Manage members](/influxdb/latest/organizations/members/) + operationId: GetOrgsIDMembers + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + The ID of the organization to retrieve users for. 
+ in: path + name: orgID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + examples: + successResponse: + value: + links: + self: /api/v2/orgs/055aa4783aa38398/members + users: + - id: 791df274afd48a83 + links: + self: /api/v2/users/791df274afd48a83 + name: example_user_1 + role: member + status: active + - id: 09cfb87051cbe000 + links: + self: /api/v2/users/09cfb87051cbe000 + name: example_user_2 + role: owner + status: active + schema: + $ref: '#/components/schemas/ResourceMembers' + description: | + Success. + The response body contains a list of all users within the organization. + '400': + $ref: '#/components/responses/BadRequestError' + '401': + $ref: '#/components/responses/AuthorizationError' + '404': + content: + application/json: + examples: + notFound: + summary: | + The requested organization wasn't found. + value: + code: not found + message: 404 page not found + schema: + $ref: '#/components/schemas/Error' + description: | + Not found. + InfluxDB can't find the organization. + '500': + $ref: '#/components/responses/InternalServerError' + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all members of an organization + tags: + - Organizations + - Security and access endpoints + post: + description: | + Adds a user to an organization. + + InfluxDB [users](/influxdb/latest/reference/glossary/#user) have + permission to access InfluxDB. + + [Members](/influxdb/latest/reference/glossary/#member) are users + within the organization. + + #### InfluxDB Cloud + - Doesn't use `owner` and `member` roles. + Use [`/api/v2/authorizations`](#tag/Authorizations-(API-tokens)) to assign user permissions. + + #### Limitations + + - Member permissions are separate from API token permissions. + - Member permissions are used in the context of the InfluxDB UI.
+ + #### Required permissions + + - `write-orgs INFLUX_ORG_ID` + + *`INFLUX_ORG_ID`* is the ID of the organization that you want to add a member to. + + #### Related guides + + - [Manage users](/influxdb/latest/users/) + - [Manage members](/influxdb/latest/organizations/members/) + operationId: PostOrgsIDMembers + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + The ID of the organization. + in: path + name: orgID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/AddResourceMemberRequestBody' + description: | + The user to add to the organization. + required: true + responses: + '201': + content: + application/json: + examples: + successResponse: + value: + id: 09cfb87051cbe000 + links: + self: /api/v2/users/09cfb87051cbe000 + name: example_user_1 + role: member + status: active + schema: + $ref: '#/components/schemas/ResourceMember' + description: | + Success. + The response body contains the user information. + '400': + $ref: '#/components/responses/BadRequestError' + examples: + invalidRequest: + summary: The user `id` is missing from the request body. 
+ value:
+ code: invalid
+ message: user id missing or invalid
+ '401':
+ $ref: '#/components/responses/AuthorizationError'
+ '404':
+ $ref: '#/components/responses/ResourceNotFoundError'
+ '500':
+ $ref: '#/components/responses/InternalServerError'
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Unexpected error
+ summary: Add a member to an organization
+ tags:
+ - Organizations
+ x-codeSamples:
+ - label: cURL
+ lang: Shell
+ source: |
+ curl --request POST "http://localhost:8086/api/v2/orgs/INFLUX_ORG_ID/members" \
+ --header "Authorization: Token INFLUX_API_TOKEN" \
+ --header "Accept: application/json" \
+ --header "Content-Type: application/json" \
+ --data '{
+ "id": "09cfb87051cbe000"
+ }'
+ /api/v2/orgs/{orgID}/members/{userID}:
+ delete:
+ description: |
+ Removes a member from an organization.
+
+ Use this endpoint to remove a user's member privileges for an organization.
+ Removing member privileges removes the user's `read` and `write` permissions
+ from the organization.
+
+ #### InfluxDB Cloud
+
+ - Doesn't use `owner` and `member` roles.
+ Use [`/api/v2/authorizations`](#tag/Authorizations-(API-tokens)) to assign user permissions.
+
+ #### Limitations
+
+ - Member permissions are separate from API token permissions.
+ - Member permissions are used in the context of the InfluxDB UI.
+
+ #### Required permissions
+
+ - `write-orgs INFLUX_ORG_ID`
+
+ *`INFLUX_ORG_ID`* is the ID of the organization that you want to remove a
+ member from.
+
+ #### Related guides
+
+ - [Manage members](/influxdb/latest/organizations/members/)
+ operationId: DeleteOrgsIDMembersID
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: The ID of the user to remove.
+ in: path
+ name: userID
+ required: true
+ schema:
+ type: string
+ - description: The ID of the organization to remove a user from.
+ in: path + name: orgID + required: true + schema: + type: string + responses: + '204': + description: | + Success. + The user is no longer a member of the organization. + '401': + $ref: '#/components/responses/AuthorizationError' + '404': + $ref: '#/components/responses/ResourceNotFoundError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Remove a member from an organization + tags: + - Organizations + - Security and access endpoints + /api/v2/orgs/{orgID}/owners: + get: + description: | + Lists all owners of an organization. + + #### InfluxDB Cloud + + - Doesn't use `owner` and `member` roles. + Use [`/api/v2/authorizations`](#tag/Authorizations-(API-tokens)) to assign user permissions. + + #### Required permissions + + - `read-orgs INFLUX_ORG_ID` + + *`INFLUX_ORG_ID`* is the ID of the organization that you want to retrieve a + list of owners from. + operationId: GetOrgsIDOwners + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + The ID of the organization to list owners for. 
+ in: path + name: orgID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + examples: + successResponse: + value: + links: + self: /api/v2/orgs/055aa4783aa38398/owners + users: + - id: 09cfb87051cbe000 + links: + self: /api/v2/users/09cfb87051cbe000 + name: example_user_2 + role: owner + status: active + schema: + $ref: '#/components/schemas/ResourceOwners' + description: A list of organization owners + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Organization not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all owners of an organization + tags: + - Organizations + - Security and access endpoints + post: + description: | + Adds an owner to an organization. + + Use this endpoint to assign the organization `owner` role to a user. + + #### InfluxDB Cloud + + - Doesn't use `owner` and `member` roles. + Use [`/api/v2/authorizations`](#tag/Authorizations-(API-tokens)) to assign user permissions. + + #### Required permissions + + - `write-orgs INFLUX_ORG_ID` + + *`INFLUX_ORG_ID`* is the ID of the organization that you want to add an owner for. + + #### Related endpoints + + - [Authorizations](#tag/Authorizations-(API-tokens)) + operationId: PostOrgsIDOwners + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the organization that you want to add an owner for. + in: path + name: orgID + required: true + schema: + type: string + requestBody: + content: + application/json: + examples: + successResponse: + value: + id: 09cfb87051cbe000 + links: + self: /api/v2/users/09cfb87051cbe000 + name: example_user_1 + role: owner + status: active + schema: + $ref: '#/components/schemas/AddResourceMemberRequestBody' + description: The user to add as an owner of the organization. 
+ required: true
+ responses:
+ '201':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/ResourceOwner'
+ description: |
+ Success. The user is an owner of the organization.
+ The response body contains the owner with role and user detail.
+ '400':
+ $ref: '#/components/responses/BadRequestError'
+ '401':
+ $ref: '#/components/responses/AuthorizationError'
+ '404':
+ $ref: '#/components/responses/ResourceNotFoundError'
+ '500':
+ $ref: '#/components/responses/InternalServerError'
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Unexpected error
+ summary: Add an owner to an organization
+ tags:
+ - Organizations
+ x-codeSamples:
+ - label: cURL
+ lang: Shell
+ source: |
+ curl --request POST "http://localhost:8086/api/v2/orgs/INFLUX_ORG_ID/owners" \
+ --header "Authorization: Token INFLUX_API_TOKEN" \
+ --header "Accept: application/json" \
+ --header "Content-Type: application/json" \
+ --data '{
+ "id": "09cfb87051cbe000"
+ }'
+ /api/v2/orgs/{orgID}/owners/{userID}:
+ delete:
+ description: |
+ Removes an [owner](/influxdb/latest/reference/glossary/#owner) from
+ the organization.
+
+ Organization owners have permission to delete organizations and remove user and member
+ permissions from the organization.
+
+ #### InfluxDB Cloud
+ - Doesn't use `owner` and `member` roles.
+ Use [`/api/v2/authorizations`](#tag/Authorizations-(API-tokens)) to assign user permissions.
+
+ #### Limitations
+
+ - Owner permissions are separate from API token permissions.
+ - Owner permissions are used in the context of the InfluxDB UI.
+
+ #### Required permissions
+
+ - `write-orgs INFLUX_ORG_ID`
+
+ *`INFLUX_ORG_ID`* is the ID of the organization that you want to
+ remove an owner from.
+
+ #### Related endpoints
+ - [Authorizations](#tag/Authorizations-(API-tokens))
+ operationId: DeleteOrgsIDOwnersID
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: The ID of the user to remove.
+ in: path + name: userID + required: true + schema: + type: string + - description: | + The ID of the organization to remove an owner from. + in: path + name: orgID + required: true + schema: + type: string + responses: + '204': + description: | + Success. + The user is no longer an owner of the organization. + '401': + $ref: '#/components/responses/AuthorizationError' + '404': + $ref: '#/components/responses/ResourceNotFoundError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Remove an owner from an organization + tags: + - Organizations + - Security and access endpoints + /api/v2/orgs/{orgID}/secrets: + get: + operationId: GetOrgsIDSecrets + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The organization ID. + in: path + name: orgID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/SecretKeysResponse' + description: A list of all secret keys + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all secret keys for an organization + tags: + - Secrets + - Security and access endpoints + patch: + operationId: PatchOrgsIDSecrets + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The organization ID. 
+ in: path
+ name: orgID
+ required: true
+ schema:
+ type: string
+ requestBody:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Secrets'
+ description: Secret key value pairs to update/add
+ required: true
+ responses:
+ '204':
+ description: Keys successfully patched
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Unexpected error
+ summary: Update secrets in an organization
+ tags:
+ - Secrets
+ /api/v2/orgs/{orgID}/secrets/{secretID}:
+ delete:
+ operationId: DeleteOrgsIDSecretsID
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: The organization ID.
+ in: path
+ name: orgID
+ required: true
+ schema:
+ type: string
+ - description: The secret ID.
+ in: path
+ name: secretID
+ required: true
+ schema:
+ type: string
+ responses:
+ '204':
+ description: Keys successfully deleted
+ default:
+ $ref: '#/components/responses/GeneralServerError'
+ description: Unexpected error
+ summary: Delete a secret from an organization
+ tags:
+ - Secrets
+ - Security and access endpoints
+ /api/v2/orgs/{orgID}/secrets/delete:
+ post:
+ deprecated: true
+ operationId: PostOrgsIDSecrets
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: The organization ID.
+ in: path
+ name: orgID
+ required: true
+ schema:
+ type: string
+ requestBody:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/SecretKeys'
+ description: Secret key to delete
+ required: true
+ responses:
+ '204':
+ description: Keys successfully deleted
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Unexpected error
+ summary: Delete secrets from an organization
+ tags:
+ - Secrets
+ - Security and access endpoints
+ /ping:
+ get:
+ description: |
+ Retrieves the status and InfluxDB version of the instance.
+
+ Use this endpoint to monitor uptime for the InfluxDB instance.
The response
+ returns an HTTP `204` status code to inform you that the instance is available.
+
+ #### InfluxDB Cloud
+
+ - Isn't versioned and doesn't return `X-Influxdb-Version` in the headers.
+
+ #### Related guides
+
+ - [Influx ping](/influxdb/latest/reference/cli/influx/ping/)
+ operationId: GetPing
+ responses:
+ '204':
+ description: |
+ Success.
+ Headers contain InfluxDB version information.
+ headers:
+ X-Influxdb-Build:
+ description: |
+ The type of InfluxDB build.
+ schema:
+ type: string
+ X-Influxdb-Version:
+ description: |
+ The version of InfluxDB.
+
+ #### InfluxDB Cloud
+ - Doesn't return version.
+ schema:
+ type: string
+ servers: []
+ summary: Get the status of the instance
+ tags:
+ - Ping
+ - System information endpoints
+ x-codeSamples:
+ - label: cURL
+ lang: Shell
+ source: |
+ curl --request GET "http://localhost:8086/ping"
+ head:
+ description: |
+ Returns the status and InfluxDB version of the instance.
+
+ Use this endpoint to monitor uptime for the InfluxDB instance. The response
+ returns an HTTP `204` status code to inform you that the instance is available.
+
+ #### InfluxDB Cloud
+
+ - Isn't versioned and doesn't return `X-Influxdb-Version` in the headers.
+
+ #### Related guides
+
+ - [Influx ping](/influxdb/latest/reference/cli/influx/ping/)
+ operationId: HeadPing
+ responses:
+ '204':
+ description: |
+ Success.
+ Headers contain InfluxDB version information.
+ headers:
+ X-Influxdb-Build:
+ description: The type of InfluxDB build.
+ schema:
+ type: string
+ X-Influxdb-Version:
+ description: |
+ The version of InfluxDB.
+
+ #### InfluxDB Cloud
+ - Doesn't return version.
+ schema:
+ type: string
+ servers: []
+ summary: Get the status of the instance
+ tags:
+ - Ping
+ x-codeSamples:
+ - label: cURL
+ lang: Shell
+ source: |
+ curl --request HEAD "http://localhost:8086/ping"
+ /api/v2/query:
+ post:
+ description: |
+ Retrieves data from buckets.
+
+ Use this endpoint to send a Flux query request and retrieve data from a bucket.
+ + #### Rate limits (with InfluxDB Cloud) + + `read` rate limits apply. + For more information, see [limits and adjustable quotas](/influxdb/cloud/account-management/limits/). + + #### Related guides + + - [Query with the InfluxDB API](/influxdb/latest/query-data/execute-queries/influx-api/) + - [Get started with Flux](https://docs.influxdata.com/flux/v0.x/get-started/) + operationId: PostQuery + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The content encoding (usually a compression algorithm) that the client can understand. + in: header + name: Accept-Encoding + schema: + default: identity + description: The content coding. Use `gzip` for compressed data or `identity` for unmodified, uncompressed data. + enum: + - gzip + - identity + type: string + - in: header + name: Content-Type + schema: + enum: + - application/json + - application/vnd.flux + type: string + - description: | + An organization name or ID. + + #### InfluxDB Cloud + + - Doesn't use the `org` parameter or `orgID` parameter. + - Queries the bucket in the organization associated with the authorization (API token). + + #### InfluxDB OSS + + - Requires either the `org` parameter or `orgID` parameter. + - Queries the bucket in the specified organization. + in: query + name: org + schema: + type: string + - description: | + An organization ID. + + #### InfluxDB Cloud + + - Doesn't use the `org` parameter or `orgID` parameter. + - Queries the bucket in the organization associated with the authorization (API token). + + #### InfluxDB OSS + + - Requires either the `org` parameter or `orgID` parameter. + - Queries the bucket in the specified organization. 
+ in: query + name: orgID + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/Query' + application/vnd.flux: + example: | + from(bucket: "example-bucket") + |> range(start: -5m) + |> filter(fn: (r) => r._measurement == "example-measurement") + schema: + type: string + description: Flux query or specification to execute + responses: + '200': + content: + application/csv: + example: | + result,table,_start,_stop,_time,region,host,_value + mean,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:00Z,east,A,15.43 + mean,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:20Z,east,B,59.25 + mean,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:40Z,east,C,52.62 + schema: + type: string + description: Success. The response body contains query results. + headers: + Content-Encoding: + description: Lists encodings (usually compression algorithms) that have been applied to the response payload. + schema: + default: identity + description: | + The content coding: `gzip` for compressed data or `identity` for unmodified, uncompressed data. + enum: + - gzip + - identity + type: string + Trace-Id: + description: The trace ID, if generated, of the request. + schema: + description: Trace ID of a request. + type: string + '400': + content: + application/json: + examples: + orgNotFound: + summary: Organization not found + value: + code: invalid + message: 'failed to decode request body: organization not found' + schema: + $ref: '#/components/schemas/Error' + description: | + Bad request. + The response body contains detail about the error. + + #### InfluxDB OSS + + - Returns this error if the `org` parameter or `orgID` parameter doesn't match an organization. 
+ '401':
+ $ref: '#/components/responses/AuthorizationError'
+ '404':
+ $ref: '#/components/responses/ResourceNotFoundError'
+ '429':
+ description: |
+ #### InfluxDB Cloud:
+ - returns this error if a **read** or **write** request exceeds your
+ plan's [adjustable service quotas](/influxdb/latest/account-management/limits/#adjustable-service-quotas)
+ or if a **delete** request exceeds the maximum
+ [global limit](/influxdb/latest/account-management/limits/#global-limits)
+ - returns a `Retry-After` header that describes when to try the request again.
+
+ #### InfluxDB OSS:
+ - doesn't return this error.
+ headers:
+ Retry-After:
+ description: Non-negative decimal integer indicating seconds to wait before retrying the request.
+ schema:
+ format: int32
+ type: integer
+ '500':
+ $ref: '#/components/responses/InternalServerError'
+ default:
+ $ref: '#/components/responses/GeneralServerError'
+ summary: Query data
+ tags:
+ - Data I/O endpoints
+ - Query
+ x-codeSamples:
+ - label: cURL
+ lang: Shell
+ source: |
+ curl --request POST 'INFLUX_URL/api/v2/query?org=INFLUX_ORG' \
+ --header 'Content-Type: application/vnd.flux' \
+ --header 'Accept: application/csv' \
+ --header 'Authorization: Token INFLUX_API_TOKEN' \
+ --data 'from(bucket: "example-bucket")
+ |> range(start: -5m)
+ |> filter(fn: (r) => r._measurement == "example-measurement")'
+ /api/v2/query/analyze:
+ post:
+ description: |
+ Analyzes a [Flux query](https://docs.influxdata.com/flux/v0.x/) for syntax
+ errors and returns the list of errors.
+
+ In the following sample query, `from()` is missing the property key.
+
+ ```json
+ { "query": "from(: \"iot_center\")\
+ |> range(start: -90d)\
+ |> filter(fn: (r) => r._measurement == \"environment\")",
+ "type": "flux"
+ }
+ ```
+
+ If you pass this in a request to the `/api/v2/query/analyze` endpoint,
+ InfluxDB returns an `errors` list that contains an error object for the missing key.
+
+ #### Limitations
+
+ - The endpoint doesn't validate values in the query--for example:
+
+ - The following sample query has correct syntax, but contains an incorrect `from()` property key:
+
+ ```json
+ { "query": "from(foo: \"iot_center\")\
+ |> range(start: -90d)\
+ |> filter(fn: (r) => r._measurement == \"environment\")",
+ "type": "flux"
+ }
+ ```
+
+ If you pass this in a request to the `/api/v2/query/analyze` endpoint,
+ InfluxDB returns an empty `errors` list.
+ operationId: PostQueryAnalyze
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - in: header
+ name: Content-Type
+ schema:
+ enum:
+ - application/json
+ type: string
+ requestBody:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Query'
+ description: Flux query to analyze
+ responses:
+ '200':
+ content:
+ application/json:
+ examples:
+ missingQueryPropertyKey:
+ description: |
+ Returns an error object if the Flux query is missing a property key.
+
+ The following sample query is missing the _`bucket`_ property key:
+
+ ```json
+ {
+ "query": "from(: \"iot_center\")\
+ ...
+ }
+ ```
+ summary: Missing property key error
+ value:
+ errors:
+ - character: 0
+ column: 6
+ line: 1
+ message: missing property key
+ schema:
+ $ref: '#/components/schemas/AnalyzeQueryResponse'
+ description: |
+ Success.
+ The response body contains the list of `errors`.
+ If the query syntax is valid, the endpoint returns an empty `errors` list.
+ '400':
+ content:
+ application/json:
+ examples:
+ invalidJSONStringValue:
+ description: If the request body contains invalid JSON, returns `invalid` and problem detail.
+ summary: Invalid JSON
+ value:
+ code: invalid
+ message: 'invalid json: invalid character ''\'''' looking for beginning of value'
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: |
+ Bad request.
+ InfluxDB is unable to parse the request.
+ The response body contains detail about the problem.
+ headers: + X-Platform-Error-Code: + description: | + The reason for the error. + schema: + example: invalid + type: string + default: + content: + application/json: + examples: + emptyJSONObject: + description: | + If the request body contains an empty JSON object, returns `internal error`. + summary: Empty JSON object in request body + value: + code: internal error + message: An internal error has occurred - check server logs + schema: + $ref: '#/components/schemas/Error' + description: Internal server error + headers: + X-Influx-Error: + description: A string that describes the problem. + schema: + type: string + X-Influx-Reference: + description: The numeric reference code for the error type. + schema: + type: integer + X-Platform-Error-Code: + description: The reason for the error. + schema: + example: internal error + type: string + summary: Analyze a Flux query + tags: + - Query + x-codeSamples: + - label: 'cURL: Analyze a Flux query' + lang: Shell + source: | + curl -v --request POST \ + "http://localhost:8086/api/v2/query/analyze" \ + --header "Authorization: Token INFLUX_API_TOKEN" \ + --header 'Content-type: application/json' \ + --header 'Accept: application/json' \ + --data-binary @- << EOF + { "query": "from(bucket: \"iot_center\")\ + |> range(start: -90d)\ + |> filter(fn: (r) => r._measurement == \"environment\")", + "type": "flux" + } + EOF + /api/v2/query/ast: + post: + description: | + Analyzes a Flux query and returns a complete package source [Abstract Syntax + Tree (AST)](/influxdb/latest/reference/glossary/#abstract-syntax-tree-ast) + for the query. + + Use this endpoint for deep query analysis such as debugging unexpected query + results. + + A Flux query AST provides a semantic, tree-like representation with contextual + information about the query. The AST illustrates how the query is distributed + into different components for execution. 
+
+ #### Limitations
+
+ - The endpoint doesn't validate values in the query--for example:
+
+ The following sample Flux query has correct syntax, but contains an incorrect `from()` property key:
+
+ ```js
+ from(foo: "iot_center")
+ |> range(start: -90d)
+ |> filter(fn: (r) => r._measurement == "environment")
+ ```
+
+ The following sample JSON shows how to pass the query in the request body:
+
+ ```json
+ { "query": "from(foo: \"iot_center\")\
+ |> range(start: -90d)\
+ |> filter(fn: (r) => r._measurement == \"environment\")"
+ }
+ ```
+
+ Passing this to `/api/v2/query/ast` will return a successful response
+ with a generated AST.
+ operationId: PostQueryAst
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - in: header
+ name: Content-Type
+ schema:
+ enum:
+ - application/json
+ type: string
+ requestBody:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/LanguageRequest'
+ description: The Flux query to analyze.
+ responses: + '200': + content: + application/json: + examples: + successResponse: + value: + ast: + files: + - body: + - expression: + argument: + argument: + arguments: + - location: + end: + column: 25 + line: 1 + source: 'bucket: "example-bucket"' + start: + column: 6 + line: 1 + properties: + - key: + location: + end: + column: 12 + line: 1 + source: bucket + start: + column: 6 + line: 1 + name: bucket + type: Identifier + location: + end: + column: 25 + line: 1 + source: 'bucket: "example-bucket"' + start: + column: 6 + line: 1 + type: Property + value: + location: + end: + column: 25 + line: 1 + source: '"example-bucket"' + start: + column: 14 + line: 1 + type: StringLiteral + value: example-bucket + type: ObjectExpression + callee: + location: + end: + column: 5 + line: 1 + source: from + start: + column: 1 + line: 1 + name: from + type: Identifier + location: + end: + column: 26 + line: 1 + source: 'from(bucket: "example-bucket")' + start: + column: 1 + line: 1 + type: CallExpression + call: + arguments: + - location: + end: + column: 46 + line: 1 + source: 'start: -5m' + start: + column: 36 + line: 1 + properties: + - key: + location: + end: + column: 41 + line: 1 + source: start + start: + column: 36 + line: 1 + name: start + type: Identifier + location: + end: + column: 46 + line: 1 + source: 'start: -5m' + start: + column: 36 + line: 1 + type: Property + value: + argument: + location: + end: + column: 46 + line: 1 + source: 5m + start: + column: 44 + line: 1 + type: DurationLiteral + values: + - magnitude: 5 + unit: m + location: + end: + column: 46 + line: 1 + source: '-5m' + start: + column: 43 + line: 1 + operator: '-' + type: UnaryExpression + type: ObjectExpression + callee: + location: + end: + column: 35 + line: 1 + source: range + start: + column: 30 + line: 1 + name: range + type: Identifier + location: + end: + column: 47 + line: 1 + source: 'range(start: -5m)' + start: + column: 30 + line: 1 + type: CallExpression + location: + end: + 
column: 47 + line: 1 + source: 'from(bucket: "example-bucket") |> range(start: -5m)' + start: + column: 1 + line: 1 + type: PipeExpression + call: + arguments: + - location: + end: + column: 108 + line: 1 + source: 'fn: (r) => r._measurement == "example-measurement"' + start: + column: 58 + line: 1 + properties: + - key: + location: + end: + column: 60 + line: 1 + source: fn + start: + column: 58 + line: 1 + name: fn + type: Identifier + location: + end: + column: 108 + line: 1 + source: 'fn: (r) => r._measurement == "example-measurement"' + start: + column: 58 + line: 1 + type: Property + value: + body: + left: + location: + end: + column: 83 + line: 1 + source: r._measurement + start: + column: 69 + line: 1 + object: + location: + end: + column: 70 + line: 1 + source: r + start: + column: 69 + line: 1 + name: r + type: Identifier + property: + location: + end: + column: 83 + line: 1 + source: _measurement + start: + column: 71 + line: 1 + name: _measurement + type: Identifier + type: MemberExpression + location: + end: + column: 108 + line: 1 + source: r._measurement == "example-measurement" + start: + column: 69 + line: 1 + operator: '==' + right: + location: + end: + column: 108 + line: 1 + source: '"example-measurement"' + start: + column: 87 + line: 1 + type: StringLiteral + value: example-measurement + type: BinaryExpression + location: + end: + column: 108 + line: 1 + source: (r) => r._measurement == "example-measurement" + start: + column: 62 + line: 1 + params: + - key: + location: + end: + column: 64 + line: 1 + source: r + start: + column: 63 + line: 1 + name: r + type: Identifier + location: + end: + column: 64 + line: 1 + source: r + start: + column: 63 + line: 1 + type: Property + value: null + type: FunctionExpression + type: ObjectExpression + callee: + location: + end: + column: 57 + line: 1 + source: filter + start: + column: 51 + line: 1 + name: filter + type: Identifier + location: + end: + column: 109 + line: 1 + source: 'filter(fn: (r) => 
r._measurement == "example-measurement")' + start: + column: 51 + line: 1 + type: CallExpression + location: + end: + column: 109 + line: 1 + source: 'from(bucket: "example-bucket") |> range(start: -5m) |> filter(fn: (r) => r._measurement == "example-measurement")' + start: + column: 1 + line: 1 + type: PipeExpression + location: + end: + column: 109 + line: 1 + source: 'from(bucket: "example-bucket") |> range(start: -5m) |> filter(fn: (r) => r._measurement == "example-measurement")' + start: + column: 1 + line: 1 + type: ExpressionStatement + imports: null + location: + end: + column: 109 + line: 1 + source: 'from(bucket: "example-bucket") |> range(start: -5m) |> filter(fn: (r) => r._measurement == "example-measurement")' + start: + column: 1 + line: 1 + metadata: parser-type=rust + package: null + type: File + package: main + type: Package + schema: + $ref: '#/components/schemas/ASTResponse' + description: | + Success. + The response body contains an Abstract Syntax Tree (AST) of the Flux query. + '400': + content: + application/json: + examples: + invalidASTValue: + description: | + If the request body contains a missing property key in `from()`, + returns `invalid` and problem detail. + summary: Invalid AST + value: + code: invalid + message: 'invalid AST: loc 1:6-1:19: missing property key' + schema: + $ref: '#/components/schemas/Error' + description: | + Bad request. + InfluxDB is unable to parse the request. + The response body contains detail about the problem. + headers: + X-Platform-Error-Code: + description: | + The reason for the error. + schema: + example: invalid + type: string + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Internal server error. 
+ summary: Generate a query Abstract Syntax Tree (AST)
+ tags:
+ - Query
+ x-codeSamples:
+ - label: 'cURL: Analyze and generate AST for the query'
+ lang: Shell
+ source: |
+ curl --request POST "http://localhost:8086/api/v2/query/ast" \
+ --header 'Content-Type: application/json' \
+ --header 'Accept: application/json' \
+ --header "Authorization: Token INFLUX_API_TOKEN" \
+ --data-binary @- << EOL
+ {
+ "query": "from(bucket: \"INFLUX_BUCKET_NAME\")\
+ |> range(start: -5m)\
+ |> filter(fn: (r) => r._measurement == \"example-measurement\")"
+ }
+ EOL
+ /api/v2/query/suggestions:
+ get:
+ description: |
+ Lists Flux query suggestions. Each suggestion contains a
+ [Flux function](https://docs.influxdata.com/flux/v0.x/stdlib/all-functions/)
+ name and parameters.
+
+ Use this endpoint to retrieve a list of Flux query suggestions used in the
+ InfluxDB Flux Query Builder.
+
+ #### Limitations
+
+ - When writing a query, avoid using `_functionName()` helper functions
+ exposed by this endpoint. Helper function names have an underscore (`_`)
+ prefix and aren't meant to be used directly in queries--for example:
+
+ - To sort on a column and keep the top n records, use the
+ `top(n, columns=["_value"], tables=<-)` function instead of the `_sortLimit`
+ helper function. `top` uses `_sortLimit`.
+ + #### Related Guides + + - [List of all Flux functions](https://docs.influxdata.com/flux/v0.x/stdlib/all-functions/) + operationId: GetQuerySuggestions + parameters: + - $ref: '#/components/parameters/TraceSpan' + responses: + '200': + content: + application/json: + examples: + successResponse: + value: + funcs: + - name: _fillEmpty + params: + createEmpty: bool + tables: stream + - name: _highestOrLowest + params: + _sortLimit: function + column: invalid + groupColumns: array + 'n': invalid + reducer: function + tables: stream + - name: _hourSelection + params: + location: object + start: int + stop: int + tables: stream + timeColumn: string + - name: _sortLimit + params: + columns: array + desc: bool + 'n': int + tables: stream + - name: _window + params: + createEmpty: bool + every: duration + location: object + offset: duration + period: duration + startColumn: string + stopColumn: string + tables: stream + timeColumn: string + - name: aggregateWindow + params: + column: invalid + createEmpty: bool + every: duration + fn: function + location: object + offset: duration + period: duration + tables: stream + timeDst: string + timeSrc: string + - name: bool + params: + v: invalid + - name: bottom + params: + columns: array + 'n': int + tables: stream + - name: buckets + params: + host: string + org: string + orgID: string + token: string + - name: bytes + params: + v: invalid + - name: cardinality + params: + bucket: string + bucketID: string + host: string + org: string + orgID: string + predicate: function + start: invalid + stop: invalid + token: string + - name: chandeMomentumOscillator + params: + columns: array + 'n': int + tables: stream + - name: columns + params: + column: string + tables: stream + - name: contains + params: + set: array + value: invalid + - name: count + params: + column: string + tables: stream + - name: cov + params: + 'on': array + pearsonr: bool + x: invalid + 'y': invalid + - name: covariance + params: + columns: array + pearsonr: 
bool + tables: stream + valueDst: string + - name: cumulativeSum + params: + columns: array + tables: stream + - name: derivative + params: + columns: array + initialZero: bool + nonNegative: bool + tables: stream + timeColumn: string + unit: duration + - name: die + params: + msg: string + - name: difference + params: + columns: array + initialZero: bool + keepFirst: bool + nonNegative: bool + tables: stream + - name: display + params: + v: invalid + - name: distinct + params: + column: string + tables: stream + - name: doubleEMA + params: + 'n': int + tables: stream + - name: drop + params: + columns: array + fn: function + tables: stream + - name: duplicate + params: + as: string + column: string + tables: stream + - name: duration + params: + v: invalid + - name: elapsed + params: + columnName: string + tables: stream + timeColumn: string + unit: duration + - name: exponentialMovingAverage + params: + 'n': int + tables: stream + - name: fill + params: + column: string + tables: stream + usePrevious: bool + value: invalid + - name: filter + params: + fn: function + onEmpty: string + tables: stream + - name: findColumn + params: + column: string + fn: function + tables: stream + - name: findRecord + params: + fn: function + idx: int + tables: stream + - name: first + params: + column: string + tables: stream + - name: float + params: + v: invalid + - name: from + params: + bucket: string + bucketID: string + host: string + org: string + orgID: string + token: string + - name: getColumn + params: + column: string + - name: getRecord + params: + idx: int + - name: group + params: + columns: array + mode: string + tables: stream + - name: highestAverage + params: + column: string + groupColumns: array + 'n': int + tables: stream + - name: highestCurrent + params: + column: string + groupColumns: array + 'n': int + tables: stream + - name: highestMax + params: + column: string + groupColumns: array + 'n': int + tables: stream + - name: histogram + params: + bins: 
array + column: string + countColumn: string + normalize: bool + tables: stream + upperBoundColumn: string + - name: histogramQuantile + params: + countColumn: string + minValue: float + quantile: float + tables: stream + upperBoundColumn: string + valueColumn: string + - name: holtWinters + params: + column: string + interval: duration + 'n': int + seasonality: int + tables: stream + timeColumn: string + withFit: bool + - name: hourSelection + params: + location: object + start: int + stop: int + tables: stream + timeColumn: string + - name: increase + params: + columns: array + tables: stream + - name: int + params: + v: invalid + - name: integral + params: + column: string + interpolate: string + tables: stream + timeColumn: string + unit: duration + - name: join + params: + method: string + 'on': array + tables: invalid + - name: kaufmansAMA + params: + column: string + 'n': int + tables: stream + - name: kaufmansER + params: + 'n': int + tables: stream + - name: keep + params: + columns: array + fn: function + tables: stream + - name: keyValues + params: + keyColumns: array + tables: stream + - name: keys + params: + column: string + tables: stream + - name: last + params: + column: string + tables: stream + - name: length + params: + arr: array + - name: limit + params: + 'n': int + offset: int + tables: stream + - name: linearBins + params: + count: int + infinity: bool + start: float + width: float + - name: logarithmicBins + params: + count: int + factor: float + infinity: bool + start: float + - name: lowestAverage + params: + column: string + groupColumns: array + 'n': int + tables: stream + - name: lowestCurrent + params: + column: string + groupColumns: array + 'n': int + tables: stream + - name: lowestMin + params: + column: string + groupColumns: array + 'n': int + tables: stream + - name: map + params: + fn: function + mergeKey: bool + tables: stream + - name: max + params: + column: string + tables: stream + - name: mean + params: + column: string 
+ tables: stream + - name: median + params: + column: string + compression: float + method: string + tables: stream + - name: min + params: + column: string + tables: stream + - name: mode + params: + column: string + tables: stream + - name: movingAverage + params: + 'n': int + tables: stream + - name: now + params: {} + - name: pearsonr + params: + 'on': array + x: invalid + 'y': invalid + - name: pivot + params: + columnKey: array + rowKey: array + tables: stream + valueColumn: string + - name: quantile + params: + column: string + compression: float + method: string + q: float + tables: stream + - name: range + params: + start: invalid + stop: invalid + tables: stream + - name: reduce + params: + fn: function + identity: invalid + tables: stream + - name: relativeStrengthIndex + params: + columns: array + 'n': int + tables: stream + - name: rename + params: + columns: invalid + fn: function + tables: stream + - name: sample + params: + column: string + 'n': int + pos: int + tables: stream + - name: set + params: + key: string + tables: stream + value: string + - name: skew + params: + column: string + tables: stream + - name: sort + params: + columns: array + desc: bool + tables: stream + - name: spread + params: + column: string + tables: stream + - name: stateCount + params: + column: string + fn: function + tables: stream + - name: stateDuration + params: + column: string + fn: function + tables: stream + timeColumn: string + unit: duration + - name: stateTracking + params: + countColumn: string + durationColumn: string + durationUnit: duration + fn: function + tables: stream + timeColumn: string + - name: stddev + params: + column: string + mode: string + tables: stream + - name: string + params: + v: invalid + - name: sum + params: + column: string + tables: stream + - name: tableFind + params: + fn: function + tables: stream + - name: tail + params: + 'n': int + offset: int + tables: stream + - name: time + params: + v: invalid + - name: timeShift + 
params: + columns: array + duration: duration + tables: stream + - name: timeWeightedAvg + params: + tables: stream + unit: duration + - name: timedMovingAverage + params: + column: string + every: duration + period: duration + tables: stream + - name: to + params: + bucket: string + bucketID: string + fieldFn: function + host: string + measurementColumn: string + org: string + orgID: string + tables: stream + tagColumns: array + timeColumn: string + token: string + - name: toBool + params: + tables: stream + - name: toFloat + params: + tables: stream + - name: toInt + params: + tables: stream + - name: toString + params: + tables: stream + - name: toTime + params: + tables: stream + - name: toUInt + params: + tables: stream + - name: today + params: {} + - name: top + params: + columns: array + 'n': int + tables: stream + - name: tripleEMA + params: + 'n': int + tables: stream + - name: tripleExponentialDerivative + params: + 'n': int + tables: stream + - name: truncateTimeColumn + params: + tables: stream + timeColumn: invalid + unit: duration + - name: uint + params: + v: invalid + - name: union + params: + tables: array + - name: unique + params: + column: string + tables: stream + - name: wideTo + params: + bucket: string + bucketID: string + host: string + org: string + orgID: string + tables: stream + token: string + - name: window + params: + createEmpty: bool + every: duration + location: object + offset: duration + period: duration + startColumn: string + stopColumn: string + tables: stream + timeColumn: string + - name: yield + params: + name: string + tables: stream + schema: + $ref: '#/components/schemas/FluxSuggestions' + description: | + Success. + The response body contains a list of Flux query suggestions--function + names used in the Flux Query Builder autocomplete suggestions. + '301': + content: + text/html: + examples: + movedPermanently: + description: | + The URL has been permanently moved. Use `/api/v2/query/suggestions`. 
+ summary: Invalid URL + value: | + Moved Permanently + schema: + properties: + body: + description: Response message with URL of requested resource. + readOnly: true + type: string + description: | + Moved Permanently. + InfluxData has moved the URL of the endpoint. + Use `/api/v2/query/suggestions` (without a trailing slash). + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Internal server error. + summary: List Flux query suggestions + tags: + - Query + x-codeSamples: + - label: cURL + lang: Shell + source: | + curl --request GET "INFLUX_URL/api/v2/query/suggestions" \ + --header "Accept: application/json" \ + --header "Authorization: Token INFLUX_API_TOKEN" + /api/v2/query/suggestions/{name}: + get: + description: | + Retrieves a query suggestion that contains the name and parameters of the + requested function. + + Use this endpoint to pass a branching suggestion (a Flux function name) and + retrieve the parameters of the requested function. + + #### Limitations + + - Use `/api/v2/query/suggestions/{name}` (without a trailing slash). + `/api/v2/query/suggestions/{name}/` (note the trailing slash) results in an + HTTP `301 Moved Permanently` status. + + - The function `name` must exist and must be spelled correctly. + + #### Related guides + + - [List of all Flux functions](https://docs.influxdata.com/flux/v0.x/stdlib/all-functions/) + operationId: GetQuerySuggestionsName + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + A [Flux function](https://docs.influxdata.com/flux/v0.x/stdlib/all-functions/) name. + in: path + name: name + required: true + schema: + type: string + responses: + '200': + content: + application/json: + examples: + successResponse: + value: + name: sum + params: + column: string + tables: stream + schema: + $ref: '#/components/schemas/FluxSuggestion' + description: | + Success. + The response body contains the function name and parameters.
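As a sketch of the limitation noted above, the request URL for this endpoint must omit the trailing slash. The URL and token below are placeholder values:

```shell
# Placeholder values -- substitute your own InfluxDB URL and function name.
INFLUX_URL="http://localhost:8086"
FN_NAME="sum"

# No trailing slash: "${INFLUX_URL}/api/v2/query/suggestions/${FN_NAME}/"
# would return an HTTP 301 Moved Permanently instead of the suggestion.
REQUEST_URL="${INFLUX_URL}/api/v2/query/suggestions/${FN_NAME}"
echo "$REQUEST_URL"

# curl --request GET "$REQUEST_URL" \
#   --header "Accept: application/json" \
#   --header "Authorization: Token INFLUX_API_TOKEN"
```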
+ '500': + content: + application/json: + examples: + internalError: + description: | + The requested function doesn't exist. + summary: Invalid function + value: + code: internal error + message: An internal error has occurred + schema: + $ref: '#/components/schemas/Error' + description: | + Internal server error. + The value passed for _`name`_ may have been misspelled. + summary: Retrieve a query suggestion for a branching suggestion + tags: + - Query + x-codeSamples: + - label: cURL + lang: Shell + source: | + curl --request GET "INFLUX_URL/api/v2/query/suggestions/sum" \ + --header "Accept: application/json" \ + --header "Authorization: Token INFLUX_API_TOKEN" + /ready: + get: + operationId: GetReady + parameters: + - $ref: '#/components/parameters/TraceSpan' + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Ready' + description: The instance is ready + default: + $ref: '#/components/responses/GeneralServerError' + description: Unexpected error + servers: [] + summary: Get the readiness of an instance at startup + tags: + - Ready + - System information endpoints + /api/v2/remotes: + get: + operationId: GetRemoteConnections + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The organization ID.
+ in: query + name: orgID + required: true + schema: + type: string + - in: query + name: name + schema: + type: string + - in: query + name: remoteURL + schema: + format: uri + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/RemoteConnections' + description: List of remote connections + '404': + $ref: '#/components/responses/GeneralServerError' + default: + $ref: '#/components/responses/GeneralServerError' + summary: List all remote connections + tags: + - RemoteConnections + post: + operationId: PostRemoteConnection + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/RemoteConnectionCreationRequest' + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/RemoteConnection' + description: Remote connection saved + '400': + $ref: '#/components/responses/GeneralServerError' + default: + $ref: '#/components/responses/GeneralServerError' + summary: Register a new remote connection + tags: + - RemoteConnections + /api/v2/remotes/{remoteID}: + delete: + operationId: DeleteRemoteConnectionByID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - in: path + name: remoteID + required: true + schema: + type: string + responses: + '204': + description: Remote connection info deleted. 
+ '404': + $ref: '#/components/responses/GeneralServerError' + default: + $ref: '#/components/responses/GeneralServerError' + summary: Delete a remote connection + tags: + - RemoteConnections + get: + operationId: GetRemoteConnectionByID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - in: path + name: remoteID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/RemoteConnection' + description: Remote connection + '404': + $ref: '#/components/responses/GeneralServerError' + default: + $ref: '#/components/responses/GeneralServerError' + summary: Retrieve a remote connection + tags: + - RemoteConnections + patch: + operationId: PatchRemoteConnectionByID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - in: path + name: remoteID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/RemoteConnectionUpdateRequest' + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/RemoteConnection' + description: Updated information saved + '400': + $ref: '#/components/responses/GeneralServerError' + '404': + $ref: '#/components/responses/GeneralServerError' + default: + $ref: '#/components/responses/GeneralServerError' + summary: Update a remote connection + tags: + - RemoteConnections + /api/v2/replications: + get: + operationId: GetReplications + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The organization ID. 
+ in: query + name: orgID + required: true + schema: + type: string + - in: query + name: name + schema: + type: string + - in: query + name: remoteID + schema: + type: string + - in: query + name: localBucketID + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Replications' + description: List of replications + '404': + $ref: '#/components/responses/GeneralServerError' + default: + $ref: '#/components/responses/GeneralServerError' + summary: List all replications + tags: + - Replications + post: + operationId: PostReplication + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: If true, validate the replication, but don't save it. + in: query + name: validate + schema: + default: false + type: boolean + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/ReplicationCreationRequest' + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/Replication' + description: Replication saved + '204': + description: Replication validated, but not saved + '400': + $ref: '#/components/responses/GeneralServerError' + default: + $ref: '#/components/responses/GeneralServerError' + summary: Register a new replication + tags: + - Replications + /api/v2/replications/{replicationID}: + delete: + operationId: DeleteReplicationByID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - in: path + name: replicationID + required: true + schema: + type: string + responses: + '204': + description: Replication deleted. 
+ '404': + $ref: '#/components/responses/GeneralServerError' + default: + $ref: '#/components/responses/GeneralServerError' + summary: Delete a replication + tags: + - Replications + get: + operationId: GetReplicationByID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - in: path + name: replicationID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Replication' + description: Replication + '404': + $ref: '#/components/responses/GeneralServerError' + default: + $ref: '#/components/responses/GeneralServerError' + summary: Retrieve a replication + tags: + - Replications + patch: + operationId: PatchReplicationByID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - in: path + name: replicationID + required: true + schema: + type: string + - description: If true, validate the updated information, but don't save it. + in: query + name: validate + schema: + default: false + type: boolean + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/ReplicationUpdateRequest' + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Replication' + description: Updated information saved + '204': + description: Updated replication validated, but not saved + '400': + $ref: '#/components/responses/GeneralServerError' + '404': + $ref: '#/components/responses/GeneralServerError' + default: + $ref: '#/components/responses/GeneralServerError' + summary: Update a replication + tags: + - Replications + /api/v2/replications/{replicationID}/validate: + post: + operationId: PostValidateReplicationByID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - in: path + name: replicationID + required: true + schema: + type: string + responses: + '204': + description: Replication is valid + '400': + $ref: '#/components/responses/GeneralServerError' + description: Replication failed validation + 
default: + $ref: '#/components/responses/GeneralServerError' + summary: Validate a replication + tags: + - Replications + /api/v2/resources: + get: + operationId: GetResources + parameters: + - $ref: '#/components/parameters/TraceSpan' + responses: + '200': + content: + application/json: + schema: + items: + type: string + type: array + description: All resources targets + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Internal server error + summary: List all known resources + tags: + - Resources + - System information endpoints + /api/v2/restore/bucket/{bucketID}: + post: + deprecated: true + operationId: PostRestoreBucketID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The bucket ID. + in: path + name: bucketID + required: true + schema: + type: string + - in: header + name: Content-Type + schema: + default: application/octet-stream + enum: + - application/octet-stream + type: string + requestBody: + content: + text/plain: + schema: + format: byte + type: string + description: Database info serialized as protobuf. + required: true + responses: + '200': + content: + application/json: + schema: + format: byte + type: string + description: ID mappings for shards in bucket. + default: + $ref: '#/components/responses/GeneralServerError' + description: Unexpected error + summary: Overwrite storage metadata for a bucket with shard info from a backup. + tags: + - Restore + /api/v2/restore/bucketMetadata: + post: + operationId: PostRestoreBucketMetadata + parameters: + - $ref: '#/components/parameters/TraceSpan' + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/BucketMetadataManifest' + description: Metadata manifest for a bucket. + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/RestoredBucketMappings' + description: ID mappings for shards in new bucket. 
+ default: + $ref: '#/components/responses/GeneralServerError' + description: Unexpected error + summary: Create a new bucket pre-seeded with shard info from a backup. + tags: + - Restore + /api/v2/restore/kv: + post: + operationId: PostRestoreKV + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + The value tells InfluxDB what compression is applied to the data in the request payload. + To make an API request with a GZIP payload, send `Content-Encoding: gzip` as a request header. + in: header + name: Content-Encoding + schema: + default: identity + description: The content coding. Use `gzip` for compressed data or `identity` for unmodified, uncompressed data. + enum: + - gzip + - identity + type: string + - in: header + name: Content-Type + schema: + default: application/octet-stream + enum: + - application/octet-stream + type: string + requestBody: + content: + text/plain: + schema: + format: binary + type: string + description: Full KV snapshot. + required: true + responses: + '200': + content: + application/json: + schema: + properties: + token: + description: The root token for the instance after the restore (the previous root token is overwritten during the restore). + type: string + type: object + description: KV store successfully overwritten. + '204': + description: KV store successfully overwritten. + default: + $ref: '#/components/responses/GeneralServerError' + description: Unexpected error + summary: Overwrite the embedded KV store on the server with a backed-up snapshot. + tags: + - Restore + /api/v2/restore/shards/{shardID}: + post: + operationId: PostRestoreShardId + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + The value tells InfluxDB what compression is applied to the data in the request payload. + To make an API request with a GZIP payload, send `Content-Encoding: gzip` as a request header.
+ in: header + name: Content-Encoding + schema: + default: identity + description: The content coding. Use `gzip` for compressed data or `identity` for unmodified, uncompressed data. + enum: + - gzip + - identity + type: string + - in: header + name: Content-Type + schema: + default: application/octet-stream + enum: + - application/octet-stream + type: string + - description: The shard ID. + in: path + name: shardID + required: true + schema: + type: string + requestBody: + content: + text/plain: + schema: + format: binary + type: string + description: TSM snapshot. + required: true + responses: + '204': + description: TSM snapshot successfully restored. + default: + $ref: '#/components/responses/GeneralServerError' + description: Unexpected error + summary: Restore a TSM snapshot into a shard. + tags: + - Restore + /api/v2/restore/sql: + post: + operationId: PostRestoreSQL + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + The value tells InfluxDB what compression is applied to the data in the request payload. + To make an API request with a GZIP payload, send `Content-Encoding: gzip` as a request header. + in: header + name: Content-Encoding + schema: + default: identity + description: The content coding. Use `gzip` for compressed data or `identity` for unmodified, uncompressed data. + enum: + - gzip + - identity + type: string + - in: header + name: Content-Type + schema: + default: application/octet-stream + enum: + - application/octet-stream + type: string + requestBody: + content: + text/plain: + schema: + format: binary + type: string + description: Full SQL snapshot. + required: true + responses: + '204': + description: SQL store successfully overwritten. + default: + $ref: '#/components/responses/GeneralServerError' + description: Unexpected error + summary: Overwrite the embedded SQL store on the server with a backed-up snapshot.
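The restore endpoints above accept a gzip-compressed payload when the request sends `Content-Encoding: gzip`. A minimal sketch, assuming a placeholder snapshot file and a local InfluxDB at `localhost:8086`:

```shell
# Hypothetical sketch: compress a backed-up SQL snapshot before restoring it.
# The file name, server URL, and token are placeholders.
printf 'example snapshot contents' > sql-snapshot
gzip -c sql-snapshot > sql-snapshot.gz

# "Content-Encoding: gzip" tells InfluxDB the payload is compressed;
# omit it (or use "identity") for an uncompressed payload.
# curl --request POST "http://localhost:8086/api/v2/restore/sql" \
#   --header "Authorization: Token INFLUX_API_TOKEN" \
#   --header "Content-Type: application/octet-stream" \
#   --header "Content-Encoding: gzip" \
#   --data-binary @sql-snapshot.gz

# Round-trip check: decompressing recovers the original bytes.
gunzip -c sql-snapshot.gz
# prints "example snapshot contents"
```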
+ tags: + - Restore + /api/v2/scrapers: + get: + operationId: GetScrapers + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: Specifies the name of the scraper target. + in: query + name: name + schema: + type: string + - description: List of scraper target IDs to return. If both `id` and `owner` are specified, only `id` is used. + in: query + name: id + schema: + items: + type: string + type: array + - description: Specifies the organization ID of the scraper target. + in: query + name: orgID + schema: + type: string + - description: Specifies the organization name of the scraper target. + in: query + name: org + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/ScraperTargetResponses' + description: All scraper targets + summary: List all scraper targets + tags: + - Scraper Targets + post: + operationId: PostScrapers + parameters: + - $ref: '#/components/parameters/TraceSpan' + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/ScraperTargetRequest' + description: Scraper target to create + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/ScraperTargetResponse' + description: Scraper target created + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Internal server error + summary: Create a scraper target + tags: + - Scraper Targets + /api/v2/scrapers/{scraperTargetID}: + delete: + operationId: DeleteScrapersID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The identifier of the scraper target. 
+ in: path + name: scraperTargetID + required: true + schema: + type: string + responses: + '204': + description: Scraper target deleted + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Internal server error + summary: Delete a scraper target + tags: + - Scraper Targets + get: + operationId: GetScrapersID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The identifier of the scraper target. + in: path + name: scraperTargetID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/ScraperTargetResponse' + description: The scraper target + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Internal server error + summary: Retrieve a scraper target + tags: + - Scraper Targets + patch: + operationId: PatchScrapersID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The identifier of the scraper target. + in: path + name: scraperTargetID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/ScraperTargetRequest' + description: Scraper target update to apply + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/ScraperTargetResponse' + description: Scraper target updated + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Internal server error + summary: Update a scraper target + tags: + - Scraper Targets + /api/v2/scrapers/{scraperTargetID}/labels: + get: + operationId: GetScrapersIDLabels + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The scraper target ID. 
+ in: path + name: scraperTargetID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelsResponse' + description: A list of labels for a scraper target. + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all labels for a scraper target + tags: + - Scraper Targets + post: + operationId: PostScrapersIDLabels + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The scraper target ID. + in: path + name: scraperTargetID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/LabelMapping' + description: Label to add + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelResponse' + description: The newly added label + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add a label to a scraper target + tags: + - Scraper Targets + /api/v2/scrapers/{scraperTargetID}/labels/{labelID}: + delete: + operationId: DeleteScrapersIDLabelsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The scraper target ID. + in: path + name: scraperTargetID + required: true + schema: + type: string + - description: The label ID. 
+ in: path + name: labelID + required: true + schema: + type: string + responses: + '204': + description: Delete has been accepted + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Scraper target not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Delete a label from a scraper target + tags: + - Scraper Targets + /api/v2/scrapers/{scraperTargetID}/members: + get: + operationId: GetScrapersIDMembers + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The scraper target ID. + in: path + name: scraperTargetID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/ResourceMembers' + description: A list of scraper target members + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all users with member privileges for a scraper target + tags: + - Scraper Targets + post: + operationId: PostScrapersIDMembers + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The scraper target ID. 
+ in: path + name: scraperTargetID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/AddResourceMemberRequestBody' + description: User to add as member + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/ResourceMember' + description: Member added to scraper targets + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add a member to a scraper target + tags: + - Scraper Targets + /api/v2/scrapers/{scraperTargetID}/members/{userID}: + delete: + operationId: DeleteScrapersIDMembersID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of member to remove. + in: path + name: userID + required: true + schema: + type: string + - description: The scraper target ID. + in: path + name: scraperTargetID + required: true + schema: + type: string + responses: + '204': + description: Member removed + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Remove a member from a scraper target + tags: + - Scraper Targets + /api/v2/scrapers/{scraperTargetID}/owners: + get: + operationId: GetScrapersIDOwners + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The scraper target ID. 
+ in: path + name: scraperTargetID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/ResourceOwners' + description: A list of scraper target owners + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all owners of a scraper target + tags: + - Scraper Targets + post: + operationId: PostScrapersIDOwners + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The scraper target ID. + in: path + name: scraperTargetID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/AddResourceMemberRequestBody' + description: User to add as owner + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/ResourceOwner' + description: Scraper target owner added + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add an owner to a scraper target + tags: + - Scraper Targets + /api/v2/scrapers/{scraperTargetID}/owners/{userID}: + delete: + operationId: DeleteScrapersIDOwnersID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of owner to remove. + in: path + name: userID + required: true + schema: + type: string + - description: The scraper target ID. + in: path + name: scraperTargetID + required: true + schema: + type: string + responses: + '204': + description: Owner removed + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Remove an owner from a scraper target + tags: + - Scraper Targets + /api/v2/setup: + get: + description: Returns `true` if no default user, organization, or bucket has been created. 
+ operationId: GetSetup + parameters: + - $ref: '#/components/parameters/TraceSpan' + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/IsOnboarding' + description: allowed true or false + summary: Check if database has default user, org, bucket + tags: + - Setup + post: + description: Post an onboarding request to set up initial user, org and bucket. + operationId: PostSetup + parameters: + - $ref: '#/components/parameters/TraceSpan' + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/OnboardingRequest' + description: The initial user, organization, and bucket to create + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/OnboardingResponse' + description: Created default user, bucket, org + default: + $ref: '#/components/responses/GeneralServerError' + description: Unexpected error + summary: Set up initial user, org and bucket + tags: + - Setup + /api/v2/signin: + post: + description: | + Authenticates [Basic authentication credentials](#section/Authentication/BasicAuthentication) + for a [user](/influxdb/latest/reference/glossary/#user), + and then, if successful, generates a user session. + + To authenticate a user, pass the HTTP `Authorization` header with the + `Basic` scheme and the base64-encoded username and password. + For syntax and more information, see + [Basic Authentication](#section/Authentication/BasicAuthentication). + + If authentication is successful, InfluxDB creates a new session for the user + and then returns the session cookie in the `Set-Cookie` response header. + + InfluxDB stores user sessions in memory only. + They expire within ten minutes and during restarts of the InfluxDB instance. + + #### User sessions with authorizations + + - In InfluxDB Cloud, a user session inherits all the user's permissions for + the organization.
+ - In InfluxDB OSS, a user session inherits all the user's permissions for all + the organizations that the user belongs to. + + #### Related endpoints + + - [Signout](#tag/Signout) + operationId: PostSignin + parameters: + - $ref: '#/components/parameters/TraceSpan' + responses: + '204': + description: | + Success. + The user is authenticated. + The `Set-Cookie` response header contains the session cookie. + '401': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: | + Unauthorized. + This error may be caused by one of the following problems: + + - The user doesn't have access. + - The user passed incorrect credentials in the request. + - The credentials are formatted incorrectly in the request. + '403': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Forbidden. The user account is disabled. + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unsuccessful authentication. + security: + - BasicAuthentication: [] + summary: Create a user session. + tags: + - Security and access endpoints + - Signin + x-codeSamples: + - label: 'cURL: signin with --user option encoding' + lang: Shell + source: | + curl --request POST http://localhost:8086/api/v2/signin \ + --user "USERNAME:PASSWORD" + /api/v2/signout: + post: + description: | + Expires a user session specified by a session cookie. + + Use this endpoint to expire a user session that was generated when the user + authenticated with the InfluxDB Developer Console (UI) or the `POST /api/v2/signin` endpoint. + + For example, the `POST /api/v2/signout` endpoint represents the third step + in the following three-step process + to authenticate a user, retrieve the `user` resource, and then expire the session: + + 1. 
Send a request with the user's [Basic authentication credentials](#section/Authentication/BasicAuthentication) + to the `POST /api/v2/signin` endpoint to create a user session and + generate a session cookie. + 2. Send a request to the `GET /api/v2/me` endpoint, passing the stored session cookie + from step 1 to retrieve user information. + 3. Send a request to the `POST /api/v2/signout` endpoint, passing the stored session + cookie to expire the session. + + _See the complete example in request samples._ + + InfluxDB stores user sessions in memory only. + If a user doesn't sign out, then the user session automatically expires within ten minutes or + during a restart of the InfluxDB instance. + + To learn more about cookies in HTTP requests, see + [Mozilla Developer Network (MDN) Web Docs, HTTP cookies](https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies). + + #### Related endpoints + + - [Signin](#tag/Signin) + operationId: PostSignout + parameters: + - $ref: '#/components/parameters/TraceSpan' + responses: + '204': + description: Success. The session is expired. + '401': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unauthorized. + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: The session expiry is unsuccessful. + summary: Expire a user session + tags: + - Security and access endpoints + - Signout + x-codeSamples: + - label: 'cURL: sign in a user, verify the user session, and then end the session' + lang: Shell + source: | + # The following example shows how to use cURL and the InfluxDB API + # to do the following: + # 1. Sign in a user with a username and password. + # 2. Check that the user session exists for the user. + # 3. Sign out the user to expire the session. + # 4. Check that the session is no longer active. + + # 1. Send a request to `POST /api/v2/signin` to sign in the user. 
+ # In your request, pass the following: + # + # - `--user` option with basic authentication credentials. + # - `-c` option with a file path where cURL will write cookies. + + curl --request POST \ + -c ./cookie-file.tmp \ + "$INFLUX_URL/api/v2/signin" \ + --user "${INFLUX_USER_NAME}:${INFLUX_USER_PASSWORD}" + + # 2. To check that a user session exists for the user in step 1, + # send a request to `GET /api/v2/me`. + # In your request, pass the `-b` option with the session cookie file path from step 1. + + curl --request GET \ + -b ./cookie-file.tmp \ + "$INFLUX_URL/api/v2/me" + + # InfluxDB responds with the `user` resource. + + # 3. Send a request to `POST /api/v2/signout` to expire the user session. + # In your request, pass the `-b` option with the session cookie file path from step 1. + + curl --request POST \ + -b ./cookie-file.tmp \ + "$INFLUX_URL/api/v2/signout" + + # If the user session is successfully expired, InfluxDB responds with + # an HTTP `204` status code. + + # 4. To check that the user session is expired, call `GET /api/v2/me` again, + # passing the `-b` option with the cookie file path. + + curl --request GET \ + -b ./cookie-file.tmp \ + "$INFLUX_URL/api/v2/me" + + # If the user session is expired, InfluxDB responds with an HTTP `401` status code. + /api/v2/sources: + get: + operationId: GetSources + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The name of the organization.
+ in: query + name: org + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Sources' + description: A list of sources + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all sources + tags: + - Sources + post: + operationId: PostSources + parameters: + - $ref: '#/components/parameters/TraceSpan' + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/Source' + description: Source to create + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/Source' + description: Created Source + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Create a source + tags: + - Sources + /api/v2/sources/{sourceID}: + delete: + operationId: DeleteSourcesID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The source ID. + in: path + name: sourceID + required: true + schema: + type: string + responses: + '204': + description: Delete has been accepted + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Source not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Delete a source + tags: + - Sources + get: + operationId: GetSourcesID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The source ID.
+ in: path + name: sourceID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Source' + description: A source + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Source not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Retrieve a source + tags: + - Sources + patch: + operationId: PatchSourcesID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The source ID. + in: path + name: sourceID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/Source' + description: Source update + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Source' + description: Updated source + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Source not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Update a source + tags: + - Sources + /api/v2/sources/{sourceID}/buckets: + get: + operationId: GetSourcesIDBuckets + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The source ID. + in: path + name: sourceID + required: true + schema: + type: string + - description: The name of the organization.
+ in: query + name: org + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Buckets' + description: A list of buckets + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Source not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Get buckets in a source + tags: + - Sources + - Buckets + /api/v2/sources/{sourceID}/health: + get: + operationId: GetSourcesIDHealth + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The source ID. + in: path + name: sourceID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/HealthCheck' + description: The source is healthy + '503': + content: + application/json: + schema: + $ref: '#/components/schemas/HealthCheck' + description: The source is not healthy + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Get the health of a source + tags: + - Sources + /api/v2/stacks: + get: + description: | + Lists installed InfluxDB stacks. + + To limit stacks in the response, pass query parameters in your request. + If no query parameters are passed, InfluxDB returns all installed stacks + for the organization. + + #### Related guides + + - [View InfluxDB stacks](/influxdb/latest/influxdb-templates/stacks/). + operationId: ListStacks + parameters: + - description: | + An organization ID. + Only returns stacks owned by the specified [organization](/influxdb/latest/reference/glossary/#organization). + + #### InfluxDB Cloud + + - Doesn't require this parameter; + InfluxDB only returns resources allowed by the API token. + in: query + name: orgID + required: true + schema: + type: string + - description: | + A stack name.
+ Finds stack `events` with this name and returns the stacks. + + Repeatable. + To filter for more than one stack name, + repeat this parameter with each name--for example: + + - `INFLUX_URL/api/v2/stacks?orgID=INFLUX_ORG_ID&name=project-stack-0&name=project-stack-1` + examples: + findStackByName: + summary: Find stacks with the event name + value: project-stack-0 + in: query + name: name + schema: + type: string + - description: | + A stack ID. + Only returns the specified stack. + + Repeatable. + To filter for more than one stack ID, + repeat this parameter with each ID--for example: + + - `INFLUX_URL/api/v2/stacks?orgID=INFLUX_ORG_ID&stackID=09bd87cd33be3000&stackID=09bef35081fe3000` + examples: + findStackByID: + summary: Find a stack with the ID + value: 09bd87cd33be3000 + in: query + name: stackID + schema: + type: string + responses: + '200': + content: + application/json: + schema: + properties: + stacks: + items: + $ref: '#/components/schemas/Stack' + type: array + type: object + description: Success. The response body contains the list of stacks. + '400': + content: + application/json: + examples: + orgIdMissing: + summary: The orgID query parameter is missing + value: + code: invalid + message: 'organization id[""] is invalid: id must have a length of 16 bytes' + orgProvidedNotFound: + summary: The org or orgID passed doesn't own the token passed in the header + value: + code: invalid + message: 'failed to decode request body: organization not found' + schema: + $ref: '#/components/schemas/Error' + description: | + Bad request. + The response body contains detail about the error. + + #### InfluxDB OSS + + - Returns this error if an incorrect value is passed in the `org` parameter or `orgID` parameter.
+ '401': + $ref: '#/components/responses/AuthorizationError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List installed stacks + tags: + - Templates + post: + description: | + Creates or initializes a stack. + + Use this endpoint to _manually_ initialize a new stack with the following + optional information: + + - Stack name + - Stack description + - URLs for template manifest files + + To automatically create a stack when applying templates, + use the [/api/v2/templates/apply endpoint](#operation/ApplyTemplate). + + #### Required permissions + + - `write` permission for the organization + + #### Related guides + + - [Initialize an InfluxDB stack](/influxdb/latest/influxdb-templates/stacks/init/). + - [Use InfluxDB templates](/influxdb/latest/influxdb-templates/use/#apply-templates-to-an-influxdb-instance). + operationId: CreateStack + requestBody: + content: + application/json: + schema: + properties: + description: + type: string + name: + type: string + orgID: + type: string + urls: + items: + type: string + type: array + title: PostStackRequest + type: object + description: The stack to create. + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/Stack' + description: Success. Returns the newly created stack. + '401': + $ref: '#/components/responses/AuthorizationError' + '422': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: | + Unprocessable entity. + + The error may indicate one of the following problems: + + - The request body isn't valid--the request is well-formed, but InfluxDB can't process it due to semantic errors. + - You passed a parameter combination that InfluxDB doesn't support. 
+ '500': + $ref: '#/components/responses/InternalServerError' + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Create a stack + tags: + - Templates + /api/v2/stacks/{stack_id}: + delete: + operationId: DeleteStack + parameters: + - description: The identifier of the stack. + in: path + name: stack_id + required: true + schema: + type: string + - description: The identifier of the organization. + in: query + name: orgID + required: true + schema: + type: string + responses: + '204': + description: The stack and its associated resources were deleted. + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Delete a stack and associated resources + tags: + - Templates + get: + operationId: ReadStack + parameters: + - description: The identifier of the stack. + in: path + name: stack_id + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Stack' + description: Returns the stack. + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Retrieve a stack + tags: + - Templates + patch: + operationId: UpdateStack + parameters: + - description: The identifier of the stack. + in: path + name: stack_id + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + properties: + additionalResources: + items: + properties: + kind: + type: string + resourceID: + type: string + templateMetaName: + type: string + required: + - kind + - resourceID + type: object + type: array + description: + nullable: true + type: string + name: + nullable: true + type: string + templateURLs: + items: + type: string + nullable: true + type: array + title: PatchStackRequest + type: object + description: The stack to update. 
+ required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Stack' + description: Returns the updated stack. + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Update a stack + tags: + - Templates + /api/v2/stacks/{stack_id}/uninstall: + post: + operationId: UninstallStack + parameters: + - description: The identifier of the stack. + in: path + name: stack_id + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Stack' + description: Returns the uninstalled stack. + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Uninstall a stack + tags: + - Templates + /api/v2/tasks: + get: + description: | + Lists [tasks](/influxdb/latest/reference/glossary/#task). + + To limit which tasks are returned, pass query parameters in your request. + If no query parameters are passed, InfluxDB returns all tasks up to the default `limit`. + + #### Related guide + + - [Process data with InfluxDB tasks](/influxdb/latest/process-data/) + operationId: GetTasks + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + A task name. + Only returns [tasks](/influxdb/latest/reference/glossary/#task) + that have the specified name. + Different tasks may have the same name. + in: query + name: name + schema: + type: string + - description: | + A task ID. + Only returns [tasks](/influxdb/latest/reference/glossary/#task) created after the specified task. + in: query + name: after + schema: + type: string + - description: | + A user ID. + Only returns [tasks](/influxdb/latest/reference/glossary/#task) + owned by the specified [user](/influxdb/latest/reference/glossary/#user). + in: query + name: user + schema: + type: string + - description: | + An organization name. 
+ Only returns tasks owned by the specified [organization](/influxdb/latest/reference/glossary/#organization). + in: query + name: org + schema: + type: string + - description: | + An organization ID. + Only returns [tasks](/influxdb/latest/reference/glossary/#task) owned by the specified [organization](/influxdb/latest/reference/glossary/#organization). + in: query + name: orgID + schema: + type: string + - description: | + A task status. + Only returns [tasks](/influxdb/latest/reference/glossary/#task) that have the specified status. + in: query + name: status + schema: + enum: + - active + - inactive + type: string + - description: | + The maximum number of [tasks](/influxdb/latest/reference/glossary/#task) to return. + Default is `100`. + The minimum is `1` and the maximum is `500`. + + To reduce the payload size, combine _`type=basic`_ and _`limit`_ (see _Request samples_). + For more information about the `basic` response, see the _`type`_ parameter. + in: query + name: limit + schema: + default: 100 + maximum: 500 + minimum: 1 + type: integer + - description: | + A task type. + Specifies the level of detail for [tasks](/influxdb/latest/reference/glossary/#task) in the response. + Default is `system`. + The default (`system`) response contains all the metadata properties for tasks. + To reduce the response size, pass `basic` to omit some task properties (`flux`, `createdAt`, `updatedAt`). + in: query + name: type + required: false + schema: + default: '' + enum: + - basic + - system + type: string + responses: + '200': + content: + application/json: + examples: + basicTypeTaskOutput: + description: | + A sample response body for the `?type=basic` parameter. + `type=basic` omits some task fields (`createdAt` and `updatedAt`) + and field values (`org`, `flux`) in the response. 
+ summary: Basic output + value: + links: + self: /api/v2/tasks?limit=100 + tasks: + - every: 30m + flux: '' + id: 09956cbb6d378000 + labels: [] + lastRunStatus: success + latestCompleted: '2022-06-30T15:00:00Z' + links: + labels: /api/v2/tasks/09956cbb6d378000/labels + logs: /api/v2/tasks/09956cbb6d378000/logs + members: /api/v2/tasks/09956cbb6d378000/members + owners: /api/v2/tasks/09956cbb6d378000/owners + runs: /api/v2/tasks/09956cbb6d378000/runs + self: /api/v2/tasks/09956cbb6d378000 + name: task1 + org: '' + orgID: 48c88459ee424a04 + ownerID: 0772396d1f411000 + status: active + systemTypeTaskOutput: + description: | + A sample response body for the `?type=system` parameter. + `type=system` returns all task fields. + summary: System output + value: + links: + self: /api/v2/tasks?limit=100 + tasks: + - createdAt: '2022-06-27T15:09:06Z' + description: IoT Center 90-day environment average. + every: 30m + flux: |- + option task = {name: "task1", every: 30m} + + from(bucket: "iot_center") + |> range(start: -90d) + |> filter(fn: (r) => r._measurement == "environment") + |> aggregateWindow(every: 1h, fn: mean) + id: 09956cbb6d378000 + labels: [] + lastRunStatus: success + latestCompleted: '2022-06-30T15:00:00Z' + links: + labels: /api/v2/tasks/09956cbb6d378000/labels + logs: /api/v2/tasks/09956cbb6d378000/logs + members: /api/v2/tasks/09956cbb6d378000/members + owners: /api/v2/tasks/09956cbb6d378000/owners + runs: /api/v2/tasks/09956cbb6d378000/runs + self: /api/v2/tasks/09956cbb6d378000 + name: task1 + org: my-iot-center + orgID: 48c88459ee424a04 + ownerID: 0772396d1f411000 + status: active + updatedAt: '2022-06-28T18:10:15Z' + schema: + $ref: '#/components/schemas/Tasks' + description: | + Success. + The response body contains the list of tasks. 
+ '401': + $ref: '#/components/responses/AuthorizationError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + $ref: '#/components/responses/GeneralServerError' + summary: List tasks + tags: + - Data I/O endpoints + - Tasks + x-codeSamples: + - label: 'cURL: all tasks, basic output' + lang: Shell + source: | + curl "http://localhost:8086/api/v2/tasks?limit=-1&type=basic" \ + --header 'Content-Type: application/json' \ + --header 'Authorization: Token INFLUX_API_TOKEN' + post: + description: | + Creates a [task](/influxdb/latest/reference/glossary/#task) and returns the task. + + Use this endpoint to create a scheduled task that runs a script. + + #### Related guides + + - [Get started with tasks](/influxdb/latest/process-data/get-started/) + - [Create a task](/influxdb/latest/process-data/manage-tasks/create-task/) + - [Common tasks](/influxdb/latest/process-data/common-tasks/) + - [Task configuration options](/influxdb/latest/process-data/task-options/) + operationId: PostTasks + parameters: + - $ref: '#/components/parameters/TraceSpan' + requestBody: + content: + application/json: + examples: + TaskWithFlux: + $ref: '#/components/examples/TaskWithFluxRequest' + schema: + $ref: '#/components/schemas/TaskCreateRequest' + description: | + In the request body, provide the task. + + #### InfluxDB OSS + + - Requires either the `org` parameter or the `orgID` parameter. + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/Task' + description: Success. The response body contains the task.
+ '400': + content: + application/json: + examples: + missingFluxError: + summary: The request body doesn't contain a Flux query + value: + code: invalid + message: 'failed to decode request: missing flux' + orgProvidedNotFound: + summary: The organization specified by org or orgID doesn't own the token passed in the header + value: + code: invalid + message: 'failed to decode request body: organization not found' + schema: + $ref: '#/components/schemas/Error' + description: | + Bad request. + The response body contains detail about the error. + + #### InfluxDB OSS + + - Returns this error if an incorrect value is passed for `org` or `orgID`. + '401': + $ref: '#/components/responses/AuthorizationError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Create a task + tags: + - Data I/O endpoints + - Tasks + /api/v2/tasks/{taskID}: + delete: + description: | + Deletes the specified [task](/influxdb/latest/reference/glossary/#task) + and all associated records (task runs, logs, and labels). + Once the task is deleted, InfluxDB cancels all scheduled runs of the task. + + To disable a task instead of deleting it, use + [`PATCH /api/v2/tasks/TASK_ID`](#operation/PatchTasksID) to set the task status + to `inactive`. + operationId: DeleteTasksID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + A task ID. + Specifies the [task](/influxdb/latest/reference/glossary/#task) to delete. + in: path + name: taskID + required: true + schema: + type: string + responses: + '204': + description: Success. The task and runs are deleted. Scheduled runs are canceled.
+ '400': + $ref: '#/components/responses/BadRequestError' + '401': + $ref: '#/components/responses/AuthorizationError' + '404': + $ref: '#/components/responses/ResourceNotFoundError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + $ref: '#/components/responses/GeneralServerError' + summary: Delete a task + tags: + - Tasks + get: + description: | + Retrieves the specified [task](/influxdb/latest/reference/glossary/#task). + operationId: GetTasksID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + A task ID. + Specifies the [task](/influxdb/latest/reference/glossary/#task) to retrieve. + in: path + name: taskID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Task' + description: Success. The response body contains the task. + '400': + $ref: '#/components/responses/BadRequestError' + '401': + $ref: '#/components/responses/AuthorizationError' + '404': + $ref: '#/components/responses/ResourceNotFoundError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + $ref: '#/components/responses/GeneralServerError' + summary: Retrieve a task + tags: + - Data I/O endpoints + - Tasks + patch: + description: | + Updates the specified [task](/influxdb/latest/reference/glossary/#task), + and then cancels all scheduled runs of the task. + + Use this endpoint to set, modify, or clear task properties--for example: `cron`, `name`, `flux`, `status`. + Once InfluxDB applies the update, it cancels all previously scheduled runs of the task. + + #### Related guides + + - [Update a task](/influxdb/latest/process-data/manage-tasks/update-task/) + - [Task configuration options](/influxdb/latest/process-data/task-options/) + operationId: PatchTasksID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + A task ID. + Specifies the [task](/influxdb/latest/reference/glossary/#task) to update.
+ in: path + name: taskID + required: true + schema: + type: string + requestBody: + content: + application/json: + examples: + TaskWithFlux: + $ref: '#/components/examples/TaskWithFluxRequest' + schema: + $ref: '#/components/schemas/TaskUpdateRequest' + description: | + In the request body, provide the task properties to update. + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Task' + description: Success. The response body contains the updated task. + '400': + $ref: '#/components/responses/BadRequestError' + '401': + $ref: '#/components/responses/AuthorizationError' + '404': + $ref: '#/components/responses/ResourceNotFoundError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + $ref: '#/components/responses/GeneralServerError' + summary: Update a task + tags: + - Tasks + /api/v2/tasks/{taskID}/labels: + get: + description: | + Lists all labels for a task. + + Use this endpoint to list labels applied to a task. + Labels are a way to add metadata to InfluxDB resources. + You can use labels for grouping and filtering resources in the + InfluxDB UI, `influx` CLI, and InfluxDB API. + operationId: GetTasksIDLabels + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + A task ID. + Specifies the task to retrieve labels for. + in: path + name: taskID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelsResponse' + description: Success. The response body contains a list of all labels for the task. 
+ '400': + $ref: '#/components/responses/BadRequestError' + '401': + $ref: '#/components/responses/AuthorizationError' + '404': + $ref: '#/components/responses/ResourceNotFoundError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + $ref: '#/components/responses/GeneralServerError' + summary: List labels for a task + tags: + - Tasks + post: + description: | + Adds a label to a [task](/influxdb/latest/reference/glossary/#task). + + Use this endpoint to add a label to a task. + Labels are a way to add metadata to InfluxDB resources. + You can use labels for grouping and filtering resources in the + InfluxDB UI, `influx` CLI, and InfluxDB API. + operationId: PostTasksIDLabels + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + A task ID. + Specifies the [task](/influxdb/latest/reference/glossary/#task) to label. + in: path + name: taskID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/LabelMapping' + description: | + In the request body, provide an object that specifies the label. + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelResponse' + description: Success. The response body contains the label. + '400': + $ref: '#/components/responses/BadRequestError' + '401': + $ref: '#/components/responses/AuthorizationError' + '404': + $ref: '#/components/responses/ResourceNotFoundError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + $ref: '#/components/responses/GeneralServerError' + summary: Add a label to a task + tags: + - Tasks + /api/v2/tasks/{taskID}/labels/{labelID}: + delete: + description: | + Deletes a label from a [task](/influxdb/latest/reference/glossary/#task). + operationId: DeleteTasksIDLabelsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + A task ID. 
+ Specifies the [task](/influxdb/latest/reference/glossary/#task) to delete the label from. + in: path + name: taskID + required: true + schema: + type: string + - description: | + A label ID. + Specifies the label to delete. + in: path + name: labelID + required: true + schema: + type: string + responses: + '204': + description: Success. The label is deleted. + '400': + $ref: '#/components/responses/BadRequestError' + '401': + $ref: '#/components/responses/AuthorizationError' + '404': + $ref: '#/components/responses/ResourceNotFoundError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + $ref: '#/components/responses/GeneralServerError' + summary: Delete a label from a task + tags: + - Tasks + /api/v2/tasks/{taskID}/logs: + get: + description: | + Lists all log events for a [task](/influxdb/latest/reference/glossary/#task). + + When a task runs, InfluxDB creates a `run` record in the task’s history. + Logs associated with each run provide relevant log messages, timestamps, and the exit status of the `run` attempt. + + Use this endpoint to retrieve only the log events for a task, + without additional task metadata. + operationId: GetTasksIDLogs + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: A task ID. Specifies the [task](/influxdb/latest/reference/glossary/#task) to retrieve logs for. + in: path + name: taskID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + examples: + taskFailure: + summary: Events for a failed task run + value: + events: + - message: 'Started task from script: "option task = {name: \"test task\", every: 3d, offset: 0s}"' + runID: 09a946fc3167d000 + time: '2022-07-13T07:06:54.198167Z' + - message: Completed(failed) + runID: 09a946fc3167d000 + time: '2022-07-13T07:07:13.104037Z' + - message: 'error exhausting result iterator: error in query specification while starting program: this Flux script returns no streaming data. 
Consider adding a "yield" or invoking streaming functions directly, without performing an assignment' + runID: 09a946fc3167d000 + time: '2022-07-13T08:24:37.115323Z' + taskSuccess: + summary: Events for a successful task run + value: + events: + - message: 'Started task from script: "option task = {name: \"task1\", every: 30m} from(bucket: \"iot_center\") |> range(start: -90d) |> filter(fn: (r) => r._measurement == \"environment\") |> aggregateWindow(every: 1h, fn: mean)"' + runID: 09b070dadaa7d000 + time: '2022-07-18T14:46:07.101231Z' + - message: Completed(success) + runID: 09b070dadaa7d000 + time: '2022-07-18T14:46:07.242859Z' + schema: + $ref: '#/components/schemas/Logs' + description: | + Success. + The response body contains an `events` list with logs for the task. + Each log event `message` contains detail about the event. + If a task run fails, InfluxDB logs an event with the reason for the failure. + '400': + $ref: '#/components/responses/BadRequestError' + '401': + $ref: '#/components/responses/AuthorizationError' + '404': + $ref: '#/components/responses/ResourceNotFoundError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + $ref: '#/components/responses/GeneralServerError' + summary: List logs for a task + tags: + - Tasks + /api/v2/tasks/{taskID}/members: + get: + deprecated: true + description: | + **Deprecated**: Tasks don't use `owner` and `member` roles. + Use [`/api/v2/authorizations`](#tag/Authorizations-(API-tokens)) to assign user permissions. + + Lists all users that have the `member` role for the specified [task](/influxdb/latest/reference/glossary/#task). + operationId: GetTasksIDMembers + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + A task ID. + Specifies the [task](/influxdb/latest/reference/glossary/#task) to retrieve members for. 
+ in: path + name: taskID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/ResourceMembers' + description: | + Success. The response body contains a list of `users` that have + the `member` role for a task. + + If the task has no members, the response contains an empty `users` array. + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all task members + tags: + - Tasks + post: + deprecated: true + description: | + **Deprecated**: Tasks don't use `owner` and `member` roles. + Use [`/api/v2/authorizations`](#tag/Authorizations-(API-tokens)) to assign user permissions. + + Adds a specified user to members of the specified [task](/influxdb/latest/reference/glossary/#task) and then returns + the member. + operationId: PostTasksIDMembers + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + A [task](/influxdb/latest/reference/glossary/#task) ID. + Specifies the task for the member. + in: path + name: taskID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/AddResourceMemberRequestBody' + description: | + In the request body, provide an object that specifies the user. + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/ResourceMember' + description: | + Created. The task `member` role is assigned to the user. + The response body contains the resource member with + role and user detail. + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add a member to a task + tags: + - Tasks + /api/v2/tasks/{taskID}/members/{userID}: + delete: + deprecated: true + description: | + **Deprecated**: Tasks don't use `owner` and `member` roles. 
+ Use [`/api/v2/authorizations`](#tag/Authorizations-(API-tokens)) to assign user permissions. + + Removes a member from a [task](/influxdb/latest/reference/glossary/#task). + operationId: DeleteTasksIDMembersID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: A user ID. Specifies the member to remove. + in: path + name: userID + required: true + schema: + type: string + - description: A task ID. Specifies the [task](/influxdb/latest/reference/glossary/#task) to remove the member from. + in: path + name: taskID + required: true + schema: + type: string + responses: + '204': + description: Success. The member is removed. + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Remove a member from a task + tags: + - Tasks + /api/v2/tasks/{taskID}/owners: + get: + deprecated: true + description: | + **Deprecated**: Tasks don't use `owner` and `member` roles. + Use [`/api/v2/authorizations`](#tag/Authorizations-(API-tokens)) to assign user permissions. + + Lists all users that have the `owner` role for the specified task. + operationId: GetTasksIDOwners + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: A task ID. Specifies the task to retrieve owners for. + in: path + name: taskID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/ResourceOwners' + description: | + Success. + The response contains a list of `users` that have the `owner` role for the task. + + If the task has no owners, the response contains an empty `users` array. + '401': + $ref: '#/components/responses/AuthorizationError' + '422': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: | + Unprocessable entity. 
+ + The error may indicate one of the following problems: + + - The request body isn't valid--the request is well-formed, but InfluxDB can't process it due to semantic errors. + - You passed a parameter combination that InfluxDB doesn't support. + '500': + $ref: '#/components/responses/InternalServerError' + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all owners of a task + tags: + - Tasks + post: + deprecated: true + description: | + **Deprecated**: Tasks don't use `owner` and `member` roles. + Use [`/api/v2/authorizations`](#tag/Authorizations-(API-tokens)) to assign user permissions. + + Adds a specified user to owners of the specified task and then returns the + owner. + + Use this endpoint to create a _resource owner_ for the task. + A _resource owner_ is a user with `role: owner` for a specific resource. + operationId: PostTasksIDOwners + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + A task ID. + Specifies the [task](/influxdb/latest/reference/glossary/#task) for the owner. + in: path + name: taskID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/AddResourceMemberRequestBody' + description: | + In the request body, provide an object that specifies the user. + required: true + responses: + '201': + content: + application/json: + examples: + createdOwner: + summary: User has the owner role for the resource + value: + id: 0772396d1f411000 + links: + logs: /api/v2/users/0772396d1f411000/logs + self: /api/v2/users/0772396d1f411000 + name: USER_NAME + role: owner + status: active + schema: + $ref: '#/components/schemas/ResourceOwner' + description: | + Created. The task `owner` role is assigned to the user. + The response body contains the resource owner with + role and user detail. 
+ '401': + $ref: '#/components/responses/AuthorizationError' + '422': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: | + Unprocessable entity. + + The error may indicate one of the following problems: + + - The request body isn't valid--the request is well-formed, but InfluxDB can't process it due to semantic errors. + - You passed a parameter combination that InfluxDB doesn't support. + '500': + $ref: '#/components/responses/InternalServerError' + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add an owner for a task + tags: + - Tasks + /api/v2/tasks/{taskID}/owners/{userID}: + delete: + deprecated: true + description: | + **Deprecated**: Tasks don't use `owner` and `member` roles. + Use [`/api/v2/authorizations`](#tag/Authorizations-(API-tokens)) to assign user permissions. + + Removes an owner from a [task](/influxdb/latest/reference/glossary/#task). + operationId: DeleteTasksIDOwnersID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: A user ID. Specifies the owner to remove from the [task](/influxdb/latest/reference/glossary/#task). + in: path + name: userID + required: true + schema: + type: string + - description: | + A task ID. + Specifies the [task](/influxdb/latest/reference/glossary/#task) to remove the owner from. + in: path + name: taskID + required: true + schema: + type: string + responses: + '204': + description: Success. The owner is removed. + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Remove an owner from a task + tags: + - Tasks + /api/v2/tasks/{taskID}/runs: + get: + description: | + Lists runs for the specified [task](/influxdb/latest/process-data/). + + To limit which task runs are returned, pass query parameters in your request. 
+ If you don't pass query parameters, InfluxDB returns all runs for the task
+ up to the default `limit`.
+ operationId: GetTasksIDRuns
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: |
+ A task ID.
+ Specifies the [task](/influxdb/latest/reference/glossary/#task)
+ to list runs for.
+ in: path
+ name: taskID
+ required: true
+ schema:
+ type: string
+ - description: A task run ID. Only returns runs created after the specified run.
+ in: query
+ name: after
+ schema:
+ type: string
+ - description: |
+ Limits the number of task runs returned. Default is `100`.
+ in: query
+ name: limit
+ schema:
+ default: 100
+ maximum: 500
+ minimum: 1
+ type: integer
+ - description: |
+ A timestamp ([RFC3339 date/time format](/influxdb/latest/reference/glossary/#rfc3339-timestamp)).
+ Only returns runs scheduled after the specified time.
+ in: query
+ name: afterTime
+ schema:
+ format: date-time
+ type: string
+ - description: |
+ A timestamp ([RFC3339 date/time format](/influxdb/latest/reference/glossary/#rfc3339-timestamp)).
+ Only returns runs scheduled before the specified time.
+ in: query
+ name: beforeTime
+ schema:
+ format: date-time
+ type: string
+ responses:
+ '200':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Runs'
+ description: Success. The response body contains the list of task runs.
+ '401':
+ $ref: '#/components/responses/AuthorizationError'
+ '500':
+ $ref: '#/components/responses/InternalServerError'
+ default:
+ $ref: '#/components/responses/GeneralServerError'
+ summary: List runs for a task
+ tags:
+ - Tasks
+ post:
+ description: |
+ Schedules a task run to start immediately, ignoring scheduled runs.
+
+ Use this endpoint to manually start a task run.
+ Scheduled runs will continue to run as scheduled.
+ This may result in concurrently running tasks.
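+
+ For example, a minimal request might look like the following
+ (a sketch, not part of the spec; replace `TASK_ID` and `INFLUX_TOKEN`
+ with your own values, and adjust the host for your instance):
+
+ ```sh
+ curl --request POST "http://localhost:8086/api/v2/tasks/TASK_ID/runs" \
+   --header "Authorization: Token INFLUX_TOKEN" \
+   --header "Content-Type: application/json" \
+   --data '{}'
+ ```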
+
+ To _retry_ a previous run (and avoid creating a new run),
+ use the [`POST /api/v2/tasks/{taskID}/runs/{runID}/retry` endpoint](#operation/PostTasksIDRunsIDRetry).
+
+ #### Limitations
+
+ - Queuing a task run requires that the task's `status` property be set to `active`.
+ operationId: PostTasksIDRuns
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: |
+ A task ID.
+ Specifies the [task](/influxdb/latest/reference/glossary/#task)
+ to run.
+ in: path
+ name: taskID
+ required: true
+ schema:
+ type: string
+ requestBody:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/RunManually'
+ responses:
+ '201':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Run'
+ description: Success. The run is scheduled to start.
+ '400':
+ content:
+ application/json:
+ examples:
+ inactiveTask:
+ summary: Can't run an inactive task
+ value:
+ code: invalid
+ message: 'failed to force run: inactive task'
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Bad request.
+ '401':
+ $ref: '#/components/responses/AuthorizationError'
+ '500':
+ $ref: '#/components/responses/InternalServerError'
+ default:
+ $ref: '#/components/responses/GeneralServerError'
+ summary: Start a task run, overriding the schedule
+ tags:
+ - Data I/O endpoints
+ - Tasks
+ /api/v2/tasks/{taskID}/runs/{runID}:
+ delete:
+ description: |
+ Cancels a running [task](/influxdb/latest/reference/glossary/#task).
+
+ Use this endpoint to cancel a running task.
+
+ #### InfluxDB Cloud
+
+ - Doesn't support this operation.
+ operationId: DeleteTasksIDRunsID
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: |
+ A task ID.
+ Specifies the [task](/influxdb/latest/reference/glossary/#task)
+ to cancel.
+ in: path
+ name: taskID
+ required: true
+ schema:
+ type: string
+ - description: |
+ A task run ID.
+ Specifies the task run to cancel.
+ in: path + name: runID + required: true + schema: + type: string + responses: + '204': + description: | + Success. The `DELETE` is accepted and the run will be cancelled. + + #### InfluxDB Cloud + + - Doesn't support this operation. + - Doesn't return this status. + '400': + $ref: '#/components/responses/BadRequestError' + '401': + $ref: '#/components/responses/AuthorizationError' + '404': + $ref: '#/components/responses/ResourceNotFoundError' + '405': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: | + Method not allowed. + + #### InfluxDB Cloud + + - Always returns this error; doesn't support cancelling tasks. + + #### InfluxDB OSS + + - Doesn't return this error. + '500': + $ref: '#/components/responses/InternalServerError' + default: + $ref: '#/components/responses/GeneralServerError' + summary: Cancel a running task + tags: + - Tasks + get: + description: | + Retrieves the specified run for the specified [task](/influxdb/latest/reference/glossary/#task). + + Use this endpoint to retrieve detail and logs for a specific task run. + operationId: GetTasksIDRunsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + A task ID. + Specifies the [task](/influxdb/latest/reference/glossary/#task) + that the task run belongs to. + in: path + name: taskID + required: true + schema: + type: string + - description: A task run ID. Specifies the run to retrieve. + in: path + name: runID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + examples: + runSuccess: + summary: A successful task run. 
+ value:
+ finishedAt: '2022-07-18T14:46:07.308254Z'
+ id: 09b070dadaa7d000
+ links:
+ logs: /api/v2/tasks/0996e56b2f378000/runs/09b070dadaa7d000/logs
+ retry: /api/v2/tasks/0996e56b2f378000/runs/09b070dadaa7d000/retry
+ self: /api/v2/tasks/0996e56b2f378000/runs/09b070dadaa7d000
+ task: /api/v2/tasks/0996e56b2f378000
+ log:
+ - message: 'Started task from script: "option task = {name: \"task1\", every: 30m} from(bucket: \"iot_center\") |> range(start: -90d) |> filter(fn: (r) => r._measurement == \"environment\") |> aggregateWindow(every: 1h, fn: mean)"'
+ runID: 09b070dadaa7d000
+ time: '2022-07-18T14:46:07.101231Z'
+ - message: Completed(success)
+ runID: 09b070dadaa7d000
+ time: '2022-07-18T14:46:07.242859Z'
+ requestedAt: '2022-07-18T14:46:06Z'
+ scheduledFor: '2022-07-18T14:46:06Z'
+ startedAt: '2022-07-18T14:46:07.16222Z'
+ status: success
+ taskID: 0996e56b2f378000
+ schema:
+ $ref: '#/components/schemas/Run'
+ description: Success. The response body contains the task run.
+ '400':
+ $ref: '#/components/responses/BadRequestError'
+ '401':
+ $ref: '#/components/responses/AuthorizationError'
+ '404':
+ $ref: '#/components/responses/ResourceNotFoundError'
+ '500':
+ $ref: '#/components/responses/InternalServerError'
+ default:
+ $ref: '#/components/responses/GeneralServerError'
+ summary: Retrieve a run for a task
+ tags:
+ - Tasks
+ /api/v2/tasks/{taskID}/runs/{runID}/logs:
+ get:
+ description: |
+ Lists all logs for a task run.
+ A log is a list of run events with `runID`, `time`, and `message` properties.
+
+ Use this endpoint to help analyze [task](/influxdb/latest/reference/glossary/#task) performance and troubleshoot failed task runs.
+ operationId: GetTasksIDRunsIDLogs
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: A task ID. Specifies the [task](/influxdb/latest/reference/glossary/#task) that the run belongs to.
+ in: path
+ name: taskID
+ required: true
+ schema:
+ type: string
+ - description: A run ID.
Specifies the task run to list logs for.
+ in: path
+ name: runID
+ required: true
+ schema:
+ type: string
+ responses:
+ '200':
+ content:
+ application/json:
+ examples:
+ taskFailure:
+ summary: Events for a failed task run
+ value:
+ events:
+ - message: 'Started task from script: "option task = {name: \"test task\", every: 3d, offset: 0s}"'
+ runID: 09a946fc3167d000
+ time: '2022-07-13T07:06:54.198167Z'
+ - message: Completed(failed)
+ runID: 09a946fc3167d000
+ time: '2022-07-13T07:07:13.104037Z'
+ - message: 'error exhausting result iterator: error in query specification while starting program: this Flux script returns no streaming data. Consider adding a "yield" or invoking streaming functions directly, without performing an assignment'
+ runID: 09a946fc3167d000
+ time: '2022-07-13T08:24:37.115323Z'
+ taskSuccess:
+ summary: Events for a successful task run
+ value:
+ events:
+ - message: 'Started task from script: "option task = {name: \"task1\", every: 30m} from(bucket: \"iot_center\") |> range(start: -90d) |> filter(fn: (r) => r._measurement == \"environment\") |> aggregateWindow(every: 1h, fn: mean)"'
+ runID: 09b070dadaa7d000
+ time: '2022-07-18T14:46:07.101231Z'
+ - message: Completed(success)
+ runID: 09b070dadaa7d000
+ time: '2022-07-18T14:46:07.242859Z'
+ schema:
+ $ref: '#/components/schemas/Logs'
+ description: |
+ Success. The response body contains an `events` list with logs for the task run.
+ Each log event `message` contains detail about the event.
+ If a run fails, InfluxDB logs an event with the reason for the failure.
+ '400': + $ref: '#/components/responses/BadRequestError' + '401': + $ref: '#/components/responses/AuthorizationError' + '404': + $ref: '#/components/responses/ResourceNotFoundError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + $ref: '#/components/responses/GeneralServerError' + summary: List logs for a run + tags: + - Tasks + /api/v2/tasks/{taskID}/runs/{runID}/retry: + post: + description: | + Queues a [task](/influxdb/latest/reference/glossary/#task) run to + retry and then returns the scheduled run. + + To manually start a _new_ task run, use the + [`POST /api/v2/tasks/{taskID}/runs` endpoint](#operation/PostTasksIDRuns). + + #### Limitations + + - Queuing a task run requires that the task's `status` property be set to `active`. + operationId: PostTasksIDRunsIDRetry + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + A task ID. + Specifies the [task](/influxdb/latest/reference/glossary/#task) that the task run belongs to. + in: path + name: taskID + required: true + schema: + type: string + - description: | + A task run ID. + Specifies the task run to retry. + + To find a task run ID, use the + [`GET /api/v2/tasks/{taskID}/runs` endpoint](#operation/GetTasksIDRuns) + to list task runs. 
+ in: path + name: runID + required: true + schema: + type: string + requestBody: + content: + application/json; charset=utf-8: + schema: + type: object + responses: + '200': + content: + application/json: + examples: + retryTaskRun: + summary: A task run scheduled to retry + value: + id: 09d60ffe08738000 + links: + logs: /api/v2/tasks/09a776832f381000/runs/09d60ffe08738000/logs + retry: /api/v2/tasks/09a776832f381000/runs/09d60ffe08738000/retry + self: /api/v2/tasks/09a776832f381000/runs/09d60ffe08738000 + task: /api/v2/tasks/09a776832f381000 + requestedAt: '2022-08-16T20:05:11.84145Z' + scheduledFor: '2022-08-15T00:00:00Z' + status: scheduled + taskID: 09a776832f381000 + schema: + $ref: '#/components/schemas/Run' + description: Success. The response body contains the queued run. + '400': + content: + application/json: + examples: + inactiveTask: + summary: Can't retry an inactive task + value: + code: invalid + message: 'failed to retry run: inactive task' + schema: + $ref: '#/components/schemas/Error' + description: | + Bad request. + The response body contains detail about the error. + + InfluxDB may return this error for the following reasons: + + - The task has `status: inactive`. + '401': + $ref: '#/components/responses/AuthorizationError' + '404': + $ref: '#/components/responses/ResourceNotFoundError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + $ref: '#/components/responses/GeneralServerError' + summary: Retry a task run + tags: + - Tasks + /api/v2/telegraf/plugins: + get: + operationId: GetTelegrafPlugins + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The type of plugin desired. + in: query + name: type + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/TelegrafPlugins' + description: A list of Telegraf plugins. 
+ default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all Telegraf plugins + tags: + - Telegraf Plugins + /api/v2/telegrafs: + get: + operationId: GetTelegrafs + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The organization ID the Telegraf config belongs to. + in: query + name: orgID + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Telegrafs' + description: A list of Telegraf configurations + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all Telegraf configurations + tags: + - Telegrafs + post: + operationId: PostTelegrafs + parameters: + - $ref: '#/components/parameters/TraceSpan' + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/TelegrafPluginRequest' + description: Telegraf configuration to create + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/Telegraf' + description: Telegraf configuration created + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Create a Telegraf configuration + tags: + - Telegrafs + /api/v2/telegrafs/{telegrafID}: + delete: + operationId: DeleteTelegrafsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The Telegraf configuration ID. 
+ in: path + name: telegrafID + required: true + schema: + type: string + responses: + '204': + description: Delete has been accepted + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Delete a Telegraf configuration + tags: + - Telegrafs + get: + operationId: GetTelegrafsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The Telegraf configuration ID. + in: path + name: telegrafID + required: true + schema: + type: string + - in: header + name: Accept + required: false + schema: + default: application/toml + enum: + - application/toml + - application/json + - application/octet-stream + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Telegraf' + application/octet-stream: + example: |- + [agent] + interval = "10s" + schema: + type: string + application/toml: + example: |- + [agent] + interval = "10s" + schema: + type: string + description: Telegraf configuration details + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Retrieve a Telegraf configuration + tags: + - Telegrafs + put: + operationId: PutTelegrafsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The Telegraf config ID. 
+ in: path
+ name: telegrafID
+ required: true
+ schema:
+ type: string
+ requestBody:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/TelegrafPluginRequest'
+ description: Telegraf configuration update to apply
+ required: true
+ responses:
+ '200':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Telegraf'
+ description: The updated Telegraf configuration
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Unexpected error
+ summary: Update a Telegraf configuration
+ tags:
+ - Telegrafs
+ /api/v2/telegrafs/{telegrafID}/labels:
+ get:
+ operationId: GetTelegrafsIDLabels
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: The Telegraf config ID.
+ in: path
+ name: telegrafID
+ required: true
+ schema:
+ type: string
+ responses:
+ '200':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/LabelsResponse'
+ description: A list of all labels for a Telegraf config
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Unexpected error
+ summary: List all labels for a Telegraf config
+ tags:
+ - Telegrafs
+ post:
+ operationId: PostTelegrafsIDLabels
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: The Telegraf config ID.
+ in: path + name: telegrafID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/LabelMapping' + description: Label to add + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelResponse' + description: The label added to the Telegraf config + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add a label to a Telegraf config + tags: + - Telegrafs + /api/v2/telegrafs/{telegrafID}/labels/{labelID}: + delete: + operationId: DeleteTelegrafsIDLabelsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The Telegraf config ID. + in: path + name: telegrafID + required: true + schema: + type: string + - description: The label ID. + in: path + name: labelID + required: true + schema: + type: string + responses: + '204': + description: Delete has been accepted + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Telegraf config not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Delete a label from a Telegraf config + tags: + - Telegrafs + /api/v2/telegrafs/{telegrafID}/members: + get: + operationId: GetTelegrafsIDMembers + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The Telegraf config ID. 
+ in: path + name: telegrafID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/ResourceMembers' + description: A list of Telegraf config members + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all users with member privileges for a Telegraf config + tags: + - Telegrafs + post: + operationId: PostTelegrafsIDMembers + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The Telegraf config ID. + in: path + name: telegrafID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/AddResourceMemberRequestBody' + description: User to add as member + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/ResourceMember' + description: Member added to Telegraf config + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add a member to a Telegraf config + tags: + - Telegrafs + /api/v2/telegrafs/{telegrafID}/members/{userID}: + delete: + operationId: DeleteTelegrafsIDMembersID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the member to remove. + in: path + name: userID + required: true + schema: + type: string + - description: The Telegraf config ID. 
+ in: path + name: telegrafID + required: true + schema: + type: string + responses: + '204': + description: Member removed + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Remove a member from a Telegraf config + tags: + - Telegrafs + /api/v2/telegrafs/{telegrafID}/owners: + get: + operationId: GetTelegrafsIDOwners + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The Telegraf configuration ID. + in: path + name: telegrafID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/ResourceOwners' + description: Returns Telegraf configuration owners as a ResourceOwners list + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all owners of a Telegraf configuration + tags: + - Telegrafs + post: + operationId: PostTelegrafsIDOwners + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The Telegraf configuration ID. + in: path + name: telegrafID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/AddResourceMemberRequestBody' + description: User to add as owner + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/ResourceOwner' + description: Telegraf configuration owner was added. Returns a ResourceOwner that references the User. + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add an owner to a Telegraf configuration + tags: + - Telegrafs + /api/v2/telegrafs/{telegrafID}/owners/{userID}: + delete: + operationId: DeleteTelegrafsIDOwnersID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the owner to remove. 
+ in: path + name: userID + required: true + schema: + type: string + - description: The Telegraf config ID. + in: path + name: telegrafID + required: true + schema: + type: string + responses: + '204': + description: Owner removed + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Remove an owner from a Telegraf config + tags: + - Telegrafs + /api/v2/templates/apply: + post: + description: | + Applies a template to + create or update a [stack](/influxdb/latest/influxdb-templates/stacks/) of InfluxDB + [resources](/influxdb/latest/reference/cli/influx/export/all/#resources). + The response contains the diff of changes and the stack ID. + + Use this endpoint to install an InfluxDB template to an organization. + Provide template URLs or template objects in your request. + To customize which template resources are installed, use the `actions` + parameter. + + By default, when you apply a template, InfluxDB installs the template to + create and update stack resources and then generates a diff of the changes. + If you pass `dryRun: true` in the request body, InfluxDB validates the + template and generates the resource diff, but doesn’t make any + changes to your instance. + + #### Custom values for templates + + - Some templates may contain [environment references](/influxdb/latest/influxdb-templates/create/#include-user-definable-resource-names) for custom metadata. + To provide custom values for environment references, pass the _`envRefs`_ + property in the request body. + For more information and examples, see how to + [define environment references](/influxdb/latest/influxdb-templates/use/#define-environment-references). + + - Some templates may contain queries that use + [secrets](/influxdb/latest/security/secrets/). + To provide custom secret values, pass the _`secrets`_ property + in the request body. + Don't expose secret values in templates. 
+ For more information, see [how to pass secrets when installing a template](/influxdb/latest/influxdb-templates/use/#pass-secrets-when-installing-a-template). + + #### Required permissions + + - `write` permissions for resource types in the template. + + #### Rate limits (with InfluxDB Cloud) + + - Adjustable service quotas apply. + For more information, see [limits and adjustable quotas](/influxdb/cloud/account-management/limits/). + + #### Related guides + + - [Use templates](/influxdb/latest/influxdb-templates/use/) + - [Stacks](/influxdb/latest/influxdb-templates/stacks/) + operationId: ApplyTemplate + requestBody: + content: + application/json: + examples: + skipKindAction: + summary: Skip all bucket and task resources in the provided templates + value: + actions: + - action: skipKind + properties: + kind: Bucket + - action: skipKind + properties: + kind: Task + orgID: INFLUX_ORG_ID + templates: + - contents: + - '[object Object]': null + skipResourceAction: + summary: Skip specific resources in the provided templates + value: + actions: + - action: skipResource + properties: + kind: Label + resourceTemplateName: foo-001 + - action: skipResource + properties: + kind: Bucket + resourceTemplateName: bar-020 + - action: skipResource + properties: + kind: Bucket + resourceTemplateName: baz-500 + orgID: INFLUX_ORG_ID + templates: + - contents: + - apiVersion: influxdata.com/v2alpha1 + kind: Bucket + metadata: + name: baz-500 + templateObjectEnvRefs: + summary: envRefs for template objects + value: + envRefs: + docker-bucket: MY_DOCKER_BUCKET + docker-spec-1: MY_DOCKER_SPEC + linux-cpu-label: MY_CPU_LABEL + orgID: INFLUX_ORG_ID + templates: + - contents: + - apiVersion: influxdata.com/v2alpha1 + kind: Label + metadata: + name: + envRef: + key: linux-cpu-label + spec: + color: '#326BBA' + name: inputs.cpu + - contents: + - apiVersion: influxdata.com/v2alpha1 + kind: Bucket + metadata: + name: + envRef: + key: docker-bucket + schema: + $ref: 
'#/components/schemas/TemplateApply' + application/x-jsonnet: + schema: + $ref: '#/components/schemas/TemplateApply' + text/yml: + schema: + $ref: '#/components/schemas/TemplateApply' + description: | + Parameters for applying templates. + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/TemplateSummary' + description: | + Success. + The template dry run succeeded. + The response body contains a resource diff of changes that the + template would have made if installed. + No resources were created or updated. + The diff and summary won't contain IDs for resources + that didn't exist at the time of the dry run. + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/TemplateSummary' + description: | + Success. + The template applied successfully. + The response body contains the stack ID, a diff, and a summary. + The diff compares the initial state to the state after the template installation. + The summary contains newly created resources. + '422': + content: + application/json: + schema: + allOf: + - $ref: '#/components/schemas/TemplateSummary' + - properties: + code: + type: string + message: + type: string + required: + - message + - code + type: object + description: | + Unprocessable entity. + + + The error may indicate one of the following problems: + + - The template failed validation. + - You passed a parameter combination that InfluxDB doesn't support. + '500': + content: + application/json: + examples: + createExceedsQuota: + summary: 'InfluxDB Cloud: Creating resource would exceed quota.' + value: + code: internal error + message: "resource_type=\"tasks\" err=\"failed to apply resource\"\n\tmetadata_name=\"alerting-gates-b84003\" err_msg=\"failed to create tasks[\\\"alerting-gates-b84003\\\"]: creating task would exceed quota\"" + schema: + $ref: '#/components/schemas/Error' + description: | + Internal server error. 
+ + #### InfluxDB Cloud + + - Returns this error if creating one of the template + resources (bucket, dashboard, task, user) exceeds your plan’s + adjustable service quotas. + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Apply or dry-run a template + tags: + - Templates + x-codeSamples: + - label: 'cURL: Dry run with a remote template' + lang: Shell + source: | + curl --request POST "http://localhost:8086/api/v2/templates/apply" \ + --header "Authorization: Token INFLUX_API_TOKEN" \ + --data @- << EOF + { + "dryRun": true, + "orgID": "INFLUX_ORG_ID", + "remotes": [ + { + "url": "https://raw.githubusercontent.com/influxdata/community-templates/master/linux_system/linux_system.yml" + } + ] + } + EOF + - label: 'cURL: Apply with secret values' + lang: Shell + source: | + curl "http://localhost:8086/api/v2/templates/apply" \ + --header "Authorization: Token INFLUX_API_TOKEN" \ + --data @- << EOF | jq . + { + "orgID": "INFLUX_ORG_ID", + "secrets": { + "SLACK_WEBHOOK": "YOUR_SECRET_WEBHOOK_URL" + }, + "remotes": [ + { + "url": "https://raw.githubusercontent.com/influxdata/community-templates/master/fortnite/fn-template.yml" + } + ] + } + EOF + - label: 'cURL: Apply template objects with environment references' + lang: Shell + source: | + curl --request POST "http://localhost:8086/api/v2/templates/apply" \ + --header "Authorization: Token INFLUX_API_TOKEN" \ + --data @- << EOF + { "orgID": "INFLUX_ORG_ID", + "envRefs": { + "linux-cpu-label": "MY_CPU_LABEL", + "docker-bucket": "MY_DOCKER_BUCKET", + "docker-spec-1": "MY_DOCKER_SPEC" + }, + "templates": [ + { "contents": [{ + "apiVersion": "influxdata.com/v2alpha1", + "kind": "Label", + "metadata": { + "name": { + "envRef": { + "key": "linux-cpu-label" + } + } + }, + "spec": { + "color": "#326BBA", + "name": "inputs.cpu" + } + }] + },
+ { "contents": [{ + "apiVersion": "influxdata.com/v2alpha1", + "kind": "Bucket", + "metadata": { + "name": { + "envRef": { + "key": "docker-bucket" + } + } + } + }] + } + ] + } + EOF + /api/v2/templates/export: + post: + operationId: ExportTemplate + requestBody: + content: + application/json: + schema: + oneOf: + - $ref: '#/components/schemas/TemplateExportByID' + - $ref: '#/components/schemas/TemplateExportByName' + description: Export resources as an InfluxDB template. + required: false + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Template' + application/x-yaml: + schema: + $ref: '#/components/schemas/Template' + description: The template was created successfully. Returns the newly created template. + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Export a new template + tags: + - Templates + /api/v2/users: + get: + description: | + Lists [users](/influxdb/latest/reference/glossary/#user). + Default limit is `20`. + + To limit which users are returned, pass query parameters in your request. + + #### Required permissions for InfluxDB OSS + + | Action | Permission required | Restriction | + |:-------|:--------------------|:------------| + | List all users | _[Operator token](/influxdb/latest/security/tokens/#operator-token)_ | | + | List a specific user | `read-users` or `read-user USER_ID` | | + + *`USER_ID`* is the ID of the user that you want to retrieve. + + #### Related guides + + - [View users](/influxdb/latest/users/view-users/). + operationId: GetUsers + parameters: + - $ref: '#/components/parameters/TraceSpan' + - $ref: '#/components/parameters/Offset' + - $ref: '#/components/parameters/Limit' + - $ref: '#/components/parameters/After' + - description: | + A user name.
+ Only lists the specified [user](/influxdb/latest/reference/glossary/#user). + in: query + name: name + schema: + type: string + - description: | + A user ID. + Only lists the specified [user](/influxdb/latest/reference/glossary/#user). + in: query + name: id + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Users' + description: Success. The response contains a list of `users`. + '401': + content: + application/json: + examples: + tokenNotAuthorized: + summary: API token doesn't have `write:users` permission + value: + code: unauthorized + message: write:users/09d8462ce0764000 is unauthorized + schema: + $ref: '#/components/schemas/Error' + description: | + Unauthorized. + '422': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: | + Unprocessable entity. + + The error may indicate one of the following problems: + + - The request body isn't valid--the request is well-formed, + but InfluxDB can't process it due to semantic errors. + - You passed a parameter combination that InfluxDB doesn't support. + '500': + $ref: '#/components/responses/InternalServerError' + default: + $ref: '#/components/responses/GeneralServerError' + description: Unexpected error + summary: List users + tags: + - Security and access endpoints + - Users + post: + description: | + Creates a [user](/influxdb/latest/reference/glossary/#user) that can access InfluxDB. + Returns the user. + + Use this endpoint to create a user that can sign in to start a user session + through one of the following interfaces: + + - InfluxDB UI + - `/api/v2/signin` InfluxDB API endpoint + - InfluxDB CLI + + This endpoint represents the first two steps in a four-step process to allow a user + to authenticate with a username and password, and then access data in an organization: + + 1. Create a user: send a `POST` request to `POST /api/v2/users`. The `name` property is required. + 2. 
Extract the user ID (`id` property) value from the API response for _step 1_. + 3. Create an authorization (and API token) for the user: send a `POST` request to [`POST /api/v2/authorizations`](#operation/PostAuthorizations), passing the user ID (`id`) from _step 2_. + 4. Create a password for the user: send a `POST` request to [`POST /api/v2/users/USER_ID/password`](#operation/PostUsersIDPassword), passing the user ID from _step 2_. + + #### Required permissions + + | Action | Permission required | Restriction | + |:-------|:--------------------|:------------| + | Create a user | _[Operator token](/influxdb/latest/security/tokens/#operator-token)_ | | + + #### Related guides + + - [Create a user](/influxdb/latest/users/create-user/) + - [Create an API token scoped to a user](/influxdb/latest/security/tokens/create-token/#create-a-token-scoped-to-a-user) + operationId: PostUsers + parameters: + - $ref: '#/components/parameters/TraceSpan' + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/User' + description: The user to create. + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/UserResponse' + description: | + Success. + The response body contains the user. + '401': + content: + application/json: + examples: + tokenNotAuthorized: + summary: API token doesn't have `write:users` permission + value: + code: unauthorized + message: write:users/09d8462ce0764000 is unauthorized + schema: + $ref: '#/components/schemas/Error' + description: | + Unauthorized. + '422': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: | + Unprocessable entity. + + The error may indicate one of the following problems: + + - The request body isn't valid--the request is well-formed, but InfluxDB can't process it due to semantic errors. + - You passed a parameter combination that InfluxDB doesn't support. 
+ '500': + $ref: '#/components/responses/InternalServerError' + default: + $ref: '#/components/responses/GeneralServerError' + description: Unexpected error + summary: Create a user + tags: + - Users + x-codeSamples: + - label: 'cURL: create a user and set a password' + lang: Shell + source: | + # The following steps show how to create a user and then set + # the user's password: + # + # 1. Send a request to this endpoint to create a user--for example: + + USER=$(curl --request POST \ + "INFLUX_URL/api/v2/users/" \ + --header "Authorization: Token INFLUX_API_TOKEN" \ + --header 'Content-type: application/json' \ + --data-binary @- << EOF + { + "name": "USER_NAME", + "status": "active" + } + EOF + ) + + # 2. Extract the id property from the response in step 1--for example: + + USER_ID=`echo $USER | jq -r '.id'` + + # 3. To set the user's password, set the password property in a request + # to the /api/v2/users/USER_ID/password endpoint--for example: + + curl --request POST "INFLUX_URL/api/v2/users/$USER_ID/password/" \ + --header "Authorization: Token INFLUX_API_TOKEN" \ + --header 'Content-type: application/json' \ + --data '{ "password": "USER_PASSWORD" }' + /api/v2/users/{userID}: + delete: + description: | + Deletes a [user](/influxdb/latest/reference/glossary/#user). + + #### Required permissions + + | Action | Permission required | + |:------------|:-----------------------------------------------| + | Delete a user | `write-users` or `write-user USER_ID` | + + *`USER_ID`* is the ID of the user that you want to delete. + + #### Related guides + + - [Manage users](/influxdb/latest/organizations/users/) + operationId: DeleteUsersID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + A user ID. + Specifies the [user](/influxdb/latest/reference/glossary/#user) to delete. + in: path + name: userID + required: true + schema: + type: string + responses: + '204': + description: Success. The user is deleted.
+ '400': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: | + Bad request. + The response body contains detail about the error. + '401': + $ref: '#/components/responses/AuthorizationError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + $ref: '#/components/responses/GeneralServerError' + description: Unexpected error + summary: Delete a user + tags: + - Users + get: + description: | + Retrieves a [user](/influxdb/latest/reference/glossary/#user). + + #### Related guides + + - [Manage users](/influxdb/latest/organizations/users/) + operationId: GetUsersID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + A user ID. + Retrieves the specified [user](/influxdb/latest/reference/glossary/#user). + in: path + name: userID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/UserResponse' + description: Success. The response body contains the user. + default: + $ref: '#/components/responses/GeneralServerError' + description: Unexpected error + summary: Retrieve a user + tags: + - Security and access endpoints + - Users + patch: + description: | + Updates a [user](/influxdb/latest/reference/glossary/#user) and returns the user. + + #### Required permissions + + | Action | Permission required | + |:------------|:-----------------------------------------------| + | Update a user | `write-users` or `write-user USER_ID` | + + *`USER_ID`* is the ID of the user that you want to update. + + #### Related guides + + - [Manage users](/influxdb/latest/organizations/users/) + operationId: PatchUsersID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + A user ID. + Specifies the [user](/influxdb/latest/reference/glossary/#user) to update. 
+ in: path + name: userID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/User' + description: In the request body, provide the user properties to update. + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/UserResponse' + description: | + Success. + The response body contains the user. + '400': + $ref: '#/components/responses/BadRequestError' + '401': + $ref: '#/components/responses/AuthorizationError' + '404': + $ref: '#/components/responses/ResourceNotFoundError' + '500': + $ref: '#/components/responses/InternalServerError' + default: + $ref: '#/components/responses/GeneralServerError' + summary: Update a user + tags: + - Users + /api/v2/users/{userID}/password: + post: + description: | + Updates a user password. + + #### InfluxDB Cloud + + - Doesn't allow you to manage user passwords through the API. + Use the InfluxDB Cloud user interface (UI) to update a password. + + #### Related guides + + - [InfluxDB Cloud - Change your password](/influxdb/cloud/account-management/change-password/) + - [InfluxDB OSS - Change your password](/influxdb/latest/users/change-password/) + operationId: PostUsersIDPassword + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the user to set the password for. + in: path + name: userID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/PasswordResetBody' + description: The new password to set for the user. + required: true + responses: + '204': + description: Success. The password is updated. + '400': + content: + application/json: + examples: + updatePasswordNotAllowed: + summary: Cloud API can't update passwords + value: + code: invalid + message: passwords cannot be changed through the InfluxDB Cloud API + schema: + $ref: '#/components/schemas/Error' + description: | + Bad request. 
+ + #### InfluxDB Cloud + + - Doesn't allow you to manage passwords through the API; always responds with this status. + + #### InfluxDB OSS + + - Doesn't understand a value passed in the request. + default: + $ref: '#/components/responses/GeneralServerError' + description: Unexpected error + summary: Update a password + tags: + - Security and access endpoints + - Users + x-codeSamples: + - label: 'cURL: use HTTP POST to update the user password' + lang: Shell + source: | + curl --request POST \ + "http://localhost:8086/api/v2/users/USER_ID/password" \ + --header 'Content-type: application/json' \ + --header "Authorization: Token INFLUX_TOKEN" \ + --data-binary @- << EOF + {"password": "NEW_USER_PASSWORD"} + EOF + put: + description: | + Updates a user password. + + Use this endpoint to let a user authenticate with + [Basic authentication credentials](#section/Authentication/BasicAuthentication) + and set a new password. + + #### InfluxDB Cloud + + - Doesn't allow you to manage user passwords through the API. + Use the InfluxDB Cloud user interface (UI) to update a password. + + #### Related guides + + - [InfluxDB Cloud - Change your password](/influxdb/cloud/account-management/change-password/) + - [InfluxDB OSS - Change your password](/influxdb/latest/users/change-password/) + operationId: PutUsersIDPassword + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the user to set the password for. + in: path + name: userID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/PasswordResetBody' + description: The new password to set for the user. + required: true + responses: + '204': + description: Success. The password is updated. 
+ '400': + content: + application/json: + examples: + updatePasswordNotAllowed: + summary: Cloud API can't update passwords + value: + code: invalid + message: passwords cannot be changed through the InfluxDB Cloud API + schema: + $ref: '#/components/schemas/Error' + description: | + Bad request. + + #### InfluxDB Cloud + + - Doesn't allow you to manage passwords through the API; always responds with this status. + + #### InfluxDB OSS + + - Doesn't understand a value passed in the request. + default: + $ref: '#/components/responses/GeneralServerError' + description: Unexpected error + security: + - BasicAuthentication: [] + summary: Update a password + tags: + - Security and access endpoints + - Users + x-codeSamples: + - label: 'cURL: use Basic auth to update the user password' + lang: Shell + source: | + curl -c ./cookie-file.tmp --request POST \ + "http://localhost:8086/api/v2/signin" \ + --user "${INFLUX_USER_NAME}:${INFLUX_USER_PASSWORD}" + + curl -b ./cookie-file.tmp --request PUT \ + "http://localhost:8086/api/v2/users/USER_ID/password" \ + --header 'Content-type: application/json' \ + --data-binary @- << EOF + {"password": "NEW_USER_PASSWORD"} + EOF + /api/v2/variables: + get: + operationId: GetVariables + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The name of the organization. + in: query + name: org + schema: + type: string + - description: The organization ID. 
+ in: query + name: orgID + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Variables' + description: A list of variables for an organization + '400': + $ref: '#/components/responses/GeneralServerError' + description: Invalid request + default: + $ref: '#/components/responses/GeneralServerError' + description: Internal server error + summary: List all variables + tags: + - Variables + post: + operationId: PostVariables + parameters: + - $ref: '#/components/parameters/TraceSpan' + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/Variable' + description: Variable to create + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/Variable' + description: Variable created + default: + $ref: '#/components/responses/GeneralServerError' + description: Internal server error + summary: Create a variable + tags: + - Variables + /api/v2/variables/{variableID}: + delete: + operationId: DeleteVariablesID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The variable ID. + in: path + name: variableID + required: true + schema: + type: string + responses: + '204': + description: Variable deleted + default: + $ref: '#/components/responses/GeneralServerError' + description: Internal server error + summary: Delete a variable + tags: + - Variables + get: + operationId: GetVariablesID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The variable ID. 
+ in: path + name: variableID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Variable' + description: Variable found + '404': + $ref: '#/components/responses/GeneralServerError' + description: Variable not found + default: + $ref: '#/components/responses/GeneralServerError' + description: Internal server error + summary: Retrieve a variable + tags: + - Variables + patch: + operationId: PatchVariablesID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The variable ID. + in: path + name: variableID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/Variable' + description: Variable update to apply + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Variable' + description: Variable updated + default: + $ref: '#/components/responses/GeneralServerError' + description: Internal server error + summary: Update a variable + tags: + - Variables + put: + operationId: PutVariablesID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The variable ID. + in: path + name: variableID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/Variable' + description: Variable to replace + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Variable' + description: Variable updated + default: + $ref: '#/components/responses/GeneralServerError' + description: Internal server error + summary: Replace a variable + tags: + - Variables + /api/v2/variables/{variableID}/labels: + get: + operationId: GetVariablesIDLabels + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The variable ID. 
+ in: path + name: variableID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelsResponse' + description: A list of all labels for a variable + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all labels for a variable + tags: + - Variables + post: + operationId: PostVariablesIDLabels + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The variable ID. + in: path + name: variableID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/LabelMapping' + description: Label to add + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelResponse' + description: The newly added label + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add a label to a variable + tags: + - Variables + /api/v2/variables/{variableID}/labels/{labelID}: + delete: + operationId: DeleteVariablesIDLabelsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The variable ID. + in: path + name: variableID + required: true + schema: + type: string + - description: The label ID to delete. + in: path + name: labelID + required: true + schema: + type: string + responses: + '204': + description: Delete has been accepted + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Variable not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Delete a label from a variable + tags: + - Variables + /api/v2/write: + post: + description: | + Writes data to a bucket. 
+ + Use this endpoint to send data in [line protocol](/influxdb/latest/reference/syntax/line-protocol/) format to InfluxDB. + + #### InfluxDB Cloud + + - Does the following when you send a write request: + + 1. Validates the request and queues the write. + 2. If queued, responds with _success_ (HTTP `2xx` status code); _error_ otherwise. + 3. Handles the write asynchronously and reaches eventual consistency. + + To ensure that InfluxDB Cloud handles writes and deletes in the order you request them, + wait for a success response (HTTP `2xx` status code) before you send the next request. + + Because writes and deletes are asynchronous, your change might not yet be readable + when you receive the response. + + #### InfluxDB OSS + + - Validates the request and handles the write synchronously. + - If all points were written successfully, responds with HTTP `2xx` status code; + otherwise, returns the first line that failed. + + #### Required permissions + + - `write-buckets` or `write-bucket BUCKET_ID`. + + *`BUCKET_ID`* is the ID of the destination bucket. + + #### Rate limits (with InfluxDB Cloud) + + `write` rate limits apply. + For more information, see [limits and adjustable quotas](/influxdb/cloud/account-management/limits/). + + #### Related guides + + - [Write data with the InfluxDB API](/influxdb/latest/write-data/developer-tools/api) + - [Optimize writes to InfluxDB](/influxdb/latest/write-data/best-practices/optimize-writes/) + - [Troubleshoot issues writing data](/influxdb/latest/write-data/troubleshoot/) + operationId: PostWrite + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + The compression applied to the line protocol in the request payload. + To send a GZIP payload, pass the `Content-Encoding: gzip` header. + in: header + name: Content-Encoding + schema: + default: identity + description: | + Content coding. + Use `gzip` for compressed data or `identity` for unmodified, uncompressed data.
+ enum: + - gzip + - identity + type: string + - description: | + The format of the data in the request body. + To send a line protocol payload, pass `Content-Type: text/plain; charset=utf-8`. + in: header + name: Content-Type + schema: + default: text/plain; charset=utf-8 + description: | + `text/plain` is the content type for line protocol. `UTF-8` is the default character set. + enum: + - text/plain + - text/plain; charset=utf-8 + type: string + - description: | + The size of the entity-body, in bytes, sent to InfluxDB. + If the length is greater than the `max body` configuration option, + the server responds with status code `413`. + in: header + name: Content-Length + schema: + description: The length in decimal number of octets. + type: integer + - description: | + The content type that the client can understand. + Writes only return a response body if they fail--for example, + due to a formatting problem or quota limit. + + #### InfluxDB Cloud + + - Returns only `application/json` for format and limit errors. + - Returns only `text/html` for some quota limit errors. + + #### InfluxDB OSS + + - Returns only `application/json` for format and limit errors. + + #### Related guides + + - [Troubleshoot issues writing data](/influxdb/latest/write-data/troubleshoot/) + in: header + name: Accept + schema: + default: application/json + description: Error content type. + enum: + - application/json + type: string + - description: | + An organization name or ID. + + #### InfluxDB Cloud + + - Doesn't use the `org` parameter or `orgID` parameter. + - Writes data to the bucket in the organization + associated with the authorization (API token). + + #### InfluxDB OSS + + - Requires either the `org` parameter or the `orgID` parameter. + - If you pass both `orgID` and `org`, they must both be valid. + - Writes data to the bucket in the specified organization. + in: query + name: org + required: true + schema: + description: The organization name or ID. 
+ type: string + - description: | + An organization ID. + + #### InfluxDB Cloud + + - Doesn't use the `org` parameter or `orgID` parameter. + - Writes data to the bucket in the organization + associated with the authorization (API token). + + #### InfluxDB OSS + + - Requires either the `org` parameter or the `orgID` parameter. + - If you pass both `orgID` and `org`, they must both be valid. + - Writes data to the bucket in the specified organization. + in: query + name: orgID + schema: + type: string + - description: | + A bucket name or ID. + InfluxDB writes all points in the batch to the specified bucket. + in: query + name: bucket + required: true + schema: + description: The bucket name or ID. + type: string + - description: The precision for unix timestamps in the line protocol batch. + in: query + name: precision + schema: + $ref: '#/components/schemas/WritePrecision' + requestBody: + content: + text/plain: + examples: + plain-utf8: + value: | + airSensors,sensor_id=TLM0201 temperature=73.97038159354763,humidity=35.23103248356096,co=0.48445310567793615 1630424257000000000 + airSensors,sensor_id=TLM0202 temperature=75.30007505999716,humidity=35.651929918691714,co=0.5141876544505826 1630424257000000000 + schema: + format: byte + type: string + description: | + In the request body, provide data in [line protocol format](/influxdb/latest/reference/syntax/line-protocol/). + + To send compressed data, do the following: + + 1. Use [GZIP](https://www.gzip.org/) to compress the line protocol data. + 2. In your request, send the compressed data and the + `Content-Encoding: gzip` header. + + #### Related guides + + - [Best practices for optimizing writes](/influxdb/latest/write-data/best-practices/optimize-writes/) + required: true + responses: + '204': + description: | + Success. + + #### InfluxDB Cloud + + - Validated and queued the request. + - Handles the write asynchronously - the write might not have completed yet. 
+ + #### InfluxDB OSS + + - Successfully wrote all points in the batch. + + #### Related guides + + - [How to check for write errors](/influxdb/latest/write-data/troubleshoot/) + '400': + content: + application/json: + examples: + measurementSchemaFieldTypeConflict: + summary: (Cloud) field type conflict thrown by an explicit bucket schema + value: + code: invalid + message: 'partial write error (2 written): unable to parse ''air_sensor,service=S1,sensor=L1 temperature="90.5",humidity=70.0 1632850122'': schema: field type for field "temperature" not permitted by schema; got String but expected Float' + orgNotFound: + summary: (OSS) organization not found + value: + code: invalid + message: 'failed to decode request body: organization not found' + schema: + $ref: '#/components/schemas/LineProtocolError' + description: | + Bad request. The response body contains detail about the error. + + InfluxDB returns this error if the line protocol data in the request is malformed. + The response body contains the first malformed line in the data, and indicates what was expected. + For partial writes, the number of points written and the number of points rejected are also included. + For more information, check the `rejected_points` measurement in your `_monitoring` bucket. + + #### InfluxDB Cloud + + - Returns this error for bucket schema conflicts. + + #### InfluxDB OSS + + - Returns this error if the `org` parameter or `orgID` parameter doesn't match an organization. 
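The gzip write flow described above (compress the line protocol batch, then send it with the `Content-Encoding: gzip` header) can be sketched in Python; the helper function and sample values are illustrative assumptions, not part of the spec:

```python
import gzip

def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Naive line protocol formatter (measurement,tag=... field=... timestamp).
    Illustrative only: real line protocol also requires escaping and quoting rules."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

line = to_line_protocol(
    "airSensors",
    {"sensor_id": "TLM0201"},
    {"temperature": 73.97, "humidity": 35.23},
    1630424257000000000,
)

# Compress the batch; send `body` as the request payload
# together with the `Content-Encoding: gzip` header.
body = gzip.compress(line.encode("utf-8"))
```

The compressed `body` is what would be POSTed to `/api/v2/write`; decompressing it recovers the original batch unchanged.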
+ '401': + $ref: '#/components/responses/AuthorizationError' + '404': + $ref: '#/components/responses/ResourceNotFoundError' + '413': + content: + application/json: + examples: + dataExceedsSizeLimitOSS: + summary: InfluxDB OSS response + value: | + {"code":"request too large","message":"unable to read data: points batch is too large"} + schema: + $ref: '#/components/schemas/LineProtocolLengthError' + text/html: + examples: + dataExceedsSizeLimit: + summary: InfluxDB Cloud response + value: | + <html> + <head><title>413 Request Entity Too Large</title></head> + <body> + <center><h1>413 Request Entity Too Large</h1></center> + <hr> + <center>nginx</center> + </body> + </html> + schema: + type: string + description: | + The request payload is too large. + InfluxDB rejected the batch and did not write any data. + + #### InfluxDB Cloud + + - Returns this error if the payload exceeds the 50MB size limit. + - Returns `Content-Type: text/html` for this error. + + #### InfluxDB OSS + + - Returns this error only if the [Go (golang) `ioutil.ReadAll()`](https://pkg.go.dev/io/ioutil#ReadAll) function raises an error. + - Returns `Content-Type: application/json` for this error. + '429': + description: | + Too many requests. + + #### InfluxDB Cloud + + - Returns this error if a **read** or **write** request exceeds your plan's [adjustable service quotas](/influxdb/cloud/account-management/limits/#adjustable-service-quotas) + or if a **delete** request exceeds the maximum [global limit](/influxdb/cloud/account-management/limits/#global-limits). + - For rate limits that reset automatically, returns a `Retry-After` header that describes when to try the write again. + - For limits that can't reset (for example, **cardinality limit**), doesn't return a `Retry-After` header. + + Rates (data-in (writes), queries (reads), and deletes) accrue within a fixed five-minute window. + Once a rate limit is exceeded, InfluxDB returns an error response until the current five-minute window resets. + + #### InfluxDB OSS + + - Doesn't return this error. + headers: + Retry-After: + description: Non-negative decimal integer indicating seconds to wait before retrying the request. + schema: + format: int32 + type: integer + '500': + $ref: '#/components/responses/InternalServerError' + '503': + description: | + Service unavailable. + + - Returns this error if + the server is temporarily unavailable to accept writes. + - Returns a `Retry-After` header that describes when to try the write again. + headers: + Retry-After: + description: Non-negative decimal integer indicating seconds to wait before retrying the request. 
+ schema: + format: int32 + type: integer + default: + $ref: '#/components/responses/GeneralServerError' + summary: Write data + tags: + - Data I/O endpoints + - Write + /legacy/authorizations: + get: + operationId: GetLegacyAuthorizations + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + A user ID. + Only returns legacy authorizations scoped to the specified [user](/influxdb/v2.6/reference/glossary/#user). + in: query + name: userID + schema: + type: string + - description: | + A user name. + Only returns legacy authorizations scoped to the specified [user](/influxdb/v2.6/reference/glossary/#user). + in: query + name: user + schema: + type: string + - description: | + An organization ID. + Only returns legacy authorizations that belong to the specified [organization](/influxdb/v2.6/reference/glossary/#organization). + in: query + name: orgID + schema: + type: string + - description: | + An organization name. + Only returns legacy authorizations that belong to the specified [organization](/influxdb/v2.6/reference/glossary/#organization). + in: query + name: org + schema: + type: string + - description: | + An authorization name. + Only returns legacy authorizations with the specified name. + in: query + name: token + schema: + type: string + - description: | + An authorization ID. + Returns the specified legacy authorization. + in: query + name: authID + schema: + type: string + responses: + '200': + content: + application/json: + schema: + properties: + authorizations: + items: + $ref: '#/components/schemas/Authorization' + type: array + links: + $ref: '#/components/schemas/Links' + readOnly: true + type: object + description: Success. The response body contains a list of legacy `authorizations`. 
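The 200 response schema above (an `authorizations` array plus pagination `links`) can be illustrated with a short Python sketch; the sample payload and its field values are hypothetical, not taken from a live server:

```python
import json

# Hypothetical response body matching the schema above:
# an `authorizations` array plus `links`.
sample = json.loads("""
{
  "links": {"self": "/private/legacy/authorizations"},
  "authorizations": [
    {"id": "0001", "orgID": "abc123", "status": "active", "token": "legacy-token-1"},
    {"id": "0002", "orgID": "def456", "status": "inactive", "token": "legacy-token-2"}
  ]
}
""")

# Keep only the active legacy authorizations.
active = [a for a in sample["authorizations"] if a["status"] == "active"]
```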
+ default: + $ref: '#/components/responses/ServerError' + description: Unexpected error + summary: List all legacy authorizations + tags: + - Legacy Authorizations + post: + description: | + Creates a legacy authorization and returns the legacy authorization. + + #### Required permissions + + - `write-users USER_ID` if you pass the `userID` property in the request body. + + *`USER_ID`* is the ID of the user that you want to scope the authorization to. + operationId: PostLegacyAuthorizations + parameters: + - $ref: '#/components/parameters/TraceSpan' + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/LegacyAuthorizationPostRequest' + description: The legacy authorization to create. + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/Authorization' + description: | + Created. The legacy authorization is created. + The response body contains the newly created legacy authorization. + '400': + $ref: '#/components/responses/ServerError' + description: Invalid request + '401': + content: + application/json: + examples: + unauthorizedWriteUsers: + summary: The token doesn't have the write:user permission + value: + code: unauthorized + message: write:users/08028e90933bf000 is unauthorized + schema: + properties: + code: + description: | + The HTTP status code description. Default is `unauthorized`. + enum: + - unauthorized + readOnly: true + type: string + message: + description: A human-readable message that may contain detail about the error. + readOnly: true + type: string + description: | + Unauthorized. + The API token passed doesn't have the permissions necessary for the + request. 
+ default: + $ref: '#/components/responses/ServerError' + description: Unexpected error + summary: Create a legacy authorization + tags: + - Legacy Authorizations + servers: + - url: /private + /legacy/authorizations/{authID}: + delete: + operationId: DeleteLegacyAuthorizationsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the legacy authorization to delete. + in: path + name: authID + required: true + schema: + type: string + responses: + '204': + description: Legacy authorization deleted + default: + $ref: '#/components/responses/ServerError' + description: Unexpected error + summary: Delete a legacy authorization + tags: + - Legacy Authorizations + get: + operationId: GetLegacyAuthorizationsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the legacy authorization to get. + in: path + name: authID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Authorization' + description: Legacy authorization details + default: + $ref: '#/components/responses/ServerError' + description: Unexpected error + summary: Retrieve a legacy authorization + tags: + - Legacy Authorizations + patch: + operationId: PatchLegacyAuthorizationsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the legacy authorization to update. 
+ in: path + name: authID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/AuthorizationUpdateRequest' + description: Legacy authorization to update + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Authorization' + description: The active or inactive legacy authorization + default: + $ref: '#/components/responses/ServerError' + description: Unexpected error + summary: Update a legacy authorization to be active or inactive + tags: + - Legacy Authorizations + servers: + - url: /private + /legacy/authorizations/{authID}/password: + post: + operationId: PostLegacyAuthorizationsIDPassword + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the legacy authorization to update. + in: path + name: authID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + properties: + password: + type: string + required: + - password + description: New password + required: true + responses: + '204': + description: Legacy authorization password set + default: + $ref: '#/components/responses/ServerError' + description: Unexpected error + summary: Set a legacy authorization password + tags: + - Legacy Authorizations + servers: + - url: /private + /query: + get: + description: Queries InfluxDB using InfluxQL. + operationId: GetLegacyQuery + parameters: + - $ref: '#/components/parameters/TraceSpan' + - in: header + name: Accept + schema: + default: application/json + description: | + Media type that the client can understand. + + **Note**: With `application/csv`, query results include [**unix timestamps**](/influxdb/v2.6/reference/glossary/#unix-timestamp) instead of [RFC3339 timestamps](/influxdb/v2.6/reference/glossary/#rfc3339-timestamp). 
+ enum: + - application/json + - application/csv + - text/csv + - application/x-msgpack + type: string + - description: The content encoding (usually a compression algorithm) that the client can understand. + in: header + name: Accept-Encoding + schema: + default: identity + description: The content coding. Use `gzip` for compressed data or `identity` for unmodified, uncompressed data. + enum: + - gzip + - identity + type: string + - in: header + name: Content-Type + schema: + enum: + - application/json + type: string + - description: The InfluxDB 1.x username to authenticate the request. + in: query + name: u + schema: + type: string + - description: The InfluxDB 1.x password to authenticate the request. + in: query + name: p + schema: + type: string + - description: | + The database to query data from. + This is mapped to an InfluxDB [bucket](/influxdb/v2.6/reference/glossary/#bucket). + For more information, see [Database and retention policy mapping](/influxdb/v2.6/api/influxdb-1x/dbrp/). + in: query + name: db + required: true + schema: + type: string + - description: | + The retention policy to query data from. + This is mapped to an InfluxDB [bucket](/influxdb/v2.6/reference/glossary/#bucket). + For more information, see [Database and retention policy mapping](/influxdb/v2.6/api/influxdb-1x/dbrp/). + in: query + name: rp + schema: + type: string + - description: The InfluxQL query to execute. To execute multiple queries, delimit queries with a semicolon (`;`). + in: query + name: q + required: true + schema: + type: string + - description: | + A unix timestamp precision. + Formats timestamps as [unix (epoch) timestamps](/influxdb/v2.6/reference/glossary/#unix-timestamp) in the specified precision + instead of [RFC3339 timestamps](/influxdb/v2.6/reference/glossary/#rfc3339-timestamp) with nanosecond precision. 
+ in: query + name: epoch + schema: + enum: + - ns + - u + - µ + - ms + - s + - m + - h + type: string + responses: + '200': + content: + application/csv: + schema: + $ref: '#/components/schemas/InfluxqlCsvResponse' + application/json: + schema: + $ref: '#/components/schemas/InfluxqlJsonResponse' + application/x-msgpack: + schema: + format: binary + type: string + text/csv: + schema: + $ref: '#/components/schemas/InfluxqlCsvResponse' + description: Query results + headers: + Content-Encoding: + description: Lists encodings (usually compression algorithms) that have been applied to the response payload. + schema: + default: identity + description: | + The content coding: + - `gzip`: compressed data + - `identity`: unmodified, uncompressed data. + enum: + - gzip + - identity + type: string + Trace-Id: + description: The trace ID, if generated, of the request. + schema: + description: Trace ID of a request. + type: string + '429': + description: | + #### InfluxDB Cloud + + - Returns this error if a **read** or **write** request exceeds your + plan's [adjustable service quotas](/influxdb/cloud/account-management/limits/#adjustable-service-quotas) + or if a **delete** request exceeds the maximum + [global limit](/influxdb/cloud/account-management/limits/#global-limits). + - Returns a `Retry-After` header that describes when to try the request again. + + #### InfluxDB OSS + + - Doesn't return this error. + headers: + Retry-After: + description: A non-negative decimal integer indicating the seconds to delay after the response is received. + schema: + format: int32 + type: integer + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Error processing query + summary: Query with the 1.x compatibility API + tags: + - Legacy Query + /write: + post: + operationId: PostLegacyWrite + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The InfluxDB 1.x username to authenticate the request. 
+ in: query + name: u + schema: + type: string + - description: The InfluxDB 1.x password to authenticate the request. + in: query + name: p + schema: + type: string + - description: The bucket to write to. If none exists, InfluxDB creates a bucket with a default 3-day retention policy. + in: query + name: db + required: true + schema: + type: string + - description: Retention policy name. + in: query + name: rp + schema: + type: string + - description: Write precision. + in: query + name: precision + schema: + type: string + - description: When present, its value indicates to the database that compression is applied to the line protocol body. + in: header + name: Content-Encoding + schema: + default: identity + description: Use `gzip` for compressed line protocol or `identity` for unmodified, uncompressed line protocol. + enum: + - gzip + - identity + type: string + requestBody: + content: + text/plain: + schema: + type: string + description: Line protocol body + required: true + responses: + '204': + description: Write data is correctly formatted and accepted for writing to the bucket. + '400': + content: + application/json: + schema: + $ref: '#/components/schemas/LineProtocolError' + description: The line protocol is poorly formed and no points were written. Use the response to determine the first malformed line in the body. All data in the body was rejected and not written. + '401': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Token doesn't have sufficient permissions to write to this organization and bucket, or the organization and bucket do not exist. + '403': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: No token was sent, and one is required. + '413': + content: + application/json: + schema: + $ref: '#/components/schemas/LineProtocolLengthError' + description: The write was rejected because the payload is too large. The error message returns the maximum supported size. 
All data in the body was rejected and not written. + '429': + description: Token is temporarily over quota. The Retry-After header describes when to try the write again. + headers: + Retry-After: + description: A non-negative decimal integer indicating the seconds to delay after the response is received. + schema: + format: int32 + type: integer + '503': + description: Server is temporarily unavailable to accept writes. The Retry-After header describes when to try the write again. + headers: + Retry-After: + description: A non-negative decimal integer indicating the seconds to delay after the response is received. + schema: + format: int32 + type: integer + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Internal server error + summary: Write time series data into InfluxDB in a V1-compatible format + tags: + - Legacy Write +security: + - TokenAuthentication: [] +servers: + - url: / +tags: + - description: | + Use one of the following schemes to authenticate to the InfluxDB API: + + - [Token authentication](#section/Authentication/TokenAuthentication) + - [Basic authentication](#section/Authentication/BasicAuthentication) + - [Querystring authentication](#section/Authentication/QuerystringAuthentication) + + name: Authentication + x-traitTag: true + - description: | + Create and manage authorizations (API tokens). + + An _authorization_ contains a list of `read` and `write` + permissions for organization resources and provides an API token for authentication. + An authorization belongs to an organization and only contains permissions for that organization. + + We recommend creating a generic user to create and manage tokens for writing data. + + ### User sessions with authorizations + + Optionally, when creating an authorization, you can scope it to a specific user. 
+ If a user signs in with username and password, creating a _user session_, + the session carries the permissions granted by all the user's authorizations. + For more information, see [how to assign a token to a specific user](/influxdb/latest/security/tokens/create-token/). + To create a user session, use the [`POST /api/v2/signin` endpoint](#operation/PostSignin). + + ### Related endpoints + + - [Signin](#tag/Signin) + - [Signout](#tag/Signout) + + ### Related guides + + - [Authorize API requests](/influxdb/latest/api-guide/api_intro/#authentication) + - [Manage API tokens](/influxdb/latest/security/tokens/) + - [Assign a token to a specific user](/influxdb/latest/security/tokens/create-token/) + name: Authorizations (API tokens) + - name: Backup + - description: | + Store your data in InfluxDB [buckets](/influxdb/latest/reference/glossary/#bucket). + A bucket is a named location where time series data is stored. All buckets + have a [retention period](/influxdb/latest/reference/glossary/#retention-period), + a duration of time that each data point persists. InfluxDB drops all + points with timestamps older than the bucket’s retention period. + A bucket belongs to an organization. + + ### Related guides + + - [Manage buckets](/influxdb/latest/organizations/buckets/) + name: Buckets + - name: Cells + - name: Checks + - description: | + To specify resources, some InfluxDB API endpoints require parameters or + properties in the request--for example, + writing to a `bucket` resource in an `org` (_organization_ resource). + + ### Common parameters + + | Query parameter | Value type | Description | + |:------------------------ |:--------------------- |:-------------------------------------------| + | `bucket` | string | The bucket name or ID ([find your bucket](/influxdb/latest/organizations/buckets/view-buckets/)). | + | `bucketID` | string | The bucket ID ([find your bucket](/influxdb/latest/organizations/buckets/view-buckets/)). 
| + | `org` | string | The organization name or ID ([find your organization](/influxdb/latest/organizations/view-orgs/)). | + | `orgID` | 16-byte string | The organization ID ([find your organization](/influxdb/latest/organizations/view-orgs/)). | + name: Common parameters + x-traitTag: true + - name: Config + - name: Dashboards + - name: Data I/O endpoints + - description: | + The InfluxDB 1.x data model includes [databases](/influxdb/v1.8/concepts/glossary/#database) + and [retention policies](/influxdb/v1.8/concepts/glossary/#retention-policy-rp). + InfluxDB 2.x replaces databases and retention policies with buckets. + To support InfluxDB 1.x query and write patterns in InfluxDB 2.x, + databases and retention policies are mapped to buckets using the + database and retention policy (DBRP) mapping service. + The DBRP mapping service uses the database and retention policy + specified in 1.x compatibility API requests to route operations to a bucket. + + ### Related guides + + - [Database and retention policy mapping](/influxdb/latest/reference/api/influxdb-1x/dbrp/) + name: DBRPs + - description: | + Generate profiling and trace reports. + + Use routes under `/debug/pprof` to analyze the Go runtime of InfluxDB. + These endpoints generate [Go runtime profiles](https://pkg.go.dev/runtime/pprof) + and **trace** reports. + **Profiles** are collections of stack traces that show call sequences + leading to instances of a particular event, such as allocation. + + For more information about **pprof profile** and **trace** reports, + see the following resources: + + - [Google pprof tool](https://github.com/google/pprof) + - [Golang diagnostics](https://go.dev/doc/diagnostics) + name: Debug + - description: | + Delete data from an InfluxDB bucket. + name: Delete + - description: | + InfluxDB `/api/v2` API endpoints use standard HTTP request and response headers. + The following table shows common headers used by many InfluxDB API endpoints. 
+ Some endpoints may use other headers that perform functions more specific to those endpoints--for example, + the `POST /api/v2/write` endpoint accepts the `Content-Encoding` header to indicate the compression applied to line protocol in the request body. + + | Header | Value type | Description | + |:------------------------ |:--------------------- |:-------------------------------------------| + | `Accept` | string | The content type that the client can understand. | + | `Authorization` | string | The authorization scheme and credential. | + | `Content-Length` | integer | The size of the entity-body, in bytes, sent to the database. | + | `Content-Type` | string | The format of the data in the request body. | + name: Headers + x-traitTag: true + - name: Health + - name: Labels + - name: Legacy Authorizations + - name: Legacy Query + - name: Legacy Write + - name: Metrics + - name: NotificationEndpoints + - name: NotificationRules + - description: | + Create and manage your [organizations](/influxdb/latest/reference/glossary/#organization). + An organization is a workspace for a group of users. Organizations can be + used to separate different environments, projects, teams or users within + InfluxDB. + + Use the `/api/v2/orgs` endpoints to create, view, and manage organizations. + name: Organizations + - description: | + Some InfluxDB API [list operations](#tag/SupportedOperations) may support the following query parameters for paginating results: + + | Query parameter | Value type | Description | + |:------------------------ |:--------------------- |:-------------------------------------------| + | `limit` | integer | The maximum number of records to return (after other parameters are applied). | + | `offset` | integer | The number of records to skip (before `limit`, after other parameters are applied). | + | `after` | string (resource ID) | Only returns resources created after the specified resource. 
| + + ### Limitations + + - For specific endpoint parameters and examples, see the endpoint definition. + - If you specify an `offset` parameter value greater than the total number of records, + then InfluxDB returns an empty list in the response + (given `offset` skips the specified number of records). + + The following example passes `offset=50` to skip the first 50 results, + but the user only has 10 buckets: + + ```sh + curl --request GET "INFLUX_URL/api/v2/buckets?limit=1&offset=50" \ + --header "Authorization: Token INFLUX_API_TOKEN" + ``` + + The response contains the following: + + ```json + { + "links": { + "prev": "/api/v2/buckets?descending=false\u0026limit=1\u0026offset=49\u0026orgID=ORG_ID", + "self": "/api/v2/buckets?descending=false\u0026limit=1\u0026offset=50\u0026orgID=ORG_ID" + }, + "buckets": [] + } + ``` + name: Pagination + x-traitTag: true + - name: Ping + - description: | + Retrieve data, analyze queries, and get query suggestions. + name: Query + - description: | + See the [**API Quick Start**](/influxdb/latest/api-guide/api_intro/) + to get up and running authenticating with tokens, writing to buckets, and querying data. + + [**InfluxDB API client libraries**](/influxdb/latest/api-guide/client-libraries/) + are available for popular languages and ready to import into your application. + name: Quick start + x-traitTag: true + - name: Ready + - name: RemoteConnections + - name: Replications + - name: Resources + - description: | + InfluxDB `/api/v2` API endpoints use standard HTTP status codes for success and failure responses. + The response body may include additional details. + For details about a specific operation's response, + see **Responses** and **Response Samples** for that operation. 
+ + API operations may return the following HTTP status codes: + + |  Code  | Status | Description | + |:-----------:|:------------------------ |:--------------------- | + | `200` | Success | | + | `204` | No content | For a `POST` request, `204` indicates that InfluxDB accepted the request and request data is valid. Asynchronous operations, such as `write`, might not have completed yet. | + | `400` | Bad request | May indicate one of the following:<br/>  • Line protocol is malformed. The response body contains the first malformed line in the data and indicates what was expected. For partial writes, the number of points written and the number of points rejected are also included. For more information, check the `rejected_points` measurement in your `_monitoring` bucket.<br/>  • `Authorization` header is missing or malformed, or the API token doesn't have permission for the operation. | + | `401` | Unauthorized | May indicate one of the following:<br/>  • `Authorization: Token` header is missing or malformed<br/>  • API token value is missing from the header<br/>  • API token doesn't have permission. For more information about token types and permissions, see [Manage API tokens](/influxdb/latest/security/tokens/)
| + | `404` | Not found | Requested resource was not found. `message` in the response body provides details about the requested resource. | + | `413` | Request entity too large | Request payload exceeds the size limit. | + | `422` | Unprocessable entity | Request data is invalid. `code` and `message` in the response body provide details about the problem. | + | `429` | Too many requests | API token is temporarily over the request quota. The `Retry-After` header describes when to try the request again. | + | `500` | Internal server error | | + | `503` | Service unavailable | Server is temporarily unavailable to process the request. The `Retry-After` header describes when to try the request again. | + name: Response codes + x-traitTag: true + - name: Restore + - name: Routes + - name: Rules + - name: Scraper Targets + - name: Secrets + - name: Security and access endpoints + - name: Setup + - name: Signin + - name: Signout + - name: Sources + - description: "The following table shows the most common operations that the InfluxDB `/api/v2` API supports.\nSome resources may support other operations that perform functions more specific to those resources.\nFor example, you can use the `PATCH /api/v2/scripts` endpoint to update properties of a script\nresource.\n\n| Operation | |\n|:----------|:-----------------------------------------------------------------------|\n| Write | Writes (`POST`) data to a bucket. |\n| Run | Executes (`POST`) a query or script and returns the result. |\n| List |\tRetrieves (`GET`) a list of zero or more resources. |\n| Create |\tCreates (`POST`) a new resource and returns the resource. |\n| Update |\tModifies (`PUT`) an existing resource to reflect data in your request. |\n| Delete |\tRemoves (`DELETE`) a specific resource. 
|\n" + name: Supported operations + x-traitTag: true + - name: System information endpoints + - description: | + Process and analyze your data with [tasks](/influxdb/latest/reference/glossary/#task) + in the InfluxDB task engine. + Use the `/api/v2/tasks` endpoints to schedule and manage tasks, retry task runs, and retrieve run logs. + + To configure a task, provide the script and the schedule to run the task. + For examples, see how to create a task with the [`POST /api/v2/tasks` endpoint](#operation/PostTasks). + + + + ### Properties + + A `task` object contains information about an InfluxDB task resource. + + The following table defines the properties that appear in a `task` object: + + + + ### Related guides + + - [Get started with tasks](/influxdb/latest/process-data/get-started/) + - [Common data processing tasks](/influxdb/latest/process-data/common-tasks/) + name: Tasks + - name: Telegraf Plugins + - name: Telegrafs + - description: | + Export and apply InfluxDB **templates**. + Manage **stacks** of templated InfluxDB resources. + + InfluxDB templates are prepackaged configurations for + everything from dashboards and Telegraf to notifications and alerts. + Use InfluxDB templates to quickly configure a fresh instance of InfluxDB, + back up your dashboard configuration, or share your configuration with the + InfluxData community. + + Use the `/api/v2/templates` endpoints to export templates and apply templates. + + **InfluxDB stacks** are stateful InfluxDB templates that let you + add, update, and remove installed template resources over time, avoid duplicating + resources when applying the same or similar templates more than once, and + apply changes to distributed instances of InfluxDB OSS or InfluxDB Cloud. + + Use the `/api/v2/stacks` endpoints to manage installed template resources. 
+ + ### Related guides + + - [InfluxDB stacks](/influxdb/latest/influxdb-templates/stacks/) + - [InfluxDB templates](/influxdb/latest/influxdb-templates/) + name: Templates + - description: | + Manage users for your organization. + Users are those with access to InfluxDB. + To grant a user permission to access data, add them as a member of an + organization and provide them with an API token. + + ### User sessions with authorizations + + Optionally, you can scope an authorization (and its API token) to a user. + If a user signs in with username and password, creating a _user session_, + the session carries the permissions granted by all the user's authorizations. + To create a user session, use the [`POST /api/v2/signin` endpoint](#operation/PostSignin). + + ### Related guides + + - [Manage users](/influxdb/latest/users/) + - [Create a token scoped to a user](/influxdb/latest/security/tokens/create-token/#create-a-token-scoped-to-a-user) + name: Users + - name: Variables + - name: Views + - description: | + Write time series data to [buckets](/influxdb/latest/reference/glossary/#bucket). 
+ name: Write +x-tagGroups: + - name: Overview + tags: + - Quick start + - Authentication + - Supported operations + - Headers + - Common parameters + - Pagination + - Response codes + - name: '' + tags: + - Data I/O endpoints + - Security and access endpoints + - System information endpoints + - name: All endpoints + tags: + - Authorizations (API tokens) + - Backup + - Buckets + - Cells + - Checks + - Config + - Dashboards + - DBRPs + - Debug + - Delete + - Health + - Labels + - Legacy Authorizations + - Legacy Query + - Legacy Write + - Metrics + - NotificationEndpoints + - NotificationRules + - Organizations + - Ping + - Query + - Ready + - RemoteConnections + - Replications + - Resources + - Restore + - Routes + - Rules + - Scraper Targets + - Secrets + - Setup + - Signin + - Signout + - Sources + - Tasks + - Telegraf Plugins + - Telegrafs + - Templates + - Users + - Variables + - Views + - Write diff --git a/api-docs/v2.6/swaggerV1Compat.yml b/api-docs/v2.6/swaggerV1Compat.yml new file mode 100644 index 000000000..5e0275afe --- /dev/null +++ b/api-docs/v2.6/swaggerV1Compat.yml @@ -0,0 +1,432 @@ +openapi: 3.0.0 +info: + title: InfluxDB OSS v1 compatibility API documentation + version: 2.6.0 v1 compatibility + description: | + The InfluxDB 1.x compatibility /write and /query endpoints work with InfluxDB 1.x client libraries and third-party integrations like Grafana and others. + + If you want to use the latest InfluxDB /api/v2 API instead, see the [InfluxDB v2 API documentation](/influxdb/v2.6/api/). + + This documentation is generated from the + [InfluxDB OpenAPI specification](https://github.com/influxdata/openapi/blob/influxdb-oss-v2.6.0/contracts/swaggerV1Compat.yml). 
+ license: + name: MIT + url: https://opensource.org/licenses/MIT +servers: + - url: / +paths: + /write: + post: + operationId: PostWriteV1 + tags: + - Write + summary: Write time series data into InfluxDB in a V1-compatible format + requestBody: + description: Line protocol body + required: true + content: + text/plain: + schema: + type: string + parameters: + - $ref: '#/components/parameters/TraceSpan' + - $ref: '#/components/parameters/AuthUserV1' + - $ref: '#/components/parameters/AuthPassV1' + - in: query + name: db + schema: + type: string + required: true + description: Bucket to write to. If none exists, InfluxDB creates a bucket with a default 3-day retention policy. + - in: query + name: rp + schema: + type: string + description: Retention policy name. + - in: query + name: precision + schema: + type: string + description: Write precision. + - in: header + name: Content-Encoding + description: When present, its value indicates to the database that compression is applied to the line protocol body. + schema: + type: string + description: Specifies that the line protocol in the body is encoded with gzip or not encoded with identity. + default: identity + enum: + - gzip + - identity + responses: + '204': + description: Write data is correctly formatted and accepted for writing to the bucket. + '400': + description: Line protocol was poorly formed and no points were written. Response can be used to determine the first malformed line in the line protocol body. All data in body was rejected and not written. + content: + application/json: + schema: + $ref: '#/components/schemas/LineProtocolError' + '401': + description: Token does not have sufficient permissions to write to this organization and bucket, or the organization and bucket do not exist. + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + '403': + description: No token was sent, but one is required. 
+ content: + application/json: + schema: + $ref: '#/components/schemas/Error' + '413': + description: Write has been rejected because the payload is too large. Error message returns max size supported. All data in body was rejected and not written. + content: + application/json: + schema: + $ref: '#/components/schemas/LineProtocolLengthError' + '429': + description: Token is temporarily over quota. The Retry-After header describes when to try the write again. + headers: + Retry-After: + description: A non-negative decimal integer indicating the seconds to delay after the response is received. + schema: + type: integer + format: int32 + '503': + description: Server is temporarily unavailable to accept writes. The Retry-After header describes when to try the write again. + headers: + Retry-After: + description: A non-negative decimal integer indicating the seconds to delay after the response is received. + schema: + type: integer + format: int32 + default: + description: Internal server error + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + /query: + post: + operationId: PostQueryV1 + tags: + - Query + summary: Query InfluxDB in a V1 compatible format + requestBody: + description: InfluxQL query to execute. + content: + text/plain: + schema: + type: string + parameters: + - $ref: '#/components/parameters/TraceSpan' + - $ref: '#/components/parameters/AuthUserV1' + - $ref: '#/components/parameters/AuthPassV1' + - in: header + name: Accept + schema: + type: string + description: Specifies how query results should be encoded in the response. **Note:** With `application/csv`, query results include epoch timestamps instead of RFC3339 timestamps. 
+ default: application/json + enum: + - application/json + - application/csv + - text/csv + - application/x-msgpack + - in: header + name: Accept-Encoding + description: The Accept-Encoding request HTTP header advertises which content encoding, usually a compression algorithm, the client is able to understand. + schema: + type: string + description: Specifies that the query response in the body should be encoded with gzip or not encoded with identity. + default: identity + enum: + - gzip + - identity + - in: header + name: Content-Type + schema: + type: string + enum: + - application/vnd.influxql + - in: query + name: db + schema: + type: string + required: true + description: Bucket to query. + - in: query + name: rp + schema: + type: string + description: Retention policy name. + - in: query + name: q + description: Defines the InfluxQL query to run. + schema: + type: string + responses: + '200': + description: Query results + headers: + Content-Encoding: + description: The Content-Encoding entity header is used to compress the media-type. When present, its value indicates which encodings were applied to the entity-body. + schema: + type: string + description: Specifies that the response in the body is encoded with gzip or not encoded with identity. + default: identity + enum: + - gzip + - identity + Trace-Id: + description: The Trace-Id header reports the request's trace ID, if one was generated. + schema: + type: string + description: Specifies the request's trace ID. + content: + application/csv: + schema: + $ref: '#/components/schemas/InfluxQLCSVResponse' + text/csv: + schema: + $ref: '#/components/schemas/InfluxQLCSVResponse' + application/json: + schema: + $ref: '#/components/schemas/InfluxQLResponse' + application/x-msgpack: + schema: + type: string + format: binary + '429': + description: Token is temporarily over quota. The Retry-After header describes when to try the read again. 
+ headers: + Retry-After: + description: A non-negative decimal integer indicating the seconds to delay after the response is received. + schema: + type: integer + format: int32 + default: + description: Error processing query + content: + application/json: + schema: + $ref: '#/components/schemas/Error' +components: + parameters: + TraceSpan: + in: header + name: Zap-Trace-Span + description: OpenTracing span context + example: + trace_id: '1' + span_id: '1' + baggage: + key: value + required: false + schema: + type: string + AuthUserV1: + in: query + name: u + required: false + schema: + type: string + description: Username. + AuthPassV1: + in: query + name: p + required: false + schema: + type: string + description: User token. + schemas: + InfluxQLResponse: + properties: + results: + type: array + oneOf: + - required: + - statement_id + - error + - required: + - statement_id + - series + items: + type: object + properties: + statement_id: + type: integer + error: + type: string + series: + type: array + items: + type: object + properties: + name: + type: string + tags: + type: object + additionalProperties: + type: string + partial: + type: boolean + columns: + type: array + items: + type: string + values: + type: array + items: + type: array + items: {} + InfluxQLCSVResponse: + type: string + example: | + name,tags,time,test_field,test_tag test_measurement,,1603740794286107366,1,tag_value test_measurement,,1603740870053205649,2,tag_value test_measurement,,1603741221085428881,3,tag_value + Error: + properties: + code: + description: Code is the machine-readable error code. + readOnly: true + type: string + enum: + - internal error + - not found + - conflict + - invalid + - unprocessable entity + - empty value + - unavailable + - forbidden + - too many requests + - unauthorized + - method not allowed + message: + readOnly: true + description: Message is a human-readable message. 
+ type: string + required: + - code + - message + LineProtocolError: + properties: + code: + description: Code is the machine-readable error code. + readOnly: true + type: string + enum: + - internal error + - not found + - conflict + - invalid + - empty value + - unavailable + message: + readOnly: true + description: Message is a human-readable message. + type: string + op: + readOnly: true + description: Op describes the logical code operation during error. Useful for debugging. + type: string + err: + readOnly: true + description: Err is a stack of errors that occurred during processing of the request. Useful for debugging. + type: string + line: + readOnly: true + description: First line within sent body containing malformed data + type: integer + format: int32 + required: + - code + - message + - op + - err + LineProtocolLengthError: + properties: + code: + description: Code is the machine-readable error code. + readOnly: true + type: string + enum: + - invalid + message: + readOnly: true + description: Message is a human-readable message. + type: string + maxLength: + readOnly: true + description: Max length in bytes for a body of line-protocol. + type: integer + format: int32 + required: + - code + - message + - maxLength + securitySchemes: + TokenAuthentication: + type: apiKey + name: Authorization + in: header + description: | + Use the [Token authentication](#section/Authentication/TokenAuthentication) + scheme to authenticate to the InfluxDB API. + + + In your API requests, send an `Authorization` header. + For the header value, provide the word `Token` followed by a space and an InfluxDB API token. + The word `Token` is case-sensitive. + + + ### Syntax + + `Authorization: Token YOUR_INFLUX_TOKEN` + + + For examples and more information, see the following: + - [`/authorizations`](#tag/Authorizations) endpoint. + - [Authorize API requests](/influxdb/cloud/api-guide/api_intro/#authentication). + - [Manage API tokens](/influxdb/cloud/security/tokens/). 
+ BasicAuthentication: + type: http + scheme: basic + description: | + Use the HTTP [Basic authentication](#section/Authentication/BasicAuthentication) + scheme with clients that support the InfluxDB 1.x convention of username and password (that don't support the `Authorization: Token` scheme). + + + For examples and more information, see how to [authenticate with a username and password](/influxdb/cloud/reference/api/influxdb-1x/). + QuerystringAuthentication: + type: apiKey + in: query + name: u=&p= + description: | + Use the [Querystring authentication](#section/Authentication/QuerystringAuthentication) + scheme with InfluxDB 1.x API parameters to provide credentials through the query string. + + + For examples and more information, see how to [authenticate with a username and password](/influxdb/cloud/reference/api/influxdb-1x/). +security: + - TokenAuthentication: [] + - BasicAuthentication: [] + - QuerystringAuthentication: [] +tags: + - name: Authentication + description: | + The InfluxDB 1.x API requires authentication for all requests. + InfluxDB Cloud uses InfluxDB API tokens to authenticate requests. 
+ + + For more information, see the following: + - [Token authentication](#section/Authentication/TokenAuthentication) + - [Basic authentication](#section/Authentication/BasicAuthentication) + - [Querystring authentication](#section/Authentication/QuerystringAuthentication) + + + x-traitTag: true + - name: Query + - name: Write +x-tagGroups: [] diff --git a/config.staging.toml b/config.staging.toml index fadfd2e2b..461baad94 100644 --- a/config.staging.toml +++ b/config.staging.toml @@ -25,6 +25,7 @@ hrefTargetBlank = true smartDashes = false [taxonomies] + "influxdb/v2.6/tag" = "influxdb/v2.6/tags" "influxdb/v2.5/tag" = "influxdb/v2.5/tags" "influxdb/v2.4/tag" = "influxdb/v2.4/tags" "influxdb/v2.3/tag" = "influxdb/v2.3/tags" diff --git a/config.toml b/config.toml index b376af7a4..ac749830a 100644 --- a/config.toml +++ b/config.toml @@ -21,6 +21,7 @@ hrefTargetBlank = true smartDashes = false [taxonomies] + "influxdb/v2.6/tag" = "influxdb/v2.6/tags" "influxdb/v2.5/tag" = "influxdb/v2.5/tags" "influxdb/v2.4/tag" = "influxdb/v2.4/tags" "influxdb/v2.3/tag" = "influxdb/v2.3/tags" diff --git a/content/influxdb/v2.4/reference/release-notes/influx-cli.md b/content/influxdb/v2.4/reference/release-notes/influx-cli.md index bd8cb91c1..fe1f58512 100644 --- a/content/influxdb/v2.4/reference/release-notes/influx-cli.md +++ b/content/influxdb/v2.4/reference/release-notes/influx-cli.md @@ -8,7 +8,7 @@ menu: name: influx CLI --- -## v2.5.0 [2022-10-21] +## v2.6.0 [2022-10-21] ### Features diff --git a/content/influxdb/v2.4/reference/release-notes/influxdb.md b/content/influxdb/v2.4/reference/release-notes/influxdb.md index 4516dc0f2..0605617ee 100644 --- a/content/influxdb/v2.4/reference/release-notes/influxdb.md +++ b/content/influxdb/v2.4/reference/release-notes/influxdb.md @@ -8,7 +8,7 @@ menu: weight: 101 --- -## v2.4 [2022-08-19] +## v2.4.0 [2022-08-19] ### Features diff --git a/content/influxdb/v2.5/reference/release-notes/influxdb.md 
b/content/influxdb/v2.5/reference/release-notes/influxdb.md index 17bb85958..3a5d2fed7 100644 --- a/content/influxdb/v2.5/reference/release-notes/influxdb.md +++ b/content/influxdb/v2.5/reference/release-notes/influxdb.md @@ -14,7 +14,7 @@ weight: 101 - Fix permissions issue in Debian and Red Hat package managers. -## v2.5 [2022-11-01] +## v2.5.0 [2022-11-01] ### Features @@ -40,7 +40,7 @@ weight: 101 - Upgrade to [Go 1.18.7](https://go.dev/doc/go1.18) - Upgrade to [Rust 1.63.0](https://www.rust-lang.org/) -## v2.4 [2022-08-19] +## v2.4.0 [2022-08-19] ### Features diff --git a/content/influxdb/v2.6/_index.md b/content/influxdb/v2.6/_index.md new file mode 100644 index 000000000..960400617 --- /dev/null +++ b/content/influxdb/v2.6/_index.md @@ -0,0 +1,19 @@ +--- +title: InfluxDB OSS 2.6 documentation +description: > + InfluxDB OSS is an open source time series database designed to handle high write and query loads. + Learn how to use and leverage InfluxDB in use cases such as monitoring metrics, IoT data, and events. +layout: landing-influxdb +menu: + influxdb_2_6: + name: InfluxDB OSS 2.6 +weight: 1 +--- + +#### Welcome +Welcome to the InfluxDB v2.6 documentation! +InfluxDB is an open source time series database designed to handle high write and query workloads. + +This documentation is meant to help you learn how to use and leverage InfluxDB to meet your needs. +Common use cases include infrastructure monitoring, IoT data collection, events handling, and more. +If your use case involves time series data, InfluxDB is purpose-built to handle it. diff --git a/content/influxdb/v2.6/admin/_index.md b/content/influxdb/v2.6/admin/_index.md new file mode 100644 index 000000000..aad38efab --- /dev/null +++ b/content/influxdb/v2.6/admin/_index.md @@ -0,0 +1,13 @@ +--- +title: Administer InfluxDB +description: > + Use the InfluxDB API, user interface (UI), and CLIs to perform administrative + tasks in InfluxDB. 
+menu: influxdb_2_6 +weight: 18 +--- + +Use the InfluxDB API, user interface (UI), and CLIs to perform administrative +tasks in InfluxDB. + +{{< children >}} \ No newline at end of file diff --git a/content/influxdb/v2.6/admin/internals/_index.md b/content/influxdb/v2.6/admin/internals/_index.md new file mode 100644 index 000000000..4b12a4a44 --- /dev/null +++ b/content/influxdb/v2.6/admin/internals/_index.md @@ -0,0 +1,17 @@ +--- +title: Manage InfluxDB internal systems +description: > + Manage the internal systems of InfluxDB such as the Time Series Index (TSI), + the time-structured merge tree (TSM) storage engine, and the write-ahead log (WAL). +menu: + influxdb_2_6: + name: Manage internal systems + parent: Administer InfluxDB +weight: 20 +cascade: + influxdb/v2.6/tags: [storage, internals] +--- + +Manage InfluxDB internal systems, including the time series index (TSI), time-structured merge tree (TSM) storage engine, and write-ahead log (WAL). + +{{< children >}} \ No newline at end of file diff --git a/content/influxdb/v2.6/admin/internals/tsi/_index.md b/content/influxdb/v2.6/admin/internals/tsi/_index.md new file mode 100644 index 000000000..c2ee79f9a --- /dev/null +++ b/content/influxdb/v2.6/admin/internals/tsi/_index.md @@ -0,0 +1,17 @@ +--- +title: Manage the InfluxDB time series index (TSI) +description: > + The InfluxDB [time series index (TSI)](/influxdb/v2.6/reference/internals/storage-engine/#time-series-index-tsi) + indexes or caches measurement and tag data to ensure queries are performant. + Use the `influxd inspect` command to manage the TSI index. +menu: + influxdb_2_6: + name: Manage TSI indexes + parent: Manage internal systems +weight: 101 +--- + +The InfluxDB [time series index (TSI)](/influxdb/v2.6/reference/internals/storage-engine/#time-series-index-tsi) +indexes or caches measurement and tag data to ensure queries are performant. 
+ +{{< children >}} diff --git a/content/influxdb/v2.6/admin/internals/tsi/inspect.md b/content/influxdb/v2.6/admin/internals/tsi/inspect.md new file mode 100644 index 000000000..0454c6782 --- /dev/null +++ b/content/influxdb/v2.6/admin/internals/tsi/inspect.md @@ -0,0 +1,251 @@ +--- +title: Inspect TSI indexes +description: > + Use the `influxd inspect` command to inspect the InfluxDB TSI index. +menu: + influxdb_2_6: + parent: Manage TSI indexes +related: + - /influxdb/v2.6/reference/internals/storage-engine/ + - /influxdb/v2.6/reference/internals/file-system-layout/ + - /influxdb/v2.6/reference/cli/influxd/inspect/dump-tsi/ + - /influxdb/v2.6/reference/cli/influxd/inspect/export-index/ + - /influxdb/v2.6/reference/cli/influxd/inspect/report-tsi/ +--- + +Use the `influxd inspect` command to inspect the InfluxDB [time series index (TSI)](/influxdb/v2.6/reference/internals/storage-engine/#time-series-index-tsi). + +- [Output information about TSI index files](#output-information-about-tsi-index-files) + - [Output raw series data stored in the index](#output-raw-series-data-stored-in-the-index) + - [Output measurement data stored in the index](#output-measurement-data-stored-in-the-index) +- [Export TSI index data as SQL](#export-tsi-index-data-as-sql) +- [Report the cardinality of TSI files](#report-the-cardinality-of-tsi-files) + +## Output information about TSI index files + +Use the [`influxd inspect dump-tsi` command](/influxdb/v2.6/reference/cli/influxd/inspect/dump-tsi/) +to output low-level details about TSI index (`tsi1`) files. + +Provide the following: + +- ({{< req >}}) `--series-file` flag with the path to the bucket's + [`_series` directory](/influxdb/v2.6/reference/internals/file-system-layout/#tsm-directories-and-files-layout). 
+- ({{< req >}}) Path to the shard's + [`index` directory](/influxdb/v2.6/reference/internals/file-system-layout/#tsm-directories-and-files-layout) + +```sh +influxd inspect dump-tsi \ + --series-file ~/.influxdbv2/engine/data/056d83f962a08461/_series \ + ~/.influxdbv2/engine/data/056d83f962a08461/autogen/1023/index +``` + +{{< expand-wrapper >}} +{{% expand "View example output" %}} +``` +[LOG FILE] L0-00000006.tsl +Series: 0 +Measurements: 0 +Tag Keys: 0 +Tag Values: 0 + +[INDEX FILE] L3-00000008.tsi +Measurements: 3 + Series data size: 0 (0.0b) + Bytes per series: 0.0b +Tag Keys: 15 +Tag Values: 1025 + Series: 1700 + Series data size: 0 (0.0b) + Bytes per series: 0.0b + +[LOG FILE] L0-00000010.tsl +Series: 0 +Measurements: 0 +Tag Keys: 0 +Tag Values: 0 + +[INDEX FILE] L2-00000011.tsi +Measurements: 1 + Series data size: 0 (0.0b) + Bytes per series: 0.0b +Tag Keys: 5 +Tag Values: 9 + Series: 10 + Series data size: 0 (0.0b) + Bytes per series: 0.0b +``` +{{% /expand %}} +{{< /expand-wrapper >}} + +### Output raw series data stored in the index + +To output raw series data stored in index files, include the `--series` flag with +the `influxd inspect dump-tsi` command: + +```sh +influxd inspect dump-tsi \ + --series \ + --series-file ~/.influxdbv2/engine/data/056d83f962a08461/_series \ + ~/.influxdbv2/engine/data/056d83f962a08461/autogen/1023/index +``` + +{{< expand-wrapper >}} +{{% expand "View example output" %}} +``` +earthquake,code=6000iuad,id=us6000iuad,magType=mww,net=us,title=M\ 5.2\ -\ 101\ km\ SE\ of\ Palca\,\ Peru +earthquake,code=71377273,id=pr71377273,magType=md,net=pr,title=M\ 1.9\ -\ Puerto\ Rico\ region +earthquake,code=73794611,id=nc73794611,magType=md,net=nc,title=M\ 0.6\ -\ 13km\ ESE\ of\ Mammoth\ Lakes\,\ CA +earthquake,code=40361800,id=ci40361800,magType=ml,net=ci,title=M\ 1.3\ -\ 12km\ SE\ of\ Olancha\,\ CA +earthquake,code=6000itfk,id=us6000itfk,magType=mb,net=us,title=M\ 4.4\ -\ Mindanao\,\ Philippines 
+earthquake,code=2022ucrr,id=ok2022ucrr,magType=ml,net=ok,title=M\ 1.4\ -\ 4\ km\ SSE\ of\ Dover\,\ Oklahoma +earthquake,code=73792706,id=nc73792706,magType=md,net=nc,title=M\ 0.6\ -\ 7km\ W\ of\ Cobb\,\ CA +earthquake,code=6000isjn,id=us6000isjn,magType=mww,net=us,title=M\ 5.5\ -\ 69\ km\ E\ of\ Hualien\ City\,\ Taiwan +earthquake,code=022d8mp4dd,id=ak022d8mp4dd,magType=ml,net=ak,title=M\ 1.3\ -\ Southern\ Alaska +earthquake,code=022dbrb8vb,id=ak022dbrb8vb,magType=ml,net=ak,title=M\ 1.6\ -\ 37\ km\ NE\ of\ Paxson\,\ Alaska +earthquake,code=6000iu2e,id=us6000iu2e,magType=mb,net=us,title=M\ 4.1\ -\ 81\ km\ WSW\ of\ San\ Antonio\ de\ los\ Cobres\,\ Argentina +``` +{{% /expand %}} +{{< /expand-wrapper >}} + +### Output measurement data stored in the index + +To output measurement information stored in index files, include the `--measurement` +flag with the `influxd inspect dump-tsi` command: + +```sh +influxd inspect dump-tsi \ + --measurements \ + --series-file ~/.influxdbv2/engine/data/056d83f962a08461/_series \ + ~/.influxdbv2/engine/data/056d83f962a08461/autogen/1023/index +``` + +{{< expand-wrapper >}} +{{% expand "View example output" %}} +``` +Measurement +earthquake +explosion +quarry blast + + +Measurement +earthquake +explosion +ice quake +quarry blast + + +Measurement +earthquake +explosion +``` +{{% /expand %}} +{{< /expand-wrapper >}} + +## Export TSI index data as SQL + +Use the [`influxd inspect export-index` command](/influxdb/v2.6/reference/cli/influxd/inspect/export-index/) +to export an index in SQL format for easier inspection and debugging. +Provide the following: + +- `--series-path` flag with the path to the bucket's + [`_series` directory](/influxdb/v2.6/reference/internals/file-system-layout/#tsm-directories-and-files-layout). +- `--index-path` flag with the path to the shard's + [`index` directory](/influxdb/v2.6/reference/internals/file-system-layout/#tsm-directories-and-files-layout). 
+ +```sh +influxd inspect export-index \ + --series-path ~/.influxdbv2/engine/data/056d83f962a08461/_series \ + --index-path ~/.influxdbv2/engine/data/056d83f962a08461/autogen/1023/index +``` + +{{< expand-wrapper >}} +{{% expand "View example output" %}} +```sql +CREATE TABLE IF NOT EXISTS measurement_series ( + name TEXT NOT NULL, + series_id INTEGER NOT NULL +); + +CREATE TABLE IF NOT EXISTS tag_value_series ( + name TEXT NOT NULL, + key TEXT NOT NULL, + value TEXT NOT NULL, + series_id INTEGER NOT NULL +); + +BEGIN TRANSACTION; +INSERT INTO measurement_series (name, series_id) VALUES ('earthquake', 26920); +INSERT INTO measurement_series (name, series_id) VALUES ('earthquake', 26928); +INSERT INTO measurement_series (name, series_id) VALUES ('earthquake', 26936); +INSERT INTO measurement_series (name, series_id) VALUES ('earthquake', 26944); +INSERT INTO measurement_series (name, series_id) VALUES ('earthquake', 26952); +INSERT INTO measurement_series (name, series_id) VALUES ('earthquake', 26960); +INSERT INTO measurement_series (name, series_id) VALUES ('earthquake', 26968); +INSERT INTO measurement_series (name, series_id) VALUES ('earthquake', 26976); +INSERT INTO measurement_series (name, series_id) VALUES ('earthquake', 26984); +INSERT INTO measurement_series (name, series_id) VALUES ('earthquake', 26992); +COMMIT; +``` +{{% /expand %}} +{{< /expand-wrapper >}} + +## Report the cardinality of TSI files + +Use the [`influxd inspect report-tsi` command](/influxdb/v2.6/reference/cli/influxd/inspect/report-tsi/) +to output information about the cardinality of data in a bucket's index. +Provide the following: + +- `--bucket-id` with the ID of the bucket. 
+ +```sh +influxd inspect report-tsi --bucket-id 056d83f962a08461 +``` + +{{< expand-wrapper >}} +{{% expand "View example output" %}} +``` +Summary +Database Path: /Users/scottanderson/.influxdbv2/engine/data/056d83f962a08461 +Cardinality (exact): 101698 + +Measurement Cardinality (exact) + +"earthquake" 99876 +"quarry blast" 1160 +"explosion" 589 +"ice quake" 58 +"other event" 10 +"chemical explosion" 2 +"rock burst" 1 +"sonic boom" 1 +"volcanic eruption" 1 + + +=============== +Shard ID: 452 +Path: /Users/scottanderson/.influxdbv2/engine/data/056d83f962a08461/autogen/452 +Cardinality (exact): 1644 + +Measurement Cardinality (exact) + +"earthquake" 1607 +"quarry blast" 29 +"explosion" 7 +"sonic boom" 1 +=============== + +=============== +Shard ID: 453 +Path: /Users/scottanderson/.influxdbv2/engine/data/056d83f962a08461/autogen/453 +Cardinality (exact): 2329 + +Measurement Cardinality (exact) + +"earthquake" 2298 +"quarry blast" 24 +"explosion" 7 +=============== +``` +{{% /expand %}} +{{< /expand-wrapper >}} diff --git a/content/influxdb/v2.6/admin/internals/tsi/rebuild-index.md b/content/influxdb/v2.6/admin/internals/tsi/rebuild-index.md new file mode 100644 index 000000000..01e6dbd80 --- /dev/null +++ b/content/influxdb/v2.6/admin/internals/tsi/rebuild-index.md @@ -0,0 +1,100 @@ +--- +title: Rebuild the TSI index +description: > + Flush and rebuild the TSI index to purge corrupt index files or remove indexed + data that is out of date. +menu: + influxdb_2_6: + parent: Manage TSI indexes +weight: 201 +related: + - /influxdb/v2.6/reference/internals/storage-engine/ + - /influxdb/v2.6/reference/internals/file-system-layout/ + - /influxdb/v2.6/reference/cli/influxd/inspect/build-tsi/ +--- + +In some cases, it may be necessary to flush and rebuild the TSI index. +For example, purging corrupt index files or removing outdated indexed data. + +To rebuild your InfluxDB TSI index: + +1. **Stop the InfluxDB (`influxd`) process**. 
+ + {{% warn %}} +Rebuilding the TSI index while `influxd` is running could prevent some data +from being queryable. + {{% /warn %}} + +2. Navigate to the `data` directory in your + [InfluxDB engine path](/influxdb/v2.6/reference/internals/file-system-layout/). + _The engine path depends on your operating system or + [custom engine path setting](/influxdb/v2.6/reference/config-options/#engine-path)._ + + {{< code-tabs-wrapper >}} +{{% code-tabs %}} +[macOS & Linux](#) +[Windows (PowerShell)](#) +{{% /code-tabs %}} +{{% code-tab-content %}} +```sh +cd ~/.influxdbv2/engine/data/ +``` +{{% /code-tab-content %}} +{{% code-tab-content %}} +```powershell +cd "$env:USERPROFILE\.influxdbv2\engine\data\" +``` +{{% /code-tab-content %}} + {{< /code-tabs-wrapper >}} + +3. **Delete all `_series` directories in your InfluxDB `data` directory.** + By default, `_series` directories are stored at `/data//_series`, + but check for and remove `_series` directories throughout the + `data` directory. + + {{< code-tabs-wrapper >}} +{{% code-tabs %}} +[macOS & Linux](#) +[Windows (PowerShell)](#) +{{% /code-tabs %}} +{{% code-tab-content %}} +```sh +find . -type d -name _series -prune -exec rm -rf {} + +``` +{{% /code-tab-content %}} +{{% code-tab-content %}} +```powershell +get-childitem -Include _series -Recurse -force | Remove-Item -Force -Recurse +``` +{{% /code-tab-content %}} + {{< /code-tabs-wrapper >}} + + +4. **Delete all `index` directories.** By default, `index` directories are stored at + `/data//autogen//index`, but check for and remove + `index` directories throughout the `data` directory. + + {{< code-tabs-wrapper >}} +{{% code-tabs %}} +[macOS & Linux](#) +[Windows (PowerShell)](#) +{{% /code-tabs %}} +{{% code-tab-content %}} +```sh +find . 
-type d -name index -prune -exec rm -rf {} + +``` +{{% /code-tab-content %}} +{{% code-tab-content %}} +```powershell +get-childitem -Include index -Recurse -force | Remove-Item -Force -Recurse +``` +{{% /code-tab-content %}} + {{< /code-tabs-wrapper >}} + + +5. Use the [`influxd inspect build-tsi` command](/influxdb/v2.6/reference/cli/influxd/inspect/build-tsi/) + to rebuild the TSI index. + + ```sh + influxd inspect build-tsi + ``` \ No newline at end of file diff --git a/content/influxdb/v2.6/admin/internals/tsm/_index.md b/content/influxdb/v2.6/admin/internals/tsm/_index.md new file mode 100644 index 000000000..8f515caf4 --- /dev/null +++ b/content/influxdb/v2.6/admin/internals/tsm/_index.md @@ -0,0 +1,28 @@ +--- +title: Manage InfluxDB TSM files +description: > + ... +menu: + influxdb_2_6: + name: Manage TSM files + parent: Manage internal systems +weight: 101 +draft: true +--- + + + +- influxd inspect delete-tsm Deletes a measurement from a raw tsm file. +- influxd inspect dump-tsm Dumps low-level details about tsm1 files +- influxd inspect export-lp Export TSM data as line protocol +- influxd inspect report-tsm Run TSM report +- influxd inspect verify-tombstone Verify the integrity of tombstone files +- influxd inspect verify-tsm Verifies the integrity of TSM files +- influxd inspect verify-wal Check for WAL corruption +- influxd inspect verify-seriesfile Verifies the integrity of series files. +- influxd inspect build-tsi --compact-series-file (Compact a series file without rebuilding the index) \ No newline at end of file diff --git a/content/influxdb/v2.6/admin/internals/wal/_index.md b/content/influxdb/v2.6/admin/internals/wal/_index.md new file mode 100644 index 000000000..f5a2ea368 --- /dev/null +++ b/content/influxdb/v2.6/admin/internals/wal/_index.md @@ -0,0 +1,20 @@ +--- +title: Manage InfluxDB WAL files +description: > + ... 
+menu: + influxdb_2_6: + name: Manage WAL files + parent: Manage internal systems +weight: 101 +draft: true +--- + + + +dump-wal Dumps TSM data from WAL files +verify-wal Check for WAL corruption \ No newline at end of file diff --git a/content/influxdb/v2.6/admin/logs.md b/content/influxdb/v2.6/admin/logs.md new file mode 100644 index 000000000..25721cfc0 --- /dev/null +++ b/content/influxdb/v2.6/admin/logs.md @@ -0,0 +1,275 @@ +--- +title: Manage InfluxDB logs +description: > + Learn how to configure, manage, and process your InfluxDB logs. +menu: + influxdb_2_6: + name: Manage logs + parent: Administer InfluxDB +weight: 10 +--- + +Learn how to configure, manage, and process your InfluxDB logs: + +- [Configure your InfluxDB log location](#configure-your-influxdb-log-location) +- [Configure your log level](#configure-your-log-level) +- [Enable the Flux query log](#enable-the-flux-query-log) +- [Use external tools to manage and process logs](#use-external-tools-to-manage-and-process-logs) +- [Log formats](#log-formats) + +## Configure your InfluxDB log location + +By default, InfluxDB outputs all logs to **stdout**. To view InfluxDB logs, +view the output of the [`influxd`](/influxdb/v2.6/reference/cli/influxd/) process. + +- [Write logs to a file](#write-logs-to-a-file) +- [Logs when running InfluxDB as a service](#logs-when-running-influxdb-as-a-service) + +### Write logs to a file + +To write InfluxDB logs to a file, redirect **stdout** to a file when starting +the InfluxDB service ([`influxd`](/influxdb/v2.6/reference/cli/influxd/)). + +```sh +influxd 1> /path/to/influxdb.log +``` + +{{% note %}} +When logging to a file, InfluxDB uses the [logfmt](#logfmt) format. +{{% /note %}} + +### Logs when running InfluxDB as a service + +If you use a service manager to run InfluxDB, the service manager determines the location of logs. 
+ +{{< tabs-wrapper >}} +{{% tabs %}} +[systemd](#) +[sysvinit](#) +{{% /tabs %}} + +{{% tab-content %}} + +Most Linux systems direct logs to the `systemd` journal. +To access these logs, use the following command: + +```sh +sudo journalctl -u influxdb.service +``` + +For more information, see the [journald.conf documentation](https://www.freedesktop.org/software/systemd/man/journald.conf.html). + +{{% /tab-content %}} + + +{{% tab-content %}} + +When InfluxDB is run as a service, **stdout** is discarded by default (sent to `/dev/null`). +To write logs to a file: + +1. Open the InfluxDB startup script (`/etc/default/influxdb`) in a text editor. +2. Set the `STDOUT` environment variable to the path where you want to store + the InfluxDB logs. For example: + + ```conf + STDOUT=/var/log/influxdb/influxd.log + ``` + +3. Save the changes to the startup script. +4. Restart the InfluxDB service to apply the changes. + + ```sh + service influxdb restart + ``` + +{{% /tab-content %}} + +{{< /tabs-wrapper >}} + +## Configure your log level + +Use the [`log-level` InfluxDB configuration option](/influxdb/v2.6/reference/config-options/#log-level) +to specify the log levels the InfluxDB service outputs. +InfluxDB supports the following log levels: + +- **debug**: Output logs with debug, info, and error log levels. +- **info**: _(Default)_ Output logs with info and error log levels. +- **error**: Output logs with the error log level only. 
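The settings are cumulative--each one emits its own level plus everything more severe. A quick sketch (plain JavaScript, not InfluxDB code) of which entries each setting outputs:

```javascript
// Levels ordered from least to most severe.
const LEVELS = ['debug', 'info', 'error']

// Return the log levels InfluxDB outputs for a given `log-level` setting.
function emittedLevels(setting) {
  const i = LEVELS.indexOf(setting)
  if (i === -1) throw new Error(`unknown log level: ${setting}`)
  return LEVELS.slice(i)
}

console.log(emittedLevels('info')) // [ 'info', 'error' ]
```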
+ +{{< tabs-wrapper >}} +{{% tabs "small" %}} +[influxd flag](#) +[Environment variable](#) +[InfluxDB configuration file](#) +{{% /tabs %}} + +{{% tab-content %}} +```sh +influxd --log-level=info +``` +{{% /tab-content %}} + +{{% tab-content %}} +```sh +export INFLUXD_LOG_LEVEL=info +``` +{{% /tab-content %}} + +{{% tab-content %}} + +{{< code-tabs-wrapper >}} +{{% code-tabs %}} +[YAML](#) +[TOML](#) +[JSON](#) +{{% /code-tabs %}} +{{% code-tab-content %}} +```yml +log-level: info +``` +{{% /code-tab-content %}} +{{% code-tab-content %}} +```toml +log-level = "info" +``` +{{% /code-tab-content %}} +{{% code-tab-content %}} +```json +{ + "log-level": "info" +} +``` +{{% /code-tab-content %}} +{{< /code-tabs-wrapper >}} + +{{% /tab-content %}} + +{{< /tabs-wrapper >}} + +_For information about configuring InfluxDB, see [InfluxDB configuration options](/influxdb/v2.6/reference/config-options/)._ + +## Enable the Flux query log + +Use the [`flux-log-enabled` configuration option](/influxdb/v2.6/reference/config-options/#flux-log-enabled) +to enable Flux query logging. InfluxDB outputs Flux query logs to **stdout** +with all other InfluxDB logs. 
+ +{{< tabs-wrapper >}} +{{% tabs "small" %}} +[influxd flag](#) +[Environment variable](#) +[InfluxDB configuration file](#) +{{% /tabs %}} + +{{% tab-content %}} +```sh +influxd --flux-log-enabled +``` +{{% /tab-content %}} + +{{% tab-content %}} +```sh +export INFLUXD_FLUX_LOG_ENABLED=true +``` +{{% /tab-content %}} + +{{% tab-content %}} + +{{< code-tabs-wrapper >}} +{{% code-tabs %}} +[YAML](#) +[TOML](#) +[JSON](#) +{{% /code-tabs %}} +{{% code-tab-content %}} +```yml +flux-log-enabled: true +``` +{{% /code-tab-content %}} +{{% code-tab-content %}} +```toml +flux-log-enabled = true +``` +{{% /code-tab-content %}} +{{% code-tab-content %}} +```json +{ + "flux-log-enabled": true +} +``` +{{% /code-tab-content %}} +{{< /code-tabs-wrapper >}} + +{{% /tab-content %}} + +{{< /tabs-wrapper >}} + +_For information about configuring InfluxDB, see [InfluxDB configuration options](/influxdb/v2.6/reference/config-options/)._ + + +## Use external tools to manage and process logs + +Use the following popular tools to manage and process InfluxDB logs: + +### logrotate + +[logrotate](https://github.com/logrotate/logrotate) simplifies the +administration of log files and provides automatic rotation, compression, removal, +and mailing of log files. Logrotate can be set to handle a log file hourly, +daily, weekly, monthly, or when the log file reaches a certain size. + +### hutils + +[hutils](https://blog.heroku.com/hutils-explore-your-structured-data-logs) is a +collection of command-line utilities for working with logs with [logfmt](#logfmt) +encoding, including: + +- **lcut**: Extracts values from a logfmt trace based on a specified field name. +- **lfmt**: Reformats and highlights key sections of logfmt lines. +- **ltap**: Accesses messages from log providers in a consistent way to allow + easy parsing by other utilities that operate on logfmt traces. 
+- **lviz**: Visualizes logfmt output by building a tree out of a dataset, + combining common sets of key-value pairs into shared parent nodes. + +### lnav (Log File Navigator) + +[lnav (Log File Navigator)](http://lnav.org/) is an advanced log file viewer useful for watching +and analyzing log files from a terminal. +The lnav viewer provides a single log view, automatic log format detection, +filtering, timeline view, pretty-print view, and querying logs using SQL. + + +## Log formats + +InfluxDB outputs logs in one of two formats, depending on where +logs are output. + +- [Console/TTY](#consoletty) +- [logfmt](#logfmt) + +### Console/TTY + +**When logging to a terminal or other TTY devices**, InfluxDB uses a console-friendly format. + +##### Example console/TTY format +```sh +2022-09-29T21:58:29.936355Z info Welcome to InfluxDB {"log_id": "0dEoz3C0000", "version": "dev", "commit": "663d43d210", "build_date": "2022-09-29T21:58:29Z", "log_level": "info"} +2022-09-29T21:58:29.977671Z info Resources opened {"log_id": "0dEoz3C0000", "service": "bolt", "path": "/Users/exampleuser/.influxdbv2/influxd.bolt"} +2022-09-29T21:58:29.977891Z info Resources opened {"log_id": "0dEoz3C0000", "service": "sqlite", "path": "/Users/exampleuser/.influxdbv2/influxd.sqlite"} +2022-09-29T21:58:30.059709Z info Checking InfluxDB metadata for prior version. {"log_id": "0dEoz3C0000", "bolt_path": "/Users/exampleuser/.influxdbv2/influxd.bolt"} +``` + +### logfmt + +**When logging to a file**, InfluxDB uses **logfmt**, a machine-readable +structured log format that provides simpler integrations with external tools like +[Splunk](https://www.splunk.com/), [Papertrail](https://www.papertrail.com/), +[Elasticsearch](https://www.elastic.co/), and other third-party tools. 
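Because every logfmt entry is a flat list of `key=value` pairs, it is easy to process programmatically. A minimal JavaScript parser sketch (handles unquoted values and double-quoted values with spaces; it is not a replacement for the tools above):

```javascript
// Split a logfmt line into an object of key-value pairs.
// Quoted values keep their inner spaces; quotes are stripped.
function parseLogfmt(line) {
  const pairs = {}
  const re = /(\w+)=("[^"]*"|\S+)/g
  let m
  while ((m = re.exec(line)) !== null) {
    const [, key, raw] = m
    pairs[key] = raw.startsWith('"') ? raw.slice(1, -1) : raw
  }
  return pairs
}

const entry = parseLogfmt(
  'ts=2022-09-29T16:54:16.021427Z lvl=info msg="Welcome to InfluxDB" log_id=0dEYZvqG000'
)
console.log(entry.lvl, entry.msg) // info Welcome to InfluxDB
```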
+ +##### Example logfmt format +```sh +ts=2022-09-29T16:54:16.021427Z lvl=info msg="Welcome to InfluxDB" log_id=0dEYZvqG000 version=dev commit=663d43d210 build_date=2022-09-29T16:54:15Z log_level=info +ts=2022-09-29T16:54:16.062239Z lvl=info msg="Resources opened" log_id=0dEYZvqG000 service=bolt path=/Users/exampleuser/.influxdbv2/influxd.bolt +ts=2022-09-29T16:54:16.062457Z lvl=info msg="Resources opened" log_id=0dEYZvqG000 service=sqlite path=/Users/exampleuser/.influxdbv2/influxd.sqlite +ts=2022-09-29T16:54:16.144430Z lvl=info msg="Checking InfluxDB metadata for prior version." log_id=0dEYZvqG000 bolt_path=/Users/exampleuser/.influxdbv2/influxd.bolt +``` diff --git a/content/influxdb/v2.6/api-guide/_index.md b/content/influxdb/v2.6/api-guide/_index.md new file mode 100644 index 000000000..5545d8208 --- /dev/null +++ b/content/influxdb/v2.6/api-guide/_index.md @@ -0,0 +1,41 @@ +--- +title: Develop with the InfluxDB API +seotitle: Use the InfluxDB API +description: Interact with InfluxDB 2.6 using a rich API for writing and querying data and more. +weight: 4 +menu: + influxdb_2_6: + name: Develop with the API +influxdb/v2.6/tags: [api] +--- + +The InfluxDB v2 API provides a programmatic interface for interactions with InfluxDB. +Access the InfluxDB API using the `/api/v2/` endpoint. + +## Developer guides + +- [API Quick Start](/influxdb/v2.6/api-guide/api_intro/) + +## InfluxDB client libraries + +InfluxDB client libraries are language-specific packages that integrate with the InfluxDB v2 API. +For tutorials and information about client libraries, see [InfluxDB client libraries](/{{< latest "influxdb" >}}/api-guide/client-libraries/). + +## InfluxDB v2 API documentation + +[InfluxDB OSS {{< current-version >}} API documentation](/influxdb/v2.6/api/) + +### View InfluxDB API documentation locally + +InfluxDB API documentation is built into the `influxd` service and represents +the API specific to the current version of InfluxDB. 
+To view the API documentation locally, [start InfluxDB](/influxdb/v2.6/get-started/#start-influxdb) +and visit the `/docs` endpoint in a browser ([localhost:8086/docs](http://localhost:8086/docs)). + +## InfluxDB v1 compatibility API documentation + +The InfluxDB v2 API includes [InfluxDB 1.x compatibility endpoints](/influxdb/v2.6/reference/api/influxdb-1x/) +that work with InfluxDB 1.x client libraries and third-party integrations like +[Grafana](https://grafana.com) and others. + +[View full v1 compatibility API documentation](/influxdb/v2.6/api/v1-compatibility/) diff --git a/content/influxdb/v2.6/api-guide/api_intro.md b/content/influxdb/v2.6/api-guide/api_intro.md new file mode 100644 index 000000000..a98824d84 --- /dev/null +++ b/content/influxdb/v2.6/api-guide/api_intro.md @@ -0,0 +1,75 @@ +--- +title: API Quick Start +seotitle: Use the InfluxDB API +description: Interact with InfluxDB using a rich API for writing and querying data and more. +weight: 3 +menu: + influxdb_2_6: + name: Quick start + parent: Develop with the API +aliases: + - /influxdb/v2.6/tools/api/ +influxdb/v2.6/tags: [api] +--- + +InfluxDB offers a rich API and [client libraries](/influxdb/v2.6/api-guide/client-libraries) ready to integrate with your application. Use popular tools like curl and [Postman](/influxdb/v2.6/api-guide/postman) to rapidly test API requests. + +This section guides you through the most commonly used API methods. + +For detailed documentation on the entire API, see the [InfluxDB v2 API Reference](/influxdb/v2.6/reference/api/#influxdb-v2-api-documentation). + +{{% note %}} +If you need to use InfluxDB {{< current-version >}} with **InfluxDB 1.x** API clients and integrations, see the [1.x compatibility API](/influxdb/v2.6/reference/api/influxdb-1x/). +{{% /note %}} + +## Bootstrap your application + +With most API requests, you'll need to provide, at a minimum, your InfluxDB URL, organization, and API token. 
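Each of these three values has a fixed place in a request: the URL forms the base, the organization is usually an `org` query parameter, and the token goes in an `Authorization` header. A plain Node.js sketch (placeholder values, nothing is sent) of assembling them for the `/api/v2/buckets` endpoint:

```javascript
// Placeholder values--substitute your own URL, organization, and API token.
const url = 'http://localhost:8086'
const org = 'example-org'
const token = 'YOUR_API_TOKEN'

// Build the endpoint URL with the organization as a query parameter.
const endpoint = new URL('/api/v2/buckets', url)
endpoint.searchParams.set('org', org)

// The API token is sent in an Authorization header with the Token scheme.
const headers = {Authorization: `Token ${token}`}

console.log(endpoint.toString())
// http://localhost:8086/api/v2/buckets?org=example-org
```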
+ +[Install InfluxDB OSS v2.x](/influxdb/v2.6/install/) or upgrade to +an [InfluxDB Cloud account](/influxdb/cloud/sign-up). + +### Authentication + +InfluxDB uses [API tokens](/influxdb/v2.6/security/tokens/) to authorize API requests. + +1. Before exploring the API, use the InfluxDB UI to +[create an initial API token](/influxdb/v2.6/security/tokens/create-token/) for your application. + +2. Include your API token in an `Authorization: Token YOUR_API_TOKEN` HTTP header with each request. + +{{< code-tabs-wrapper >}} +{{% code-tabs %}} +[curl](#curl) +[Node.js](#nodejs) +{{% /code-tabs %}} +{{% code-tab-content %}} +```sh +{{% get-shared-text "api/v2.0/auth/oss/token-auth.sh" %}} +``` +{{% /code-tab-content %}} +{{% code-tab-content %}} +```js +{{% get-shared-text "api/v2.0/auth/oss/token-auth.js" %}} +``` +{{% /code-tab-content %}} +{{< /code-tabs-wrapper >}} + +Postman is another popular tool for exploring APIs. See how to [send authenticated requests with Postman](/{{< latest "influxdb" >}}/api-guide/postman/#send-authenticated-api-requests-with-postman). + +## Buckets API + +Before writing data, you'll need to create a bucket in InfluxDB. +[Create a bucket](/influxdb/v2.6/organizations/buckets/create-bucket/#create-a-bucket-using-the-influxdb-api) using an HTTP request to the InfluxDB API `/api/v2/buckets` endpoint. + +```sh +{{% get-shared-text "api/v2.0/buckets/oss/create.sh" %}} +``` + +## Write API + +[Write data to InfluxDB](/influxdb/v2.6/write-data/developer-tools/api/) using an HTTP request to the InfluxDB API `/api/v2/write` endpoint. + +## Query API + +[Query data from InfluxDB](/influxdb/v2.6/query-data/execute-queries/influx-api/) using an HTTP request to the `/api/v2/query` endpoint. 
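The `/api/v2/query` endpoint accepts a JSON body containing the Flux source and a `type` field. A sketch of the request shape (placeholder bucket and token; nothing is sent here):

```javascript
// Request shape only--this object is what an HTTP client would POST
// to /api/v2/query?org=example-org.
const body = JSON.stringify({
  query: 'from(bucket:"example-bucket") |> range(start: -1h)',
  type: 'flux',
})

const options = {
  method: 'POST',
  headers: {
    Authorization: 'Token YOUR_API_TOKEN',
    'Content-Type': 'application/json',
    // Query results are returned as annotated CSV.
    Accept: 'application/csv',
  },
  body,
}

console.log(JSON.parse(options.body).type) // flux
```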
diff --git a/content/influxdb/v2.6/api-guide/client-libraries/_index.md b/content/influxdb/v2.6/api-guide/client-libraries/_index.md new file mode 100644 index 000000000..8554b1d6b --- /dev/null +++ b/content/influxdb/v2.6/api-guide/client-libraries/_index.md @@ -0,0 +1,27 @@ +--- +title: Use InfluxDB client libraries +description: > + InfluxDB client libraries are language-specific tools that integrate with the InfluxDB v2 API. + View the list of available client libraries. +weight: 101 +aliases: + - /influxdb/v2.6/reference/client-libraries/ + - /influxdb/v2.6/reference/api/client-libraries/ + - /influxdb/v2.6/tools/client-libraries/ + - /influxdb/v2.x/api-guide/client-libraries/ +menu: + influxdb_2_6: + name: Client libraries + parent: Develop with the API +influxdb/v2.6/tags: [client libraries] +--- + +InfluxDB client libraries are language-specific packages that integrate with the InfluxDB v2 API. +The following **InfluxDB v2** client libraries are available: + +{{% note %}} +These client libraries are in active development and may not be feature-complete. +This list will continue to grow as more client libraries are released. +{{% /note %}} + +{{< children type="list" >}} diff --git a/content/influxdb/v2.6/api-guide/client-libraries/arduino.md b/content/influxdb/v2.6/api-guide/client-libraries/arduino.md new file mode 100644 index 000000000..cba7ae8a9 --- /dev/null +++ b/content/influxdb/v2.6/api-guide/client-libraries/arduino.md @@ -0,0 +1,21 @@ +--- +title: Arduino client library +seotitle: Use the InfluxDB Arduino client library +list_title: Arduino +description: Use the InfluxDB Arduino client library to interact with InfluxDB. 
+external_url: https://github.com/tobiasschuerg/InfluxDB-Client-for-Arduino +list_note: _– contributed by [tobiasschuerg](https://github.com/tobiasschuerg)_ +menu: + influxdb_2_6: + name: Arduino + parent: Client libraries + params: + url: https://github.com/tobiasschuerg/InfluxDB-Client-for-Arduino +weight: 201 +--- + +Arduino is an open-source hardware and software platform used for building electronics projects. + +The documentation for this client library is available on GitHub. + +Arduino InfluxDB client \ No newline at end of file diff --git a/content/influxdb/v2.6/api-guide/client-libraries/browserjs.md b/content/influxdb/v2.6/api-guide/client-libraries/browserjs.md new file mode 100644 index 000000000..4ae31dcc1 --- /dev/null +++ b/content/influxdb/v2.6/api-guide/client-libraries/browserjs.md @@ -0,0 +1,117 @@ +--- +title: JavaScript client library for web browsers +seotitle: Use the InfluxDB JavaScript client library for web browsers +list_title: JavaScript for browsers +description: > + Use the InfluxDB JavaScript client library to interact with InfluxDB in web clients. +menu: + influxdb_2_6: + name: JavaScript for browsers + identifier: client_js_browsers + parent: Client libraries +influxdb/v2.6/tags: [client libraries, JavaScript] +weight: 201 +aliases: + - /influxdb/v2.6/reference/api/client-libraries/browserjs/ + - /influxdb/v2.6/api-guide/client-libraries/browserjs/write + - /influxdb/v2.6/api-guide/client-libraries/browserjs/query +related: + - /influxdb/v2.6/api-guide/client-libraries/nodejs/write/ + - /influxdb/v2.6/api-guide/client-libraries/nodejs/query/ +--- + +Use the [InfluxDB JavaScript client library](https://github.com/influxdata/influxdb-client-js) to interact with the InfluxDB API in browsers and front-end clients. 
This library supports both front-end and server-side environments and provides the following distributions: +* ECMAScript modules (ESM) and CommonJS modules (CJS) +* Bundled ESM +* Bundled UMD + +This guide presumes some familiarity with JavaScript, browser environments, and InfluxDB. +If you're just getting started with InfluxDB, see [Get started with InfluxDB](/{{% latest "influxdb" %}}/get-started/). + +{{% warn %}} +### Tokens in production applications +{{% api/browser-token-warning %}} +{{% /warn %}} + +* [Before you begin](#before-you-begin) +* [Use with module bundlers](#use-with-module-bundlers) +* [Use bundled distributions with browsers and module loaders](#use-bundled-distributions-with-browsers-and-module-loaders) +* [Get started with the example app](#get-started-with-the-example-app) + +## Before you begin + +1. Install [Node.js](https://nodejs.org/en/download/package-manager/) to serve your front-end app. + +2. Ensure that InfluxDB is running and you can connect to it. + For information about what URL to use to connect to InfluxDB OSS or InfluxDB Cloud, see [InfluxDB URLs](/{{% latest "influxdb" %}}/reference/urls/). + +## Use with module bundlers + +If you use a module bundler like Webpack or Parcel, install `@influxdata/influxdb-client-browser`. +For more information and examples, see [Node.js](/{{% latest "influxdb" %}}/api-guide/client-libraries/nodejs/). + +## Use bundled distributions with browsers and module loaders + +1. Configure InfluxDB properties for your script. + + ```html + <script> + // Example values--replace with your own InfluxDB URL and API token. + window.INFLUX_ENV = { + url: 'http://localhost:8086', + token: 'YOUR_API_TOKEN', + } + </script> + ``` + +2. Import modules from the latest client library browser distribution. +`@influxdata/influxdb-client-browser` exports bundled ESM and UMD syntaxes. 
+ + {{< code-tabs-wrapper >}} + {{% code-tabs %}} + [ESM](#import-esm) + [UMD](#import-umd) + {{% /code-tabs %}} + {{% code-tab-content %}} + ```html + <script type="module"> + // Import the client library from the bundled ESM distribution. + import {InfluxDB, Point} from 'https://unpkg.com/@influxdata/influxdb-client-browser/dist/index.browser.mjs' + </script> + ``` + {{% /code-tab-content %}} + {{% code-tab-content %}} + ```html + <script src="https://unpkg.com/@influxdata/influxdb-client-browser"></script> + <script> + // The UMD bundle attaches the client library to a global object. + const {InfluxDB, Point} = influxdb + </script> + ``` + {{% /code-tab-content %}} + {{< /code-tabs-wrapper >}} + +After you've imported the client library, you're ready to [write data](/{{% latest "influxdb" %}}/api-guide/client-libraries/nodejs/write/?t=nodejs) to InfluxDB. + +## Get started with the example app + +This library includes an example browser app that queries from and writes to your InfluxDB instance. + +1. Clone the [influxdb-client-js](https://github.com/influxdata/influxdb-client-js) repo. + +2. Navigate to the `examples` directory: + ```sh + cd examples + ``` + +3. Update `./env_browser.js` with your InfluxDB [url](/{{% latest "influxdb" %}}/reference/urls/), [bucket](/{{% latest "influxdb" %}}/organizations/buckets/), [organization](/{{% latest "influxdb" %}}/organizations/), and [token](/{{% latest "influxdb" %}}/security/tokens/). + +4. Run the following command to start the application at [http://localhost:3001/examples/index.html](http://localhost:3001/examples/index.html): + + ```sh + npm run browser + ``` + + `index.html` loads the `env_browser.js` configuration, the client library ESM modules, and the application in your browser. diff --git a/content/influxdb/v2.6/api-guide/client-libraries/csharp.md b/content/influxdb/v2.6/api-guide/client-libraries/csharp.md new file mode 100644 index 000000000..11928a9d9 --- /dev/null +++ b/content/influxdb/v2.6/api-guide/client-libraries/csharp.md @@ -0,0 +1,20 @@ +--- +title: C# client library +list_title: C# +seotitle: Use the InfluxDB C# client library +description: Use the InfluxDB C# client library to interact with InfluxDB. 
+external_url: https://github.com/influxdata/influxdb-client-csharp +menu: + influxdb_2_6: + name: C# + parent: Client libraries + params: + url: https://github.com/influxdata/influxdb-client-csharp +weight: 201 +--- + +C# is a general-purpose object-oriented programming language. + +The documentation for this client library is available on GitHub. + +C# InfluxDB client \ No newline at end of file diff --git a/content/influxdb/v2.6/api-guide/client-libraries/dart.md b/content/influxdb/v2.6/api-guide/client-libraries/dart.md new file mode 100644 index 000000000..dcdaecd66 --- /dev/null +++ b/content/influxdb/v2.6/api-guide/client-libraries/dart.md @@ -0,0 +1,20 @@ +--- +title: Dart client library +list_title: Dart +seotitle: Use the InfluxDB Dart client library +description: Use the InfluxDB Dart client library to interact with InfluxDB. +external_url: https://github.com/influxdata/influxdb-client-dart +menu: + influxdb_2_6: + name: Dart + parent: Client libraries + params: + url: https://github.com/influxdata/influxdb-client-dart +weight: 201 +--- + +Dart is a programming language created for quick application development for both web and mobile apps. + +The documentation for this client library is available on GitHub. + +Dart InfluxDB client \ No newline at end of file diff --git a/content/influxdb/v2.6/api-guide/client-libraries/go.md b/content/influxdb/v2.6/api-guide/client-libraries/go.md new file mode 100644 index 000000000..68f7a4a09 --- /dev/null +++ b/content/influxdb/v2.6/api-guide/client-libraries/go.md @@ -0,0 +1,206 @@ +--- +title: Go client library +seotitle: Use the InfluxDB Go client library +list_title: Go +description: > + Use the InfluxDB Go client library to interact with InfluxDB. 
+menu: + influxdb_2_6: + name: Go + parent: Client libraries +influxdb/v2.6/tags: [client libraries, Go] +weight: 201 +aliases: + - /influxdb/v2.6/reference/api/client-libraries/go/ + - /influxdb/v2.6/tools/client-libraries/go/ +--- + +Use the [InfluxDB Go client library](https://github.com/influxdata/influxdb-client-go) to integrate InfluxDB into Go scripts and applications. + +This guide presumes some familiarity with Go and InfluxDB. +If you're just getting started, see [Get started with InfluxDB](/influxdb/v2.6/get-started/). + +## Before you begin + +1. [Install Go 1.13 or later](https://golang.org/doc/install). +2. Add the client package to your project dependencies. + + ```sh + # Add InfluxDB Go client package to your project go.mod + go get github.com/influxdata/influxdb-client-go/v2 + ``` +3. Ensure that InfluxDB is running and you can connect to it. + For information about what URL to use to connect to InfluxDB OSS or InfluxDB Cloud, see [InfluxDB URLs](/influxdb/v2.6/reference/urls/). + +## Boilerplate for the InfluxDB Go Client Library + +Use the Go library to write and query data from InfluxDB. + +1. In your Go program, import the necessary packages and specify the entry point of your executable program. + + ```go + package main + + import ( + "context" + "fmt" + "time" + + "github.com/influxdata/influxdb-client-go/v2" + ) + ``` + +2. Define variables for your InfluxDB [bucket](/influxdb/v2.6/organizations/buckets/), [organization](/influxdb/v2.6/organizations/), and [token](/influxdb/v2.6/security/tokens/). + + ```go + bucket := "example-bucket" + org := "example-org" + token := "example-token" + // Store the URL of your InfluxDB instance + url := "http://localhost:8086" + ``` + +3. Create the InfluxDB Go client and pass in the `url` and `token` parameters. + + ```go + client := influxdb2.NewClient(url, token) + ``` + +4. Create a **write client** with the `WriteAPIBlocking` method and pass in the `org` and `bucket` parameters. 
+ + ```go + writeAPI := client.WriteAPIBlocking(org, bucket) + ``` + +5. To query data, create an InfluxDB **query client** and pass in your InfluxDB `org`. + + ```go + queryAPI := client.QueryAPI(org) + ``` + +## Write data to InfluxDB with Go + +Use the Go library to write data to InfluxDB. + +1. Create a [point](/influxdb/v2.6/reference/glossary/#point) and write it to InfluxDB using the `WritePoint` method of the API writer struct. + +2. Close the client to flush all pending writes and finish. + + ```go + p := influxdb2.NewPoint("stat", + map[string]string{"unit": "temperature"}, + map[string]interface{}{"avg": 24.5, "max": 45}, + time.Now()) + writeAPI.WritePoint(context.Background(), p) + client.Close() + ``` + +### Complete example write script + +```go +func main() { + bucket := "example-bucket" + org := "example-org" + token := "example-token" + // Store the URL of your InfluxDB instance + url := "http://localhost:8086" + // Create new client with default option for server url authenticate by token + client := influxdb2.NewClient(url, token) + // Use blocking write client for writes to desired bucket + writeAPI := client.WriteAPIBlocking(org, bucket) + // Create point using full params constructor + p := influxdb2.NewPoint("stat", + map[string]string{"unit": "temperature"}, + map[string]interface{}{"avg": 24.5, "max": 45}, + time.Now()) + // Write point immediately + writeAPI.WritePoint(context.Background(), p) + // Ensure background processes finish + client.Close() +} +``` + +## Query data from InfluxDB with Go + +Use the Go library to query data from InfluxDB. + +1. Create a Flux query and supply your `bucket` parameter. + + ```js + from(bucket:"example-bucket") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "stat") + ``` + + The query client sends the Flux query to InfluxDB and returns the results as a FluxRecord object with a table structure. + +**The query client includes the following methods:** + +- `Query`: Sends the Flux query to InfluxDB. 
+- `Next`: Iterates over the query response. +- `TableChanged`: Identifies when the group key changes. +- `Record`: Returns the last parsed FluxRecord and gives access to value and row properties. +- `Value`: Returns the actual field value. + +```go +result, err := queryAPI.Query(context.Background(), `from(bucket:"example-bucket") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "stat")`) +if err == nil { + for result.Next() { + if result.TableChanged() { + fmt.Printf("table: %s\n", result.TableMetadata().String()) + } + fmt.Printf("value: %v\n", result.Record().Value()) + } + if result.Err() != nil { + fmt.Printf("query parsing error: %s\n", result.Err().Error()) + } +} else { + panic(err) +} +``` + +**The FluxRecord object includes the following methods for accessing your data:** + +- `Table()`: Returns the index of the table the record belongs to. +- `Start()`: Returns the inclusive lower time bound of all records in the current table. +- `Stop()`: Returns the exclusive upper time bound of all records in the current table. +- `Time()`: Returns the time of the record. +- `Value()`: Returns the actual field value. +- `Field()`: Returns the field name. +- `Measurement()`: Returns the measurement name of the record. +- `Values()`: Returns a map of column values. +- `ValueByKey()`: Returns a value from the record for a given column key. 
+ +### Complete example query script + +```go +func main() { + // Create client + client := influxdb2.NewClient(url, token) + // Get query client + queryAPI := client.QueryAPI(org) + // Get QueryTableResult + result, err := queryAPI.Query(context.Background(), `from(bucket:"my-bucket") |> range(start: -1h) |> filter(fn: (r) => r._measurement == "stat")`) + if err == nil { + // Iterate over query response + for result.Next() { + // Notice when group key has changed + if result.TableChanged() { + fmt.Printf("table: %s\n", result.TableMetadata().String()) + } + // Access data + fmt.Printf("value: %v\n", result.Record().Value()) + } + // Check for an error + if result.Err() != nil { + fmt.Printf("query parsing error: %s\n", result.Err().Error()) + } + } else { + panic(err) + } + // Ensure background processes finish + client.Close() +} +``` + +For more information, see the [Go client README on GitHub](https://github.com/influxdata/influxdb-client-go). diff --git a/content/influxdb/v2.6/api-guide/client-libraries/java.md b/content/influxdb/v2.6/api-guide/client-libraries/java.md new file mode 100644 index 000000000..bdd0e1ed1 --- /dev/null +++ b/content/influxdb/v2.6/api-guide/client-libraries/java.md @@ -0,0 +1,20 @@ +--- +title: Java client library +seotitle: Use the InfluxDB Java client library +list_title: Java +description: Use the Java client library to interact with InfluxDB. +external_url: https://github.com/influxdata/influxdb-client-java +menu: + influxdb_2_6: + name: Java + parent: Client libraries + params: + url: https://github.com/influxdata/influxdb-client-java +weight: 201 +--- + +Java is one of the oldest and most popular class-based, object-oriented programming languages. + +The documentation for this client library is available on GitHub. 
+ +Java InfluxDB client \ No newline at end of file diff --git a/content/influxdb/v2.6/api-guide/client-libraries/kotlin.md b/content/influxdb/v2.6/api-guide/client-libraries/kotlin.md new file mode 100644 index 000000000..2a30cff13 --- /dev/null +++ b/content/influxdb/v2.6/api-guide/client-libraries/kotlin.md @@ -0,0 +1,20 @@ +--- +title: Kotlin client library +seotitle: Use the Kotlin client library +list_title: Kotlin +description: Use the InfluxDB Kotlin client library to interact with InfluxDB. +external_url: https://github.com/influxdata/influxdb-client-java/tree/master/client-kotlin +menu: + influxdb_2_6: + name: Kotlin + parent: Client libraries + params: + url: https://github.com/influxdata/influxdb-client-java/tree/master/client-kotlin +weight: 201 +--- + +Kotlin is an open-source programming language that runs on the Java Virtual Machine (JVM). + +The documentation for this client library is available on GitHub. + +Kotlin InfluxDB client \ No newline at end of file diff --git a/content/influxdb/v2.6/api-guide/client-libraries/nodejs/_index.md b/content/influxdb/v2.6/api-guide/client-libraries/nodejs/_index.md new file mode 100644 index 000000000..83981b7e6 --- /dev/null +++ b/content/influxdb/v2.6/api-guide/client-libraries/nodejs/_index.md @@ -0,0 +1,23 @@ +--- +title: Node.js JavaScript client library +seotitle: Use the InfluxDB JavaScript client library +list_title: Node.js +description: > + Use the InfluxDB Node.js JavaScript client library to interact with InfluxDB. +menu: + influxdb_2_6: + name: Node.js + parent: Client libraries +influxdb/v2.6/tags: [client libraries, JavaScript] +weight: 201 +aliases: + - /influxdb/v2.6/reference/api/client-libraries/nodejs/ + - /influxdb/v2.6/reference/api/client-libraries/js/ +--- + +Use the [InfluxDB JavaScript client library](https://github.com/influxdata/influxdb-client-js) to integrate InfluxDB into your Node.js application. 
+In this guide, you'll start a Node.js project from scratch and code some simple API operations. + +{{< children >}} + +{{% api/v2dot0/nodejs/learn-more %}} diff --git a/content/influxdb/v2.6/api-guide/client-libraries/nodejs/install.md b/content/influxdb/v2.6/api-guide/client-libraries/nodejs/install.md new file mode 100644 index 000000000..ba60b6fa3 --- /dev/null +++ b/content/influxdb/v2.6/api-guide/client-libraries/nodejs/install.md @@ -0,0 +1,97 @@ +--- +title: Install the InfluxDB JavaScript client library +seotitle: Install the InfluxDB Node.js JavaScript client library +description: > + Install the JavaScript client library to interact with the InfluxDB API in Node.js. +menu: + influxdb_2_6: + name: Install + parent: Node.js +influxdb/v2.6/tags: [client libraries, JavaScript] +weight: 100 +aliases: + - /influxdb/v2.6/reference/api/client-libraries/nodejs/install +--- + + +## Install Node.js + +1. Install [Node.js](https://nodejs.org/en/download/package-manager/). + +2. Ensure that InfluxDB is running and you can connect to it. + For information about what URL to use to connect to InfluxDB OSS or InfluxDB Cloud, see [InfluxDB URLs](/influxdb/v2.6/reference/urls/). + +3. Start a new Node.js project. + The `npm` package manager is included with Node.js. + + ```sh + mkdir influx-node-app && cd influx-node-app + npm init -y + ``` + +## Install TypeScript + +Many of the client library examples use [TypeScript](https://www.typescriptlang.org/). Follow these steps to initialize the TypeScript project. + +1. Install TypeScript and type definitions for Node.js. + + ```sh + npm i -g typescript && npm i --save-dev @types/node + ``` +2. Create a TypeScript configuration with default values. + + ```sh + tsc --init + ``` +3. Run the TypeScript compiler. To recompile your code automatically as you make changes, pass the `watch` flag to the compiler. 
+
+   ```sh
+   tsc -w
+   ```
+
+## Install dependencies
+
+The JavaScript client library contains two packages: `@influxdata/influxdb-client` and `@influxdata/influxdb-client-apis`.
+Add both as dependencies of your project.
+
+1. Open a new terminal window and install `@influxdata/influxdb-client` for querying and writing data:
+
+   ```sh
+   npm install --save @influxdata/influxdb-client
+   ```
+
+2. Install `@influxdata/influxdb-client-apis` for access to the InfluxDB management APIs:
+
+   ```sh
+   npm install --save @influxdata/influxdb-client-apis
+   ```
+
+## Next steps
+
+Once you've installed the JavaScript client library, you're ready to [write data](/influxdb/v2.6/api-guide/client-libraries/nodejs/write/) to InfluxDB or [get started](#get-started-with-examples) with other examples from the client library.
+
+## Get started with examples
+
+{{% note %}}
+The client examples include an [`env`](https://github.com/influxdata/influxdb-client-js/blob/master/examples/env.js) module for accessing your InfluxDB properties from environment variables or from `env.js`.
+The examples use these properties to interact with the InfluxDB API.
+{{% /note %}}
+
+1. Set environment variables or update `env.js` with your InfluxDB [bucket](/influxdb/v2.6/organizations/buckets/), [organization](/influxdb/v2.6/organizations/), [token](/influxdb/v2.6/security/tokens/), and [URL](/influxdb/v2.6/reference/urls/).
+
+   ```sh
+   export INFLUX_URL=http://localhost:8086
+   export INFLUX_TOKEN=YOUR_API_TOKEN
+   export INFLUX_ORG=YOUR_ORG
+   export INFLUX_BUCKET=YOUR_BUCKET
+   ```
+   Replace the following:
+   - *`YOUR_API_TOKEN`*: InfluxDB API token
+   - *`YOUR_ORG`*: InfluxDB organization ID
+   - *`YOUR_BUCKET`*: InfluxDB bucket name
+
+2. Run an example script.
+
+   ```sh
+   npx ts-node query.ts
+   ```
+{{% api/v2dot0/nodejs/learn-more %}}
diff --git a/content/influxdb/v2.6/api-guide/client-libraries/nodejs/query.md b/content/influxdb/v2.6/api-guide/client-libraries/nodejs/query.md
new file mode 100644
index 000000000..206f29718
--- /dev/null
+++ b/content/influxdb/v2.6/api-guide/client-libraries/nodejs/query.md
@@ -0,0 +1,94 @@
+---
+title: Query data with the InfluxDB JavaScript client library
+description: >
+  Use the JavaScript client library to query data with the InfluxDB API in Node.js.
+menu:
+  influxdb_2_6:
+    name: Query
+    parent: Node.js
+influxdb/v2.6/tags: [client libraries, JavaScript]
+weight: 201
+aliases:
+  - /influxdb/v2.6/reference/api/client-libraries/nodejs/query
+---
+
+Use the [InfluxDB JavaScript client library](https://github.com/influxdata/influxdb-client-js) in a Node.js environment to query InfluxDB.
+
+The following example sends a Flux query to an InfluxDB bucket and outputs rows from an observable table.
+
+## Before you begin
+
+- [Install the client library and other dependencies](/influxdb/v2.6/api-guide/client-libraries/nodejs/install/).
+
+## Query InfluxDB
+
+1. Change to your new project directory and create a file for your query module.
+
+   ```sh
+   cd influx-node-app && touch query.js
+   ```
+
+2. Instantiate an `InfluxDB` client. Provide your InfluxDB URL and API token.
+   Use the `getQueryApi()` method of the client.
+   Provide your InfluxDB organization ID to create a configured **query client**.
+
+   ```js
+   import { InfluxDB } from '@influxdata/influxdb-client'
+
+   const queryApi = new InfluxDB({url: YOUR_URL, token: YOUR_API_TOKEN}).getQueryApi(YOUR_ORG)
+   ```
+
+   Replace the following:
+   - *`YOUR_URL`*: InfluxDB URL
+   - *`YOUR_API_TOKEN`*: InfluxDB API token
+   - *`YOUR_ORG`*: InfluxDB organization ID
+
+3. Create a Flux query for your InfluxDB bucket. Store the query as a string variable.
+   {{% warn %}}
+   To prevent injection attacks, avoid concatenating unsafe user input into queries.
+   {{% /warn %}}
+
+   ```js
+   const fluxQuery =
+     `from(bucket: "YOUR_BUCKET")
+       |> range(start: 0)
+       |> filter(fn: (r) => r._measurement == "temperature")`
+   ```
+   Replace *`YOUR_BUCKET`* with the name of your InfluxDB bucket.
+
+4. Use the `queryRows()` method of the query client to query InfluxDB.
+   `queryRows()` takes a Flux query and an [RxJS **Observer**](http://reactivex.io/rxjs/manual/overview.html#observer) object.
+   The client returns [table](/{{% latest "influxdb" %}}/reference/syntax/annotated-csv/#tables) metadata and rows as an [RxJS **Observable**](http://reactivex.io/rxjs/manual/overview.html#observable).
+   `queryRows()` subscribes your observer to the observable.
+   Finally, the observer logs the rows from the response to the terminal.
+
+   ```js
+   const observer = {
+     next(row, tableMeta) {
+       const o = tableMeta.toObject(row)
+       console.log(
+         `${o._time} ${o._measurement} in '${o.location}' (${o.sensor_id}): ${o._field}=${o._value}`
+       )
+     }
+   }
+
+   queryApi.queryRows(fluxQuery, observer)
+   ```
+
+### Complete example
+
+```js
+{{% get-shared-text "api/v2.0/query/query.mjs" %}}
+```
+
+To run the example from a file, set your InfluxDB environment variables and use `node` to execute the JavaScript file.
+
+```sh
+export INFLUX_URL=http://localhost:8086 && \
+export INFLUX_TOKEN=YOUR_API_TOKEN && \
+export INFLUX_ORG=YOUR_ORG && \
+node query.js
+```
+
+{{% api/v2dot0/nodejs/learn-more %}}
diff --git a/content/influxdb/v2.6/api-guide/client-libraries/nodejs/write.md b/content/influxdb/v2.6/api-guide/client-libraries/nodejs/write.md
new file mode 100644
index 000000000..0395364e4
--- /dev/null
+++ b/content/influxdb/v2.6/api-guide/client-libraries/nodejs/write.md
@@ -0,0 +1,117 @@
+---
+title: Write data with the InfluxDB JavaScript client library
+description: >
+  Use the JavaScript client library to write data with the InfluxDB API in Node.js.
+menu:
+  influxdb_2_6:
+    name: Write
+    parent: Node.js
+influxdb/v2.6/tags: [client libraries, JavaScript]
+weight: 101
+aliases:
+  - /influxdb/v2.6/reference/api/client-libraries/nodejs/write
+related:
+  - /influxdb/v2.6/write-data/troubleshoot/
+---
+
+Use the [InfluxDB JavaScript client library](https://github.com/influxdata/influxdb-client-js) to write data from a Node.js environment to InfluxDB.
+
+The JavaScript client library includes the following convenient features for writing data to InfluxDB:
+- Apply default tags to data points.
+- Buffer points into batches to optimize data transfer.
+- Automatically retry requests on failure.
+- Set an optional HTTP proxy address for your network.
+
+### Before you begin
+
+- [Install the client library and other dependencies](/influxdb/v2.6/api-guide/client-libraries/nodejs/install/).
+
+### Write data with the client library
+
+1. Instantiate an `InfluxDB` client. Provide your InfluxDB URL and API token.
+
+   ```js
+   import {InfluxDB, Point} from '@influxdata/influxdb-client'
+
+   const influxDB = new InfluxDB({url: YOUR_URL, token: YOUR_API_TOKEN})
+   ```
+   Replace the following:
+   - *`YOUR_URL`*: InfluxDB URL
+   - *`YOUR_API_TOKEN`*: InfluxDB API token
+
+2. Use the `getWriteApi()` method of the client to create a **write client**.
+   Provide your InfluxDB organization ID and bucket name.
+
+   ```js
+   const writeApi = influxDB.getWriteApi(YOUR_ORG, YOUR_BUCKET)
+   ```
+   Replace the following:
+   - *`YOUR_ORG`*: InfluxDB organization ID
+   - *`YOUR_BUCKET`*: InfluxDB bucket name
+
+3. To apply one or more [tags](/influxdb/v2.6/reference/glossary/#tag) to all points, use the `useDefaultTags()` method.
+   Provide tags as an object of key/value pairs.
+
+   ```js
+   writeApi.useDefaultTags({region: 'west'})
+   ```
+
+4. Use the `Point()` constructor to create a [point](/influxdb/v2.6/reference/glossary/#point).
+   1. Call the constructor and provide a [measurement](/influxdb/v2.6/reference/glossary/#measurement).
+   2.
To add one or more tags, chain the `tag()` method to the constructor. + Provide a `name` and `value`. + 3. To add a field of type `float`, chain the `floatField()` method to the constructor. + Provide a `name` and `value`. + + ```js + const point1 = new Point('temperature') + .tag('sensor_id', 'TLM010') + .floatField('value', 24) + ``` + +5. Use the `writePoint()` method to write the point to your InfluxDB bucket. + Finally, use the `close()` method to flush all pending writes. + The example logs the new data point followed by "WRITE FINISHED" to stdout. + + ```js + writeApi.writePoint(point1) + + writeApi.close().then(() => { + console.log('WRITE FINISHED') + }) + ``` + +### Complete example + +{{< code-tabs-wrapper >}} +{{% code-tabs %}} +[Curl](#curl) +[Node.js](#nodejs) +{{% /code-tabs %}} +{{% code-tab-content %}} + +```sh +{{< get-shared-text "api/v2.0/write/write.sh" >}} +``` +{{% /code-tab-content %}} +{{% code-tab-content %}} + +```js +{{< get-shared-text "api/v2.0/write/write.mjs" >}} +``` + +{{% /code-tab-content %}} +{{< /code-tabs-wrapper >}} + +To run the example from a file, set your InfluxDB environment variables and use `node` to execute the JavaScript file. + +```sh +export INFLUX_URL=http://localhost:8086 && \ +export INFLUX_TOKEN=YOUR_API_TOKEN && \ +export INFLUX_ORG=YOUR_ORG && \ +export INFLUX_BUCKET=YOUR_BUCKET && \ +node write.js +``` + +### Response codes +_For information about **InfluxDB API response codes**, see +[InfluxDB API Write documentation](/influxdb/cloud/api/#operation/PostWrite)._ diff --git a/content/influxdb/v2.6/api-guide/client-libraries/php.md b/content/influxdb/v2.6/api-guide/client-libraries/php.md new file mode 100644 index 000000000..c6ae4bb54 --- /dev/null +++ b/content/influxdb/v2.6/api-guide/client-libraries/php.md @@ -0,0 +1,20 @@ +--- +title: PHP client library +seotitle: Use the InfluxDB PHP client library +list_title: PHP +description: Use the InfluxDB PHP client library to interact with InfluxDB. 
+external_url: https://github.com/influxdata/influxdb-client-php +menu: + influxdb_2_6: + name: PHP + parent: Client libraries + params: + url: https://github.com/influxdata/influxdb-client-php +weight: 201 +--- + +PHP is a popular general-purpose scripting language primarily used for web development. + +The documentation for this client library is available on GitHub. + +PHP InfluxDB client \ No newline at end of file diff --git a/content/influxdb/v2.6/api-guide/client-libraries/python.md b/content/influxdb/v2.6/api-guide/client-libraries/python.md new file mode 100644 index 000000000..cad897fce --- /dev/null +++ b/content/influxdb/v2.6/api-guide/client-libraries/python.md @@ -0,0 +1,193 @@ +--- +title: Python client library +seotitle: Use the InfluxDB Python client library +list_title: Python +description: > + Use the InfluxDB Python client library to interact with InfluxDB. +menu: + influxdb_2_6: + name: Python + parent: Client libraries +influxdb/v2.6/tags: [client libraries, python] +aliases: + - /influxdb/v2.6/reference/api/client-libraries/python/ + - /influxdb/v2.6/reference/api/client-libraries/python-cl-guide/ + - /influxdb/v2.6/tools/client-libraries/python/ +weight: 201 +--- + +Use the [InfluxDB Python client library](https://github.com/influxdata/influxdb-client-python) to integrate InfluxDB into Python scripts and applications. + +This guide presumes some familiarity with Python and InfluxDB. +If just getting started, see [Get started with InfluxDB](/influxdb/v2.6/get-started/). + +## Before you begin + +1. Install the InfluxDB Python library: + + ```sh + pip install influxdb-client + ``` + +2. Ensure that InfluxDB is running. + If running InfluxDB locally, visit http://localhost:8086. + (If using InfluxDB Cloud, visit the URL of your InfluxDB Cloud UI. + For example: https://us-west-2-1.aws.cloud2.influxdata.com.) 
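The sections below write data as InfluxDB line protocol. As a point of reference for what the client's `Point` builder assembles for you, here is a simplified sketch (the `to_line_protocol` helper is illustrative and abbreviates the real escaping rules):

```python
# Simplified sketch of a line-protocol record, like the one the client's
# Point builder produces. Escaping rules are abbreviated here.
def to_line_protocol(measurement, tags, fields):
    def esc(value):
        # Escape commas, equals signs, and spaces (simplified).
        return str(value).replace(",", r"\,").replace("=", r"\=").replace(" ", r"\ ")

    tag_str = "".join(f",{esc(k)}={esc(v)}" for k, v in tags.items())
    field_str = ",".join(f"{esc(k)}={v}" for k, v in fields.items())
    return f"{measurement}{tag_str} {field_str}"

print(to_line_protocol("my_measurement", {"location": "Prague"}, {"temperature": 25.3}))
# my_measurement,location=Prague temperature=25.3
```

You never need to build these strings yourself; the client library handles formatting, escaping, and timestamps.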
+ +## Write data to InfluxDB with Python + +We are going to write some data in [line protocol](/influxdb/v2.6/reference/syntax/line-protocol/) using the Python library. + +1. In your Python program, import the InfluxDB client library and use it to write data to InfluxDB. + + ```python + import influxdb_client + from influxdb_client.client.write_api import SYNCHRONOUS + ``` + +2. Define a few variables with the name of your [bucket](/influxdb/v2.6/organizations/buckets/), [organization](/influxdb/v2.6/organizations/), and [token](/influxdb/v2.6/security/tokens/). + + ```python + bucket = "" + org = "" + token = "" + # Store the URL of your InfluxDB instance + url="http://localhost:8086" + ``` + +3. Instantiate the client. The `InfluxDBClient` object takes three named parameters: `url`, `org`, and `token`. Pass in the named parameters. + + ```python + client = influxdb_client.InfluxDBClient( + url=url, + token=token, + org=org + ) + ``` + The `InfluxDBClient` object has a `write_api` method used for configuration. + +4. Instantiate a **write client** using the `client` object and the `write_api` method. Use the `write_api` method to configure the writer object. + + ```python + write_api = client.write_api(write_options=SYNCHRONOUS) + ``` + +5. Create a [point](/influxdb/v2.6/reference/glossary/#point) object and write it to InfluxDB using the `write` method of the API writer object. The write method requires three parameters: `bucket`, `org`, and `record`. 
+
+   ```python
+   p = influxdb_client.Point("my_measurement").tag("location", "Prague").field("temperature", 25.3)
+   write_api.write(bucket=bucket, org=org, record=p)
+   ```
+
+### Complete example write script
+
+```python
+import influxdb_client
+from influxdb_client.client.write_api import SYNCHRONOUS
+
+bucket = ""
+org = ""
+token = ""
+# Store the URL of your InfluxDB instance
+url="http://localhost:8086"
+
+client = influxdb_client.InfluxDBClient(
+    url=url,
+    token=token,
+    org=org
+)
+
+# Write script
+write_api = client.write_api(write_options=SYNCHRONOUS)
+
+p = influxdb_client.Point("my_measurement").tag("location", "Prague").field("temperature", 25.3)
+write_api.write(bucket=bucket, org=org, record=p)
+```
+
+## Query data from InfluxDB with Python
+
+1. Instantiate the **query client**.
+
+   ```python
+   query_api = client.query_api()
+   ```
+
+2. Create a Flux query, and then format it as a Python string.
+
+   ```python
+   query = 'from(bucket:"my-bucket")\
+   |> range(start: -10m)\
+   |> filter(fn:(r) => r._measurement == "my_measurement")\
+   |> filter(fn:(r) => r.location == "Prague")\
+   |> filter(fn:(r) => r._field == "temperature")'
+   ```
+
+   The query client sends the Flux query to InfluxDB and returns a Flux object with a table structure.
+
+3. Pass the `query()` method two named parameters: `org` and `query`.
+
+   ```python
+   result = query_api.query(org=org, query=query)
+   ```
+
+4. Iterate through the tables and records in the Flux object.
+   - Use the `get_value()` method to return values.
+   - Use the `get_field()` method to return fields.
+
+   ```python
+   results = []
+   for table in result:
+       for record in table.records:
+           results.append((record.get_field(), record.get_value()))
+
+   print(results)
+   # Output: [('temperature', 25.3)]
+   ```
+
+**The Flux object provides the following methods for accessing your data:**
+
+- `get_measurement()`: Returns the measurement name of the record.
+- `get_field()`: Returns the field name.
+- `get_value()`: Returns the actual field value.
+- `values`: Returns a map of column values.
+- `values.get("<column_name>")`: Returns a value from the record for a given column.
+- `get_time()`: Returns the time of the record.
+- `get_start()`: Returns the inclusive lower time bound of all records in the current table.
+- `get_stop()`: Returns the exclusive upper time bound of all records in the current table.
+
+### Complete example query script
+
+```python
+import influxdb_client
+from influxdb_client.client.write_api import SYNCHRONOUS
+
+bucket = ""
+org = ""
+token = ""
+# Store the URL of your InfluxDB instance
+url="http://localhost:8086"
+
+client = influxdb_client.InfluxDBClient(
+    url=url,
+    token=token,
+    org=org
+)
+
+# Query script
+query_api = client.query_api()
+query = 'from(bucket:"my-bucket")\
+|> range(start: -10m)\
+|> filter(fn:(r) => r._measurement == "my_measurement")\
+|> filter(fn:(r) => r.location == "Prague")\
+|> filter(fn:(r) => r._field == "temperature")'
+result = query_api.query(org=org, query=query)
+results = []
+for table in result:
+    for record in table.records:
+        results.append((record.get_field(), record.get_value()))
+
+print(results)
+# Output: [('temperature', 25.3)]
+```
+
+For more information, see the [Python client README on GitHub](https://github.com/influxdata/influxdb-client-python).
diff --git a/content/influxdb/v2.6/api-guide/client-libraries/r.md b/content/influxdb/v2.6/api-guide/client-libraries/r.md
new file mode 100644
index 000000000..859f34649
--- /dev/null
+++ b/content/influxdb/v2.6/api-guide/client-libraries/r.md
@@ -0,0 +1,20 @@
+---
+title: R package client library
+list_title: R
+seotitle: Use the InfluxDB client R package
+description: Use the InfluxDB client R package to interact with InfluxDB.
+external_url: https://github.com/influxdata/influxdb-client-r +menu: + influxdb_2_6: + name: R + parent: Client libraries + params: + url: https://github.com/influxdata/influxdb-client-r +weight: 201 +--- + +R is a programming language and software environment for statistical analysis, reporting, and graphical representation primarily used in data science. + +The documentation for this client library is available on GitHub. + +R InfluxDB client \ No newline at end of file diff --git a/content/influxdb/v2.6/api-guide/client-libraries/ruby.md b/content/influxdb/v2.6/api-guide/client-libraries/ruby.md new file mode 100644 index 000000000..eaceafaed --- /dev/null +++ b/content/influxdb/v2.6/api-guide/client-libraries/ruby.md @@ -0,0 +1,20 @@ +--- +title: Ruby client library +seotitle: Use the InfluxDB Ruby client library +list_title: Ruby +description: Use the InfluxDB Ruby client library to interact with InfluxDB. +external_url: https://github.com/influxdata/influxdb-client-ruby +menu: + influxdb_2_6: + name: Ruby + parent: Client libraries + params: + url: https://github.com/influxdata/influxdb-client-ruby +weight: 201 +--- + +Ruby is a highly flexible, open-source, object-oriented programming language. + +The documentation for this client library is available on GitHub. + +Ruby InfluxDB client \ No newline at end of file diff --git a/content/influxdb/v2.6/api-guide/client-libraries/scala.md b/content/influxdb/v2.6/api-guide/client-libraries/scala.md new file mode 100644 index 000000000..a3da08cdc --- /dev/null +++ b/content/influxdb/v2.6/api-guide/client-libraries/scala.md @@ -0,0 +1,20 @@ +--- +title: Scala client library +seotitle: Use the InfluxDB Scala client library +list_title: Scala +description: Use the InfluxDB Scala client library to interact with InfluxDB. 
+external_url: https://github.com/influxdata/influxdb-client-java/tree/master/client-scala
+menu:
+  influxdb_2_6:
+    name: Scala
+    parent: Client libraries
+    params:
+      url: https://github.com/influxdata/influxdb-client-java/tree/master/client-scala
+weight: 201
+---
+
+Scala is a general-purpose programming language that supports both object-oriented and functional programming.
+
+The documentation for this client library is available on GitHub.
+
+Scala InfluxDB client
\ No newline at end of file
diff --git a/content/influxdb/v2.6/api-guide/client-libraries/swift.md b/content/influxdb/v2.6/api-guide/client-libraries/swift.md
new file mode 100644
index 000000000..b204b6429
--- /dev/null
+++ b/content/influxdb/v2.6/api-guide/client-libraries/swift.md
@@ -0,0 +1,20 @@
+---
+title: Swift client library
+seotitle: Use the InfluxDB Swift client library
+list_title: Swift
+description: Use the InfluxDB Swift client library to interact with InfluxDB.
+external_url: https://github.com/influxdata/influxdb-client-swift
+menu:
+  influxdb_2_6:
+    name: Swift
+    parent: Client libraries
+    params:
+      url: https://github.com/influxdata/influxdb-client-swift
+weight: 201
+---
+
+Swift is a programming language created by Apple for building applications across multiple Apple platforms.
+
+The documentation for this client library is available on GitHub.
+
+Swift InfluxDB client
\ No newline at end of file
diff --git a/content/influxdb/v2.6/api-guide/postman.md b/content/influxdb/v2.6/api-guide/postman.md
new file mode 100644
index 000000000..21f4a0019
--- /dev/null
+++ b/content/influxdb/v2.6/api-guide/postman.md
@@ -0,0 +1,57 @@
+---
+title: Use Postman with the InfluxDB API
+description: >
+  Use [Postman](https://www.postman.com/), a popular tool for exploring APIs,
+  to interact with the [InfluxDB API](/influxdb/v2.6/api-guide/).
+menu: + influxdb_2_6: + parent: Tools & integrations + name: Use Postman +weight: 105 +influxdb/v2.6/tags: [api, authentication] +aliases: + - /influxdb/v2.6/reference/api/postman/ +--- + +Use [Postman](https://www.postman.com/), a popular tool for exploring APIs, +to interact with the [InfluxDB API](/influxdb/v2.6/api-guide/). + +## Install Postman + +Download Postman from the [official downloads page](https://www.postman.com/downloads/). + +Or to install with Homebrew on macOS, run the following command: + +```sh +brew install --cask postman +``` + +## Send authenticated API requests with Postman + +All requests to the [InfluxDB v2 API](/influxdb/v2.6/api-guide/) must include an [InfluxDB API token](/influxdb/v2.6/security/tokens/). + +{{% note %}} + +#### Authenticate with a username and password + +If you need to send a username and password (`Authorization: Basic`) to the [InfluxDB 1.x compatibility API](/influxdb/v2.6/reference/api/influxdb-1x/), see how to [authenticate with a username and password scheme](/influxdb/v2.6/reference/api/influxdb-1x/#authenticate-with-the-token-scheme). + +{{% /note %}} + +To configure Postman to send an [InfluxDB API token](/influxdb/v2.6/security/tokens/) with the `Authorization: Token` HTTP header, do the following: + +1. If you have not already, [create a token](/influxdb/v2.6/security/tokens/create-token/). +2. In the Postman **Authorization** tab, select **API Key** in the **Type** dropdown. +3. For **Key**, enter `Authorization`. +4. For **Value**, enter `Token INFLUX_API_TOKEN`, replacing *`INFLUX_API_TOKEN`* with the token generated in step 1. +5. Ensure that the **Add to** option is set to **Header**. + +#### Test authentication credentials + +To test the authentication, in Postman, enter your InfluxDB API `/api/v2/` root endpoint URL and click **Send**. 
+
+###### InfluxDB v2 API root endpoint
+
+```sh
+http://localhost:8086/api/v2
+```
diff --git a/content/influxdb/v2.6/api-guide/tutorials/_index.md b/content/influxdb/v2.6/api-guide/tutorials/_index.md
new file mode 100644
index 000000000..285bfed3b
--- /dev/null
+++ b/content/influxdb/v2.6/api-guide/tutorials/_index.md
@@ -0,0 +1,30 @@
+---
+title: InfluxDB API client library tutorials
+seotitle: Get started with InfluxDB API client libraries
+description: Follow step-by-step tutorials for InfluxDB API client libraries in your favorite framework or language.
+weight: 4
+menu:
+  influxdb_2_6:
+    name: Client library tutorials
+    parent: Develop with the API
+influxdb/v2.6/tags: [api]
+---
+
+Follow step-by-step tutorials to build an Internet-of-Things (IoT) application with InfluxData client libraries and your favorite framework or language.
+InfluxData and the user community maintain client libraries for developers who want to take advantage of:
+
+- Idioms for InfluxDB requests, responses, and errors.
+- Common patterns in a familiar programming language.
+- Faster development and less boilerplate code.
+
+In these tutorials, you'll use the InfluxDB API and
+client libraries to build a modern application, and learn the following:
+
+- InfluxDB core concepts.
+- How the application interacts with devices and InfluxDB.
+- How to authenticate apps and devices to the API.
+- How to install a client library.
+- How to write and query data in InfluxDB.
+- How to use the InfluxData UI libraries to format data and create visualizations.
+ +{{< children >}} diff --git a/content/influxdb/v2.6/api-guide/tutorials/nodejs.md b/content/influxdb/v2.6/api-guide/tutorials/nodejs.md new file mode 100644 index 000000000..94ec21a44 --- /dev/null +++ b/content/influxdb/v2.6/api-guide/tutorials/nodejs.md @@ -0,0 +1,521 @@ +--- +title: JavaScript client library starter +seotitle: Use JavaScript client library to build a sample application +list_title: JavaScript +description: > + Build a JavaScript application that writes, queries, and manages devices with the + InfluxDB client library. +menu: + influxdb_2_6: + identifier: client-library-starter-js + name: JavaScript + parent: Client library tutorials +influxdb/v2.6/tags: [api, javascript, nodejs] +--- + +{{% api/iot-starter-intro %}} + +## Contents + +- [Contents](#contents) +- [Set up InfluxDB](#set-up-influxdb) + - [Authenticate with an InfluxDB API token](#authenticate-with-an-influxdb-api-token) +- [Introducing IoT Starter](#introducing-iot-starter) +- [Create the application](#create-the-application) +- [Install InfluxDB client library](#install-influxdb-client-library) +- [Configure the client library](#configure-the-client-library) +- [Build the API](#build-the-api) +- [Create the API to list devices](#create-the-api-to-list-devices) + - [Handle requests for device information](#handle-requests-for-device-information) + - [Retrieve and list devices](#retrieve-and-list-devices) +- [Create the API to register devices](#create-the-api-to-register-devices) + - [Create an authorization for the device](#create-an-authorization-for-the-device) + - [Write the device authorization to a bucket](#write-the-device-authorization-to-a-bucket) +- [Install and run the UI](#install-and-run-the-ui) + +## Set up InfluxDB + +If you haven't already, [create an InfluxDB Cloud account](https://www.influxdata.com/products/influxdb-cloud/) or [install InfluxDB OSS](https://www.influxdata.com/products/influxdb/). 
+
+### Authenticate with an InfluxDB API token
+
+For convenience in development,
+[create an _All-Access_ token](/influxdb/v2.6/security/tokens/create-token/)
+for your application. This grants your application full read and write
+permissions on all resources within your InfluxDB organization.
+
+{{% note %}}
+
+For a production application, create and use a
+{{% cloud-only %}}custom{{% /cloud-only %}}{{% oss-only %}}read-write{{% /oss-only %}}
+token with minimal permissions and only use it with your application.
+
+{{% /note %}}
+
+## Introducing IoT Starter
+
+The application architecture has four layers:
+
+- **InfluxDB API**: InfluxDB v2 API.
+- **IoT device**: Virtual or physical devices write IoT data to the InfluxDB API.
+- **UI**: Sends requests to the server and renders views in the browser.
+- **API**: Receives requests from the UI, sends requests to InfluxDB, and processes responses from InfluxDB.
+
+{{% note %}}
+For the complete code referenced in this tutorial, see the [influxdata/iot-api-js repository](https://github.com/influxdata/iot-api-js).
+{{% /note %}}
+
+## Install Yarn
+
+If you haven't already installed `yarn`, follow the [Yarn package manager installation instructions](https://yarnpkg.com/getting-started/install#nodejs-1610-1) for your version of Node.js.
+
+- To check the installed `yarn` version, enter the following code into your terminal:
+
+  ```bash
+  yarn --version
+  ```
+
+## Create the application
+
+Create a directory that will contain your `iot-api` projects.
+The following example code creates an `iot-api-apps` directory in your home directory
+and changes to the new directory:
+
+```bash
+mkdir ~/iot-api-apps
+cd ~/iot-api-apps
+```
+
+Follow these steps to create a JavaScript application with [Next.js](https://nextjs.org/):
+
+1.
In your `~/iot-api-apps` directory, open a terminal and enter the following commands to create the `iot-api-js` app from the Next.js [learn-starter template](https://github.com/vercel/next-learn/tree/master/basics/learn-starter):
+
+   ```bash
+   yarn create next-app iot-api-js --example "https://github.com/vercel/next-learn/tree/master/basics/learn-starter"
+   ```
+
+2. After the installation completes, enter the following commands in your terminal to go into your `./iot-api-js` directory and start the development server:
+
+   ```bash
+   cd iot-api-js
+   yarn dev -p 3001
+   ```
+
+To view the application, visit <http://localhost:3001> in your browser.
+
+## Install InfluxDB client library
+
+The InfluxDB client library provides the following InfluxDB API interactions:
+
+- Query data with the Flux language.
+- Write data to InfluxDB.
+- Batch data in the background.
+- Retry requests automatically on failure.
+
+1. Enter the following command into your terminal to install the client library:
+
+   ```bash
+   yarn add @influxdata/influxdb-client
+   ```
+
+2. Enter the following command into your terminal to install `@influxdata/influxdb-client-apis`, the _management APIs_ that create, modify, and delete authorizations, buckets, tasks, and other InfluxDB resources:
+
+   ```bash
+   yarn add @influxdata/influxdb-client-apis
+   ```
+
+For more information about the client library, see the [influxdata/influxdb-client-js repo](https://github.com/influxdata/influxdb-client-js).
+
+## Configure the client library
+
+InfluxDB client libraries require configuration properties from your InfluxDB environment.
+Typically, you'll provide the following properties as environment variables for your application:
+
+- `INFLUX_URL`
+- `INFLUX_TOKEN`
+- `INFLUX_ORG`
+- `INFLUX_BUCKET`
+- `INFLUX_BUCKET_AUTH`
+
+Next.js uses the `env` module to provide environment variables to your application.
+
+The `./.env.development` file is versioned and contains non-secret default settings for your _development_ environment.
+ +```bash +# .env.development + +INFLUX_URL=http://localhost:8086 +INFLUX_BUCKET=iot_center +INFLUX_BUCKET_AUTH=iot_center_devices +``` + +To configure secrets and settings that aren't added to version control, +create a `./.env.local` file and set the variables--for example, set your InfluxDB token and organization: + +```sh +# .env.local + +# INFLUX_TOKEN +# InfluxDB API token used by the application server to send requests to InfluxDB. +# For convenience in development, use an **All-Access** token. + +INFLUX_TOKEN=29Xx1KH9VkASPR2DSfRfFd82OwGD... + +# INFLUX_ORG +# InfluxDB organization ID you want to use in development. + +INFLUX_ORG=48c88459ee424a04 +``` + +Enter the following commands into your terminal to restart and load the `.env` files: + + 1. `CONTROL+C` to stop the application. + 2. `yarn dev` to start the application. + +Next.js sets variables that you can access in the `process.env` object--for example: + +```ts +console.log(process.env.INFLUX_ORG) +``` + +## Build the API + +Your application API provides server-side HTTP endpoints that process requests from the UI. +Each API endpoint is responsible for the following: + +1. Listen for HTTP requests (from the UI). +2. Translate requests into InfluxDB API requests. +3. Process InfluxDB API responses and handle errors. +4. Respond with status and data (for the UI). + +## Create the API to list devices + +Add the `/api/devices` API endpoint that retrieves, processes, and lists devices. +`/api/devices` uses the `/api/v2/query` InfluxDB API endpoint to query `INFLUX_BUCKET_AUTH` for a registered device. + +### Handle requests for device information + +1. Create a `./pages/api/devices/[[...deviceParams]].js` file to handle requests for `/api/devices` and `/api/devices//measurements/`. + +2. In the file, export a Next.js request `handler` function. +[See the example](https://github.com/influxdata/iot-api-js/blob/18d34bcd59b93ad545c5cd9311164c77f6d1995a/pages/api/devices/%5B%5B...deviceParams%5D%5D.js). 
+ + {{% note %}} +In Next.js, the filename pattern `[[...param]].js` creates a _catch-all_ API route. +To learn more, see [Next.js dynamic API routes](https://nextjs.org/docs/api-routes/dynamic-api-routes). + {{% /note %}} + +### Retrieve and list devices + +Retrieve registered devices in `INFLUX_BUCKET_AUTH` and process the query results. + +1. Create a Flux query that gets the last row of each [series](/influxdb/v2.6/reference/glossary#series) that contains a `deviceauth` measurement. + The example query below returns rows that contain the `key` field (authorization ID) and excludes rows that contain a `token` field (to avoid exposing tokens to the UI). + + ```js + // Flux query finds devices + from(bucket:`${INFLUX_BUCKET_AUTH}`) + |> range(start: 0) + |> filter(fn: (r) => r._measurement == "deviceauth" and r._field != "token") + |> last() + ``` + +2. Use the `QueryApi` client to send the Flux query to the `POST /api/v2/query` InfluxDB API endpoint. + +Create a `./pages/api/devices/_devices.js` file that contains the following: + +{{< code-tabs-wrapper >}} +{{% code-tabs %}} +[Node.js](#nodejs) +{{% /code-tabs %}} +{{% code-tab-content %}} + +{{% truncate %}} + +```ts +import { InfluxDB } from '@influxdata/influxdb-client' +import { flux } from '@influxdata/influxdb-client' + +const INFLUX_ORG = process.env.INFLUX_ORG +const INFLUX_BUCKET_AUTH = process.env.INFLUX_BUCKET_AUTH +const influxdb = new InfluxDB({url: process.env.INFLUX_URL, token: process.env.INFLUX_TOKEN}) + +/** + * Gets devices or a particular device when deviceId is specified. Tokens + * are not returned unless deviceId is specified. It can also return devices + * with empty/unknown key, such devices can be ignored (InfluxDB authorization is not associated). + * @param deviceId optional deviceId + * @returns promise with an Record. + */ + export async function getDevices(deviceId) { + const queryApi = influxdb.getQueryApi(INFLUX_ORG) + const deviceFilter = + deviceId !== undefined + ? 
flux` and r.deviceId == "${deviceId}"` + : flux` and r._field != "token"` + const fluxQuery = flux`from(bucket:${INFLUX_BUCKET_AUTH}) + |> range(start: 0) + |> filter(fn: (r) => r._measurement == "deviceauth"${deviceFilter}) + |> last()` + const devices = {} + + return await new Promise((resolve, reject) => { + queryApi.queryRows(fluxQuery, { + next(row, tableMeta) { + const o = tableMeta.toObject(row) + const deviceId = o.deviceId + if (!deviceId) { + return + } + const device = devices[deviceId] || (devices[deviceId] = {deviceId}) + device[o._field] = o._value + if (!device.updatedAt || device.updatedAt < o._time) { + device.updatedAt = o._time + } + }, + error: reject, + complete() { + resolve(devices) + }, + }) + }) +} +``` + +{{% /truncate %}} + +{{% caption %}}[iot-api-js/pages/api/devices/_devices.js getDevices(deviceId)](https://github.com/influxdata/iot-api-js/blob/18d34bcd59b93ad545c5cd9311164c77f6d1995a/pages/api/devices/_devices.js){{% /caption %}} + +{{% /code-tab-content %}} +{{< /code-tabs-wrapper >}} + +The `_devices` module exports a `getDevices(deviceId)` function that queries +for registered devices, processes the data, and returns a Promise with the result. +If you invoke the function as `getDevices()` (without a _`deviceId`_), +it retrieves all `deviceauth` points and returns a Promise with `{ DEVICE_ID: ROW_DATA }`. + +To send the query and process results, the `getDevices(deviceId)` function uses the `QueryAPI queryRows(query, consumer)` method. +`queryRows` executes the `query` and provides the Annotated CSV result as an Observable to the `consumer`. 
+`queryRows` has the following TypeScript signature:
+
+```ts
+queryRows(
+  query: string | ParameterizedQuery,
+  consumer: FluxResultObserver<string[]>
+): void
+```
+
+{{% caption %}}[@influxdata/influxdb-client-js QueryAPI](https://github.com/influxdata/influxdb-client-js/blob/3db2942432b993048d152e0d0e8ec8499eedfa60/packages/core/src/QueryApi.ts){{% /caption %}}
+
+The `consumer` that you provide must implement the [`FluxResultObserver` interface](https://github.com/influxdata/influxdb-client-js/blob/3db2942432b993048d152e0d0e8ec8499eedfa60/packages/core/src/results/FluxResultObserver.ts) and provide the following callback functions:
+
+- `next(row, tableMeta)`: processes the next row and table metadata--for example, to prepare the response.
+- `error(error)`: receives and handles errors--for example, by rejecting the Promise.
+- `complete()`: signals when all rows have been consumed--for example, by resolving the Promise.
+
+To learn more about Observers, see the [RxJS Guide](https://rxjs.dev/guide/observer).
+
+## Create the API to register devices
+
+In this application, a _registered device_ is a point that contains your device ID, authorization ID, and API token.
+The API token and authorization permissions allow the device to query and write to `INFLUX_BUCKET`.
+In this section, you add the API endpoint that handles requests from the UI, creates an authorization in InfluxDB,
+and writes the registered device to the `INFLUX_BUCKET_AUTH` bucket.
+To learn more about API tokens and authorizations, see [Manage API tokens](/influxdb/v2.6/security/tokens/).
+
+The application API uses the following `/api/v2` InfluxDB API endpoints:
+
+- `POST /api/v2/query`: to query `INFLUX_BUCKET_AUTH` for a registered device.
+- `GET /api/v2/buckets`: to get the bucket ID for `INFLUX_BUCKET`.
+- `POST /api/v2/authorizations`: to create an authorization for the device.
+- `POST /api/v2/write`: to write the device authorization to `INFLUX_BUCKET_AUTH`.
+
+1. Add a `./pages/api/devices/create.js` file to handle requests for `/api/devices/create`.
+2. In the file, export a Next.js request `handler` function that does the following:
+
+   1. Accept a device ID in the request body.
+   2. Query `INFLUX_BUCKET_AUTH` and respond with an error if an authorization exists for the device.
+   3. [Create an authorization for the device](#create-an-authorization-for-the-device).
+   4. [Write the device ID and authorization to `INFLUX_BUCKET_AUTH`](#write-the-device-authorization-to-a-bucket).
+   5. Respond with `HTTP 200` when the write request completes.
+
+[See the example](https://github.com/influxdata/iot-api-js/blob/25b38c94a1f04ea71f2ef4b9fcba5350d691cb9d/pages/api/devices/create.js).
+
+### Create an authorization for the device
+
+In this section, you create an authorization with _read_-_write_ permission to `INFLUX_BUCKET` and receive an API token for the device.
+The example below uses the following steps to create the authorization:
+
+1. Instantiate the `AuthorizationsAPI` client and `BucketsAPI` client with the configuration.
+2. Retrieve the bucket ID.
+3. Use the client library to send a `POST` request to the `/api/v2/authorizations` InfluxDB API endpoint.
+
+In `./pages/api/devices/create.js`, add the following `createAuthorization(deviceId)` function:
+
+{{< code-tabs-wrapper >}}
+{{% code-tabs %}}
+[Node.js](#nodejs)
+{{% /code-tabs %}}
+{{% code-tab-content %}}
+
+{{% truncate %}}
+
+```js
+import { InfluxDB, Point } from '@influxdata/influxdb-client'
+import { AuthorizationsAPI, BucketsAPI } from '@influxdata/influxdb-client-apis'
+import { getDevices } from './_devices'
+
+const INFLUX_ORG = process.env.INFLUX_ORG
+const INFLUX_BUCKET_AUTH = process.env.INFLUX_BUCKET_AUTH
+const INFLUX_BUCKET = process.env.INFLUX_BUCKET
+
+const influxdb = new InfluxDB({url: process.env.INFLUX_URL, token: process.env.INFLUX_TOKEN})
+
+/**
+ * Creates an authorization for a supplied deviceId
+ * @param {string} deviceId client identifier
+ * @returns {import('@influxdata/influxdb-client-apis').Authorization} promise with authorization or an error
+ */
+async function createAuthorization(deviceId) {
+  const authorizationsAPI = new AuthorizationsAPI(influxdb)
+  const bucketsAPI = new BucketsAPI(influxdb)
+  const DESC_PREFIX = 'IoTCenterDevice: '
+
+  const buckets = await bucketsAPI.getBuckets({name: INFLUX_BUCKET, orgID: INFLUX_ORG})
+  const bucketId = buckets.buckets[0]?.id
+
+  return await authorizationsAPI.postAuthorizations(
+    {
+      body: {
+        orgID: INFLUX_ORG,
+        description: DESC_PREFIX + deviceId,
+        permissions: [
+          {
+            action: 'read',
+            resource: {type: 'buckets', id: bucketId, orgID: INFLUX_ORG},
+          },
+          {
+            action: 'write',
+            resource: {type: 'buckets', id: bucketId, orgID: INFLUX_ORG},
+          },
+        ],
+      },
+    }
+  )
+}
+```
+
+{{% /truncate %}}
+{{% caption %}}[iot-api-js/pages/api/devices/create.js](https://github.com/influxdata/iot-api-js/blob/42a37d683b5e4df601422f85d2c22f5e9d592e68/pages/api/devices/create.js){{% /caption %}}
+
+{{% /code-tab-content %}}
+{{< /code-tabs-wrapper >}}
+
+To create an authorization that has _read_-_write_ permission to `INFLUX_BUCKET`, you need the bucket ID.
+To retrieve the bucket ID,
+`createAuthorization(deviceId)` calls the `BucketsAPI getBuckets` function that sends a `GET` request to
+the `/api/v2/buckets` InfluxDB API endpoint.
+`createAuthorization(deviceId)` then passes a new authorization in the request body with the following:
+
+- Bucket ID.
+- Organization ID.
+- Description: `IoTCenterDevice: DEVICE_ID`.
+- List of permissions to the bucket.
+
+To learn more about API tokens and authorizations, see [Manage API tokens](/influxdb/v2.6/security/tokens/).
+
+Next, [write the device authorization to a bucket](#write-the-device-authorization-to-a-bucket).
+
+### Write the device authorization to a bucket
+
+With a device authorization in InfluxDB, write a point for the device and authorization details to `INFLUX_BUCKET_AUTH`.
+Storing the device authorization in a bucket allows you to do the following:
+
+- Report device authorization history.
+- Manage devices with and without tokens.
+- Assign the same token to multiple devices.
+- Refresh tokens.
+
+To write a point to InfluxDB, use the InfluxDB client library to send a `POST` request to the `/api/v2/write` InfluxDB API endpoint.
+
+In `./pages/api/devices/create.js`, add the following `createDevice(deviceId)` function:
+
+{{< code-tabs-wrapper >}}
+{{% code-tabs %}}
+[Node.js](#nodejs)
+{{% /code-tabs %}}
+{{% code-tab-content %}}
+
+```ts
+/** Creates an authorization for a deviceId and writes it to a bucket */
+async function createDevice(deviceId) {
+  let device = (await getDevices(deviceId)) || {}
+  let authorizationValid = !!Object.values(device)[0]?.key
+  if (authorizationValid) {
+    console.log(JSON.stringify(device))
+    return Promise.reject('This device ID is already registered and has an authorization.')
+  } else {
+    console.log(`createDeviceAuthorization: deviceId=${deviceId}`)
+    const authorization = await createAuthorization(deviceId)
+    const writeApi = influxdb.getWriteApi(INFLUX_ORG, INFLUX_BUCKET_AUTH, 'ms', {
+      batchSize: 2,
+    })
+    const point = new Point('deviceauth')
+      .tag('deviceId', deviceId)
+      .stringField('key', authorization.id)
+      .stringField('token', authorization.token)
+    writeApi.writePoint(point)
+    await writeApi.close()
+    return
+  }
+}
+```
+
+{{% caption %}}[iot-api-js/pages/api/devices/create.js](https://github.com/influxdata/iot-api-js/blob/25b38c94a1f04ea71f2ef4b9fcba5350d691cb9d/pages/api/devices/create.js){{% /caption %}}
+
+{{% /code-tab-content %}}
+{{< /code-tabs-wrapper >}}
+
+`createDevice(deviceId)` takes a _`deviceId`_ and writes data to `INFLUX_BUCKET_AUTH` in the following steps:
+
+1. Use the `InfluxDB` client instance configured with `url` and `token` values from the environment.
+2. Initialize a `WriteAPI` client for writing data to an InfluxDB bucket.
+3. Create a `Point`.
+4. Use `writeApi.writePoint(point)` to write the `Point` to the bucket.
+
+The function writes a point with the following elements:
+
+| Element     | Name       | Value                     |
+|:------------|:-----------|:--------------------------|
+| measurement |            | `deviceauth`              |
+| tag         | `deviceId` | device ID                 |
+| field       | `key`      | authorization ID          |
+| field       | `token`    | authorization (API) token |
+
+## Install and run the UI
+
+`influxdata/iot-api-ui` is a standalone [Next.js React](https://nextjs.org/docs/basic-features/pages) UI that uses your application API to write and query data in InfluxDB.
+`iot-api-ui` uses Next.js _[rewrites](https://nextjs.org/docs/api-reference/next.config.js/rewrites)_ to route all requests in the `/api/` path to your API.
+
+To install and run the UI, do the following:
+
+1. In your `~/iot-api-apps` directory, clone the [`influxdata/iot-api-ui` repo](https://github.com/influxdata/iot-api-ui) and go into the `iot-api-ui` directory--for example:
+
+   ```bash
+   cd ~/iot-api-apps
+   git clone git@github.com:influxdata/iot-api-ui.git
+   cd ./iot-api-ui
+   ```
+
+2. The `./.env.development` file contains default configuration settings that you can
+   edit or override (with a `./.env.local` file).
+3. To start the UI, enter the following command into your terminal:
+
+   ```bash
+   yarn dev
+   ```
+
+   To view the list and register devices, visit http://localhost:3000 (the Next.js default) in your browser.
+
+To learn more about the UI components, see [`influxdata/iot-api-ui`](https://github.com/influxdata/iot-api-ui).
diff --git a/content/influxdb/v2.6/api-guide/tutorials/python.md b/content/influxdb/v2.6/api-guide/tutorials/python.md
new file mode 100644
index 000000000..0fe880b1c
--- /dev/null
+++ b/content/influxdb/v2.6/api-guide/tutorials/python.md
@@ -0,0 +1,583 @@
+---
+title: Python client library starter
+seotitle: Use Python client library to build a sample application
+list_title: Python
+description: >
+  Build an application that writes, queries, and manages devices with the InfluxDB
+  client library for Python.
+weight: 3 +menu: + influxdb_2_6: + identifier: client-library-starter-py + name: Python + parent: Client library tutorials +influxdb/v2.6/tags: [api, python] +--- + +{{% api/iot-starter-intro %}} +- How to use the InfluxData UI libraries to format data and create visualizations. + +## Contents + +- [Contents](#contents) +- [Set up InfluxDB](#set-up-influxdb) + - [Authenticate with an InfluxDB API token](#authenticate-with-an-influxdb-api-token) +- [Introducing IoT Starter](#introducing-iot-starter) +- [Create the application](#create-the-application) +- [Install InfluxDB client library](#install-influxdb-client-library) +- [Configure the client library](#configure-the-client-library) +- [Build the API](#build-the-api) +- [Create the API to register devices](#create-the-api-to-register-devices) + - [Create an authorization for the device](#create-an-authorization-for-the-device) + - [Write the device authorization to a bucket](#write-the-device-authorization-to-a-bucket) +- [Create the API to list devices](#create-the-api-to-list-devices) +- [Create IoT virtual device](#create-iot-virtual-device) +- [Write telemetry data](#write-telemetry-data) +- [Query telemetry data](#query-telemetry-data) +- [Define API responses](#define-api-responses) +- [Install and run the UI](#install-and-run-the-ui) + +## Set up InfluxDB + +If you haven't already, [create an InfluxDB Cloud account](https://www.influxdata.com/products/influxdb-cloud/) or [install InfluxDB OSS](https://www.influxdata.com/products/influxdb/). + +### Authenticate with an InfluxDB API token + +For convenience in development, +[create an _All-Access_ token](/influxdb/v2.6/security/tokens/create-token/) +for your application. This grants your application full read and write +permissions on all resources within your InfluxDB organization. 
+
+{{% note %}}
+
+For a production application, create and use a
+{{% cloud-only %}}custom{{% /cloud-only %}}{{% oss-only %}}read-write{{% /oss-only %}}
+token with minimal permissions and only use it with your application.
+
+{{% /note %}}
+
+## Introducing IoT Starter
+
+The application architecture has four layers:
+
+- **InfluxDB API**: InfluxDB v2 API.
+- **IoT device**: Virtual or physical devices write IoT data to the InfluxDB API.
+- **UI**: Sends requests to the server and renders views in the browser.
+- **API**: Receives requests from the UI, sends requests to InfluxDB,
+  and processes responses from InfluxDB.
+
+{{% note %}}
+For the complete code referenced in this tutorial, see the [influxdata/iot-api-python repository](https://github.com/influxdata/iot-api-python).
+{{% /note %}}
+
+## Create the application
+
+Create a directory that will contain your `iot-api` projects.
+The following example code creates an `iot-api-apps` directory in your home directory
+and changes to the new directory:
+
+```bash
+mkdir ~/iot-api-apps
+cd ~/iot-api-apps
+```
+
+Use [Flask](https://flask.palletsprojects.com/), a lightweight Python web framework,
+to create your application.
+
+1. In your `~/iot-api-apps` directory, open a terminal and enter the following commands to create and navigate into a new project directory:
+
+   ```bash
+   mkdir iot-api-python && cd $_
+   ```
+
+2. Enter the following commands in your terminal to create and activate a Python virtual environment for the project:
+
+   ```bash
+   # Create a new virtual environment named "virtualenv"
+   # Python 3.8+
+   python -m venv virtualenv
+
+   # Activate the virtualenv (OS X & Linux)
+   source virtualenv/bin/activate
+   ```
+
+3. After activation completes, enter the following commands in your terminal to install Flask with the `pip` package installer (included with Python):
+
+   ```bash
+   pip install Flask
+   ```
+
+4. In your project, create an `app.py` file that:
+
+   1. Imports the Flask package.
+   2. Instantiates a Flask application.
+   3. Provides a route to execute the application.
+
+   ```python
+   from flask import Flask
+   app = Flask(__name__)
+
+   @app.route("/")
+   def hello():
+       return "Hello World!"
+   ```
+
+   {{% caption %}}[influxdata/iot-api-python app.py](https://github.com/influxdata/iot-api-python/blob/main/app.py){{% /caption %}}
+
+   Start your application.
+   The following example code starts the application
+   on `http://localhost:3001` with debugging and hot-reloading enabled:
+
+   ```bash
+   export FLASK_ENV=development
+   flask run -h localhost -p 3001
+   ```
+
+   In your browser, visit http://localhost:3001 to view the “Hello World!” response.
+
+## Install InfluxDB client library
+
+The InfluxDB client library provides the following InfluxDB API interactions:
+
+- Query data with the Flux language.
+- Write data to InfluxDB.
+- Batch data in the background.
+- Retry requests automatically on failure.
+
+Enter the following command into your terminal to install the client library:
+
+```bash
+pip install influxdb-client
+```
+
+For more information about the client library, see the [influxdata/influxdb-client-python repo](https://github.com/influxdata/influxdb-client-python).
+
+## Configure the client library
+
+InfluxDB client libraries require configuration properties from your InfluxDB environment.
+Typically, you'll provide the following properties as environment variables for your application:
+
+- `INFLUX_URL`
+- `INFLUX_TOKEN`
+- `INFLUX_ORG`
+- `INFLUX_BUCKET`
+- `INFLUX_BUCKET_AUTH`
+
+To set up the client configuration, create a `config.ini` in your project's top
+level directory and paste the following to provide the necessary InfluxDB credentials:
+
+```ini
+[APP]
+INFLUX_URL =
+INFLUX_TOKEN =
+INFLUX_ORG =
+INFLUX_BUCKET = iot_center
+INFLUX_BUCKET_AUTH = iot_center_devices
+```
+
+{{% caption %}}[/iot-api-python/config.ini](https://github.com/influxdata/iot-api-python/blob/main/config.ini){{% /caption %}}
+
+Set the following values for your InfluxDB instance:
+
+- **`INFLUX_URL`**: your InfluxDB instance URL.
+- **`INFLUX_TOKEN`**: your InfluxDB [API token](#authorization) with permission to query (_read_) buckets
+and create (_write_) authorizations for devices.
+- **`INFLUX_ORG`**: your InfluxDB organization ID.
+
+## Build the API
+
+Your application API provides server-side HTTP endpoints that process requests from the UI.
+Each API endpoint is responsible for the following:
+
+1. Listen for HTTP requests (from the UI).
+2. Translate requests into InfluxDB API requests.
+3. Process InfluxDB API responses and handle errors.
+4. Respond with status and data (for the UI).
+
+## Create the API to register devices
+
+In this application, a _registered device_ is a point that contains your device ID, authorization ID, and API token.
+The API token and authorization permissions allow the device to query and write to `INFLUX_BUCKET`.
+In this section, you add the API endpoint that handles requests from the UI, creates an authorization in InfluxDB,
+and writes the registered device to the `INFLUX_BUCKET_AUTH` bucket.
+To learn more about API tokens and authorizations, see [Manage API tokens](/influxdb/v2.6/security/tokens/).
+
+The application API uses the following `/api/v2` InfluxDB API endpoints:
+
+- `POST /api/v2/query`: to query `INFLUX_BUCKET_AUTH` for a registered device.
+- `GET /api/v2/buckets`: to get the bucket ID for `INFLUX_BUCKET`.
+- `POST /api/v2/authorizations`: to create an authorization for the device.
+- `POST /api/v2/write`: to write the device authorization to `INFLUX_BUCKET_AUTH`.
+
+### Create an authorization for the device
+
+In this section, you create an authorization with _read_-_write_ permission to `INFLUX_BUCKET` and receive an API token for the device.
+The example below uses the following steps to create the authorization:
+
+1. Instantiate the `AuthorizationsAPI` client and `BucketsAPI` client with the configuration.
+2. Retrieve the bucket ID.
+3. Use the client library to send a `POST` request to the `/api/v2/authorizations` InfluxDB API endpoint.
+
+Create a `./api/devices.py` file that contains the following:
+
+{{< code-tabs-wrapper >}}
+{{% code-tabs %}}
+[Python](#python)
+{{% /code-tabs %}}
+{{% code-tab-content %}}
+
+{{% truncate %}}
+
+```python
+# Import the dependencies.
+import configparser
+import os
+from datetime import datetime
+from uuid import uuid4
+
+# Import client library classes.
+from influxdb_client import Authorization, Dialect, InfluxDBClient, Permission, PermissionResource, Point, WriteOptions
+from influxdb_client.client.authorizations_api import AuthorizationsApi
+from influxdb_client.client.bucket_api import BucketsApi
+from influxdb_client.client.query_api import QueryApi
+from influxdb_client.client.write_api import SYNCHRONOUS
+
+from api.sensor import Sensor
+
+# Get the configuration key-value pairs.
+ +config = configparser.ConfigParser() +config.read('config.ini') + +def create_authorization(device_id) -> Authorization: + influxdb_client = InfluxDBClient(url=config.get('APP', 'INFLUX_URL'), + token=os.environ.get('INFLUX_TOKEN'), + org=os.environ.get('INFLUX_ORG')) + + authorization_api = AuthorizationsApi(influxdb_client) + # get bucket_id from bucket + buckets_api = BucketsApi(influxdb_client) + buckets = buckets_api.find_bucket_by_name(config.get('APP', 'INFLUX_BUCKET')) # function returns only 1 bucket + bucket_id = buckets.id + org_id = buckets.org_id + desc_prefix = f'IoTCenterDevice: {device_id}' + org_resource = PermissionResource(org_id=org_id, id=bucket_id, type="buckets") + read = Permission(action="read", resource=org_resource) + write = Permission(action="write", resource=org_resource) + permissions = [read, write] + authorization = Authorization(org_id=org_id, permissions=permissions, description=desc_prefix) + request = authorization_api.create_authorization(authorization) + return request +``` + +{{% /truncate %}} +{{% caption %}}[iot-api-python/api/devices.py](https://github.com/influxdata/iot-api-python/blob/d389a0e072c7a03dfea99e5663bdc32be94966bb/api/devices.py#L145){{% /caption %}} + +To create an authorization that has _read_-_write_ permission to `INFLUX_BUCKET`, you need the bucket ID. +To retrieve the bucket ID, `create_authorization(deviceId)` calls the +`BucketsAPI find_bucket_by_name` function that sends a `GET` request to +the `/api/v2/buckets` InfluxDB API endpoint. +`create_authorization(deviceId)` then passes a new authorization in the request body with the following: + +- Bucket ID. +- Organization ID. +- Description: `IoTCenterDevice: DEVICE_ID`. +- List of permissions to the bucket. + +To learn more about API tokens and authorizations, see [Manage API tokens](/influxdb/v2.6/security/tokens/). + +Next, [write the device authorization to a bucket](#write-the-device-authorization-to-a-bucket). 
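For reference, the JSON body that `create_authorization(device_id)` assembles for the `POST /api/v2/authorizations` endpoint can be sketched without the client library. This is a minimal sketch--the field names follow the InfluxDB v2 API authorization schema, but the org and bucket IDs passed in below are hypothetical placeholders:

```python
import json

# Sketch of the request body sent to POST /api/v2/authorizations:
# one read and one write permission, both scoped to a single bucket.
def authorization_body(org_id, bucket_id, device_id):
    return {
        "orgID": org_id,
        "description": f"IoTCenterDevice: {device_id}",
        "permissions": [
            {"action": action,
             "resource": {"type": "buckets", "id": bucket_id, "orgID": org_id}}
            for action in ("read", "write")
        ],
    }

# Hypothetical IDs for illustration only.
print(json.dumps(authorization_body("48c88459ee424a04", "0123456789abcdef", "device_1"), indent=2))
```

The client library builds this same structure from the `Authorization`, `Permission`, and `PermissionResource` objects shown above.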
+ +### Write the device authorization to a bucket + +With a device authorization in InfluxDB, write a point for the device and authorization details to `INFLUX_BUCKET_AUTH`. +Storing the device authorization in a bucket allows you to do the following: + +- Report device authorization history. +- Manage devices with and without tokens. +- Assign the same token to multiple devices. +- Refresh tokens. + +To write a point to InfluxDB, use the InfluxDB client library to send a `POST` request to the `/api/v2/write` InfluxDB API endpoint. +In `./api/devices.py`, add the following `create_device(device_id)` function: + +{{< code-tabs-wrapper >}} +{{% code-tabs %}} +[Python](#python) +{{% /code-tabs %}} +{{% code-tab-content %}} + +```python +def create_device(device_id=None): + influxdb_client = InfluxDBClient(url=config.get('APP', 'INFLUX_URL'), + token=config.get('APP', 'INFLUX_TOKEN'), + org=config.get('APP', 'INFLUX_ORG')) + if device_id is None: + device_id = str(uuid4()) + write_api = influxdb_client.write_api(write_options=SYNCHRONOUS) + point = Point('deviceauth') \ + .tag("deviceId", device_id) \ + .field('key', f'fake_auth_id_{device_id}') \ + .field('token', f'fake_auth_token_{device_id}') + client_response = write_api.write(bucket=config.get('APP', 'INFLUX_BUCKET_AUTH'), record=point) + # write() returns None on success + if client_response is None: + return device_id + # Return None on failure + return None +``` + +{{% caption %}}[iot-api-python/api/devices.py](https://github.com/influxdata/iot-api-python/blob/f354941c80b6bac643ca29efe408fde1deebdc96/api/devices.py#L47){{% /caption %}} + +{{% /code-tab-content %}} +{{< /code-tabs-wrapper >}} + +`create_device(device_id)` takes a _`device_id`_ and writes data to `INFLUX_BUCKET_AUTH` in the following steps: + +1. Initialize `InfluxDBClient()` with `url`, `token`, and `org` values from the configuration. +2. Initialize a `WriteAPI` client for writing data to an InfluxDB bucket. +3. Create a `Point`. +4. 
Use `write_api.write()` to write the `Point` to the bucket. +5. Check for failures--if the write was successful, `write_api` returns `None`. +6. Return _`device_id`_ if successful; `None` otherwise. + +The function writes a point with the following elements: + +| Element | Name | Value | +|:------------|:-----------|:--------------------------| +| measurement | | `deviceauth` | +| tag | `deviceId` | device ID | +| field | `key` | authorization ID | +| field | `token` | authorization (API) token | + +Next, [create the API to list devices](#create-the-api-to-list-devices). + +## Create the API to list devices + +Add the `/api/devices` API endpoint that retrieves, processes, and lists registered devices. + +1. Create a Flux query that gets the last row of each [series](/influxdb/v2.6/reference/glossary#series) that contains a `deviceauth` measurement. + The example query below returns rows that contain the `key` field (authorization ID) and excludes rows that contain a `token` field (to avoid exposing tokens to the UI). + + ```js + // Flux query finds devices + from(bucket:`${INFLUX_BUCKET_AUTH}`) + |> range(start: 0) + |> filter(fn: (r) => r._measurement == "deviceauth" and r._field != "token") + |> last() + ``` + +2. Use the `QueryApi` client to send the Flux query to the `POST /api/v2/query` InfluxDB API endpoint. 
+
+   In `./api/devices.py`, add the following:
+
+   {{< code-tabs-wrapper >}}
+   {{% code-tabs %}}
+   [Python](#python)
+   {{% /code-tabs %}}
+   {{% code-tab-content %}}
+
+   {{% truncate %}}
+
+   ```python
+   def get_device(device_id=None) -> {}:
+       influxdb_client = InfluxDBClient(url=config.get('APP', 'INFLUX_URL'),
+                                        token=os.environ.get('INFLUX_TOKEN'),
+                                        org=os.environ.get('INFLUX_ORG'))
+       # Queries must be formatted with single and double quotes correctly
+       query_api = QueryApi(influxdb_client)
+       device_filter = ''
+       if device_id:
+           device_id = str(device_id)
+           device_filter = f'r.deviceId == "{device_id}" and r._field != "token"'
+       else:
+           device_filter = f'r._field != "token"'
+
+       flux_query = f'from(bucket: "{config.get("APP", "INFLUX_BUCKET_AUTH")}") ' \
+                    f'|> range(start: 0) ' \
+                    f'|> filter(fn: (r) => r._measurement == "deviceauth" and {device_filter}) ' \
+                    f'|> last()'
+
+       response = query_api.query(flux_query)
+       result = []
+       for table in response:
+           for record in table.records:
+               try:
+                   'updatedAt' in record
+               except KeyError:
+                   record['updatedAt'] = record.get_time()
+               record[record.get_field()] = record.get_value()
+               result.append(record.values)
+       return result
+   ```
+
+{{% /truncate %}}
+
+{{% caption %}}[iot-api-python/api/devices.py get_device()](https://github.com/influxdata/iot-api-python/blob/9bf44a659424a27eb937d545dc0455754354aef5/api/devices.py#L30){{% /caption %}}
+
+{{% /code-tab-content %}}
+{{< /code-tabs-wrapper >}}
+
+The `get_device(device_id)` function does the following:
+
+1. Instantiates a `QueryApi` client and sends the Flux query to InfluxDB.
+2. Iterates over the `FluxTable` in the response and returns a list of record value dictionaries.
+
+## Create IoT virtual device
+
+Create a `./api/sensor.py` file that generates simulated weather telemetry data.
+Follow the [example code](https://github.com/influxdata/iot-api-python/blob/f354941c80b6bac643ca29efe408fde1deebdc96/api/sensor.py) to create the IoT virtual device.
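If you only need a placeholder while wiring up the API, a minimal stand-in for `./api/sensor.py` can look like the following sketch. It assumes only the `generate_measurement()` and `geo()` methods that `devices.py` calls; the value range and the fixed location are invented, not taken from the repository:

```python
import random

class Sensor:
    """Minimal virtual sensor that emits random weather-like readings."""

    def generate_measurement(self):
        # Hypothetical range; the real generator produces smoother series.
        return round(random.uniform(0.0, 100.0), 3)

    def geo(self):
        # Fixed virtual location; the real device may randomize this.
        return {'latitude': 50.0875, 'longitude': 14.4213}

sensor = Sensor()
print(sensor.generate_measurement(), sensor.geo())
```

The repository's example code produces more realistic telemetry, but this sketch satisfies the same interface.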
+
+Next, generate data for virtual devices and [write the data to InfluxDB](#write-telemetry-data).
+
+## Write telemetry data
+
+In this section, you write telemetry data to an InfluxDB bucket.
+To write data, use the InfluxDB client library to send a `POST` request to the `/api/v2/write` InfluxDB API endpoint.
+
+The example below uses the following steps to generate data and then write it to InfluxDB:
+
+1. Initialize a `WriteAPI` instance.
+2. Create a `Point` with the `environment` measurement and data fields for temperature, humidity, pressure, latitude, and longitude.
+3. Use the `WriteAPI write` method to send the point to InfluxDB.
+
+In `./api/devices.py`, add the following `write_measurements(device_id)` function:
+
+{{< code-tabs-wrapper >}}
+{{% code-tabs %}}
+[Python](#python)
+{{% /code-tabs %}}
+{{% code-tab-content %}}
+
+```python
+def write_measurements(device_id):
+    influxdb_client = InfluxDBClient(url=config.get('APP', 'INFLUX_URL'),
+                                     token=config.get('APP', 'INFLUX_TOKEN'),
+                                     org=config.get('APP', 'INFLUX_ORG'))
+    write_api = influxdb_client.write_api(write_options=SYNCHRONOUS)
+    virtual_device = Sensor()
+    coord = virtual_device.geo()
+    point = Point("environment") \
+        .tag("device", device_id) \
+        .tag("TemperatureSensor", "virtual_bme280") \
+        .tag("HumiditySensor", "virtual_bme280") \
+        .tag("PressureSensor", "virtual_bme280") \
+        .field("Temperature", virtual_device.generate_measurement()) \
+        .field("Humidity", virtual_device.generate_measurement()) \
+        .field("Pressure", virtual_device.generate_measurement()) \
+        .field("Lat", coord['latitude']) \
+        .field("Lon", coord['longitude']) \
+        .time(datetime.utcnow())
+    print(f"Writing: {point.to_line_protocol()}")
+    client_response = write_api.write(bucket=config.get('APP', 'INFLUX_BUCKET'), record=point)
+    # write() returns None on success
+    if client_response is None:
+        # TODO Maybe also return the data that was written
+        return device_id
+    # Return None on failure
+    return None
+```
+
+{{% caption %}}[iot-api-python/api/devices.py write_measurements()](https://github.com/influxdata/iot-api-python/blob/f354941c80b6bac643ca29efe408fde1deebdc96/api/devices.py){{% /caption %}}
+
+{{% /code-tab-content %}}
+{{< /code-tabs-wrapper >}}
+
+## Query telemetry data
+
+In this section, you retrieve telemetry data from an InfluxDB bucket.
+To retrieve data, use the InfluxDB client library to send a `POST` request to the `/api/v2/query` InfluxDB API endpoint.
+The example below uses the following steps to retrieve and process telemetry data:
+
+ 1. Query `environment` measurements in `INFLUX_BUCKET`.
+ 2. Filter results by `device_id`.
+ 3. Return CSV data that the [`influxdata/giraffe` UI library](https://github.com/influxdata/giraffe) can process.
+
+In `./api/devices.py`, add the following `get_measurements(query)` function:
+
+{{< code-tabs-wrapper >}}
+{{% code-tabs %}}
+[Python](#python)
+{{% /code-tabs %}}
+{{% code-tab-content %}}
+
+```python
+def get_measurements(query):
+    influxdb_client = InfluxDBClient(url=config.get('APP', 'INFLUX_URL'),
+                                     token=os.environ.get('INFLUX_TOKEN'), org=os.environ.get('INFLUX_ORG'))
+    query_api = QueryApi(influxdb_client)
+    result = query_api.query_csv(query,
+                                 dialect=Dialect(
+                                     header=True,
+                                     delimiter=",",
+                                     comment_prefix="#",
+                                     annotations=['group', 'datatype', 'default'],
+                                     date_time_format="RFC3339"))
+    response = ''
+    for row in result:
+        response += (',').join(row) + ('\n')
+    return response
+```
+
+{{% caption %}}[iot-api-python/api/devices.py get_measurements()](https://github.com/influxdata/iot-api-python/blob/9bf44a659424a27eb937d545dc0455754354aef5/api/devices.py#L122){{% /caption %}}
+
+{{% /code-tab-content %}}
+{{< /code-tabs-wrapper >}}
+
+## Define API responses
+
+In `app.py`, add API endpoints that match incoming requests and respond with the results of your modules.
+In the following `/api/devices/` route example, `app.py` retrieves _`device_id`_ from `GET` and `POST` requests, passes it to the `get_device(device_id)` method and returns the result as JSON data with CORS `allow-` headers. + +{{< code-tabs-wrapper >}} +{{% code-tabs %}} +[Python](#python) +{{% /code-tabs %}} +{{% code-tab-content %}} + +```python +@app.route('/api/devices/', methods=['GET', 'POST']) +def api_get_device(device_id): + if request.method == "OPTIONS": # CORS preflight + return _build_cors_preflight_response() + return _corsify_actual_response(jsonify(devices.get_device(device_id))) +``` + +{{% caption %}}[iot-api-python/app.py](https://github.com/influxdata/iot-api-python/blob/9bf44a659424a27eb937d545dc0455754354aef5/app.py){{% /caption %}} + +{{% /code-tab-content %}} +{{< /code-tabs-wrapper >}} + +Enter the following commands into your terminal to restart the application: + + 1. `CONTROL+C` to stop the application. + 2. `flask run -h localhost -p 3001` to start the application. + +To retrieve devices data from your API, visit in your browser. + +## Install and run the UI + +`influxdata/iot-api-ui` is a standalone [Next.js React](https://nextjs.org/docs/basic-features/pages) UI that uses your application API to write and query data in InfluxDB. +`iot-api-ui` uses Next.js _[rewrites](https://nextjs.org/docs/api-reference/next.config.js/rewrites)_ to route all requests in the `/api/` path to your API. + +To install and run the UI, do the following: + +1. In your `~/iot-api-apps` directory, clone the [`influxdata/iot-api-ui` repo](https://github.com/influxdata/iot-api-ui) and go into the `iot-api-ui` directory--for example: + + ```bash + cd ~/iot-api-apps + git clone git@github.com:influxdata/iot-api-ui.git + cd ./iot-app-ui + ``` + +2. The `./.env.development` file contains default configuration settings that you can + edit or override (with a `./.env.local` file). +3. 
To start the UI, enter the following command into your terminal: + + ```bash + yarn dev + ``` + + To view the list and register devices, visit http://localhost:3000 in your browser. + +To learn more about the UI components, see [`influxdata/iot-api-ui`](https://github.com/influxdata/iot-api-ui). diff --git a/content/influxdb/v2.6/backup-restore/_index.md b/content/influxdb/v2.6/backup-restore/_index.md new file mode 100644 index 000000000..194217e25 --- /dev/null +++ b/content/influxdb/v2.6/backup-restore/_index.md @@ -0,0 +1,17 @@ +--- +title: Back up and restore data +seotitle: Backup and restore data with InfluxDB +description: > + InfluxDB provides tools that let you back up and restore data and metadata stored + in InfluxDB. +influxdb/v2.6/tags: [backup, restore] +menu: + influxdb_2_6: + name: Back up & restore data +weight: 9 +products: [oss] +--- + +InfluxDB provides tools to back up and restore data and metadata stored in InfluxDB. + +{{< children >}} diff --git a/content/influxdb/v2.6/backup-restore/backup.md b/content/influxdb/v2.6/backup-restore/backup.md new file mode 100644 index 000000000..4d69188c1 --- /dev/null +++ b/content/influxdb/v2.6/backup-restore/backup.md @@ -0,0 +1,49 @@ +--- +title: Back up data +seotitle: Back up data in InfluxDB +description: > + Use the `influx backup` command to back up data and metadata stored in InfluxDB. +menu: + influxdb_2_6: + parent: Back up & restore data +weight: 101 +related: + - /influxdb/v2.6/backup-restore/restore/ + - /influxdb/v2.6/reference/cli/influx/backup/ +products: [oss] +--- + +Use the [`influx backup` command](/influxdb/v2.6/reference/cli/influx/backup/) to back up +data and metadata stored in InfluxDB. +InfluxDB copies all data and metadata to a set of files stored in a specified directory +on your local filesystem. + +{{% note %}} +#### InfluxDB 1.x/2.x compatibility +The InfluxDB {{< current-version >}} `influx backup` command is not compatible with versions of InfluxDB prior to 2.0.0.
+**For information about migrating data between InfluxDB 1.x and {{< current-version >}}, see:** + +- [Automatically upgrade from InfluxDB 1.x to {{< current-version >}}](/influxdb/v2.6/upgrade/v1-to-v2/automatic-upgrade/) +- [Manually upgrade from InfluxDB 1.x to {{< current-version >}}](/influxdb/v2.6/upgrade/v1-to-v2/manual-upgrade/) +{{% /note %}} + +{{% cloud %}} +The `influx backup` command **cannot** back up data stored in **{{< cloud-name "short" >}}**. +{{% /cloud %}} + +The `influx backup` command requires: + +- The directory path where the backup file set will be stored +- The **root authorization token** (the token created for the first user in the + [InfluxDB setup process](/influxdb/v2.6/get-started/)). + +##### Back up data with the influx CLI +```sh +# Syntax +influx backup <backup-path> -t <root-token> + +# Example +influx backup \ + path/to/backup_$(date '+%Y-%m-%d_%H-%M') \ + -t xXXXX0xXX0xxX0xx_x0XxXxXXXxxXX0XXX0XXxXxX0XxxxXX0Xx0xx== +``` diff --git a/content/influxdb/v2.6/backup-restore/restore.md b/content/influxdb/v2.6/backup-restore/restore.md new file mode 100644 index 000000000..63d5d0b62 --- /dev/null +++ b/content/influxdb/v2.6/backup-restore/restore.md @@ -0,0 +1,141 @@ +--- +title: Restore data +seotitle: Restore data in InfluxDB +description: > + Use the `influx restore` command to restore backup data and metadata from InfluxDB. +menu: + influxdb_2_6: + parent: Back up & restore data +weight: 101 +influxdb/v2.6/tags: [restore] +related: + - /influxdb/v2.6/backup-restore/backup/ + - /influxdb/v2.6/reference/cli/influxd/restore/ +products: [oss] +--- + +{{% cloud %}} +Restores are **not supported** in {{< cloud-name "short" >}}. +{{% /cloud %}} + +Use the `influx restore` command to restore backup data and metadata from InfluxDB OSS. + +- [Restore data with the influx CLI](#restore-data-with-the-influx-cli) +- [Recover from a failed restore](#recover-from-a-failed-restore) + +InfluxDB moves existing data and metadata to a temporary location.
+If the restore fails, InfluxDB preserves the temporary data for recovery; +otherwise, it deletes this data. +_See [Recover from a failed restore](#recover-from-a-failed-restore)._ + +{{% note %}} +#### Cannot restore to existing buckets +The `influx restore` command cannot restore data to existing buckets. +Use the `--new-bucket` flag to create a new bucket to restore data to. +To restore data and retain bucket names, [delete existing buckets](/influxdb/v2.6/organizations/buckets/delete-bucket/) +and then begin the restore process. +{{% /note %}} + +## Restore data with the influx CLI +Use the `influx restore` command and specify the path to the backup directory. + +_For more information about restore options and flags, see the +[`influx restore` documentation](/influxdb/v2.6/reference/cli/influx/restore/)._ + +- [Restore all time series data](#restore-all-time-series-data) +- [Restore data from a specific bucket](#restore-data-from-a-specific-bucket) +- [Restore and replace all InfluxDB data](#restore-and-replace-all-influxdb-data) + +### Restore all time series data +To restore all time series data from a backup directory, provide the following: + +- backup directory path + +```sh +influx restore /backups/2020-01-20_12-00/ +``` + +### Restore data from a specific bucket +To restore data from a specific backup bucket, provide the following: + +- backup directory path +- bucket name or ID + +```sh +influx restore \ + /backups/2020-01-20_12-00/ \ + --bucket example-bucket + +# OR + +influx restore \ + /backups/2020-01-20_12-00/ \ + --bucket-id 000000000000 +``` + +If a bucket with the same name as the backed-up bucket already exists in InfluxDB, +use the `--new-bucket` flag to create a new bucket with a different name and +restore data into it.
+ +```sh +influx restore \ + /backups/2020-01-20_12-00/ \ + --bucket example-bucket \ + --new-bucket new-example-bucket +``` + +### Restore and replace all InfluxDB data +To restore and replace all time series data _and_ InfluxDB key-value data such as +tokens, users, dashboards, etc., include the following: + +- `--full` flag +- backup directory path + +```sh +influx restore \ + /backups/2020-01-20_12-00/ \ + --full +``` + +{{% note %}} +#### Restore to a new InfluxDB server +If using a backup to populate a new InfluxDB server: + +1. Retrieve the [admin token](/influxdb/v2.6/security/tokens/#admin-token) from your source InfluxDB instance. +2. Set up your new InfluxDB instance, but use the `-t`, `--token` flag to provide the + **admin token** from your source instance as the admin token on your new instance. + + ```sh + influx setup --token My5uP3rSecR37t0keN + ``` +3. Restore the backup to the new server. + + ```sh + influx restore \ + /backups/2020-01-20_12-00/ \ + --full + ``` + +If you do not provide the admin token from your source InfluxDB instance as the +admin token in your new instance, the restore process and all subsequent attempts +to authenticate with the new server will fail. + +1. The first restore API call uses the auto-generated token to authenticate with + the new server and overwrites the entire key-value store in the new server, including + the auto-generated token. +2. The second restore API call attempts to upload time series data, but uses the + auto-generated token to authenticate with the new server. + That token was overwritten in the first restore API call, so the process fails to authenticate. +{{% /note %}} + + +## Recover from a failed restore +If the restoration process fails, InfluxDB preserves existing data in a `tmp` +directory in the [target engine path](/influxdb/v2.6/reference/cli/influx/restore/#flags) +(default is `~/.influxdbv2/engine`). + +To recover from a failed restore: + +1. Copy the temporary files back into the `engine` directory.
+2. Remove the `.tmp` extensions from each of the copied files. +3. Restart the `influxd` server. diff --git a/content/influxdb/v2.6/get-started/_index.md b/content/influxdb/v2.6/get-started/_index.md new file mode 100644 index 000000000..419a7451a --- /dev/null +++ b/content/influxdb/v2.6/get-started/_index.md @@ -0,0 +1,118 @@ +--- +title: Get started with InfluxDB +list_title: Get started +description: > + Start collecting, querying, processing, and visualizing data in InfluxDB OSS. +menu: + influxdb_2_6: + name: Get started +weight: 3 +influxdb/v2.6/tags: [get-started] +aliases: + - /influxdb/v2.6/introduction/get-started/ +--- + +InfluxDB {{< current-version >}} is the platform purpose-built to collect, store, +process and visualize time series data. +**Time series data** is a sequence of data points indexed in time order. +Data points typically consist of successive measurements made from the same +source and are used to track changes over time. +Examples of time series data include: + +- Industrial sensor data +- Server performance metrics +- Heartbeats per minute +- Electrical activity in the brain +- Rainfall measurements +- Stock prices + +This multi-part tutorial walks you through writing time series data to InfluxDB {{< current-version >}}, +querying that data, processing and alerting on the data, and then visualizing the data. + +## Key concepts before you get started + +Before you get started using InfluxDB, it's important to understand how time series +data is organized and stored in InfluxDB and some key definitions that are used +throughout this documentation. + +### Data organization + +The InfluxDB data model organizes time series data into buckets and measurements. +A bucket can contain multiple measurements. Measurements contain multiple +tags and fields. + +- **Bucket**: Named location where time series data is stored. + A bucket can contain multiple _measurements_. + - **Measurement**: Logical grouping for time series data. 
+ All _points_ in a given measurement should have the same _tags_. + A measurement contains multiple _tags_ and _fields_. + - **Tags**: Key-value pairs with values that differ, but do not change often. + Tags are meant for storing metadata for each point--for example, + something to identify the source of the data like host, location, station, etc. + - **Fields**: Key-value pairs with values that change over time--for example: temperature, pressure, stock price, etc. + - **Timestamp**: Timestamp associated with the data. + When stored on disk and queried, all data is ordered by time. + +_For detailed information and examples of the InfluxDB data model, see +[Data elements](/influxdb/v2.6/reference/key-concepts/data-elements/)._ + +### Important definitions + +The following are important definitions to understand when using InfluxDB: + +- **Point**: Single data record identified by its _measurement, tag keys, tag values, field key, and timestamp_. +- **Series**: A group of points with the same + {{% oss-only %}}_measurement, tag keys, and tag values_.{{% /oss-only %}} + {{% cloud-only %}}_measurement, tag keys and values, and field key_.{{% /cloud-only %}} + +##### Example InfluxDB query results + +{{< influxdb/points-series >}} + +## Tools to use + +Throughout this tutorial, there are multiple tools you can use to interact with +InfluxDB {{< current-version >}}. Examples are provided for each of the following: + +- [InfluxDB user interface (UI)](#influxdb-user-interface-ui) +- [`influx` CLI](#influx-cli) +- [InfluxDB HTTP API](#influxdb-http-api) + +### InfluxDB user interface (UI) + +The InfluxDB UI provides a web-based visual interface for interacting with and managing InfluxDB. +{{% oss-only %}}The UI is packaged with InfluxDB and runs as part of the InfluxDB service. 
To access the UI, with InfluxDB running, visit [localhost:8086](http://localhost:8086) in your browser.{{% /oss-only %}} +{{% cloud-only %}}To access the InfluxDB Cloud UI, [log into your InfluxDB Cloud account](https://cloud2.influxdata.com).{{% /cloud-only %}} + +### `influx` CLI + +The `influx` CLI lets you interact with and manage InfluxDB {{< current-version >}} from a command line. +{{% oss-only %}}The CLI is packaged separately from InfluxDB and must be downloaded and installed separately.{{% /oss-only %}} +For detailed CLI installation instructions, see +[Use the influx CLI](/influxdb/v2.6/tools/influx-cli/). + +### InfluxDB HTTP API + +The [InfluxDB API](/influxdb/v2.6/reference/api/) provides a simple way to +interact with the InfluxDB {{< current-version >}} using HTTP(S) clients. +Examples in this tutorial use cURL, but any HTTP(S) client will work. + +{{% note %}} +#### InfluxDB client libraries + +[InfluxDB client libraries](/influxdb/v2.6/api-guide/client-libraries/) are +language-specific clients that interact with the InfluxDB HTTP API. +Examples for client libraries are not provided in this tutorial, but these can +be used to perform all the actions outlined in this tutorial. +{{% /note %}} + +## Authorization + +**InfluxDB {{< current-version >}} requires authentication** using [API tokens](/influxdb/v2.6/security/tokens/). +Each API token is associated with a user and a specific set of permissions for InfluxDB resources. 
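With the HTTP API, the token is passed in an `Authorization: Token` header on every request. The following Python sketch builds (but does not send) an authenticated request using only the standard library; the URL and token values are placeholders:

```python
from urllib.request import Request

# Placeholders -- substitute your instance URL and a real API token.
INFLUX_URL = "http://localhost:8086"
API_TOKEN = "my-api-token"

# InfluxDB v2 expects the token in an `Authorization: Token <token>` header.
req = Request(
    f"{INFLUX_URL}/api/v2/buckets",
    headers={"Authorization": f"Token {API_TOKEN}"},
)

# urllib.request.urlopen(req) would send the request; it is omitted here
# so the sketch runs without a live InfluxDB instance.
```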
+ +{{< page-nav next="/influxdb/v2.6/get-started/setup/" >}} + +--- + +{{< influxdbu "influxdb-101" >}} diff --git a/content/influxdb/v2.6/get-started/process.md b/content/influxdb/v2.6/get-started/process.md new file mode 100644 index 000000000..de5a8391e --- /dev/null +++ b/content/influxdb/v2.6/get-started/process.md @@ -0,0 +1,1275 @@ +--- +title: Get started processing data +seotitle: Process data | Get started with InfluxDB +list_title: Process data +description: > + Learn how to process time series data to do things like downsample and alert + on data. +menu: + influxdb_2_6: + name: Process data + parent: Get started + identifier: get-started-process-data +weight: 103 +metadata: [4 / 5] +related: + - /influxdb/v2.6/process-data/ + - /influxdb/v2.6/process-data/get-started/ + - /{{< latest "flux" >}}/get-started/ + - /{{< latest "flux" >}}/stdlib/ +--- + +Now that you know the [basics of querying data from InfluxDB](/influxdb/v2.6/get-started/query/), +let's go beyond a basic query and begin to process the queried data. +"Processing" data could mean transforming, aggregating, downsampling, or alerting +on data. This tutorial covers the following data processing use cases: + +- [Remap or assign values in your data](#remap-or-assign-values-in-your-data) +- [Group data](#group-data) +- [Aggregate or select specific data](#aggregate-or-select-specific-data) +- [Pivot data into a relational schema](#pivot-data-into-a-relational-schema) +- [Downsample data](#downsample-data) +- [Automate processing with InfluxDB tasks](#automate-processing-with-influxdb-tasks) + +{{% note %}} +Most data processing operations require manually editing Flux queries. 
+If you're using the **InfluxDB Data Explorer**, switch to the **Script Editor** +instead of using the **Query Builder**. +{{% /note %}} + +## Remap or assign values in your data + +Use the [`map()` function](/{{< latest "flux" >}}/stdlib/universe/map/) to +iterate over each row in your data and update the values in that row. +`map()` is one of the most useful functions in Flux and will help you accomplish +many of the data processing operations you need to perform. + +{{< expand-wrapper >}} +{{% expand "Learn more about how `map()` works" %}} + +`map()` takes a single parameter, `fn`. +`fn` takes an anonymous function that reads each row as a +[record](/{{< latest "flux" >}}/data-types/composite/record/) named `r`. +In the `r` record, each key-value pair represents a column and its value. +For example: + +```js +r = { + _time: 2020-01-01T00:00:00Z, + _measurement: "home", + room: "Kitchen", + _field: "temp", + _value: 21.0, +} +``` + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :------ | :----- | :----- | +| 2020-01-01T00:00:00Z | home | Kitchen | temp | 21.0 | + +The `fn` function modifies the `r` record in any way you need and returns a new +record for the row. For example, using the record above: + +```js +(r) => ({ _time: r._time, _field: "temp_F", _value: (r._value * 1.8) + 32.0}) + +// Returns: {_time: 2020-01-01T00:00:00Z, _field: "temp_F", _value: 69.8} +``` + +| _time | _field | _value | +| :------------------- | :----- | -----: | +| 2020-01-01T00:00:00Z | temp_F | 69.8 | + +Notice that some of the columns were dropped from the original row record. +This is because the `fn` function explicitly mapped the `_time`, `_field`, and `_value` columns. +To retain existing columns and only update or add specific columns, use the +`with` operator to extend your row record.
+For example, using the record above: + +```js +(r) => ({r with _value: (r._value * 1.8) + 32.0, degrees: "F"}) + +// Returns: +// { +// _time: 2020-01-01T00:00:00Z, +// _measurement: "home", +// room: "Kitchen", +// _field: "temp", +// _value: 69.8, +// degrees: "F", +// } +``` + +| _time | _measurement | room | _field | _value | degrees | +| :------------------- | :----------- | :------ | :----- | -----: | :------ | +| 2020-01-01T00:00:00Z | home | Kitchen | temp | 69.8 | F | + +{{% /expand %}} +{{< /expand-wrapper >}} + +```js +from(bucket: "get-started") + |> range(start: 2022-01-01T08:00:00Z, stop: 2022-01-01T20:00:01Z) + |> filter(fn: (r) => r._measurement == "home") + |> filter(fn: (r) => r._field == "hum") + |> map(fn: (r) => ({r with _value: r._value / 100.0})) +``` + +### Map examples + +{{< expand-wrapper >}} + +{{% expand "Perform mathematical operations" %}} + +`map()` lets you perform mathematical operations on your data. +For example, using the [data written in "Get started writing to InfluxDB"](/influxdb/v2.6/get-started/write/#view-the-written-data): + +1. Query the `temp` field to return room temperatures in °C. +2. Use `map()` to iterate over each row and convert the °C temperatures in the + `_value` column to °F using the equation: `°F = (°C * 1.8) + 32.0`. + +```js +from(bucket: "get-started") + |> range(start: 2022-01-01T14:00:00Z, stop: 2022-01-01T20:00:01Z) + |> filter(fn: (r) => r._measurement == "home") + |> filter(fn: (r) => r._field == "temp") + |> map(fn: (r) => ({r with _value: (r._value * 1.8) + 32.0})) +``` + +{{< tabs-wrapper >}} +{{% tabs "small" %}} +[Input](#) +[Output](#) +Click to view output +{{% /tabs %}} +{{% tab-content %}} + +{{% note %}} +`_start` and `_stop` columns have been omitted.
+{{% /note %}} + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | -----: | +| 2022-01-01T14:00:00Z | home | Kitchen | temp | 22.8 | +| 2022-01-01T15:00:00Z | home | Kitchen | temp | 22.7 | +| 2022-01-01T16:00:00Z | home | Kitchen | temp | 22.4 | +| 2022-01-01T17:00:00Z | home | Kitchen | temp | 22.7 | +| 2022-01-01T18:00:00Z | home | Kitchen | temp | 23.3 | +| 2022-01-01T19:00:00Z | home | Kitchen | temp | 23.1 | +| 2022-01-01T20:00:00Z | home | Kitchen | temp | 22.7 | + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | -----: | +| 2022-01-01T14:00:00Z | home | Living Room | temp | 22.3 | +| 2022-01-01T15:00:00Z | home | Living Room | temp | 22.3 | +| 2022-01-01T16:00:00Z | home | Living Room | temp | 22.4 | +| 2022-01-01T17:00:00Z | home | Living Room | temp | 22.6 | +| 2022-01-01T18:00:00Z | home | Living Room | temp | 22.8 | +| 2022-01-01T19:00:00Z | home | Living Room | temp | 22.5 | +| 2022-01-01T20:00:00Z | home | Living Room | temp | 22.2 | + +{{% /tab-content %}} +{{% tab-content %}} + +{{% note %}} +`_start` and `_stop` columns have been omitted. 
+{{% /note %}} + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :------ | :----- | ----------------: | +| 2022-01-01T14:00:00Z | home | Kitchen | temp | 73.03999999999999 | +| 2022-01-01T15:00:00Z | home | Kitchen | temp | 72.86 | +| 2022-01-01T16:00:00Z | home | Kitchen | temp | 72.32 | +| 2022-01-01T17:00:00Z | home | Kitchen | temp | 72.86 | +| 2022-01-01T18:00:00Z | home | Kitchen | temp | 73.94 | +| 2022-01-01T19:00:00Z | home | Kitchen | temp | 73.58000000000001 | +| 2022-01-01T20:00:00Z | home | Kitchen | temp | 72.86 | + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | ----------------: | +| 2022-01-01T14:00:00Z | home | Living Room | temp | 72.14 | +| 2022-01-01T15:00:00Z | home | Living Room | temp | 72.14 | +| 2022-01-01T16:00:00Z | home | Living Room | temp | 72.32 | +| 2022-01-01T17:00:00Z | home | Living Room | temp | 72.68 | +| 2022-01-01T18:00:00Z | home | Living Room | temp | 73.03999999999999 | +| 2022-01-01T19:00:00Z | home | Living Room | temp | 72.5 | +| 2022-01-01T20:00:00Z | home | Living Room | temp | 71.96000000000001 | + +{{% /tab-content %}} +{{< /tabs-wrapper >}} + +{{% /expand %}} + +{{% expand "Conditionally assign a state" %}} + +Within a `map()` function, you can use [conditional expressions](/{{< latest "flux" >}}/spec/expressions/#conditional-expressions) (if/then/else) to conditionally assign values. +For example, using the [data written in "Get started writing to InfluxDB"](/influxdb/v2.6/get-started/write/#view-the-written-data): + +1. Query the `co` field to return carbon monoxide parts per million (ppm) readings in each room. +2. Use `map()` to iterate over each row, evaluate the value in the `_value` + column, and then conditionally assign a state: + + - If the carbon monoxide is less than 10 ppm, assign the state: **ok**. + - Otherwise, assign the state: **warning**. + + Store the state in a **state** column. 
+ +```js +from(bucket: "get-started") + |> range(start: 2022-01-01T14:00:00Z, stop: 2022-01-01T20:00:01Z) + |> filter(fn: (r) => r._measurement == "home") + |> filter(fn: (r) => r._field == "co") + |> map(fn: (r) => ({r with state: if r._value < 10 then "ok" else "warning"})) +``` + +{{< tabs-wrapper >}} +{{% tabs "small" %}} +[Input](#) +[Output](#) +Click to view output +{{% /tabs %}} +{{% tab-content %}} + +{{% note %}} +`_start` and `_stop` columns have been omitted. +{{% /note %}} + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :------ | :----- | -----: | +| 2022-01-01T14:00:00Z | home | Kitchen | co | 1 | +| 2022-01-01T15:00:00Z | home | Kitchen | co | 3 | +| 2022-01-01T16:00:00Z | home | Kitchen | co | 7 | +| 2022-01-01T17:00:00Z | home | Kitchen | co | 9 | +| 2022-01-01T18:00:00Z | home | Kitchen | co | 18 | +| 2022-01-01T19:00:00Z | home | Kitchen | co | 22 | +| 2022-01-01T20:00:00Z | home | Kitchen | co | 26 | + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | -----: | +| 2022-01-01T14:00:00Z | home | Living Room | co | 1 | +| 2022-01-01T15:00:00Z | home | Living Room | co | 1 | +| 2022-01-01T16:00:00Z | home | Living Room | co | 4 | +| 2022-01-01T17:00:00Z | home | Living Room | co | 5 | +| 2022-01-01T18:00:00Z | home | Living Room | co | 9 | +| 2022-01-01T19:00:00Z | home | Living Room | co | 14 | +| 2022-01-01T20:00:00Z | home | Living Room | co | 17 | + +{{% /tab-content %}} +{{% tab-content %}} + +{{% note %}} +`_start` and `_stop` columns have been omitted. 
+{{% /note %}} + +| _time | _measurement | room | _field | _value | state | +| :------------------- | :----------- | :------ | :----- | -----: | :------ | +| 2022-01-01T14:00:00Z | home | Kitchen | co | 1 | ok | +| 2022-01-01T15:00:00Z | home | Kitchen | co | 3 | ok | +| 2022-01-01T16:00:00Z | home | Kitchen | co | 7 | ok | +| 2022-01-01T17:00:00Z | home | Kitchen | co | 9 | ok | +| 2022-01-01T18:00:00Z | home | Kitchen | co | 18 | warning | +| 2022-01-01T19:00:00Z | home | Kitchen | co | 22 | warning | +| 2022-01-01T20:00:00Z | home | Kitchen | co | 26 | warning | + +| _time | _measurement | room | _field | _value | state | +| :------------------- | :----------- | :---------- | :----- | -----: | :------ | +| 2022-01-01T14:00:00Z | home | Living Room | co | 1 | ok | +| 2022-01-01T15:00:00Z | home | Living Room | co | 1 | ok | +| 2022-01-01T16:00:00Z | home | Living Room | co | 4 | ok | +| 2022-01-01T17:00:00Z | home | Living Room | co | 5 | ok | +| 2022-01-01T18:00:00Z | home | Living Room | co | 9 | ok | +| 2022-01-01T19:00:00Z | home | Living Room | co | 14 | warning | +| 2022-01-01T20:00:00Z | home | Living Room | co | 17 | warning | + +{{% /tab-content %}} +{{< /tabs-wrapper >}} + +{{% /expand %}} + +{{% expand "Alert on data" %}} + +`map()` lets you execute more complex operations on a per row basis. +Using a [Flux block (`{}`)](/{{< latest "flux" >}}/spec/blocks/) in the `fn` function, +you can create scoped variables and execute other functions within the context +of each row. For example, you can send a message to [Slack](https://slack.com). + +{{% note %}} +For this example to actually send messages to Slack, you need to +[set up a Slack app that can send and receive messages](https://api.slack.com/messaging/sending). +{{% /note %}} + +For example, using the [data written in "Get started writing to InfluxDB"](/influxdb/v2.6/get-started/write/#view-the-written-data): + +1. Import the [`slack` package](/{{< latest "flux" >}}/stdlib/slack/). +2. 
Query the `co` field to return carbon monoxide parts per million (ppm) readings in each room. +3. Use `map()` to iterate over each row, evaluate the value in the `_value` + column, and then conditionally assign a state: + + - If the carbon monoxide is less than 10 ppm, assign the state: **ok**. + - Otherwise, assign the state: **warning**. + + Store the state in a **state** column. +4. Use [`filter()`](/{{< latest "flux" >}}/stdlib/universe/filter/) to return + only rows with **warning** in the state column. +5. Use `map()` to iterate over each row. + In your `fn` function, use a [Flux block (`{}`)](/{{< latest "flux" >}}/spec/blocks/) to: + + 1. Create a `responseCode` variable that uses [`slack.message()`](/{{< latest "flux" >}}/stdlib/slack/message/) + to send a message to Slack using data from the input row. + `slack.message()` returns the response code of the Slack API request as an integer. + 2. Use a `return` statement to return a new row record. + The new row should extend the input row with a new column, **sent**, with + a boolean value determined by the `responseCode` variable. + +`map()` sends a message to Slack for each row piped forward into the function. 
+ +```js +import "slack" + +from(bucket: "get-started") + |> range(start: 2022-01-01T14:00:00Z, stop: 2022-01-01T20:00:01Z) + |> filter(fn: (r) => r._measurement == "home") + |> filter(fn: (r) => r._field == "co") + |> map(fn: (r) => ({r with state: if r._value < 10 then "ok" else "warning"})) + |> filter(fn: (r) => r.state == "warning") + |> map( + fn: (r) => { + responseCode = + slack.message( + token: "mYSlacK70k3n", + color: "#ff0000", + channel: "#alerts", + text: "Carbon monoxide is at dangerous levels in the ${r.room}: ${r._value} ppm.", + ) + + return {r with sent: responseCode == 200} + }, + ) +``` + +{{< tabs-wrapper >}} +{{% tabs "small" %}} +[Input](#) +[Output](#) +Click to view output +{{% /tabs %}} +{{% tab-content %}} + +The following input represents the data filtered by the **warning** state. + +{{% note %}} +`_start` and `_stop` columns have been omitted. +{{% /note %}} + +| _time | _measurement | room | _field | _value | state | +| :------------------- | :----------- | :------ | :----- | -----: | :------ | +| 2022-01-01T18:00:00Z | home | Kitchen | co | 18 | warning | +| 2022-01-01T19:00:00Z | home | Kitchen | co | 22 | warning | +| 2022-01-01T20:00:00Z | home | Kitchen | co | 26 | warning | + +| _time | _measurement | room | _field | _value | state | +| :------------------- | :----------- | :---------- | :----- | -----: | :------ | +| 2022-01-01T19:00:00Z | home | Living Room | co | 14 | warning | +| 2022-01-01T20:00:00Z | home | Living Room | co | 17 | warning | + +{{% /tab-content %}} +{{% tab-content %}} + +The output includes a **sent** column indicating whether the message was sent. + +{{% note %}} +`_start` and `_stop` columns have been omitted.
+{{% /note %}} + +| _time | _measurement | room | _field | _value | state | sent | +| :------------------- | :----------- | :------ | :----- | -----: | :------ | :--- | +| 2022-01-01T18:00:00Z | home | Kitchen | co | 18 | warning | true | +| 2022-01-01T19:00:00Z | home | Kitchen | co | 22 | warning | true | +| 2022-01-01T20:00:00Z | home | Kitchen | co | 26 | warning | true | + +| _time | _measurement | room | _field | _value | state | sent | +| :------------------- | :----------- | :---------- | :----- | -----: | :------ | :--- | +| 2022-01-01T19:00:00Z | home | Living Room | co | 14 | warning | true | +| 2022-01-01T20:00:00Z | home | Living Room | co | 17 | warning | true | + +{{% /tab-content %}} +{{< /tabs-wrapper >}} + +With the results above, you would receive the following messages in Slack: + +> Carbon monoxide is at dangerous levels in the Kitchen: 18 ppm. +> Carbon monoxide is at dangerous levels in the Kitchen: 22 ppm. +> Carbon monoxide is at dangerous levels in the Living Room: 14 ppm. +> Carbon monoxide is at dangerous levels in the Kitchen: 26 ppm. +> Carbon monoxide is at dangerous levels in the Living Room: 17 ppm. + +{{% note %}} +You can also use the [InfluxDB checks and notifications system](/influxdb/v2.6/monitor-alert/) +as a user interface for configuring checks and alerting on data. +{{% /note %}} + +{{% /expand %}} +{{< /expand-wrapper >}} + +## Group data + +Use the [`group()` function](/{{< latest "flux" >}}/stdlib/universe/group/) to +regroup your data by specific column values in preparation for further processing. + +```js +from(bucket: "get-started") + |> range(start: 2022-01-01T08:00:00Z, stop: 2022-01-01T20:00:01Z) + |> filter(fn: (r) => r._measurement == "home") + |> group(columns: ["room", "_field"]) +``` + +{{% note %}} +Understanding data grouping and why it matters is important, but may be too much +for this "getting started" tutorial. 
+For more information about how data is grouped and why it matters, see the +[Flux data model](/{{< latest "flux" >}}/get-started/data-model/) documentation. +{{% /note %}} + +By default, `from()` returns data queried from InfluxDB grouped by series +(measurement, tags, and field key). +Each table in the returned stream of tables represents a group. +Each table contains the same values for the columns that data is grouped by. +This grouping is important as you [aggregate data](#aggregate-or-select-specific-data). + +### Group examples + +{{< expand-wrapper >}} +{{% expand "Group data by specific columns" %}} + +Using the [data written in "Get started writing to InfluxDB"](/influxdb/v2.6/get-started/write/#view-the-written-data): + +1. Query the `temp` and `hum` fields. +2. Use `group()` to group by only the `_field` column. + +```js +from(bucket: "get-started") + |> range(start: 2022-01-01T08:00:00Z, stop: 2022-01-01T10:00:01Z) + |> filter(fn: (r) => r._measurement == "home") + |> filter(fn: (r) => r._field == "temp" or r._field == "hum") + |> group(columns: ["_field"]) +``` + +{{< tabs-wrapper >}} +{{% tabs "small" %}} +[Input](#) +[Output](#) +Click to view output +{{% /tabs %}} +{{% tab-content %}} + +The following data is output from the last `filter()` and piped forward into `group()`: + +{{% note %}} +`_start` and `_stop` columns have been omitted. 
+{{% /note %}} + +{{% flux/group-key "[_measurement=home, room=Kitchen, _field=hum]" true %}} + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | :----- | +| 2022-01-01T08:00:00Z | home | Kitchen | hum | 35.9 | +| 2022-01-01T09:00:00Z | home | Kitchen | hum | 36.2 | +| 2022-01-01T10:00:00Z | home | Kitchen | hum | 36.1 | + +{{% flux/group-key "[_measurement=home, room=Living Room, _field=hum]" true %}} + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | :----- | +| 2022-01-01T08:00:00Z | home | Living Room | hum | 35.9 | +| 2022-01-01T09:00:00Z | home | Living Room | hum | 35.9 | +| 2022-01-01T10:00:00Z | home | Living Room | hum | 36 | + +{{% flux/group-key "[_measurement=home, room=Kitchen, _field=temp]" true %}} + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | :----- | +| 2022-01-01T08:00:00Z | home | Kitchen | temp | 21 | +| 2022-01-01T09:00:00Z | home | Kitchen | temp | 23 | +| 2022-01-01T10:00:00Z | home | Kitchen | temp | 22.7 | + +{{% flux/group-key "[_measurement=home, room=Living Room, _field=temp]" true %}} + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | :----- | +| 2022-01-01T08:00:00Z | home | Living Room | temp | 21.1 | +| 2022-01-01T09:00:00Z | home | Living Room | temp | 21.4 | +| 2022-01-01T10:00:00Z | home | Living Room | temp | 21.8 | + +{{% /tab-content %}} +{{% tab-content %}} + +When grouped by `_field`, all rows with the `temp` field will be in one table +and all the rows with the `hum` field will be in another. +`_measurement` and `room` columns no longer affect how rows are grouped. + +{{% note %}} +`_start` and `_stop` columns have been omitted. 
+{{% /note %}} + +{{% flux/group-key "[_field=hum]" true %}} + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | :----- | +| 2022-01-01T08:00:00Z | home | Kitchen | hum | 35.9 | +| 2022-01-01T09:00:00Z | home | Kitchen | hum | 36.2 | +| 2022-01-01T10:00:00Z | home | Kitchen | hum | 36.1 | +| 2022-01-01T08:00:00Z | home | Living Room | hum | 35.9 | +| 2022-01-01T09:00:00Z | home | Living Room | hum | 35.9 | +| 2022-01-01T10:00:00Z | home | Living Room | hum | 36 | + +{{% flux/group-key "[_field=temp]" true %}} + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | :----- | +| 2022-01-01T08:00:00Z | home | Kitchen | temp | 21 | +| 2022-01-01T09:00:00Z | home | Kitchen | temp | 23 | +| 2022-01-01T10:00:00Z | home | Kitchen | temp | 22.7 | +| 2022-01-01T08:00:00Z | home | Living Room | temp | 21.1 | +| 2022-01-01T09:00:00Z | home | Living Room | temp | 21.4 | +| 2022-01-01T10:00:00Z | home | Living Room | temp | 21.8 | + +{{% /tab-content %}} +{{< /tabs-wrapper >}} + +{{% /expand %}} + +{{% expand "Ungroup data" %}} + +Using the [data written in "Get started writing to InfluxDB"](/influxdb/v2.6/get-started/write/#view-the-written-data): + +1. Query the `temp` and `hum` fields. +2. Use `group()` without any parameters to "ungroup" data or group by no columns. + The default value of the `columns` parameter is an empty array (`[]`). + +```js +from(bucket: "get-started") + |> range(start: 2022-01-01T08:00:00Z, stop: 2022-01-01T10:00:01Z) + |> filter(fn: (r) => r._measurement == "home") + |> filter(fn: (r) => r._field == "temp" or r._field == "hum") + |> group() +``` + +{{< tabs-wrapper >}} +{{% tabs "small" %}} +[Input](#) +[Output](#) +Click to view output +{{% /tabs %}} +{{% tab-content %}} + +The following data is output from the last `filter()` and piped forward into `group()`: + +{{% note %}} +`_start` and `_stop` columns have been omitted. 
+{{% /note %}}
+
+{{% flux/group-key "[_measurement=home, room=Kitchen, _field=hum]" true %}}
+
+| _time                | _measurement | room        | _field | _value |
+| :------------------- | :----------- | :---------- | :----- | :----- |
+| 2022-01-01T08:00:00Z | home         | Kitchen     | hum    | 35.9   |
+| 2022-01-01T09:00:00Z | home         | Kitchen     | hum    | 36.2   |
+| 2022-01-01T10:00:00Z | home         | Kitchen     | hum    | 36.1   |
+
+{{% flux/group-key "[_measurement=home, room=Living Room, _field=hum]" true %}}
+
+| _time                | _measurement | room        | _field | _value |
+| :------------------- | :----------- | :---------- | :----- | :----- |
+| 2022-01-01T08:00:00Z | home         | Living Room | hum    | 35.9   |
+| 2022-01-01T09:00:00Z | home         | Living Room | hum    | 35.9   |
+| 2022-01-01T10:00:00Z | home         | Living Room | hum    | 36     |
+
+{{% flux/group-key "[_measurement=home, room=Kitchen, _field=temp]" true %}}
+
+| _time                | _measurement | room        | _field | _value |
+| :------------------- | :----------- | :---------- | :----- | :----- |
+| 2022-01-01T08:00:00Z | home         | Kitchen     | temp   | 21     |
+| 2022-01-01T09:00:00Z | home         | Kitchen     | temp   | 23     |
+| 2022-01-01T10:00:00Z | home         | Kitchen     | temp   | 22.7   |
+
+{{% flux/group-key "[_measurement=home, room=Living Room, _field=temp]" true %}}
+
+| _time                | _measurement | room        | _field | _value |
+| :------------------- | :----------- | :---------- | :----- | :----- |
+| 2022-01-01T08:00:00Z | home         | Living Room | temp   | 21.1   |
+| 2022-01-01T09:00:00Z | home         | Living Room | temp   | 21.4   |
+| 2022-01-01T10:00:00Z | home         | Living Room | temp   | 21.8   |
+
+{{% /tab-content %}}
+{{% tab-content %}}
+
+When ungrouped, data is returned in a single table.
+
+{{% note %}}
+`_start` and `_stop` columns have been omitted.
+{{% /note %}} + +{{% flux/group-key "[]" true %}} + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | -----: | +| 2022-01-01T08:00:00Z | home | Kitchen | hum | 35.9 | +| 2022-01-01T09:00:00Z | home | Kitchen | hum | 36.2 | +| 2022-01-01T10:00:00Z | home | Kitchen | hum | 36.1 | +| 2022-01-01T08:00:00Z | home | Kitchen | temp | 21 | +| 2022-01-01T09:00:00Z | home | Kitchen | temp | 23 | +| 2022-01-01T10:00:00Z | home | Kitchen | temp | 22.7 | +| 2022-01-01T08:00:00Z | home | Living Room | hum | 35.9 | +| 2022-01-01T09:00:00Z | home | Living Room | hum | 35.9 | +| 2022-01-01T10:00:00Z | home | Living Room | hum | 36 | +| 2022-01-01T08:00:00Z | home | Living Room | temp | 21.1 | +| 2022-01-01T09:00:00Z | home | Living Room | temp | 21.4 | +| 2022-01-01T10:00:00Z | home | Living Room | temp | 21.8 | + +{{% /tab-content %}} +{{< /tabs-wrapper >}} + +{{% /expand %}} +{{< /expand-wrapper >}} + +## Aggregate or select specific data + +Use Flux [aggregate](/{{< latest "flux" >}}/function-types/#aggregates) +or [selector](/{{< latest "flux" >}}/function-types/#selectors) functions to +return aggregate or selected values from **each** input table. + +```js +from(bucket: "get-started") + |> range(start: 2022-01-01T08:00:00Z, stop: 2022-01-01T20:00:01Z) + |> filter(fn: (r) => r._measurement == "home") + |> filter(fn: (r) => r._field == "co" or r._field == "hum" or r._field == "temp") + |> mean() +``` + +{{% note %}} +#### Aggregate over time + +If you want to query aggregate values over time, this is a form of +[downsampling](#downsample-data). +{{% /note %}} + +### Aggregate functions + +[Aggregate functions](/{{< latest "flux" >}}/function-types/#aggregates) drop +columns that are **not** in the [group key](/flux/v0.x/get-started/data-model/#group-key) +and return a single row for each input table with the aggregate value of that table. 
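To see why aggregates return one row per input table, the behavior can be sketched outside of Flux. The following Python sketch (using a small, illustrative subset of rows, not the tutorial's full dataset) partitions rows by a group key and applies a mean to each partition, mirroring how an aggregate collapses each table:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical rows shaped like the tutorial's "home" measurement
# (an illustrative subset, not the full dataset).
rows = [
    {"_time": "2022-01-01T08:00:00Z", "room": "Kitchen", "_field": "temp", "_value": 21.0},
    {"_time": "2022-01-01T09:00:00Z", "room": "Kitchen", "_field": "temp", "_value": 23.0},
    {"_time": "2022-01-01T08:00:00Z", "room": "Living Room", "_field": "temp", "_value": 21.1},
    {"_time": "2022-01-01T09:00:00Z", "room": "Living Room", "_field": "temp", "_value": 21.4},
]

# Partition rows into tables keyed by the group key (room, _field),
# mirroring how from() groups data by series.
tables = defaultdict(list)
for r in rows:
    tables[(r["room"], r["_field"])].append(r["_value"])

# An aggregate such as mean() collapses each table to a single row;
# columns not in the group key (like _time) are dropped.
result = {key: mean(values) for key, values in tables.items()}
```

Each entry in `result` corresponds to one output table: one aggregate value per group.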
+ +#### Aggregate examples + +{{< expand-wrapper >}} + +{{% expand "Calculate the average temperature for each room" %}} + +Using the [data written in "Get started writing to InfluxDB"](/influxdb/v2.6/get-started/write/#view-the-written-data): + +1. Query the `temp` field. By default, `from()` returns the data grouped by + `_measurement`, `room` and `_field`, so each table represents a room. +2. Use `mean()` to return the average temperature from each room. + +```js +from(bucket: "get-started") + |> range(start: 2022-01-01T14:00:00Z, stop: 2022-01-01T20:00:01Z) + |> filter(fn: (r) => r._measurement == "home") + |> filter(fn: (r) => r._field == "temp") + |> mean() +``` + +{{< tabs-wrapper >}} +{{% tabs "small" %}} +[Input](#) +[Output](#) +Click to view output +{{% /tabs %}} +{{% tab-content %}} + +{{% note %}} +`_start` and `_stop` columns have been omitted. +{{% /note %}} + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | -----: | +| 2022-01-01T14:00:00Z | home | Kitchen | temp | 22.8 | +| 2022-01-01T15:00:00Z | home | Kitchen | temp | 22.7 | +| 2022-01-01T16:00:00Z | home | Kitchen | temp | 22.4 | +| 2022-01-01T17:00:00Z | home | Kitchen | temp | 22.7 | +| 2022-01-01T18:00:00Z | home | Kitchen | temp | 23.3 | +| 2022-01-01T19:00:00Z | home | Kitchen | temp | 23.1 | +| 2022-01-01T20:00:00Z | home | Kitchen | temp | 22.7 | + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | -----: | +| 2022-01-01T14:00:00Z | home | Living Room | temp | 22.3 | +| 2022-01-01T15:00:00Z | home | Living Room | temp | 22.3 | +| 2022-01-01T16:00:00Z | home | Living Room | temp | 22.4 | +| 2022-01-01T17:00:00Z | home | Living Room | temp | 22.6 | +| 2022-01-01T18:00:00Z | home | Living Room | temp | 22.8 | +| 2022-01-01T19:00:00Z | home | Living Room | temp | 22.5 | +| 2022-01-01T20:00:00Z | home | Living Room | temp | 22.2 | + +{{% /tab-content %}} +{{% 
tab-content %}}
+
+{{% note %}}
+`_start` and `_stop` columns have been omitted.
+{{% /note %}}
+
+| _measurement | room    | _field | _value             |
+| :----------- | :------ | :----- | -----------------: |
+| home         | Kitchen | temp   | 22.814285714285713 |
+
+| _measurement | room        | _field | _value            |
+| :----------- | :---------- | :----- | ----------------: |
+| home         | Living Room | temp   | 22.44285714285714 |
+
+{{% /tab-content %}}
+{{< /tabs-wrapper >}}
+
+{{% /expand %}}
+
+{{% expand "Calculate the overall average temperature of all rooms" %}}
+
+Using the [data written in "Get started writing to InfluxDB"](/influxdb/v2.6/get-started/write/#view-the-written-data):
+
+1. Query the `temp` field.
+2. Use `group()` to **ungroup** the data into a single table. By default,
+   `from()` returns the data grouped by `_measurement`, `room`, and `_field`.
+   To get the overall average, you need to structure all results as a single table.
+3. Use `mean()` to return the average temperature.
+
+```js
+from(bucket: "get-started")
+    |> range(start: 2022-01-01T14:00:00Z, stop: 2022-01-01T20:00:01Z)
+    |> filter(fn: (r) => r._measurement == "home")
+    |> filter(fn: (r) => r._field == "temp")
+    |> group()
+    |> mean()
+```
+
+{{< tabs-wrapper >}}
+{{% tabs "small" %}}
+[Input](#)
+[Output](#)
+Click to view output
+{{% /tabs %}}
+{{% tab-content %}}
+
+The following input data represents the ungrouped data that is piped forward
+into `mean()`.
+
+{{% note %}}
+`_start` and `_stop` columns have been omitted.
+{{% /note %}} + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | -----: | +| 2022-01-01T14:00:00Z | home | Kitchen | temp | 22.8 | +| 2022-01-01T15:00:00Z | home | Kitchen | temp | 22.7 | +| 2022-01-01T16:00:00Z | home | Kitchen | temp | 22.4 | +| 2022-01-01T17:00:00Z | home | Kitchen | temp | 22.7 | +| 2022-01-01T18:00:00Z | home | Kitchen | temp | 23.3 | +| 2022-01-01T19:00:00Z | home | Kitchen | temp | 23.1 | +| 2022-01-01T20:00:00Z | home | Kitchen | temp | 22.7 | +| 2022-01-01T14:00:00Z | home | Living Room | temp | 22.3 | +| 2022-01-01T15:00:00Z | home | Living Room | temp | 22.3 | +| 2022-01-01T16:00:00Z | home | Living Room | temp | 22.4 | +| 2022-01-01T17:00:00Z | home | Living Room | temp | 22.6 | +| 2022-01-01T18:00:00Z | home | Living Room | temp | 22.8 | +| 2022-01-01T19:00:00Z | home | Living Room | temp | 22.5 | +| 2022-01-01T20:00:00Z | home | Living Room | temp | 22.2 | + +{{% /tab-content %}} +{{% tab-content %}} + +{{% note %}} +`_start` and `_stop` columns have been omitted. +{{% /note %}} + +| _value | +| -----------------: | +| 22.628571428571426 | + +{{% /tab-content %}} +{{< /tabs-wrapper >}} + +{{% /expand %}} + +{{% expand "Count the number of points reported per room across all fields" %}} + +Using the [data written in "Get started writing to InfluxDB"](/influxdb/v2.6/get-started/write/#view-the-written-data): + +1. Query all fields by simply filtering by the `home` measurement. +2. The fields in the `home` measurement are different types. + Use `toFloat()` to cast all field values to floats. +3. Use `group()` to group the data by `room`. +4. Use `count()` to return the number of rows in each input table. 
+ +```js +from(bucket: "get-started") + |> range(start: 2022-01-01T14:00:00Z, stop: 2022-01-01T20:00:01Z) + |> filter(fn: (r) => r._measurement == "home") + |> toFloat() + |> group(columns: ["room"]) + |> count() +``` + +##### Output + +{{% note %}} +`_start` and `_stop` columns have been omitted. +{{% /note %}} + +| room | _value | +| :------ | -----: | +| Kitchen | 21 | + +| room | _value | +| :---------- | -----: | +| Living Room | 21 | + +{{% /expand %}} + +{{< /expand-wrapper >}} + +{{% note %}} +#### Assign a new aggregate timestamp + +`_time` is generally not part of the group key and will be dropped when using +aggregate functions. To assign a new timestamp to aggregate points, duplicate +the `_start` or `_stop` column, which represent the query bounds, as the +new `_time` column. + +```js +from(bucket: "get-started") + |> range(start: 2022-01-01T14:00:00Z, stop: 2022-01-01T20:00:01Z) + |> filter(fn: (r) => r._measurement == "home") + |> filter(fn: (r) => r._field == "temp") + |> mean() + |> duplicate(column: "_stop", as: "_time") +``` +{{% /note %}} + +### Selector functions + +[Selector functions](/{{< latest "flux" >}}/function-types/#selectors) return +one or more columns from each input table and retain all columns and their values. + +#### Selector examples + +{{< expand-wrapper >}} + +{{% expand "Return the first temperature from each room" %}} + +Using the [data written in "Get started writing to InfluxDB"](/influxdb/v2.6/get-started/write/#view-the-written-data): + +1. Query the `temp` field. +2. Use [`first()`](/{{< latest "flux" >}}/stdlib/universe/first/) to return the + first row from each table. 
+ +```js +from(bucket: "get-started") + |> range(start: 2022-01-01T14:00:00Z, stop: 2022-01-01T20:00:01Z) + |> filter(fn: (r) => r._measurement == "home") + |> filter(fn: (r) => r._field == "temp") + |> first() +``` + +{{< tabs-wrapper >}} +{{% tabs "small" %}} +[Input](#) +[Output](#) +Click to view output +{{% /tabs %}} +{{% tab-content %}} + +{{% note %}} +`_start` and `_stop` columns have been omitted. +{{% /note %}} + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | -----: | +| 2022-01-01T14:00:00Z | home | Kitchen | temp | 22.8 | +| 2022-01-01T15:00:00Z | home | Kitchen | temp | 22.7 | +| 2022-01-01T16:00:00Z | home | Kitchen | temp | 22.4 | +| 2022-01-01T17:00:00Z | home | Kitchen | temp | 22.7 | +| 2022-01-01T18:00:00Z | home | Kitchen | temp | 23.3 | +| 2022-01-01T19:00:00Z | home | Kitchen | temp | 23.1 | +| 2022-01-01T20:00:00Z | home | Kitchen | temp | 22.7 | + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | -----: | +| 2022-01-01T14:00:00Z | home | Living Room | temp | 22.3 | +| 2022-01-01T15:00:00Z | home | Living Room | temp | 22.3 | +| 2022-01-01T16:00:00Z | home | Living Room | temp | 22.4 | +| 2022-01-01T17:00:00Z | home | Living Room | temp | 22.6 | +| 2022-01-01T18:00:00Z | home | Living Room | temp | 22.8 | +| 2022-01-01T19:00:00Z | home | Living Room | temp | 22.5 | +| 2022-01-01T20:00:00Z | home | Living Room | temp | 22.2 | + +{{% /tab-content %}} +{{% tab-content %}} + +{{% note %}} +`_start` and `_stop` columns have been omitted. 
+{{% /note %}} + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | -----: | +| 2022-01-01T14:00:00Z | home | Kitchen | temp | 22.8 | + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | -----: | +| 2022-01-01T14:00:00Z | home | Living Room | temp | 22.3 | + +{{% /tab-content %}} +{{< /tabs-wrapper >}} + +{{% /expand %}} + +{{% expand "Return the last temperature from each room" %}} + +Using the [data written in "Get started writing to InfluxDB"](/influxdb/v2.6/get-started/write/#view-the-written-data): + +1. Query the `temp` field. +2. Use [`last()`](/{{< latest "flux" >}}/stdlib/universe/last/) to return the + last row from each table. + +```js +from(bucket: "get-started") + |> range(start: 2022-01-01T14:00:00Z, stop: 2022-01-01T20:00:01Z) + |> filter(fn: (r) => r._measurement == "home") + |> filter(fn: (r) => r._field == "temp") + |> last() +``` + +{{< tabs-wrapper >}} +{{% tabs "small" %}} +[Input](#) +[Output](#) +Click to view output +{{% /tabs %}} +{{% tab-content %}} + +{{% note %}} +`_start` and `_stop` columns have been omitted. 
+{{% /note %}} + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | -----: | +| 2022-01-01T14:00:00Z | home | Kitchen | temp | 22.8 | +| 2022-01-01T15:00:00Z | home | Kitchen | temp | 22.7 | +| 2022-01-01T16:00:00Z | home | Kitchen | temp | 22.4 | +| 2022-01-01T17:00:00Z | home | Kitchen | temp | 22.7 | +| 2022-01-01T18:00:00Z | home | Kitchen | temp | 23.3 | +| 2022-01-01T19:00:00Z | home | Kitchen | temp | 23.1 | +| 2022-01-01T20:00:00Z | home | Kitchen | temp | 22.7 | + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | -----: | +| 2022-01-01T14:00:00Z | home | Living Room | temp | 22.3 | +| 2022-01-01T15:00:00Z | home | Living Room | temp | 22.3 | +| 2022-01-01T16:00:00Z | home | Living Room | temp | 22.4 | +| 2022-01-01T17:00:00Z | home | Living Room | temp | 22.6 | +| 2022-01-01T18:00:00Z | home | Living Room | temp | 22.8 | +| 2022-01-01T19:00:00Z | home | Living Room | temp | 22.5 | +| 2022-01-01T20:00:00Z | home | Living Room | temp | 22.2 | + +{{% /tab-content %}} +{{% tab-content %}} + +{{% note %}} +`_start` and `_stop` columns have been omitted. +{{% /note %}} + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | -----: | +| 2022-01-01T20:00:00Z | home | Kitchen | temp | 22.7 | + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | -----: | +| 2022-01-01T20:00:00Z | home | Living Room | temp | 22.2 | + +{{% /tab-content %}} +{{< /tabs-wrapper >}} + +{{% /expand %}} + +{{% expand "Return the maximum temperature from each room" %}} + +Using the [data written in "Get started writing to InfluxDB"](/influxdb/v2.6/get-started/write/#view-the-written-data): + +1. Query the `temp` field. +2. 
Use [`max()`](/{{< latest "flux" >}}/stdlib/universe/max/) to return the row + with the highest value in the `_value` column from each table. + +```js +from(bucket: "get-started") + |> range(start: 2022-01-01T14:00:00Z, stop: 2022-01-01T20:00:01Z) + |> filter(fn: (r) => r._measurement == "home") + |> filter(fn: (r) => r._field == "temp") + |> max() +``` + +{{< tabs-wrapper >}} +{{% tabs "small" %}} +[Input](#) +[Output](#) +Click to view output +{{% /tabs %}} +{{% tab-content %}} + +{{% note %}} +`_start` and `_stop` columns have been omitted. +{{% /note %}} + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | -----: | +| 2022-01-01T14:00:00Z | home | Kitchen | temp | 22.8 | +| 2022-01-01T15:00:00Z | home | Kitchen | temp | 22.7 | +| 2022-01-01T16:00:00Z | home | Kitchen | temp | 22.4 | +| 2022-01-01T17:00:00Z | home | Kitchen | temp | 22.7 | +| 2022-01-01T18:00:00Z | home | Kitchen | temp | 23.3 | +| 2022-01-01T19:00:00Z | home | Kitchen | temp | 23.1 | +| 2022-01-01T20:00:00Z | home | Kitchen | temp | 22.7 | + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | -----: | +| 2022-01-01T14:00:00Z | home | Living Room | temp | 22.3 | +| 2022-01-01T15:00:00Z | home | Living Room | temp | 22.3 | +| 2022-01-01T16:00:00Z | home | Living Room | temp | 22.4 | +| 2022-01-01T17:00:00Z | home | Living Room | temp | 22.6 | +| 2022-01-01T18:00:00Z | home | Living Room | temp | 22.8 | +| 2022-01-01T19:00:00Z | home | Living Room | temp | 22.5 | +| 2022-01-01T20:00:00Z | home | Living Room | temp | 22.2 | + +{{% /tab-content %}} +{{% tab-content %}} + +{{% note %}} +`_start` and `_stop` columns have been omitted. 
+{{% /note %}} + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | -----: | +| 2022-01-01T14:00:00Z | home | Kitchen | temp | 22.8 | + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | -----: | +| 2022-01-01T18:00:00Z | home | Living Room | temp | 22.8 | + +{{% /tab-content %}} +{{< /tabs-wrapper >}} + +{{% /expand %}} + +{{< /expand-wrapper >}} + +## Pivot data into a relational schema + +If coming from relational SQL or SQL-like query languages, such as InfluxQL, +the data model that Flux uses is different than what you're used to. +Flux returns multiple tables where each table contains a different field. +A "relational" schema structures each field as a column in each row. + +Use the [`pivot()` function](/{{< latest "flux" >}}/stdlib/universe/pivot/) to +pivot data into a "relational" schema based on timestamps. + +```js +from(bucket: "get-started") + |> range(start: 2022-01-01T14:00:00Z, stop: 2022-01-01T20:00:01Z) + |> filter(fn: (r) => r._measurement == "home") + |> filter(fn: (r) => r._field == "co" or r._field == "hum" or r._field == "temp") + |> filter(fn: (r) => r.room == "Kitchen") + |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value") +``` + +{{< expand-wrapper >}} +{{% expand "View input and pivoted output" %}} + +{{< tabs-wrapper >}} +{{% tabs "small" %}} +[Input](#) +[Output](#) +Click to view output +{{% /tabs %}} +{{% tab-content %}} + +{{% note %}} +`_start` and `_stop` columns have been omitted. 
+{{% /note %}} + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :------ | :----- | -----: | +| 2022-01-01T14:00:00Z | home | Kitchen | co | 1 | +| 2022-01-01T15:00:00Z | home | Kitchen | co | 3 | +| 2022-01-01T16:00:00Z | home | Kitchen | co | 7 | +| 2022-01-01T17:00:00Z | home | Kitchen | co | 9 | +| 2022-01-01T18:00:00Z | home | Kitchen | co | 18 | +| 2022-01-01T19:00:00Z | home | Kitchen | co | 22 | +| 2022-01-01T20:00:00Z | home | Kitchen | co | 26 | + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :------ | :----- | -----: | +| 2022-01-01T14:00:00Z | home | Kitchen | hum | 36.3 | +| 2022-01-01T15:00:00Z | home | Kitchen | hum | 36.2 | +| 2022-01-01T16:00:00Z | home | Kitchen | hum | 36 | +| 2022-01-01T17:00:00Z | home | Kitchen | hum | 36 | +| 2022-01-01T18:00:00Z | home | Kitchen | hum | 36.9 | +| 2022-01-01T19:00:00Z | home | Kitchen | hum | 36.6 | +| 2022-01-01T20:00:00Z | home | Kitchen | hum | 36.5 | + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :------ | :----- | -----: | +| 2022-01-01T14:00:00Z | home | Kitchen | temp | 22.8 | +| 2022-01-01T15:00:00Z | home | Kitchen | temp | 22.7 | +| 2022-01-01T16:00:00Z | home | Kitchen | temp | 22.4 | +| 2022-01-01T17:00:00Z | home | Kitchen | temp | 22.7 | +| 2022-01-01T18:00:00Z | home | Kitchen | temp | 23.3 | +| 2022-01-01T19:00:00Z | home | Kitchen | temp | 23.1 | +| 2022-01-01T20:00:00Z | home | Kitchen | temp | 22.7 | + +{{% /tab-content %}} +{{% tab-content %}} + +{{% note %}} +`_start` and `_stop` columns have been omitted. 
+{{% /note %}}
+
+| _time                | _measurement | room    |  co | hum  | temp |
+| :------------------- | :----------- | :------ | --: | ---: | ---: |
+| 2022-01-01T14:00:00Z | home         | Kitchen |   1 | 36.3 | 22.8 |
+| 2022-01-01T15:00:00Z | home         | Kitchen |   3 | 36.2 | 22.7 |
+| 2022-01-01T16:00:00Z | home         | Kitchen |   7 |   36 | 22.4 |
+| 2022-01-01T17:00:00Z | home         | Kitchen |   9 |   36 | 22.7 |
+| 2022-01-01T18:00:00Z | home         | Kitchen |  18 | 36.9 | 23.3 |
+| 2022-01-01T19:00:00Z | home         | Kitchen |  22 | 36.6 | 23.1 |
+| 2022-01-01T20:00:00Z | home         | Kitchen |  26 | 36.5 | 22.7 |
+
+{{% /tab-content %}}
+{{< /tabs-wrapper >}}
+
+{{% /expand %}}
+{{< /expand-wrapper >}}
+
+## Downsample data
+
+Downsampling data is a strategy that improves query performance and optimizes
+long-term data storage. Simply put, downsampling reduces the number of
+points returned by a query without losing the general trends in the data.
+
+_For more information about downsampling data, see
+[Downsample data](/influxdb/v2.6/process-data/common-tasks/downsample-data/)._
+
+The most common way to downsample data is by time intervals or "windows."
+For example, you may want to query the last hour of data and return the average
+value for every five-minute window.
+
+Use [`aggregateWindow()`](/{{< latest "flux" >}}/stdlib/universe/aggregatewindow/)
+to downsample data by specified time intervals:
+
+- Use the `every` parameter to specify the duration of each window.
+- Use the `fn` parameter to specify what [aggregate](/{{< latest "flux" >}}/function-types/#aggregates)
+  or [selector](/{{< latest "flux" >}}/function-types/#selectors) function
+  to apply to each window.
+- _(Optional)_ Use the `timeSrc` parameter to specify which column value to
+  use to create the new aggregate timestamp for each window.
+  The default is `_stop`.
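As a mental model (not part of the tutorial itself), `aggregateWindow()` behaves like partitioning points into fixed time intervals and aggregating each one, stamping each result with the window's stop time. A minimal Python sketch, using the Kitchen temperatures from this tutorial and ignoring how Flux truncates the final window at the range's stop time:

```python
from datetime import datetime, timedelta
from statistics import mean

# Illustrative Kitchen temperatures from the tutorial.
points = [
    (datetime(2022, 1, 1, 14), 22.8),
    (datetime(2022, 1, 1, 15), 22.7),
    (datetime(2022, 1, 1, 16), 22.4),
    (datetime(2022, 1, 1, 17), 22.7),
    (datetime(2022, 1, 1, 18), 23.3),
    (datetime(2022, 1, 1, 19), 23.1),
    (datetime(2022, 1, 1, 20), 22.7),
]

def aggregate_window(points, every, fn):
    """Assign each point to a half-open window [start, stop) and aggregate,
    stamping each result with the window stop time (like timeSrc: "_stop").
    Flux also truncates the last window at the range's stop; this sketch doesn't."""
    epoch = datetime(1970, 1, 1)
    windows = {}
    for t, v in points:
        offset = (t - epoch) // every        # index of the window containing t
        stop = epoch + (offset + 1) * every  # window stop boundary
        windows.setdefault(stop, []).append(v)
    return {stop: fn(vals) for stop, vals in sorted(windows.items())}

result = aggregate_window(points, timedelta(hours=2), mean)
```

With `every` set to two hours, the 14:00 and 15:00 points land in the window stamped 16:00, and so on, matching the per-window averages shown in the output below.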
+ +```js +from(bucket: "get-started") + |> range(start: 2022-01-01T14:00:00Z, stop: 2022-01-01T20:00:01Z) + |> filter(fn: (r) => r._measurement == "home") + |> filter(fn: (r) => r._field == "temp") + |> aggregateWindow(every: 2h, fn: mean) +``` + +{{< expand-wrapper >}} +{{% expand "View input and downsampled output" %}} + +{{< tabs-wrapper >}} +{{% tabs "small" %}} +[Input](#) +[Output](#) +Click to view output +{{% /tabs %}} +{{% tab-content %}} + +{{% note %}} +`_start` and `_stop` columns have been omitted. +{{% /note %}} + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | -----: | +| 2022-01-01T14:00:00Z | home | Kitchen | temp | 22.8 | +| 2022-01-01T15:00:00Z | home | Kitchen | temp | 22.7 | +| 2022-01-01T16:00:00Z | home | Kitchen | temp | 22.4 | +| 2022-01-01T17:00:00Z | home | Kitchen | temp | 22.7 | +| 2022-01-01T18:00:00Z | home | Kitchen | temp | 23.3 | +| 2022-01-01T19:00:00Z | home | Kitchen | temp | 23.1 | +| 2022-01-01T20:00:00Z | home | Kitchen | temp | 22.7 | + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | -----: | +| 2022-01-01T14:00:00Z | home | Living Room | temp | 22.3 | +| 2022-01-01T15:00:00Z | home | Living Room | temp | 22.3 | +| 2022-01-01T16:00:00Z | home | Living Room | temp | 22.4 | +| 2022-01-01T17:00:00Z | home | Living Room | temp | 22.6 | +| 2022-01-01T18:00:00Z | home | Living Room | temp | 22.8 | +| 2022-01-01T19:00:00Z | home | Living Room | temp | 22.5 | +| 2022-01-01T20:00:00Z | home | Living Room | temp | 22.2 | + +{{% /tab-content %}} +{{% tab-content %}} + +{{% note %}} +`_start` and `_stop` columns have been omitted. 
+{{% /note %}} + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | -----------------: | +| 2022-01-01T16:00:00Z | home | Kitchen | temp | 22.75 | +| 2022-01-01T18:00:00Z | home | Kitchen | temp | 22.549999999999997 | +| 2022-01-01T20:00:00Z | home | Kitchen | temp | 23.200000000000003 | +| 2022-01-01T20:00:01Z | home | Kitchen | temp | 22.7 | + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | -----------------: | +| 2022-01-01T16:00:00Z | home | Living Room | temp | 22.3 | +| 2022-01-01T18:00:00Z | home | Living Room | temp | 22.5 | +| 2022-01-01T20:00:00Z | home | Living Room | temp | 22.65 | +| 2022-01-01T20:00:01Z | home | Living Room | temp | 22.2 | + +{{% /tab-content %}} +{{< /tabs-wrapper >}} + +{{% /expand %}} +{{< /expand-wrapper >}} + +## Automate processing with InfluxDB tasks + +[InfluxDB tasks](/influxdb/v2.6/process-data/get-started/) are scheduled queries +that can perform any of the data processing operations described above. +Generally tasks then use the [`to()` function](/{{< latest "flux" >}}/stdlib/influxdata/influxdb/to/) +to write the processed result back to InfluxDB. 
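Because a task runs on a schedule, each run only needs to process the most recent interval, which is what `range(start: -task.every)` expresses: a window relative to the current time. A hypothetical Python sketch of that bookkeeping (the function name is illustrative, not part of any InfluxDB API):

```python
from datetime import datetime, timedelta, timezone

def task_window(every: timedelta, now: datetime):
    """range(start: -task.every) resolves to the window [now - every, now):
    each scheduled run re-queries only the most recent interval."""
    return now - every, now

# A daily task running at 2022-01-02T00:00:00Z queries the previous day.
start, stop = task_window(timedelta(days=1), datetime(2022, 1, 2, tzinfo=timezone.utc))
```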
+
+_For more information about creating and configuring tasks, see
+[Get started with InfluxDB tasks](/influxdb/v2.6/process-data/get-started/)._
+
+#### Example downsampling task
+
+```js
+option task = {
+    name: "Example task",
+    every: 1d,
+}
+
+from(bucket: "get-started")
+    |> range(start: -task.every)
+    |> filter(fn: (r) => r._measurement == "home")
+    |> aggregateWindow(every: 2h, fn: mean)
+    |> to(bucket: "get-started-downsampled")
+```
+{{< page-nav prev="/influxdb/v2.6/get-started/query/" next="/influxdb/v2.6/get-started/visualize/" keepTab=true >}}
diff --git a/content/influxdb/v2.6/get-started/query.md b/content/influxdb/v2.6/get-started/query.md
new file mode 100644
index 000000000..a3cdead0b
--- /dev/null
+++ b/content/influxdb/v2.6/get-started/query.md
@@ -0,0 +1,603 @@
+---
+title: Get started querying data
+seotitle: Query data | Get started with InfluxDB
+list_title: Query data
+description: >
+  Get started querying data in InfluxDB by learning about Flux and InfluxQL and
+  using tools like the InfluxDB UI, `influx` CLI, and InfluxDB API.
+menu:
+  influxdb_2_6:
+    name: Query data
+    parent: Get started
+    identifier: get-started-query-data
+weight: 102
+metadata: [3 / 5]
+related:
+  - /influxdb/v2.6/query-data/
+---
+
+InfluxDB supports many different tools for querying data, including:
+
+- InfluxDB user interface (UI)
+- [InfluxDB HTTP API](/influxdb/v2.6/reference/api/)
+- [`influx` CLI](/influxdb/v2.6/tools/influx-cli/)
+- [Chronograf](/{{< latest "Chronograf" >}}/)
+- [Grafana](/influxdb/v2.6/tools/grafana/)
+- [InfluxDB client libraries](/influxdb/v2.6/api-guide/client-libraries/)
+
+This tutorial walks you through the fundamentals of querying data in InfluxDB and
+focuses primarily on the two languages you can use to query your time series data:
+
+- **Flux**: A functional scripting language designed to query and process data
+  from InfluxDB and other data sources.
+- **InfluxQL**: A SQL-like query language designed to query time series data from
+  InfluxDB.
+
+{{% note %}}
+The examples in this section of the tutorial query the data written in the
+[Get started writing data](/influxdb/v2.6/get-started/write/#write-line-protocol-to-influxdb) section.
+{{% /note %}}
+
+###### On this page:
+- [Query data with Flux](#query-data-with-flux)
+  - [Flux query basics](#flux-query-basics)
+  - [Execute a Flux query](#execute-a-flux-query)
+- [Query data with InfluxQL](#query-data-with-influxql)
+  - [InfluxQL query basics](#influxql-query-basics)
+  - [Execute an InfluxQL query](#execute-an-influxql-query)
+
+---
+
+## Query data with Flux
+
+Flux is a functional scripting language that lets you query and process data
+from InfluxDB and [other data sources](/flux/v0.x/query-data/).
+
+{{% note %}}
+This is a brief introduction to writing Flux queries.
+For a more in-depth introduction, see [Get started with Flux](/{{< latest "flux" >}}/get-started/).
+{{% /note %}}
+
+### Flux query basics
+
+When querying InfluxDB with Flux, there are three primary functions you use:
+
+- [from()](/{{< latest "flux" >}}/stdlib/influxdata/influxdb/from/):
+  Queries data from an InfluxDB bucket.
+- [range()](/{{< latest "flux" >}}/stdlib/universe/range/):
+  Filters data based on time bounds. Flux requires "bounded" queries—queries
+  limited to a specific time range.
+- [filter()](/{{< latest "flux" >}}/stdlib/universe/filter/):
+  Filters data based on column values. Each row is represented by `r`
+  and each column is represented by a property of `r`.
+  You can apply multiple subsequent filters.
+
+  To see how `from()` structures data into rows and tables when returned from InfluxDB,
+  [view the data written in Get started writing to InfluxDB](/influxdb/v2.6/get-started/write/#view-the-written-data).
+
+  {{< expand-wrapper >}}
+{{% expand "Learn more about how `filter()` works" %}}
+
+[`filter()`](/{{< latest "flux" >}}/stdlib/universe/filter/) reads each row as a
+[record](/flux/v0.x/data-types/composite/record/) named `r`.
+In the `r` record, each key-value pair represents a column and its value.
+For example:
+
+```js
+r = {
+    _time: 2020-01-01T00:00:00Z,
+    _measurement: "home",
+    room: "Kitchen",
+    _field: "temp",
+    _value: 21.0,
+}
+```
+
+To filter rows, use [predicate expressions](/flux/v0.x/get-started/syntax-basics/#predicate-expressions)
+to evaluate the values of columns. Given the row record above:
+
+```js
+(r) => r._measurement == "home"                // Returns true
+(r) => r.room == "Kitchen"                     // Returns true
+(r) => r._field == "co"                        // Returns false
+(r) => r._field == "co" or r._field == "temp"  // Returns true
+(r) => r._value <= 20.0                        // Returns false
+```
+
+Rows that evaluate to `true` are included in the `filter()` output.
+Rows that evaluate to `false` are dropped from the `filter()` output.
+{{% /expand %}}
+{{< /expand-wrapper >}}
+
+#### Pipe-forward operator
+
+Flux uses the pipe-forward operator (`|>`) to pipe the output of one function
+into the next function as input.
+
+#### Query the example data
+
+The following Flux query returns the **co**, **hum**, and **temp** fields stored in
+the **home** measurement with timestamps **between 2022-01-01T08:00:00Z and 2022-01-01T20:00:01Z**.
+
+```js
+from(bucket: "get-started")
+    |> range(start: 2022-01-01T08:00:00Z, stop: 2022-01-01T20:00:01Z)
+    |> filter(fn: (r) => r._measurement == "home")
+    |> filter(fn: (r) => r._field == "co" or r._field == "hum" or r._field == "temp")
+```
+
+### Execute a Flux query
+
+Use the **InfluxDB UI**, **`influx` CLI**, or **InfluxDB API** to execute Flux queries.
+
+{{< tabs-wrapper >}}
+{{% tabs %}}
+[InfluxDB UI](#)
+[influx CLI](#)
+[InfluxDB API](#)
+{{% /tabs %}}
+
+{{% tab-content %}}
+
+
+1. Visit
+   {{% oss-only %}}[localhost:8086](http://localhost:8086){{% /oss-only %}}
+   {{% cloud-only %}}[cloud2.influxdata.com](https://cloud2.influxdata.com){{% /cloud-only %}}
+   in a browser to log in and access the InfluxDB UI.
+
+2. In the left navigation bar, click **Data Explorer**.
+
+{{< nav-icon "data-explorer" "v4" >}}
+
+3. The InfluxDB Data Explorer provides two options for querying data with Flux:
+
+    - [Query Builder](#query-builder) _(default)_:
+      Visual query builder that lets you select the time range,
+      measurement, tags, and fields to query.
+    - [Script Editor](#script-editor):
+      In-browser code editor for composing and running Flux scripts.
+
+    ---
+
+    #### Query builder
+
+    **To build and execute a Flux query with the query builder**:
+
+    1. In the **{{% caps %}}FROM{{% /caps %}}** column, select the bucket to query. For this tutorial,
+       select the **get-started** bucket.
+    2. In the next **filter** column, select **_measurement** from the
+       column dropdown menu, and then select the **home** measurement.
+    3. _(Optional)_ To query a specific field or fields, in the next **filter**
+       column, select **_field** from the column dropdown menu, and then
+       select the fields to query. In this tutorial, there are three
+       fields: **co**, **hum**, and **temp**.
+    4. _(Optional)_ To query by specific tag values, in the next **filter**
+       column, select the tag column from the column dropdown menu, and then
+       select the tag values to filter by. In this tutorial, the only
+       tag column is **room**.
+    5. _(Optional)_ In the **{{% caps %}}Aggregate Function{{% /caps %}}** pane,
+       select an aggregate or selector function to downsample the data.
+       The default aggregate function is `mean`.
+    6. In the time range dropdown menu, select **Custom Time Range**, and
+       select the following dates from the date selectors:
+
+        - **Start**: 2022-01-01 08:00:00
+        - **Stop**: 2022-01-01 20:00:01
+
+        _Note the addition of one second to the stop time. In Flux, stop
+        times are exclusive and will exclude points with that timestamp.
+        By adding one second, the query will include all points up to
+        2022-01-01 20:00:00_.
+
+    7. Click **{{% caps %}}Submit{{% /caps %}}** to execute the query with the
+       selected filters and operations and display the result.
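+
+    When you submit a query with an aggregate function selected, the builder
+    generates and runs Flux similar to the following sketch. This example is
+    illustrative only: the builder substitutes dashboard variables such as
+    `v.timeRangeStart`, `v.timeRangeStop`, and `v.windowPeriod` for your
+    selections, so the exact generated code may differ.
+
+    ```js
+    from(bucket: "get-started")
+        |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
+        |> filter(fn: (r) => r._measurement == "home")
+        |> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)
+        |> yield(name: "mean")
+    ```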
+
+    ---
+
+    #### Script editor
+
+    **To write and execute a Flux query with the script editor**:
+
+    1. In the Data Explorer, click **{{% caps %}}Script Editor{{% /caps %}}**.
+    2. Write your Flux query in the Script Editor text field.
+
+        _**Note**: You can either hand-write the functions or you can use the function list
+        to the right of the script editor to search for and inject functions._
+
+        1. Use `from()` and specify the bucket to query with the `bucket` parameter.
+           For this tutorial, query the **get-started** bucket.
+        2. Use `range()` to specify the time range to query. The `start`
+           parameter defines the earliest time to include in results.
+           The `stop` parameter specifies the latest time (exclusive) to
+           include in results.
+
+            - **start**: 2022-01-01T08:00:00Z
+            - **stop**: 2022-01-01T20:00:01Z
+
+            _Note the addition of one second to the stop time. In Flux, stop
+            times are exclusive and will exclude points with that timestamp.
+            By adding one second, the query will include all points up to
+            2022-01-01 20:00:00_.
+
+            If you want to use the start and stop times selected in the time
+            selection dropdown menu, use `v.timeRangeStart` and `v.timeRangeStop`
+            as the values for the `start` and `stop` parameters.
+
+        3. Use `filter()` to filter results by the **home** measurement.
+        4. _(Optional)_ Use `filter()` to filter results by a specific field.
+           In this tutorial, there are three fields: **co**, **hum**, and **temp**.
+        5. _(Optional)_ Use `filter()` to filter results by specific
+           tag values. In this tutorial, there is one tag, **room**, with two
+           potential values: **Living Room** or **Kitchen**.
+
+        ```js
+        from(bucket: "get-started")
+            |> range(start: 2022-01-01T08:00:00Z, stop: 2022-01-01T20:00:01Z)
+            |> filter(fn: (r) => r._measurement == "home")
+            |> filter(fn: (r) => r._field == "co" or r._field == "hum" or r._field == "temp")
+        ```
+
+    3. Click **{{% caps %}}Submit{{% /caps %}}** to execute the query with the
+       selected filters and operations and display the result.
+
+
+{{% /tab-content %}}
+{{% tab-content %}}
+
+
+1. If you haven't already, [download, install, and configure the `influx` CLI](/influxdb/v2.6/tools/influx-cli/).
+2. Use the [`influx query` command](/influxdb/v2.6/reference/cli/influx/query/)
+   to query InfluxDB using Flux.
+
+   **Provide the following**:
+
+   - String-encoded Flux query.
+   - [Connection and authentication credentials](/influxdb/v2.6/get-started/setup/?t=influx+CLI#configure-authentication-credentials)
+
+```sh
+influx query '
+from(bucket: "get-started")
+    |> range(start: 2022-01-01T08:00:00Z, stop: 2022-01-01T20:00:01Z)
+    |> filter(fn: (r) => r._measurement == "home")
+    |> filter(fn: (r) => r._field == "co" or r._field == "hum" or r._field == "temp")
+'
+```
+
+
+{{% /tab-content %}}
+{{% tab-content %}}
+
+
+To query data from InfluxDB using Flux and the InfluxDB HTTP API, send a request
+to the InfluxDB API [`/api/v2/query` endpoint](/influxdb/v2.6/api/#operation/PostQuery)
+using the `POST` request method.
+
+{{< api-endpoint endpoint="http://localhost:8086/api/v2/query" method="post" >}}
+
+Include the following with your request:
+
+- **Headers**:
+  - **Authorization**: Token
+  - **Content-Type**: application/vnd.flux
+  - **Accept**: application/csv
+  - _(Optional)_ **Accept-Encoding**: gzip
+- **Query parameters**:
+  - **org**: InfluxDB organization name
+- **Request body**: Flux query as plain text
+
+The following example uses cURL and the InfluxDB API to query data with Flux:
+
+```sh
+curl --request POST \
+"$INFLUX_HOST/api/v2/query?org=$INFLUX_ORG" \
+  --header "Authorization: Token $INFLUX_TOKEN" \
+  --header "Content-Type: application/vnd.flux" \
+  --header "Accept: application/csv" \
+  --data 'from(bucket: "get-started")
+    |> range(start: 2022-01-01T08:00:00Z, stop: 2022-01-01T20:00:01Z)
+    |> filter(fn: (r) => r._measurement == "home")
+    |> filter(fn: (r) => r._field == "co" or r._field == "hum" or r._field == "temp")
+  '
+```
+
+{{% note %}}
+The InfluxDB `/api/v2/query` endpoint returns query results in
+[annotated CSV](/influxdb/v2.6/reference/syntax/annotated-csv/).
+{{% /note %}}
+
+
+{{% /tab-content %}}
+{{< /tabs-wrapper >}}
+
+### Flux query results
+
+{{< expand-wrapper >}}
+{{% expand "View Flux query results" %}}
+
+{{% note %}}
+`_start` and `_stop` columns have been omitted.
+These columns, by default, represent the query time bounds and are added by `range()`.
+{{% /note %}} + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :------ | :----- | -----: | +| 2022-01-01T08:00:00Z | home | Kitchen | co | 0 | +| 2022-01-01T09:00:00Z | home | Kitchen | co | 0 | +| 2022-01-01T10:00:00Z | home | Kitchen | co | 0 | +| 2022-01-01T11:00:00Z | home | Kitchen | co | 0 | +| 2022-01-01T12:00:00Z | home | Kitchen | co | 0 | +| 2022-01-01T13:00:00Z | home | Kitchen | co | 1 | +| 2022-01-01T14:00:00Z | home | Kitchen | co | 1 | +| 2022-01-01T15:00:00Z | home | Kitchen | co | 3 | +| 2022-01-01T16:00:00Z | home | Kitchen | co | 7 | +| 2022-01-01T17:00:00Z | home | Kitchen | co | 9 | +| 2022-01-01T18:00:00Z | home | Kitchen | co | 18 | +| 2022-01-01T19:00:00Z | home | Kitchen | co | 22 | +| 2022-01-01T20:00:00Z | home | Kitchen | co | 26 | + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :------ | :----- | -----: | +| 2022-01-01T08:00:00Z | home | Kitchen | hum | 35.9 | +| 2022-01-01T09:00:00Z | home | Kitchen | hum | 36.2 | +| 2022-01-01T10:00:00Z | home | Kitchen | hum | 36.1 | +| 2022-01-01T11:00:00Z | home | Kitchen | hum | 36 | +| 2022-01-01T12:00:00Z | home | Kitchen | hum | 36 | +| 2022-01-01T13:00:00Z | home | Kitchen | hum | 36.5 | +| 2022-01-01T14:00:00Z | home | Kitchen | hum | 36.3 | +| 2022-01-01T15:00:00Z | home | Kitchen | hum | 36.2 | +| 2022-01-01T16:00:00Z | home | Kitchen | hum | 36 | +| 2022-01-01T17:00:00Z | home | Kitchen | hum | 36 | +| 2022-01-01T18:00:00Z | home | Kitchen | hum | 36.9 | +| 2022-01-01T19:00:00Z | home | Kitchen | hum | 36.6 | +| 2022-01-01T20:00:00Z | home | Kitchen | hum | 36.5 | + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :------ | :----- | -----: | +| 2022-01-01T08:00:00Z | home | Kitchen | temp | 21 | +| 2022-01-01T09:00:00Z | home | Kitchen | temp | 23 | +| 2022-01-01T10:00:00Z | home | Kitchen | temp | 22.7 | +| 2022-01-01T11:00:00Z | home | Kitchen | temp | 
22.4 | +| 2022-01-01T12:00:00Z | home | Kitchen | temp | 22.5 | +| 2022-01-01T13:00:00Z | home | Kitchen | temp | 22.8 | +| 2022-01-01T14:00:00Z | home | Kitchen | temp | 22.8 | +| 2022-01-01T15:00:00Z | home | Kitchen | temp | 22.7 | +| 2022-01-01T16:00:00Z | home | Kitchen | temp | 22.4 | +| 2022-01-01T17:00:00Z | home | Kitchen | temp | 22.7 | +| 2022-01-01T18:00:00Z | home | Kitchen | temp | 23.3 | +| 2022-01-01T19:00:00Z | home | Kitchen | temp | 23.1 | +| 2022-01-01T20:00:00Z | home | Kitchen | temp | 22.7 | + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | -----: | +| 2022-01-01T08:00:00Z | home | Living Room | co | 0 | +| 2022-01-01T09:00:00Z | home | Living Room | co | 0 | +| 2022-01-01T10:00:00Z | home | Living Room | co | 0 | +| 2022-01-01T11:00:00Z | home | Living Room | co | 0 | +| 2022-01-01T12:00:00Z | home | Living Room | co | 0 | +| 2022-01-01T13:00:00Z | home | Living Room | co | 0 | +| 2022-01-01T14:00:00Z | home | Living Room | co | 0 | +| 2022-01-01T15:00:00Z | home | Living Room | co | 1 | +| 2022-01-01T16:00:00Z | home | Living Room | co | 4 | +| 2022-01-01T17:00:00Z | home | Living Room | co | 5 | +| 2022-01-01T18:00:00Z | home | Living Room | co | 9 | +| 2022-01-01T19:00:00Z | home | Living Room | co | 14 | +| 2022-01-01T20:00:00Z | home | Living Room | co | 17 | + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | -----: | +| 2022-01-01T08:00:00Z | home | Living Room | hum | 35.9 | +| 2022-01-01T09:00:00Z | home | Living Room | hum | 35.9 | +| 2022-01-01T10:00:00Z | home | Living Room | hum | 36 | +| 2022-01-01T11:00:00Z | home | Living Room | hum | 36 | +| 2022-01-01T12:00:00Z | home | Living Room | hum | 35.9 | +| 2022-01-01T13:00:00Z | home | Living Room | hum | 36 | +| 2022-01-01T14:00:00Z | home | Living Room | hum | 36.1 | +| 2022-01-01T15:00:00Z | home | Living Room | hum | 36.1 | +| 2022-01-01T16:00:00Z 
| home | Living Room | hum | 36 |
+| 2022-01-01T17:00:00Z | home | Living Room | hum | 35.9 |
+| 2022-01-01T18:00:00Z | home | Living Room | hum | 36.2 |
+| 2022-01-01T19:00:00Z | home | Living Room | hum | 36.3 |
+| 2022-01-01T20:00:00Z | home | Living Room | hum | 36.4 |
+
+| _time | _measurement | room | _field | _value |
+| :------------------- | :----------- | :---------- | :----- | -----: |
+| 2022-01-01T08:00:00Z | home | Living Room | temp | 21.1 |
+| 2022-01-01T09:00:00Z | home | Living Room | temp | 21.4 |
+| 2022-01-01T10:00:00Z | home | Living Room | temp | 21.8 |
+| 2022-01-01T11:00:00Z | home | Living Room | temp | 22.2 |
+| 2022-01-01T12:00:00Z | home | Living Room | temp | 22.2 |
+| 2022-01-01T13:00:00Z | home | Living Room | temp | 22.4 |
+| 2022-01-01T14:00:00Z | home | Living Room | temp | 22.3 |
+| 2022-01-01T15:00:00Z | home | Living Room | temp | 22.3 |
+| 2022-01-01T16:00:00Z | home | Living Room | temp | 22.4 |
+| 2022-01-01T17:00:00Z | home | Living Room | temp | 22.6 |
+| 2022-01-01T18:00:00Z | home | Living Room | temp | 22.8 |
+| 2022-01-01T19:00:00Z | home | Living Room | temp | 22.5 |
+| 2022-01-01T20:00:00Z | home | Living Room | temp | 22.2 |
+
+{{% /expand %}}
+{{< /expand-wrapper >}}
+
+## Query data with InfluxQL
+
+InfluxQL is a SQL-like query language, similar in form to other SQL dialects,
+but designed specifically to query time series data from InfluxDB 0.x and 1.x.
+
+{{% note %}}
+#### Map databases and retention policies to buckets
+
+Because InfluxQL was developed for earlier versions of InfluxDB, it depends on
+**databases and retention policies** (DBRP), which have been replaced by
+[buckets](/influxdb/v2.6/get-started/#data-organization) in InfluxDB {{< current-version >}}.
+To use InfluxQL with InfluxDB {{< current-version >}}, first
+[map database and retention policy (DBRP) combinations to an InfluxDB bucket](/influxdb/v2.6/query-data/influxql/dbrp/).
+{{% /note %}}
+
+### InfluxQL query basics
+
+When querying InfluxDB with InfluxQL, the most basic query includes the following
+statements and clauses:
+
+- `SELECT`: Specify which fields and tags to query.
+- `FROM`: Specify the measurement to query.
+  Use the measurement name or a fully-qualified measurement name that includes
+  the database and retention policy. For example: `db.rp.measurement`.
+- `WHERE`: _(Optional)_ Filter data based on fields, tags, and time.
+
+The following InfluxQL query returns the **co**, **hum**, and **temp** fields and
+the **room** tag stored in the **home** measurement with timestamps
+**between 2022-01-01T08:00:00Z and 2022-01-01T20:00:00Z**.
+
+```sql
+SELECT co,hum,temp,room FROM "get-started".autogen.home WHERE time >= '2022-01-01T08:00:00Z' AND time <= '2022-01-01T20:00:00Z'
+```
+
+{{% note %}}
+These are just the fundamentals of the InfluxQL syntax.
+For more in-depth information, see the [InfluxQL documentation](/influxdb/v2.6/query-data/influxql/).
+{{% /note %}}
+
+### Execute an InfluxQL query
+
+Use the **`influx` CLI** or **InfluxDB API** to execute InfluxQL queries.
+
+{{< tabs-wrapper >}}
+{{% tabs %}}
+[InfluxDB UI](#)
+[influx CLI](#)
+[InfluxDB API](#)
+{{% /tabs %}}
+
+{{% tab-content %}}
+
+
+{{% note %}}
+#### The InfluxDB UI does not support InfluxQL
+
+The InfluxDB {{< current-version >}} UI does not provide a way to query data with InfluxQL.
+For a user interface that builds and executes InfluxQL queries, consider using
+[Chronograf](/influxdb/v2.6/tools/chronograf/) or
+[Grafana](/influxdb/v2.6/tools/grafana/) with InfluxDB {{< current-version >}}.
+{{% /note %}}
+
+
+{{% /tab-content %}}
+{{% tab-content %}}
+
+
+{{< cli/influx-creds-note >}}
+
+1. If you haven't already, [download, install, and configure the `influx` CLI](/influxdb/v2.6/tools/influx-cli/).
+2. Use the [`influx v1 shell` command](/influxdb/v2.6/reference/cli/influx/v1/shell/)
+   to start an InfluxQL shell and query InfluxDB using InfluxQL.
+   Provide the following:
+
+   - [Connection and authentication credentials](/influxdb/v2.6/get-started/setup/?t=influx+CLI#configure-authentication-credentials)
+
+   ```sh
+   influx v1 shell
+   ```
+
+3. Enter an InfluxQL query and press {{< keybind mac="return" other="Enter ↵" >}}.
+
+   ```sql
+   SELECT co,hum,temp,room FROM "get-started".autogen.home WHERE time >= '2022-01-01T08:00:00Z' AND time <= '2022-01-01T20:00:00Z'
+   ```
+
+
+{{% /tab-content %}}
+{{% tab-content %}}
+
+
+To query data from InfluxDB using InfluxQL and the InfluxDB HTTP API, send a request
+to the InfluxDB API [`/query` 1.x compatibility endpoint](/influxdb/v2.6/reference/api/influxdb-1x/query/)
+using the `POST` request method.
+
+{{< api-endpoint endpoint="http://localhost:8086/query" method="post" >}}
+
+Include the following with your request:
+
+- **Headers**:
+  - **Authorization**: Token
+  - **Accept**: application/json
+  - _(Optional)_ **Accept-Encoding**: gzip
+- **Query parameters**:
+  - **db**: Database to query.
+  - **rp**: Retention policy to query data from.
+  - **q**: InfluxQL query to execute.
+  - **epoch**: _(Optional)_ Return results with
+    [Unix timestamps](/influxdb/v2.6/reference/glossary/#unix-timestamp) of a
+    specified precision instead of [RFC3339 timestamps](/influxdb/v2.6/reference/glossary/#rfc3339-timestamp).
+    The following precisions are available:
+
+    - `ns` - nanoseconds
+    - `u` or `µ` - microseconds
+    - `ms` - milliseconds
+    - `s` - seconds
+    - `m` - minutes
+    - `h` - hours
+
+The following example uses cURL and the InfluxDB API to query data with InfluxQL:
+
+```sh
+curl --get "$INFLUX_HOST/query" \
+  --header "Authorization: Token $INFLUX_TOKEN" \
+  --data-urlencode "db=get-started" \
+  --data-urlencode "rp=autogen" \
+  --data-urlencode "q=SELECT co,hum,temp,room FROM home WHERE time >= '2022-01-01T08:00:00Z' AND time <= '2022-01-01T20:00:00Z'"
+```
+
+{{% note %}}
+The InfluxDB `/query` 1.x compatibility endpoint returns query results in JSON format.
+{{% /note %}}
+
+
+{{% /tab-content %}}
+{{< /tabs-wrapper >}}
+
+### InfluxQL query results
+
+{{< expand-wrapper >}}
+{{% expand "View InfluxQL query results" %}}
+
+| time | room | co | hum | temp |
+| :------------------- | :---------- | --: | ---: | ---: |
+| 2022-01-01T08:00:00Z | Kitchen | 0 | 35.9 | 21 |
+| 2022-01-01T08:00:00Z | Living Room | 0 | 35.9 | 21.1 |
+| 2022-01-01T09:00:00Z | Kitchen | 0 | 36.2 | 23 |
+| 2022-01-01T09:00:00Z | Living Room | 0 | 35.9 | 21.4 |
+| 2022-01-01T10:00:00Z | Kitchen | 0 | 36.1 | 22.7 |
+| 2022-01-01T10:00:00Z | Living Room | 0 | 36 | 21.8 |
+| 2022-01-01T11:00:00Z | Kitchen | 0 | 36 | 22.4 |
+| 2022-01-01T11:00:00Z | Living Room | 0 | 36 | 22.2 |
+| 2022-01-01T12:00:00Z | Kitchen | 0 | 36 | 22.5 |
+| 2022-01-01T12:00:00Z | Living Room | 0 | 35.9 | 22.2 |
+| 2022-01-01T13:00:00Z | Kitchen | 1 | 36.5 | 22.8 |
+| 2022-01-01T13:00:00Z | Living Room | 0 | 36 | 22.4 |
+| 2022-01-01T14:00:00Z | Kitchen | 1 | 36.3 | 22.8 |
+| 2022-01-01T14:00:00Z | Living Room | 0 | 36.1 | 22.3 |
+| 2022-01-01T15:00:00Z | Kitchen | 3 | 36.2 | 22.7 |
+| 2022-01-01T15:00:00Z | Living Room | 1 | 36.1 | 22.3 |
+| 2022-01-01T16:00:00Z | Kitchen | 7 | 36 | 22.4 |
+| 2022-01-01T16:00:00Z | Living Room | 4 | 36 | 22.4 |
+| 2022-01-01T17:00:00Z | Kitchen | 9 | 36 | 22.7 |
+| 2022-01-01T17:00:00Z | Living Room | 5 | 35.9 | 22.6 |
+| 2022-01-01T18:00:00Z | Kitchen | 18 | 36.9 | 23.3 |
+| 2022-01-01T18:00:00Z | Living Room | 9 | 36.2 | 22.8 |
+| 2022-01-01T19:00:00Z | Kitchen | 22 | 36.6 | 23.1 |
+| 2022-01-01T19:00:00Z | Living Room | 14 | 36.3 | 22.5 |
+| 2022-01-01T20:00:00Z | Kitchen | 26 | 36.5 | 22.7 |
+| 2022-01-01T20:00:00Z | Living Room | 17 | 36.4 | 22.2 |
+
+{{% /expand %}}
+{{< /expand-wrapper >}}
+
+**Congratulations!** You've learned the basics of querying data in InfluxDB.
+For a deep dive into all the ways you can query InfluxDB, see the
+[Query data in InfluxDB](/influxdb/v2.6/query-data/) section of the documentation.
+
+Let's move on to more advanced data processing queries and automating queries
+with InfluxDB tasks.
+
+{{< page-nav prev="/influxdb/v2.6/get-started/write/" next="/influxdb/v2.6/get-started/process/" keepTab=true >}}
diff --git a/content/influxdb/v2.6/get-started/setup.md b/content/influxdb/v2.6/get-started/setup.md
new file mode 100644
index 000000000..2f56c60e5
--- /dev/null
+++ b/content/influxdb/v2.6/get-started/setup.md
@@ -0,0 +1,457 @@
+---
+title: Set up InfluxDB
+seotitle: Set up InfluxDB | Get started with InfluxDB
+list_title: Set up InfluxDB
+description: >
+  Learn how to set up InfluxDB for the "Get started with InfluxDB" tutorial.
+menu:
+  influxdb_2_6:
+    name: Set up InfluxDB
+    parent: Get started
+    identifier: get-started-set-up
+weight: 101
+metadata: [1 / 5]
+related:
+  - /influxdb/v2.6/install/
+  - /influxdb/v2.6/reference/config-options/
+  - /influxdb/v2.6/security/tokens/
+  - /influxdb/v2.6/organizations/buckets/
+  - /influxdb/v2.6/tools/influx-cli/
+  - /influxdb/v2.6/reference/api/
+---
+
+As you get started with this tutorial, do the following to make sure everything
+you need is in place.
+
+1. If you haven't already, [download and install InfluxDB](/influxdb/v2.6/install/).
+ + Installation instructions depend on your operating system. + Be sure to go through the installation and initialization process fully. + +2. **Start InfluxDB**. + + Run the `influxd` daemon to start the InfluxDB service, HTTP API, and + user interface (UI). + + ```sh + influxd + ``` + + {{% note %}} +#### Configure InfluxDB + +There are multiple ways to custom-configure InfluxDB. +For information about what configuration options are available and how to set them, +see [InfluxDB configuration options](/influxdb/v2.6/reference/config-options/). + {{% /note %}} + + Once running, the InfluxDB UI is accessible at [localhost:8086](http://localhost:8086). + +3. {{< req text="(Optional)" color="magenta" >}} **Download, install, and configure the `influx` CLI**. + + The `influx` CLI provides a simple way to interact with InfluxDB from a + command line. For detailed installation and setup instructions, + see [Use the influx CLI](/influxdb/v2.6/tools/influx-cli/). + +4. {{< req text="(Optional)" color="magenta" >}} **Create an All Access API token.** + + + During the InfluxDB initialization process, you created a user and API token + that has permissions to manage everything in your InfluxDB instance. + This is known as an **Operator token**. While you can use your Operator token + to interact with InfluxDB, we recommend creating an **all access token** that + is scoped to an organization. + + Use the **InfluxDB UI**, **`influx` CLI**, or **InfluxDB API** to create an + all access token. + + {{< tabs-wrapper >}} +{{% tabs %}} +[InfluxDB UI](#) +[influx CLI](#) +[InfluxDB API](#) +{{% /tabs %}} + +{{% tab-content %}} + + +1. Visit + {{% oss-only %}}[localhost:8086](http://localhost:8086){{% /oss-only %}} + {{% cloud-only %}}[cloud2.influxdata.com](https://cloud2.influxdata.com){{% /cloud-only %}} + in a browser to log in and access the InfluxDB UI. + +2. Navigate to **Load Data** > **API Tokens** using the left navigation bar. +3. 
Click **+ {{% caps %}}Generate API token{{% /caps %}}** and select + **All Access API Token**. +4. Enter a description for the API token and click **{{< icon "check" >}} {{% caps %}}Save{{% /caps %}}**. +5. Copy the generated token and store it for safe keeping. + + +{{% /tab-content %}} +{{% tab-content %}} + + +1. If you haven't already, [download, install, and configure the `influx` CLI](/influxdb/v2.6/tools/influx-cli/). +2. Use the [`influx auth create` command](/influxdb/v2.6/reference/cli/influx/auth/create/) + to create an all access token. + + **Provide the following**: + + - `--all-access` flag + - `--host` flag with your [InfluxDB host URL](/influxdb/v2.6/reference/urls/) + - `-o, --org` or `--org-id` flags with your InfluxDB organization name or + [ID](/influxdb/v2.6/organizations/view-orgs/#view-your-organization-id) + - `-t, --token` flag with your Operator token + + ```sh + influx auth create \ + --all-access \ + --host http://localhost:8086 \ + --org \ + --token + ``` + +3. Copy the generated token and store it for safe keeping. + + +{{% /tab-content %}} +{{% tab-content %}} + + +Send a request to the InfluxDB API `/api/v2/authorizations` endpoint using the `POST` request method. + +{{< api-endpoint endpoint="http://localhost:8086/api/v2/authorizations" method="post" >}} + +Include the following with your request: + +- **Headers**: + - **Authorization**: Token + - **Content-Type**: application/json +- **Request body**: JSON body with the following properties: + - **status**: `"active"` + - **description**: API token description + - **orgID**: [InfluxDB organization ID](/influxdb/v2.6/organizations/view-orgs/#view-your-organization-id) + - **permissions**: Array of objects where each object represents permissions + for an InfluxDB resource type or a specific resource. Each permission contains the following properties: + - **action**: "read" or "write" + - **resource**: JSON object that represents the InfluxDB resource to grant + permission to. 
Each resource contains at least the following properties:
+      - **orgID**: [InfluxDB organization ID](/influxdb/v2.6/organizations/view-orgs/#view-your-organization-id)
+      - **type**: Resource type.
+        _For information about what InfluxDB resource types exist, use the
+        [`/api/v2/resources` endpoint](/influxdb/v2.6/api/#operation/GetResources)._
+
+The following example uses cURL and the InfluxDB API to generate an all access token:
+
+{{% truncate %}}
+```sh
+export INFLUX_HOST=http://localhost:8086
+export INFLUX_ORG_ID=
+export INFLUX_TOKEN=
+
+curl --request POST \
+"$INFLUX_HOST/api/v2/authorizations" \
+  --header "Authorization: Token $INFLUX_TOKEN" \
+  --header "Content-Type: application/json" \
+  --data '{
+    "status": "active",
+    "description": "All access token for get started tutorial",
+    "orgID": "'"$INFLUX_ORG_ID"'",
+    "permissions": [
+      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "authorizations"}},
+      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "authorizations"}},
+      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "buckets"}},
+      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "buckets"}},
+      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "dashboards"}},
+      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "dashboards"}},
+      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "orgs"}},
+      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "orgs"}},
+      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "sources"}},
+      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "sources"}},
+      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "tasks"}},
+      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "tasks"}},
+      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "telegrafs"}},
+      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "telegrafs"}},
+      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "users"}},
+      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "users"}},
+      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "variables"}},
+      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "variables"}},
+      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "scrapers"}},
+      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "scrapers"}},
+      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "secrets"}},
+      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "secrets"}},
+      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "labels"}},
+      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "labels"}},
+      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "views"}},
+      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "views"}},
+      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "documents"}},
+      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "documents"}},
+      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "notificationRules"}},
+      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "notificationRules"}},
+      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "notificationEndpoints"}},
+      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "notificationEndpoints"}},
+      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "checks"}},
+      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "checks"}},
+      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "dbrp"}},
+      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "dbrp"}},
+      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "notebooks"}},
+      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "notebooks"}},
+      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "annotations"}},
+      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "annotations"}},
+      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "remotes"}},
+      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "remotes"}},
+      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "replications"}},
+      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "replications"}}
+    ]
+  }
+'
+```
+{{% /truncate %}}
+
+The response body contains a JSON object with the following properties:
+
+- **id**: API Token ID
+- **token**: API Token ({{< req "Important" >}})
+- **status**: Token status
+- **description**: Token description
+- **orgID**: InfluxDB organization ID the token is associated with
+- **org**: InfluxDB organization name the token is associated with
+- **userID**: User ID the token is associated with
+- **user**: Username the token is associated with
+- **permissions**: List of permissions for organization resources
+
+**Copy the generated `token` and store it for safekeeping.**
+
+
+{{% /tab-content %}}
+   {{< /tabs-wrapper >}}
+
+   {{% note %}}
+We recommend using a password manager or a secret store to securely store
+sensitive tokens.
+   {{% /note %}}
+
+5. **Configure authentication credentials**.
+
+   As you go through this tutorial, interactions with InfluxDB {{< current-version >}}
+   require your InfluxDB **host**, **organization name or ID**, and your **API token**.
+   There are different methods for providing these credentials depending on
+   which client you use to interact with InfluxDB.
+
+   {{% note %}}
+When configuring your token, if you [created an all access token](#create-an-all-access-api-token),
+use that token to interact with InfluxDB. Otherwise, use your operator token.
+   {{% /note %}}
+
+   {{< tabs-wrapper >}}
+{{% tabs %}}
+[InfluxDB UI](#)
+[influx CLI](#)
+[InfluxDB API](#)
+{{% /tabs %}}
+
+{{% tab-content %}}
+
+
+When managing InfluxDB through the InfluxDB UI, authentication credentials are
+provided automatically using credentials associated with the user you log in with.
+
+
+{{% /tab-content %}}
+{{% tab-content %}}
+
+
+There are three ways to provide authentication credentials to the `influx` CLI:
+
+{{< expand-wrapper >}}
+{{% expand "CLI connection configurations (Recommended)" %}}
+
+The `influx` CLI lets you specify connection configuration presets that let
+you store and quickly switch between multiple sets of InfluxDB connection
+credentials. Use the [`influx config create` command](/influxdb/v2.6/reference/cli/influx/config/create/)
+to create a new CLI connection configuration. Include the following flags:
+
+- `-n, --config-name`: Connection configuration name. This example uses `get-started`.
+- `-u, --host-url`: [InfluxDB host URL](/influxdb/v2.6/reference/urls/).
+- `-o, --org`: InfluxDB organization name.
+- `-t, --token`: InfluxDB API token.
+
+```sh
+influx config create \
+  --config-name get-started \
+  --host-url http://localhost:8086 \
+  --org \
+  --token
+```
+
+_For more information about CLI connection configurations, see
+[Install and use the `influx` CLI](/influxdb/v2.6/tools/influx-cli/#set-up-the-influx-cli)._
+
+{{% /expand %}}
+
+{{% expand "Environment variables" %}}
+
+The `influx` CLI checks for specific environment variables and, if present,
+uses those environment variables to populate authentication credentials.
+Set the following environment variables in your command line session:
+
+- `INFLUX_HOST`: [InfluxDB host URL](/influxdb/v2.6/reference/urls/).
+- `INFLUX_ORG`: InfluxDB organization name. +- `INFLUX_ORG_ID`: InfluxDB [organization ID](/influxdb/v2.6/organizations/view-orgs/#view-your-organization-id). +- `INFLUX_TOKEN`: InfluxDB API token. + +```sh +export INFLUX_HOST=http://localhost:8086 +export INFLUX_ORG= +export INFLUX_ORG_ID= +export INFLUX_TOKEN= +``` + +{{% /expand %}} + +{{% expand "Command flags" %}} + +Use the following `influx` CLI flags to provide required credentials to commands: + +- `--host`: [InfluxDB host URL](/influxdb/v2.6/reference/urls/). +- `-o`, `--org` or `--org-id`: InfluxDB organization name or + [ID](/influxdb/v2.6/organizations/view-orgs/#view-your-organization-id). +- `-t`, `--token`: InfluxDB API token. + +{{% /expand %}} +{{< /expand-wrapper >}} + +{{% note %}} +All `influx` CLI examples in this getting started tutorial assume your InfluxDB +**host**, **organization**, and **token** are provided by either the +[active `influx` CLI configuration](/influxdb/v2.6/reference/cli/influx/#provide-required-authentication-credentials) +or by environment variables. +{{% /note %}} + + +{{% /tab-content %}} +{{% tab-content %}} + + +When using the InfluxDB API, provide the required connection credentials in the +following ways: + +- **InfluxDB host**: The domain and port to send HTTP(S) requests to. +- **InfluxDB API Token**: Include an `Authorization` header that uses either + `Bearer` or `Token` scheme and your InfluxDB API token. For example: + `Authorization: Bearer 0xxx0o0XxXxx00Xxxx000xXXxoo0==`. +- **InfluxDB organization name or ID**: Depending on the API endpoint used, pass + this as part of the URL path, query string, or in the request body. + +All API examples in this tutorial use **cURL** from a command line. +To provide all the necessary credentials to the example cURL commands, set +the following environment variables in your command line session. 
+ +```sh +export INFLUX_HOST=http://localhost:8086 +export INFLUX_ORG= +export INFLUX_ORG_ID= +export INFLUX_TOKEN= +``` + +{{% /tab-content %}} + {{< /tabs-wrapper >}} + +6. {{< req text="(Optional)" color="magenta" >}} **Create a bucket**. + + In the InfluxDB initialization process, you created a bucket. + You can use that bucket or create a new one specifically for this getting + started tutorial. All examples in this tutorial assume a bucket named + _get-started_. + + Use the **InfluxDB UI**, **`influx` CLI**, or **InfluxDB API** to create a + new bucket. + + {{< tabs-wrapper >}} +{{% tabs %}} +[InfluxDB UI](#) +[influx CLI](#) +[InfluxDB API](#) +{{% /tabs %}} + +{{% tab-content %}} + + +1. Visit + {{% oss-only %}}[localhost:8086](http://localhost:8086){{% /oss-only %}} + {{% cloud-only %}}[cloud2.influxdata.com](https://cloud2.influxdata.com){{% /cloud-only %}} + in a browser to log in and access the InfluxDB UI. + +2. Navigate to **Load Data** > **Buckets** using the left navigation bar. +3. Click **+ {{< caps >}}Create bucket{{< /caps >}}**. +4. Provide a bucket name (get-started) and select {{% caps %}}Never{{% /caps %}} + to create a bucket with an infinite [retention period](/influxdb/v2.6/reference/glossary/#retention-period). +5. Click **{{< caps >}}Create{{< /caps >}}**. + + +{{% /tab-content %}} +{{% tab-content %}} + + +1. If you haven't already, [download, install, and configure the `influx` CLI](/influxdb/v2.6/tools/influx-cli/). +2. Use the [`influx bucket create` command](/influxdb/v2.6/reference/cli/influx/bucket/create/) + to create a new bucket. + + **Provide the following**: + + - `-n, --name` flag with the bucket name. 
+   - [Connection and authentication credentials](#configure-authentication-credentials)
+
+   ```sh
+   influx bucket create --name get-started
+   ```
+
+
+{{% /tab-content %}}
+{{% tab-content %}}
+
+
+To create a bucket using the InfluxDB HTTP API, send a request to
+the InfluxDB API `/api/v2/buckets` endpoint using the `POST` request method.
+
+{{< api-endpoint endpoint="http://localhost:8086/api/v2/buckets" method="post" >}}
+
+Include the following with your request:
+
+- **Headers**:
+  - **Authorization**: Token `INFLUX_TOKEN`
+  - **Content-Type**: `application/json`
+- **Request body**: JSON object with the following properties:
+  - **orgID**: InfluxDB organization ID
+  - **name**: Bucket name
+  - **retentionRules**: List of retention rule objects that define the bucket's retention period.
+    Each retention rule object has the following properties:
+    - **type**: `"expire"`
+    - **everySeconds**: Retention period duration in seconds.
+      `0` indicates the retention period is infinite.
+
+```sh
+export INFLUX_HOST=http://localhost:8086
+export INFLUX_ORG_ID=
+export INFLUX_TOKEN=
+
+curl --request POST \
+"$INFLUX_HOST/api/v2/buckets" \
+  --header "Authorization: Token $INFLUX_TOKEN" \
+  --header "Content-Type: application/json" \
+  --data '{
+    "orgID": "'"$INFLUX_ORG_ID"'",
+    "name": "get-started",
+    "retentionRules": [
+      {
+        "type": "expire",
+        "everySeconds": 0
+      }
+    ]
+  }'
+```
+
+{{% /tab-content %}}
+   {{< /tabs-wrapper >}}
+
+{{< page-nav prev="/influxdb/v2.6/get-started/" next="/influxdb/v2.6/get-started/write/" keepTab=true >}}
diff --git a/content/influxdb/v2.6/get-started/visualize.md b/content/influxdb/v2.6/get-started/visualize.md
new file mode 100644
index 000000000..3d64e738b
--- /dev/null
+++ b/content/influxdb/v2.6/get-started/visualize.md
@@ -0,0 +1,186 @@
+---
+title: Get started visualizing data
+seotitle: Visualize data | Get started with InfluxDB
+list_title: Visualize data
+description: >
+  ...
+menu:
+  influxdb_2_6:
+    name: Visualize data
+    parent: Get started
+    identifier: get-started-visualize-data
+weight: 104
+metadata: [5 / 5]
+related:
+  - /influxdb/v2.6/visualize-data/
+  - /influxdb/v2.6/visualize-data/visualization-types/
+  - /influxdb/v2.6/tools/chronograf/
+  - /influxdb/v2.6/tools/grafana/
+---
+
+There are many tools you can use to visualize your time series data, including the
+InfluxDB user interface (UI), [Chronograf](/influxdb/v2.6/tools/chronograf/), and
+[Grafana](/influxdb/v2.6/tools/grafana/).
+This tutorial walks you through using the **InfluxDB UI** to create a simple dashboard.
+
+Dashboards are a powerful way of displaying time series data and can help to
+identify trends and anomalies. A dashboard is composed of one or more
+dashboard cells. A **dashboard cell** visualizes the results of a query using
+one of the available [visualization types](/influxdb/v2.6/visualize-data/visualization-types/).
+
+- [Create a dashboard](#create-a-dashboard)
+- [Create dashboard cells](#create-dashboard-cells)
+- [Create and use dashboard variables](#create-and-use-dashboard-variables)
+  - [Create a custom dashboard variable](#create-a-custom-dashboard-variable)
+  - [Use a custom dashboard variable](#use-a-custom-dashboard-variable)
+
+## Create a dashboard
+
+1. With InfluxDB running, visit [localhost:8086](http://localhost:8086) in your
+   browser to access the InfluxDB UI.
+2. Log in and select **Dashboards** in the left navigation bar.
+
+   {{< nav-icon "dashboards" >}}
+
+3. Click **+ {{% caps %}}Create Dashboard{{% /caps %}}** and select **New Dashboard**.
+4. Click _**Name this Dashboard**_ and provide a name for the dashboard.
+   For this tutorial, we'll use **"Getting Started Dashboard"**.
+
+## Create dashboard cells
+
+With your new dashboard created and named, add a new dashboard cell:
+
+1. Click **{{< icon "add-cell" >}} {{% caps %}}Add Cell{{% /caps %}}**.
+2. Click _**Name this Cell**_ and provide a name for the cell.
+   For this tutorial, we'll use **"Room temperature"**.
+3. _(Optional)_ Select the visualization type from the visualization drop-down menu.
+   There are many different [visualization types](/influxdb/v2.6/visualize-data/visualization-types/)
+   available.
+   For this tutorial, use the default **Graph** visualization.
+4. Use the query time range selector to select an absolute time range that
+   covers the time range of the
+   [data written in "Get started writing to InfluxDB"](/influxdb/v2.6/get-started/write/#view-the-written-data):
+   **2022-01-01T08:00:00Z** to **2022-01-01T20:00:01Z**.
+
+    1. The query time range selector defaults to querying data from the last hour
+       (**{{< icon "clock" >}} Past 1h**).
+       Click the time range selector drop-down menu and select **Custom Time Range**.
+
+        {{< expand-wrapper >}}
+        {{% expand "View time range selector" %}}
+{{< img-hd src="/img/influxdb/2-4-get-started-visualize-time-range.png" alt="InfluxDB time range selector" />}}
+        {{% /expand %}}
+        {{< /expand-wrapper >}}
+
+    2. Use the date picker to select the start and stop date and time or manually
+       enter the following start and stop times:
+
+        - **Start**: 2022-01-01 08:00:00
+        - **Stop**: 2022-01-01 20:00:01
+
+    3. Click **{{% caps %}}Apply Time Range{{% /caps %}}**.
+
+5. Use the **Query Builder** to select the measurement, fields, and tags to query:
+
+    1. In the **{{% caps %}}From{{% /caps %}}** column, select the **get-started** bucket.
+    2. In the **Filter** column, select the **home** measurement.
+    3. In the next **Filter** column, select the **temp** field.
+    4. In the next **Filter** column, select the **room** tag and the **Kitchen** tag value.
+
+6. Click **{{% caps %}}Submit{{% /caps %}}** to run the query and visualize the
+   results.
+
+   {{< img-hd src="/img/influxdb/2-4-get-started-visualize-query-builder.png" alt="InfluxDB Query Builder" />}}
+
+7. Click **{{< icon "check" >}}** to save the cell and return to the dashboard.
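+
+Behind the scenes, the Query Builder steps above generate a Flux query roughly
+like the following (a sketch--the query the builder produces may differ slightly):
+
+```js
+from(bucket: "get-started")
+    |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
+    |> filter(fn: (r) => r["_measurement"] == "home")
+    |> filter(fn: (r) => r["_field"] == "temp")
+    |> filter(fn: (r) => r["room"] == "Kitchen")
+    |> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)
+    |> yield(name: "mean")
+```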
+
+## Create and use dashboard variables
+
+InfluxDB dashboard cells use **dashboard variables** to dynamically change
+specific parts of cell queries.
+The query builder automatically builds queries using the following
+[predefined dashboard variables](/influxdb/v2.6/visualize-data/variables/#predefined-dashboard-variables),
+each controlled by selections in your dashboard:
+
+- `v.timeRangeStart`: Start time of the queried time range specified by the time range selector.
+- `v.timeRangeStop`: Stop time of the queried time range specified by the time range selector.
+- `v.windowPeriod`: Window period used to downsample data to one point per pixel in
+  a cell visualization. The value of this variable is determined by the pixel-width of the cell.
+
+### Create a custom dashboard variable
+
+Let's create a custom dashboard variable that we can use to change the room
+displayed by your dashboard cell.
+
+1. Select **Settings > Variables** in the left navigation bar.
+
+   {{< nav-icon "settings" >}}
+
+2. Click **+ {{% caps %}}Create Variable{{% /caps %}}** and select **New Variable**.
+3. Name your variable. For this tutorial, name the variable **"room"**.
+4. Select the default **Query** dashboard variable type.
+   This variable type uses the results of a query to populate the list of potential
+   variable values. _For information about the other dashboard variable types,
+   see [Variable types](/influxdb/v2.6/visualize-data/variables/variable-types/)._
+5. Enter the following Flux query to return all the different `room` tag values
+   in your `get-started` bucket from the [Unix epoch](/influxdb/v2.6/reference/glossary/#unix-timestamp).
+
+   ```js
+   import "influxdata/influxdb/schema"
+
+   schema.tagValues(bucket: "get-started", tag: "room", start: time(v: 0))
+   ```
+
+6. Click **{{% caps %}}Create Variable{{% /caps %}}**.
+
+### Use a custom dashboard variable
+
+1. Navigate to your **Getting Started Dashboard** by clicking **Dashboards** in
+   the left navigation bar and then clicking the name of your dashboard.
+
+   {{< nav-icon "dashboards" >}}
+
+2. Click the **{{< icon "gear" >}}** on the **Room temperature** cell and select
+   **{{< icon "pencil" >}} Configure**.
+3. Click **{{% caps %}}Script Editor{{% /caps %}}** to edit the Flux query
+   directly.
+4. On line 5 of the Flux query, replace `"Kitchen"` with `v.room` to use the
+   selected value of the `room` dashboard variable.
+
+   ```js
+   from(bucket: "get-started")
+       |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
+       |> filter(fn: (r) => r["_measurement"] == "home")
+       |> filter(fn: (r) => r["_field"] == "temp")
+       |> filter(fn: (r) => r["room"] == v.room)
+       |> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)
+       |> yield(name: "mean")
+   ```
+
+5. Click **{{< icon "check" >}}** to save the cell and return to the dashboard.
+6. Refresh the browser to reload the dashboard.
+7. Use the **room variable** drop-down menu to select the room to display
+   recorded temperatures from.
+
+   {{< img-hd src="/img/influxdb/2-4-get-started-visualize-variable-select.png" alt="InfluxDB dashboard variable selection" />}}
+
+_For more information about creating custom dashboard variables, see
+[Use and manage dashboard variables](/influxdb/v2.6/visualize-data/variables/)._
+
+{{< page-nav prev="/influxdb/v2.6/get-started/process/" >}}
+
+---
+
+## Congratulations!
+
+You have walked through the
+[basics of setting up, writing, querying, processing, and visualizing](/influxdb/v2.6/get-started/)
+data with InfluxDB {{< current-version >}}.
+Feel free to dive deeper into each of these topics:
+
+- [Write data to InfluxDB](/influxdb/v2.6/write-data/)
+- [Query data in InfluxDB](/influxdb/v2.6/query-data/)
+- [Process data with InfluxDB](/influxdb/v2.6/process-data/)
+- [Visualize data with the InfluxDB UI](/influxdb/v2.6/visualize-data/)
+
+If you have questions as you're getting started, reach out to us using the
+available [Support and feedback](#bug-reports-and-feedback) channels.
diff --git a/content/influxdb/v2.6/get-started/write.md b/content/influxdb/v2.6/get-started/write.md
new file mode 100644
index 000000000..0bb70ff4c
--- /dev/null
+++ b/content/influxdb/v2.6/get-started/write.md
@@ -0,0 +1,394 @@
+---
+title: Get started writing data
+seotitle: Write data | Get started with InfluxDB
+list_title: Write data
+description: >
+  Get started writing data to InfluxDB by learning about line protocol and using
+  tools like the InfluxDB UI, `influx` CLI, and InfluxDB API.
+menu:
+  influxdb_2_6:
+    name: Write data
+    parent: Get started
+    identifier: get-started-write-data
+weight: 101
+metadata: [2 / 5]
+related:
+  - /influxdb/v2.6/write-data/
+  - /influxdb/v2.6/write-data/best-practices/
+  - /influxdb/v2.6/reference/syntax/line-protocol/
+  - /{{< latest "telegraf" >}}/
+---
+
+InfluxDB provides many different options for ingesting or writing data, including
+the following:
+
+- InfluxDB user interface (UI)
+- [InfluxDB HTTP API](/influxdb/v2.6/reference/api/)
+- [`influx` CLI](/influxdb/v2.6/tools/influx-cli/)
+- [Telegraf](/{{< latest "telegraf" >}}/)
+- {{% cloud-only %}}[MQTT](/influxdb/cloud/write-data/no-code/native-subscriptions/){{% /cloud-only %}}
+- [InfluxDB client libraries](/influxdb/v2.6/api-guide/client-libraries/)
+
+This tutorial walks you through the fundamentals of using **line protocol** to write
+data to InfluxDB. Tools like Telegraf and InfluxDB client libraries build the
+line protocol for you, but it's good to understand how line protocol works.
+
+## Line protocol
+
+All data written to InfluxDB is written using **line protocol**, a text-based
+format that lets you provide the necessary information to write a data point to InfluxDB.
+_This tutorial covers the basics of line protocol, but for detailed information,
+see the [Line protocol reference](/influxdb/v2.6/reference/syntax/line-protocol/)._
+
+### Line protocol elements
+
+Each line of line protocol contains the following elements:
+
+{{< req type="key" >}}

+- {{< req "\*" >}} **measurement**: String that identifies the [measurement](/influxdb/v2.6/reference/glossary/#measurement) to store the data in.
+- **tag set**: Comma-delimited list of key-value pairs, each representing a tag.
+  Tag keys and values are unquoted strings. _Spaces, commas, and equal characters must be escaped._
+- {{< req "\*" >}} **field set**: Comma-delimited list of key-value pairs, each representing a field.
+  Field keys are unquoted strings. _Spaces and commas must be escaped._
+  Field values can be [strings](/influxdb/v2.6/reference/syntax/line-protocol/#string) (quoted),
+  [floats](/influxdb/v2.6/reference/syntax/line-protocol/#float),
+  [integers](/influxdb/v2.6/reference/syntax/line-protocol/#integer),
+  [unsigned integers](/influxdb/v2.6/reference/syntax/line-protocol/#uinteger),
+  or [booleans](/influxdb/v2.6/reference/syntax/line-protocol/#boolean).
+- **timestamp**: [Unix timestamp](/influxdb/v2.6/reference/syntax/line-protocol/#unix-timestamp)
+  associated with the data. InfluxDB supports up to nanosecond precision.
+  _If the precision of the timestamp is not in nanoseconds, you must specify the
+  precision when writing the data to InfluxDB._
+
+#### Line protocol element parsing
+
+- **measurement**: Everything before the _first unescaped comma before the first whitespace_.
+- **tag set**: Key-value pairs between the _first unescaped comma_ and the _first unescaped whitespace_.
+- **field set**: Key-value pairs between the _first and second unescaped whitespaces_.
+- **timestamp**: Integer value after the _second unescaped whitespace_. +- Lines are separated by the newline character (`\n`). + Line protocol is whitespace sensitive. + +--- + +{{< influxdb/line-protocol >}} + +--- + +_For schema design recommendations, see [InfluxDB schema design](/influxdb/v2.6/write-data/best-practices/schema-design/)._ + +## Construct line protocol + +With a basic understanding of line protocol, you can now construct line protocol +and write data to InfluxDB. +Consider a use case where you collect data from sensors in your home. +Each sensor collects temperature, humidity, and carbon monoxide readings. +To collect this data, use the following schema: + +- **measurement**: `home` + - **tags** + - `room`: Living Room or Kitchen + - **fields** + - `temp`: temperature in °C (float) + - `hum`: percent humidity (float) + - `co`: carbon monoxide in parts per million (integer) + - **timestamp**: Unix timestamp in _second_ precision + +Data is collected hourly beginning at 2022-01-01T08:00:00Z (UTC) until 2022-01-01T20:00:00Z (UTC). 
+The resulting line protocol would look something like the following: + +##### Home sensor data line protocol +```sh +home,room=Living\ Room temp=21.1,hum=35.9,co=0i 1641024000 +home,room=Kitchen temp=21.0,hum=35.9,co=0i 1641024000 +home,room=Living\ Room temp=21.4,hum=35.9,co=0i 1641027600 +home,room=Kitchen temp=23.0,hum=36.2,co=0i 1641027600 +home,room=Living\ Room temp=21.8,hum=36.0,co=0i 1641031200 +home,room=Kitchen temp=22.7,hum=36.1,co=0i 1641031200 +home,room=Living\ Room temp=22.2,hum=36.0,co=0i 1641034800 +home,room=Kitchen temp=22.4,hum=36.0,co=0i 1641034800 +home,room=Living\ Room temp=22.2,hum=35.9,co=0i 1641038400 +home,room=Kitchen temp=22.5,hum=36.0,co=0i 1641038400 +home,room=Living\ Room temp=22.4,hum=36.0,co=0i 1641042000 +home,room=Kitchen temp=22.8,hum=36.5,co=1i 1641042000 +home,room=Living\ Room temp=22.3,hum=36.1,co=0i 1641045600 +home,room=Kitchen temp=22.8,hum=36.3,co=1i 1641045600 +home,room=Living\ Room temp=22.3,hum=36.1,co=1i 1641049200 +home,room=Kitchen temp=22.7,hum=36.2,co=3i 1641049200 +home,room=Living\ Room temp=22.4,hum=36.0,co=4i 1641052800 +home,room=Kitchen temp=22.4,hum=36.0,co=7i 1641052800 +home,room=Living\ Room temp=22.6,hum=35.9,co=5i 1641056400 +home,room=Kitchen temp=22.7,hum=36.0,co=9i 1641056400 +home,room=Living\ Room temp=22.8,hum=36.2,co=9i 1641060000 +home,room=Kitchen temp=23.3,hum=36.9,co=18i 1641060000 +home,room=Living\ Room temp=22.5,hum=36.3,co=14i 1641063600 +home,room=Kitchen temp=23.1,hum=36.6,co=22i 1641063600 +home,room=Living\ Room temp=22.2,hum=36.4,co=17i 1641067200 +home,room=Kitchen temp=22.7,hum=36.5,co=26i 1641067200 +``` + +## Write line protocol to InfluxDB + +Use the **InfluxDB UI**, **`influx` CLI**, or **InfluxDB API** to write the +line protocol above to InfluxDB. + +{{< tabs-wrapper >}} +{{% tabs %}} +[InfluxDB UI](#) +[influx CLI](#) +[InfluxDB API](#) +{{% /tabs %}} + +{{% tab-content %}} + + +1. 
Visit
+   {{% oss-only %}}[localhost:8086](http://localhost:8086){{% /oss-only %}}
+   {{% cloud-only %}}[cloud2.influxdata.com](https://cloud2.influxdata.com){{% /cloud-only %}}
+   in a browser to log in and access the InfluxDB UI.
+
+2. Navigate to **Load Data** > **Buckets** using the left navigation bar.
+
+{{< nav-icon "load data" >}}
+
+3. Click **{{< icon "plus" >}} {{< caps >}}Add Data{{< /caps >}}** on the bucket
+   you want to write the data to and select **Line Protocol**.
+4. Select **{{< caps >}}Enter Manually{{< /caps >}}**.
+5. {{< req "Important" >}} In the **Precision** drop-down menu above the line
+   protocol text field, select **Seconds** (to match the precision of the
+   timestamps in the line protocol).
+6. Copy the [line protocol above](#home-sensor-data-line-protocol) and paste it
+   into the line protocol text field.
+7. Click **{{< caps >}}Write Data{{< /caps >}}**.
+
+The UI confirms that the data has been written successfully.
+
+
+{{% /tab-content %}}
+{{% tab-content %}}
+
+
+1. If you haven't already, [download, install, and configure the `influx` CLI](/influxdb/v2.6/tools/influx-cli/).
+2. Use the [`influx write` command](/influxdb/v2.6/reference/cli/influx/write/)
+   to write the [line protocol above](#home-sensor-data-line-protocol) to InfluxDB.
+
+   **Provide the following**:
+
+   - `-b, --bucket` or `--bucket-id` flag with the bucket name or ID to write to.
+   - `-p, --precision` flag with the timestamp precision (`s`).
+   - String-encoded line protocol.
+ - [Connection and authentication credentials](/influxdb/v2.6/get-started/setup/?t=influx+CLI#configure-authentication-credentials) + + ```sh + influx write \ + --bucket get-started \ + --precision s " + home,room=Living\ Room temp=21.1,hum=35.9,co=0i 1641024000 + home,room=Kitchen temp=21.0,hum=35.9,co=0i 1641024000 + home,room=Living\ Room temp=21.4,hum=35.9,co=0i 1641027600 + home,room=Kitchen temp=23.0,hum=36.2,co=0i 1641027600 + home,room=Living\ Room temp=21.8,hum=36.0,co=0i 1641031200 + home,room=Kitchen temp=22.7,hum=36.1,co=0i 1641031200 + home,room=Living\ Room temp=22.2,hum=36.0,co=0i 1641034800 + home,room=Kitchen temp=22.4,hum=36.0,co=0i 1641034800 + home,room=Living\ Room temp=22.2,hum=35.9,co=0i 1641038400 + home,room=Kitchen temp=22.5,hum=36.0,co=0i 1641038400 + home,room=Living\ Room temp=22.4,hum=36.0,co=0i 1641042000 + home,room=Kitchen temp=22.8,hum=36.5,co=1i 1641042000 + home,room=Living\ Room temp=22.3,hum=36.1,co=0i 1641045600 + home,room=Kitchen temp=22.8,hum=36.3,co=1i 1641045600 + home,room=Living\ Room temp=22.3,hum=36.1,co=1i 1641049200 + home,room=Kitchen temp=22.7,hum=36.2,co=3i 1641049200 + home,room=Living\ Room temp=22.4,hum=36.0,co=4i 1641052800 + home,room=Kitchen temp=22.4,hum=36.0,co=7i 1641052800 + home,room=Living\ Room temp=22.6,hum=35.9,co=5i 1641056400 + home,room=Kitchen temp=22.7,hum=36.0,co=9i 1641056400 + home,room=Living\ Room temp=22.8,hum=36.2,co=9i 1641060000 + home,room=Kitchen temp=23.3,hum=36.9,co=18i 1641060000 + home,room=Living\ Room temp=22.5,hum=36.3,co=14i 1641063600 + home,room=Kitchen temp=23.1,hum=36.6,co=22i 1641063600 + home,room=Living\ Room temp=22.2,hum=36.4,co=17i 1641067200 + home,room=Kitchen temp=22.7,hum=36.5,co=26i 1641067200 + " + ``` + + +{{% /tab-content %}} +{{% tab-content %}} + + +To write data to InfluxDB using the InfluxDB HTTP API, send a request to +the InfluxDB API `/api/v2/write` endpoint using the `POST` request method. 
+ +{{< api-endpoint endpoint="http://localhost:8086/api/v2/write" method="post" >}} + +Include the following with your request: + +- **Headers**: + - **Authorization**: Token + - **Content-Type**: text/plain; charset=utf-8 + - **Accept**: application/json +- **Query parameters**: + - **org**: InfluxDB organization name + - **bucket**: InfluxDB bucket name + - **precision**: timestamp precision (default is `ns`) +- **Request body**: Line protocol as plain text + +The following example uses cURL and the InfluxDB API to write line protocol +to InfluxDB: + +```sh +export INFLUX_HOST=http://localhost:8086 +export INFLUX_ORG= +export INFLUX_TOKEN= + +curl --request POST \ +"$INFLUX_HOST/api/v2/write?org=$INFLUX_ORG&bucket=get-started&precision=s" \ + --header "Authorization: Token $INFLUX_TOKEN" \ + --header "Content-Type: text/plain; charset=utf-8" \ + --header "Accept: application/json" \ + --data-binary " +home,room=Living\ Room temp=21.1,hum=35.9,co=0i 1641024000 +home,room=Kitchen temp=21.0,hum=35.9,co=0i 1641024000 +home,room=Living\ Room temp=21.4,hum=35.9,co=0i 1641027600 +home,room=Kitchen temp=23.0,hum=36.2,co=0i 1641027600 +home,room=Living\ Room temp=21.8,hum=36.0,co=0i 1641031200 +home,room=Kitchen temp=22.7,hum=36.1,co=0i 1641031200 +home,room=Living\ Room temp=22.2,hum=36.0,co=0i 1641034800 +home,room=Kitchen temp=22.4,hum=36.0,co=0i 1641034800 +home,room=Living\ Room temp=22.2,hum=35.9,co=0i 1641038400 +home,room=Kitchen temp=22.5,hum=36.0,co=0i 1641038400 +home,room=Living\ Room temp=22.4,hum=36.0,co=0i 1641042000 +home,room=Kitchen temp=22.8,hum=36.5,co=1i 1641042000 +home,room=Living\ Room temp=22.3,hum=36.1,co=0i 1641045600 +home,room=Kitchen temp=22.8,hum=36.3,co=1i 1641045600 +home,room=Living\ Room temp=22.3,hum=36.1,co=1i 1641049200 +home,room=Kitchen temp=22.7,hum=36.2,co=3i 1641049200 +home,room=Living\ Room temp=22.4,hum=36.0,co=4i 1641052800 +home,room=Kitchen temp=22.4,hum=36.0,co=7i 1641052800 +home,room=Living\ Room temp=22.6,hum=35.9,co=5i 
1641056400 +home,room=Kitchen temp=22.7,hum=36.0,co=9i 1641056400 +home,room=Living\ Room temp=22.8,hum=36.2,co=9i 1641060000 +home,room=Kitchen temp=23.3,hum=36.9,co=18i 1641060000 +home,room=Living\ Room temp=22.5,hum=36.3,co=14i 1641063600 +home,room=Kitchen temp=23.1,hum=36.6,co=22i 1641063600 +home,room=Living\ Room temp=22.2,hum=36.4,co=17i 1641067200 +home,room=Kitchen temp=22.7,hum=36.5,co=26i 1641067200 +" +``` + +{{% /tab-content %}} +{{< /tabs-wrapper >}} + +{{< expand-wrapper >}} +{{% expand "View the written data" %}} + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :------ | :----- | -----: | +| 2022-01-01T08:00:00Z | home | Kitchen | co | 0 | +| 2022-01-01T09:00:00Z | home | Kitchen | co | 0 | +| 2022-01-01T10:00:00Z | home | Kitchen | co | 0 | +| 2022-01-01T11:00:00Z | home | Kitchen | co | 0 | +| 2022-01-01T12:00:00Z | home | Kitchen | co | 0 | +| 2022-01-01T13:00:00Z | home | Kitchen | co | 1 | +| 2022-01-01T14:00:00Z | home | Kitchen | co | 1 | +| 2022-01-01T15:00:00Z | home | Kitchen | co | 3 | +| 2022-01-01T16:00:00Z | home | Kitchen | co | 7 | +| 2022-01-01T17:00:00Z | home | Kitchen | co | 9 | +| 2022-01-01T18:00:00Z | home | Kitchen | co | 18 | +| 2022-01-01T19:00:00Z | home | Kitchen | co | 22 | +| 2022-01-01T20:00:00Z | home | Kitchen | co | 26 | + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :------ | :----- | -----: | +| 2022-01-01T08:00:00Z | home | Kitchen | hum | 35.9 | +| 2022-01-01T09:00:00Z | home | Kitchen | hum | 36.2 | +| 2022-01-01T10:00:00Z | home | Kitchen | hum | 36.1 | +| 2022-01-01T11:00:00Z | home | Kitchen | hum | 36 | +| 2022-01-01T12:00:00Z | home | Kitchen | hum | 36 | +| 2022-01-01T13:00:00Z | home | Kitchen | hum | 36.5 | +| 2022-01-01T14:00:00Z | home | Kitchen | hum | 36.3 | +| 2022-01-01T15:00:00Z | home | Kitchen | hum | 36.2 | +| 2022-01-01T16:00:00Z | home | Kitchen | hum | 36 | +| 2022-01-01T17:00:00Z | home | Kitchen 
| hum | 36 | +| 2022-01-01T18:00:00Z | home | Kitchen | hum | 36.9 | +| 2022-01-01T19:00:00Z | home | Kitchen | hum | 36.6 | +| 2022-01-01T20:00:00Z | home | Kitchen | hum | 36.5 | + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :------ | :----- | -----: | +| 2022-01-01T08:00:00Z | home | Kitchen | temp | 21 | +| 2022-01-01T09:00:00Z | home | Kitchen | temp | 23 | +| 2022-01-01T10:00:00Z | home | Kitchen | temp | 22.7 | +| 2022-01-01T11:00:00Z | home | Kitchen | temp | 22.4 | +| 2022-01-01T12:00:00Z | home | Kitchen | temp | 22.5 | +| 2022-01-01T13:00:00Z | home | Kitchen | temp | 22.8 | +| 2022-01-01T14:00:00Z | home | Kitchen | temp | 22.8 | +| 2022-01-01T15:00:00Z | home | Kitchen | temp | 22.7 | +| 2022-01-01T16:00:00Z | home | Kitchen | temp | 22.4 | +| 2022-01-01T17:00:00Z | home | Kitchen | temp | 22.7 | +| 2022-01-01T18:00:00Z | home | Kitchen | temp | 23.3 | +| 2022-01-01T19:00:00Z | home | Kitchen | temp | 23.1 | +| 2022-01-01T20:00:00Z | home | Kitchen | temp | 22.7 | + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | -----: | +| 2022-01-01T08:00:00Z | home | Living Room | co | 0 | +| 2022-01-01T09:00:00Z | home | Living Room | co | 0 | +| 2022-01-01T10:00:00Z | home | Living Room | co | 0 | +| 2022-01-01T11:00:00Z | home | Living Room | co | 0 | +| 2022-01-01T12:00:00Z | home | Living Room | co | 0 | +| 2022-01-01T13:00:00Z | home | Living Room | co | 0 | +| 2022-01-01T14:00:00Z | home | Living Room | co | 0 | +| 2022-01-01T15:00:00Z | home | Living Room | co | 1 | +| 2022-01-01T16:00:00Z | home | Living Room | co | 4 | +| 2022-01-01T17:00:00Z | home | Living Room | co | 5 | +| 2022-01-01T18:00:00Z | home | Living Room | co | 9 | +| 2022-01-01T19:00:00Z | home | Living Room | co | 14 | +| 2022-01-01T20:00:00Z | home | Living Room | co | 17 | + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | 
:---------- | :----- | -----: | +| 2022-01-01T08:00:00Z | home | Living Room | hum | 35.9 | +| 2022-01-01T09:00:00Z | home | Living Room | hum | 35.9 | +| 2022-01-01T10:00:00Z | home | Living Room | hum | 36 | +| 2022-01-01T11:00:00Z | home | Living Room | hum | 36 | +| 2022-01-01T12:00:00Z | home | Living Room | hum | 35.9 | +| 2022-01-01T13:00:00Z | home | Living Room | hum | 36 | +| 2022-01-01T14:00:00Z | home | Living Room | hum | 36.1 | +| 2022-01-01T15:00:00Z | home | Living Room | hum | 36.1 | +| 2022-01-01T16:00:00Z | home | Living Room | hum | 36 | +| 2022-01-01T17:00:00Z | home | Living Room | hum | 35.9 | +| 2022-01-01T18:00:00Z | home | Living Room | hum | 36.2 | +| 2022-01-01T19:00:00Z | home | Living Room | hum | 36.3 | +| 2022-01-01T20:00:00Z | home | Living Room | hum | 36.4 | + +| _time | _measurement | room | _field | _value | +| :------------------- | :----------- | :---------- | :----- | -----: | +| 2022-01-01T08:00:00Z | home | Living Room | temp | 21.1 | +| 2022-01-01T09:00:00Z | home | Living Room | temp | 21.4 | +| 2022-01-01T10:00:00Z | home | Living Room | temp | 21.8 | +| 2022-01-01T11:00:00Z | home | Living Room | temp | 22.2 | +| 2022-01-01T12:00:00Z | home | Living Room | temp | 22.2 | +| 2022-01-01T13:00:00Z | home | Living Room | temp | 22.4 | +| 2022-01-01T14:00:00Z | home | Living Room | temp | 22.3 | +| 2022-01-01T15:00:00Z | home | Living Room | temp | 22.3 | +| 2022-01-01T16:00:00Z | home | Living Room | temp | 22.4 | +| 2022-01-01T17:00:00Z | home | Living Room | temp | 22.6 | +| 2022-01-01T18:00:00Z | home | Living Room | temp | 22.8 | +| 2022-01-01T19:00:00Z | home | Living Room | temp | 22.5 | +| 2022-01-01T20:00:00Z | home | Living Room | temp | 22.2 | + +{{% /expand %}} +{{< /expand-wrapper >}} + +**Congratulations!** You have written data to InfluxDB. 
The method described +above is the manual way of writing data, but there are other options available: + +- [Write data to InfluxDB using no-code solutions](/influxdb/v2.6/write-data/no-code/) +- [Write data to InfluxDB using developer tools](/influxdb/v2.6/write-data/developer-tools/) + +With data now stored in InfluxDB, let's query it. + +{{< page-nav prev="/influxdb/v2.6/get-started/setup/" next="/influxdb/v2.6/get-started/query/" keepTab=true >}} diff --git a/content/influxdb/v2.6/influxdb-templates/_index.md b/content/influxdb/v2.6/influxdb-templates/_index.md new file mode 100644 index 000000000..ac7c26c5b --- /dev/null +++ b/content/influxdb/v2.6/influxdb-templates/_index.md @@ -0,0 +1,98 @@ +--- +title: InfluxDB templates +description: > + InfluxDB templates are prepackaged InfluxDB configurations that contain everything + from dashboards and Telegraf configurations to notifications and alerts. +menu: influxdb_2_6 +weight: 10 +influxdb/v2.6/tags: [templates] +--- + +InfluxDB templates are prepackaged InfluxDB configurations that contain everything +from dashboards and Telegraf configurations to notifications and alerts. +Use templates to monitor your technology stack, +set up a fresh instance of InfluxDB, back up your dashboard configuration, or +[share your configuration](https://github.com/influxdata/community-templates/) with the InfluxData community. + +**InfluxDB templates do the following:** + +- Reduce setup time by giving you resources that are already configured for your use-case. +- Facilitate secure, portable, and source-controlled InfluxDB resource states. +- Simplify sharing and using pre-built InfluxDB solutions. + +{{< youtube 2JjW4Rym9XE >}} + +View InfluxDB community templates + +## Template manifests + +A template **manifest** is a file that defines +InfluxDB [resources](#template-resources). 
+Template manifests support the following formats: + +- [YAML](https://yaml.org/) +- [JSON](https://www.json.org/) +- [Jsonnet](https://jsonnet.org/) + +{{% note %}} +Template manifests are compatible with +[Kubernetes Custom Resource Definitions (CRD)](https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/). +{{% /note %}} + +The `metadata.name` field in manifests uniquely identifies each resource in the template. +`metadata.name` values must be [DNS-1123](https://tools.ietf.org/html/rfc1123) compliant. +The `spec` object contains the resource configuration. + +#### Example + +```yaml +# bucket-template.yml +# Template manifest that defines two buckets. +apiVersion: influxdata.com/v2alpha1 +kind: Bucket +metadata: + name: thirsty-shaw-91b005 +spec: + description: My IoT Center Bucket + name: iot-center + retentionRules: + - everySeconds: 86400 + type: expire +--- +apiVersion: influxdata.com/v2alpha1 +kind: Bucket +metadata: + name: upbeat-fermat-91b001 +spec: + name: air_sensor +--- +``` + +_See [Create an InfluxDB template](/influxdb/v2.6/influxdb-templates/create/) for information about +generating template manifests._ + +### Template resources + +Templates may contain the following InfluxDB resources: + +- [buckets](/influxdb/v2.6/organizations/buckets/create-bucket/) +- [checks](/influxdb/v2.6/monitor-alert/checks/create/) +- [dashboards](/influxdb/v2.6/visualize-data/dashboards/create-dashboard/) +- [dashboard variables](/influxdb/v2.6/visualize-data/variables/create-variable/) +- [labels](/influxdb/v2.6/visualize-data/labels/) +- [notification endpoints](/influxdb/v2.6/monitor-alert/notification-endpoints/create/) +- [notification rules](/influxdb/v2.6/monitor-alert/notification-rules/create/) +- [tasks](/influxdb/v2.6/process-data/manage-tasks/create-task/) +- [Telegraf configurations](/influxdb/v2.6/write-data/no-code/use-telegraf/) + +## Stacks + +Use **InfluxDB stacks** to manage InfluxDB templates. 
+When you apply a template, InfluxDB associates resources in the template with a stack.
+Use stacks to add, update, or remove InfluxDB templates over time.
+
+For more information, see [InfluxDB stacks](#stacks) below.
+
+---
+
+{{< children >}}
diff --git a/content/influxdb/v2.6/influxdb-templates/create.md b/content/influxdb/v2.6/influxdb-templates/create.md
new file mode 100644
index 000000000..6ed3f7483
--- /dev/null
+++ b/content/influxdb/v2.6/influxdb-templates/create.md
@@ -0,0 +1,291 @@
+---
+title: Create an InfluxDB template
+description: >
+  Use the InfluxDB UI and the `influx export` command to create InfluxDB templates.
+menu:
+  influxdb_2_6:
+    parent: InfluxDB templates
+    name: Create a template
+    identifier: Create an InfluxDB template
+weight: 103
+influxdb/v2.6/tags: [templates]
+related:
+  - /influxdb/v2.6/reference/cli/influx/export/
+  - /influxdb/v2.6/reference/cli/influx/export/all/
+---
+
+Use the InfluxDB user interface (UI) and the [`influx export` command](/influxdb/v2.6/reference/cli/influx/export/) to
+create InfluxDB templates from [resources](/influxdb/v2.6/influxdb-templates/#template-resources) in an organization.
+Add buckets, Telegraf configurations, tasks, and more in the InfluxDB
+UI and then export those resources as a template.
+
+{{< youtube 714uHkxKM6U >}}
+
+- [Create a template](#create-a-template)
+- [Export resources to a template](#export-resources-to-a-template)
+- [Include user-definable resource names](#include-user-definable-resource-names)
+- [Troubleshoot template results and permissions](#troubleshoot-template-results-and-permissions)
+- [Share your InfluxDB templates](#share-your-influxdb-templates)
+
+## Create a template
+
+Creating a new organization to contain only your template resources is an easy way
+to ensure you export the resources you want.
+Follow these steps to create a template from a new organization.
+
+1. [Start InfluxDB](/influxdb/v2.6/get-started/).
+2.
[Create a new organization](/influxdb/v2.6/organizations/create-org/).
+3. In the InfluxDB UI, add one or more [resources](/influxdb/v2.6/influxdb-templates/#template-resources).
+4. [Create an **All-Access** API token](/influxdb/v2.6/security/tokens/create-token/) (or a token that has **read** access to the organization).
+5. Use the API token from **Step 4** with the [`influx export all` subcommand](/influxdb/v2.6/reference/cli/influx/export/all/) to [export all resources](#export-all-resources) in the organization to a template file.
+
+   ```sh
+   influx export all \
+     -o YOUR_INFLUX_ORG \
+     -t YOUR_ALL_ACCESS_TOKEN \
+     -f ~/templates/template.yml
+   ```
+
+## Export resources to a template
+
+The [`influx export` command](/influxdb/v2.6/reference/cli/influx/export/) and subcommands let you
+export [resources](/influxdb/v2.6/influxdb-templates/#template-resources) from an organization to a template manifest.
+Your [API token](/influxdb/v2.6/security/tokens/) must have **read** access to resources that you want to export.
+
+If you want to export resources that depend on other resources, be sure to export the dependencies.
+
+{{< cli/influx-creds-note >}}
+
+To create a template that **adds, modifies, and deletes resources** when applied to an organization, use [InfluxDB stacks](/influxdb/v2.6/influxdb-templates/stacks/).
+First, [initialize the stack](/influxdb/v2.6/influxdb-templates/stacks/init/)
+and then [export the stack](#export-a-stack).
+
+To create a template that only **adds resources** when applied to an organization (and doesn't modify existing resources there), choose one of the following:
+- [Export all resources](#export-all-resources) to export all resources or a filtered
+  subset of resources to a template.
+- [Export specific resources](#export-specific-resources) by name or ID to a template.
+
+### Export all resources
+
+To export all [resources](/influxdb/v2.6/influxdb-templates/#template-resources)
+within an organization to a template manifest file, use the
+[`influx export all` subcommand](/influxdb/v2.6/reference/cli/influx/export/all/)
+with the `--file` (`-f`) option.
+
+Provide the following:
+
+- **Destination path and filename** for the template manifest.
+  The filename extension determines the output format:
+  - `your-template.yml`: [YAML](https://yaml.org/) format
+  - `your-template.json`: [JSON](https://json.org/) format
+
+```sh
+# Syntax
+influx export all -f <FILE_PATH>
+```
+
+#### Export resources filtered by labelName or resourceKind
+
+The [`influx export all` subcommand](/influxdb/v2.6/reference/cli/influx/export/all/)
+accepts a `--filter` option that exports
+only resources that match specified label names or resource kinds.
+To filter on label name *and* resource kind, provide a `--filter` for each.
+
+#### Export only dashboards and buckets with specific labels
+
+The following example exports resources that match this predicate logic:
+
+```js
+(resourceKind == "Bucket" or resourceKind == "Dashboard")
+and
+(labelName == "Example1" or labelName == "Example2")
+```
+
+```sh
+influx export all \
+  -f ~/templates/template.yml \
+  --filter=resourceKind=Bucket \
+  --filter=resourceKind=Dashboard \
+  --filter=labelName=Example1 \
+  --filter=labelName=Example2
+```
+
+For more options and examples, see the
+[`influx export all` subcommand](/influxdb/v2.6/reference/cli/influx/export/all/).
+
+### Export specific resources
+
+To export specific [resources](/influxdb/v2.6/influxdb-templates/#template-resources) by name or ID, use the **[`influx export` command](/influxdb/v2.6/reference/cli/influx/export/)** with one or more lists of resources to include.
+
+Provide the following:
+
+- **Destination path and filename** for the template manifest.
+  The filename extension determines the output format:
+  - `your-template.yml`: [YAML](https://yaml.org/) format
+  - `your-template.json`: [JSON](https://json.org/) format
+- **Resource options** with corresponding lists of resource IDs or resource names to include in the template.
+  For information about what resource options are available, see the
+  [`influx export` command](/influxdb/v2.6/reference/cli/influx/export/).
+
+```sh
+# Syntax
+influx export -f <FILE_PATH> [resource-flags]
+```
+
+#### Export specific resources by ID
+```sh
+influx export \
+  --org-id ed32b47572a0137b \
+  -f ~/templates/template.yml \
+  -t $INFLUX_TOKEN \
+  --buckets=00x000ooo0xx0xx,o0xx0xx00x000oo \
+  --dashboards=00000xX0x0X00x000 \
+  --telegraf-configs=00000x0x000X0x0X0
+```
+
+#### Export specific resources by name
+```sh
+influx export \
+  --org-id ed32b47572a0137b \
+  -f ~/templates/template.yml \
+  --bucket-names=bucket1,bucket2 \
+  --dashboard-names=dashboard1,dashboard2 \
+  --telegraf-config-names=telegrafconfig1,telegrafconfig2
+```
+
+### Export a stack
+
+To export an InfluxDB [stack](/influxdb/v2.6/influxdb-templates/stacks/) and all its associated resources as a template, use the
+`influx export stack` command.
+Provide the following:
+
+- **Organization name** or **ID**
+- **API token** with read access to the organization
+- **Destination path and filename** for the template manifest.
+  The filename extension determines the output format:
+  - `your-template.yml`: [YAML](https://yaml.org/) format
+  - `your-template.json`: [JSON](https://json.org/) format
+- **Stack ID**
+
+#### Export a stack as a template
+
+```sh
+# Syntax
+influx export stack \
+  -o <ORG_NAME> \
+  -t <API_TOKEN> \
+  -f <FILE_PATH> \
+  <STACK_ID>
+
+# Example
+influx export stack \
+  -o my-org \
+  -t mYSuP3RS3CreTt0K3n \
+  -f ~/templates/awesome-template.yml \
+  05dbb791a4324000
+```
+
+## Include user-definable resource names
+
+After exporting a template manifest, replace resource names with **environment references**
+to let users customize resource names when installing your template.
+
+1. [Export a template](#export-resources-to-a-template).
+2. Select any of the following resource fields to update:
+
+   - `metadata.name`
+   - `associations[].name`
+   - `endpointName` _(unique to `NotificationRule` resources)_
+
+3. Replace the resource field value with an `envRef` object with a `key` property
+   that references the key of a key-value pair the user provides when installing the template.
+   During installation, the `envRef` object is replaced by the value of the
+   referenced key-value pair.
+   If the user does not provide the environment reference key-value pair, InfluxDB
+   uses the `key` string as the default value.
+
+   {{< code-tabs-wrapper >}}
+   {{% code-tabs %}}
+[YAML](#)
+[JSON](#)
+   {{% /code-tabs %}}
+   {{% code-tab-content %}}
+```yml
+apiVersion: influxdata.com/v2alpha1
+kind: Bucket
+metadata:
+  name:
+    envRef:
+      key: bucket-name-1
+```
+   {{% /code-tab-content %}}
+   {{% code-tab-content %}}
+```json
+{
+  "apiVersion": "influxdata.com/v2alpha1",
+  "kind": "Bucket",
+  "metadata": {
+    "name": {
+      "envRef": {
+        "key": "bucket-name-1"
+      }
+    }
+  }
+}
+```
+   {{% /code-tab-content %}}
+   {{< /code-tabs-wrapper >}}
+
+Using the example above, users are prompted to provide a value for `bucket-name-1`
+when [applying the template](/influxdb/v2.6/influxdb-templates/use/#apply-templates).
+Users can also include the `--env-ref` flag with the appropriate key-value pair
+when installing the template.
+
+```sh
+# Set bucket-name-1 to "myBucket"
+influx apply \
+  -f /path/to/template.yml \
+  --env-ref=bucket-name-1=myBucket
+```
+
+_If sharing your template, we recommend documenting what environment references
+exist in the template and what keys to use to replace them._
+
+{{% note %}}
+#### Resource fields that support environment references
+
+Only the following fields support environment references:
+
+- `metadata.name`
+- `spec.endpointName`
+- `spec.associations.name`
+{{% /note %}}
+
+## Troubleshoot template results and permissions
+
+If you get unexpected results, missing resources, or errors when exporting
+templates, check the following:
+
+- [Ensure `read` access](#ensure-read-access)
+- [Use Organization ID](#use-organization-id)
+- [Check for resource dependencies](#check-for-resource-dependencies)
+
+### Ensure read access
+
+The [API token](/influxdb/v2.6/security/tokens/) must have **read** access to resources that you want to export. The `influx export all` command only exports resources that the API token can read. For example, to export all resources in an organization that has ID `abc123`, the API token must have the `read:/orgs/abc123` permission.
+
+To learn more about permissions, see [how to view authorizations](/influxdb/v2.6/security/tokens/view-tokens/) and [how to create a token](/influxdb/v2.6/security/tokens/create-token/) with specific permissions.
+
+### Use Organization ID
+
+If your token doesn't have **read** access to the organization and you want to [export specific resources](#export-specific-resources), use the `--org-id <ORG_ID>` flag (instead of `-o <ORG_NAME>` or `--org <ORG_NAME>`) to provide the organization.
+
+### Check for resource dependencies
+
+If you want to export resources that depend on other resources, be sure to export the dependencies as well. Otherwise, the resources may not be usable.
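To make the dependency rule concrete, here's a hedged sketch that exports a dashboard together with the bucket it queries in one command, reusing the placeholder IDs from the examples above (all IDs are illustrative, not real):

```sh
# Export a dashboard together with the bucket it depends on, so the
# resulting template is usable on its own.
# The organization ID, dashboard ID, and bucket ID are placeholders.
influx export \
  --org-id ed32b47572a0137b \
  -f ~/templates/dashboard-with-bucket.yml \
  --dashboards=00000xX0x0X00x000 \
  --buckets=00x000ooo0xx0xx
```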
+ +## Share your InfluxDB templates + +Share your InfluxDB templates with the entire InfluxData community. +Contribute your template to the [InfluxDB Community Templates](https://github.com/influxdata/community-templates/) repository on GitHub. + +View InfluxDB Community Templates diff --git a/content/influxdb/v2.6/influxdb-templates/stacks/_index.md b/content/influxdb/v2.6/influxdb-templates/stacks/_index.md new file mode 100644 index 000000000..f2a391863 --- /dev/null +++ b/content/influxdb/v2.6/influxdb-templates/stacks/_index.md @@ -0,0 +1,26 @@ +--- +title: InfluxDB stacks +description: > + Use an InfluxDB stack to manage your InfluxDB templates—add, update, or remove templates over time. +menu: + influxdb_2_6: + parent: InfluxDB templates +weight: 105 +related: + - /influxdb/v2.6/reference/cli/influx/pkg/stack/ +--- + +Use InfluxDB stacks to manage [InfluxDB templates](/influxdb/v2.6/influxdb-templates). +When you apply a template, InfluxDB associates resources in the template with a stack. Use the stack to add, update, or remove InfluxDB templates over time. + + {{< children type="anchored-list" >}} + + {{< children readmore=true >}} + +{{% note %}} +**Key differences between stacks and templates**: + +- A template defines a set of resources in a text file outside of InfluxDB. When you apply a template, a stack is automatically created to manage the applied template. +- Stacks add, modify or delete resources in an instance. +- Templates do not recognize resources in an instance. All resources in the template are added, creating duplicate resources if a resource already exists. 
+ {{% /note %}}
diff --git a/content/influxdb/v2.6/influxdb-templates/stacks/init.md b/content/influxdb/v2.6/influxdb-templates/stacks/init.md
new file mode 100644
index 000000000..73bebcf50
--- /dev/null
+++ b/content/influxdb/v2.6/influxdb-templates/stacks/init.md
@@ -0,0 +1,73 @@
+---
+title: Initialize an InfluxDB stack
+list_title: Initialize a stack
+description: >
+  InfluxDB automatically creates a new stack each time you [apply an InfluxDB template](/influxdb/v2.6/influxdb-templates/use/)
+  **without providing a stack ID**.
+  To manually create or initialize a new stack, use the [`influx stacks init` command](/influxdb/v2.6/reference/cli/influx/stacks/init/).
+menu:
+  influxdb_2_6:
+    parent: InfluxDB stacks
+    name: Initialize a stack
+weight: 202
+related:
+  - /influxdb/v2.6/reference/cli/influx/stacks/init/
+list_code_example: |
+  ```sh
+  influx apply \
+    -o example-org \
+    -f path/to/template.yml
+  ```
+  ```sh
+  influx stacks init \
+    -o example-org \
+    -n "Example Stack" \
+    -d "InfluxDB stack for monitoring some awesome stuff" \
+    -u https://example.com/template-1.yml \
+    -u https://example.com/template-2.yml
+  ```
---
+
+InfluxDB automatically creates a new stack each time you [apply an InfluxDB template](/influxdb/v2.6/influxdb-templates/use/)
+**without providing a stack ID**.
+To manually create or initialize a new stack, use the [`influx stacks init` command](/influxdb/v2.6/reference/cli/influx/stacks/init/).
+
+## Initialize a stack when applying a template
+To automatically create a new stack when [applying an InfluxDB template](/influxdb/v2.6/influxdb-templates/use/),
+**don't provide a stack ID**.
+InfluxDB applies the resources in the template to a new stack and provides the **stack ID** in the output.
+
+```sh
+influx apply \
+  -o example-org \
+  -f path/to/template.yml
+```
+
+## Manually initialize a new stack
+Use the [`influx stacks init` command](/influxdb/v2.6/reference/cli/influx/stacks/init/)
+to create or initialize a new InfluxDB stack.
+
+**Provide the following:**
+
+- Organization name or ID
+- Stack name
+- Stack description
+- InfluxDB template URLs
+
+```sh
+# Syntax
+influx stacks init \
+  -o <ORG_NAME> \
+  -n <STACK_NAME> \
+  -d <STACK_DESCRIPTION> \
+  -u <TEMPLATE_URL>
+
+# Example
+influx stacks init \
+  -o example-org \
+  -n "Example Stack" \
+  -d "InfluxDB stack for monitoring some awesome stuff" \
+  -u https://example.com/template-1.yml \
+  -u https://example.com/template-2.yml
+```
diff --git a/content/influxdb/v2.6/influxdb-templates/stacks/remove.md b/content/influxdb/v2.6/influxdb-templates/stacks/remove.md
new file mode 100644
index 000000000..c54cf8396
--- /dev/null
+++ b/content/influxdb/v2.6/influxdb-templates/stacks/remove.md
@@ -0,0 +1,39 @@
+---
+title: Remove an InfluxDB stack
+list_title: Remove a stack
+description: >
+  Use the [`influx stacks remove` command](/influxdb/v2.6/reference/cli/influx/stacks/remove/)
+  to remove an InfluxDB stack and all its associated resources.
+menu:
+  influxdb_2_6:
+    parent: InfluxDB stacks
+    name: Remove a stack
+weight: 205
+related:
+  - /influxdb/v2.6/reference/cli/influx/stacks/remove/
+list_code_example: |
+  ```sh
+  influx stacks remove \
+    -o example-org \
+    --stack-id=12ab34cd56ef
+  ```
+---
+
+Use the [`influx stacks remove` command](/influxdb/v2.6/reference/cli/influx/stacks/remove/)
+to remove an InfluxDB stack and all its associated resources.
+
+**Provide the following:**
+
+- Organization name or ID
+- Stack ID
+
+```sh
+# Syntax
+influx stacks remove -o <ORG_NAME> --stack-id=<STACK_ID>
+
+# Example
+influx stacks remove \
+  -o example-org \
+  --stack-id=12ab34cd56ef
+```
diff --git a/content/influxdb/v2.6/influxdb-templates/stacks/save-time.md b/content/influxdb/v2.6/influxdb-templates/stacks/save-time.md
new file mode 100644
index 000000000..3f4b78995
--- /dev/null
+++ b/content/influxdb/v2.6/influxdb-templates/stacks/save-time.md
@@ -0,0 +1,165 @@
+---
+title: Save time with InfluxDB stacks
+list_title: Save time with stacks
+description: >
+  Discover how to use InfluxDB stacks to save time.
+menu:
+  influxdb_2_6:
+    parent: InfluxDB stacks
+    name: Save time with stacks
+weight: 201
+related:
+  - /influxdb/v2.6/reference/cli/influx/stacks/
+
+---
+
+Save time and money using InfluxDB stacks. Here are a few ideal use cases:
+
+- [Automate deployments with GitOps and stacks](#automate-deployments-with-gitops-and-stacks)
+- [Apply updates from source-controlled templates](#apply-updates-from-source-controlled-templates)
+- [Apply template updates across multiple InfluxDB instances](#apply-template-updates-across-multiple-influxdb-instances)
+- [Develop templates](#develop-templates)
+
+### Automate deployments with GitOps and stacks
+
+GitOps is a popular way to configure and automate deployments. Use InfluxDB stacks in a GitOps workflow
+to automatically update distributed instances of InfluxDB OSS or InfluxDB Cloud.
+
+To automate an InfluxDB deployment with GitOps and stacks, complete the following steps:
+
+1. [Set up a GitHub repository](#set-up-a-github-repository)
+2. [Add existing resources to the GitHub repository](#add-existing-resources-to-the-github-repository)
+3. [Automate the creation of a stack for each folder](#automate-the-creation-of-a-stack-for-each-folder)
+4.
[Set up Github Actions or CircleCI](#set-up-github-actions-or-circleci) + +#### Set up a GitHub repository + +Set up a GitHub repository to back your InfluxDB instance. Determine how you want to organize the resources in your stacks within your Github repository. For example, organize resources under folders for specific teams or functions. + +We recommend storing all resources for one stack in the same folder. For example, if you monitor Redis, create a `redis` stack and put your Redis monitoring resources (a Telegraf configuration, four dashboards, a label, and two alert checks) into one Redis folder, each resource in a separate file. Then, when you need to update a Redis resource, it's easy to find and make changes in one location. + + {{% note %}} + Typically, we **do not recommend** using the same resource in multiple stacks. If your organization uses the same resource in multiple stacks, before you delete a stack, verify the stack does not include resources that another stack depends on. Stacks with buckets often contain data used by many different templates. Because of this, we recommend keeping buckets separate from the other stacks. + {{% /note %}} + +#### Add existing resources to the GitHub repository + +Skip this section if you are starting from scratch or don’t have existing resources you want to add to your stack. + +Use the `influx export` command to quickly export resources. Keep all your resources in a single file or have files for each one. You can always split or combine them later. 
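One hedged way to script that export step is a small loop, one `influx export all` call per stack folder. The stack names, label filters, and paths below are assumptions for illustration, and the script defaults to a dry run that prints each command instead of executing it:

```sh
# Sketch: export each stack's resources into its own folder, one file per stack.
# Stack names, label filters, and paths are illustrative assumptions.
# DRY_RUN=1 (the default here) prints each command; unset it to export for real.
DRY_RUN="${DRY_RUN:-1}"
run() {
  if [ -n "$DRY_RUN" ]; then echo "+ $*"; else "$@"; fi
}

for stack in buckets redis mysql; do
  mkdir -p "influxdb-assets/${stack}"
  run influx export all \
    -o example-org \
    -t "${INFLUX_TOKEN:-}" \
    --filter=labelName="${stack}" \
    -f "influxdb-assets/${stack}/${stack}_assets.yml"
done
```

The `run` wrapper is just a convenience for previewing commands before pointing the script at a live instance.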
+ +For example, if you export resources for three stacks: `buckets`, `redis`, and `mysql`, your folder structure might look something like this when you are done: + + ```sh + influxdb-assets/ + ├── buckets/ + │ ├── telegraf_bucket.yml + ├── redis/ + │ ├── redis_overview_dashboard.yml + │ ├── redis_label.yml + │ ├── redis_cpu_check.yml + │ └── redis_mem_check.yml + ├── mysql/ + │ ├── mysql_assets.yml + └── README.md + + ``` + {{% note %}} + When you export a resource, InfluxDB creates a `meta.name` for that resource. These resource names should be unique inside your InfluxDB instance. Use a good naming convention to prevent duplicate `meta.names`. Changing the `meta.name` of the InfluxDB resource will cause the stack to orphan the resource with the previous name and create a new resource with the updated name. + {{% /note %}} + +Add the exported resources to your new GitHub repository. + +#### Automate the creation of a stack for each folder + +To automatically create a stack from each folder in your GitHub repository, create a shell script to check for an existing stack and if the stack isn't found, use the `influx stacks init` command to create a new stack. The following sample script creates a `redis` stack and automatically applies those changes to your instance: + +```sh +echo "Checking for existing redis stack..." +REDIS_STACK_ID=$(influx stacks --stack-name redis --json | jq -r '.[0].ID') +if [ "$REDIS_STACK_ID" == "null" ]; then + echo "No stack found. Initializing our stack..." + REDIS_STACK_ID=$(influx stacks init -n redis --json | jq -r '.ID') +fi + +# Setting the base path +BASE_PATH="$(pwd)" + +echo "Applying our redis stack..." +cat $BASE_PATH/redis/*.yml | \ +influx apply --force true --stack-id $REDIS_STACK_ID -q +``` + + {{% note %}} + The `--json` flag in the InfluxDB CLI is very useful when scripting against the CLI. This flag lets you grab important information easily using [`jq`](https://stedolan.github.io/jq/manual/v1.6/). 
+ {{% /note %}}
+
+Repeat this step for each of the stacks in your repository. When a resource in your stack changes, re-run this script to apply updated resources to your InfluxDB instance. Re-applying a stack with an updated resource won't add, delete, or duplicate resources.
+
+#### Set up GitHub Actions or CircleCI
+
+Once you have a script that applies changes to your local instance, automate the deployment to other environments as needed. Use the InfluxDB CLI to maintain multiple [configuration profiles](/influxdb/v2.6/reference/cli/influx/config/) to easily switch profiles and issue commands against other InfluxDB instances. To apply the same script to a different InfluxDB instance, change your active configuration profile using the `influx config set` command. Or set the desired profile dynamically using the `-c, --active-config` flag.
+
+  {{% note %}}
+  Before you run automation scripts against shared environments, we recommend manually running the steps in your script.
+  {{% /note %}}
+
+Verify your deployment automation software lets you run a custom script, and then set up the custom script you've built locally in another environment. For example, here's a custom GitHub Action that automates deployment:
+
+```yml
+name: deploy-influxdb-resources
+
+on:
+  push:
+    branches: [ master ]
+
+jobs:
+  deploy:
+    runs-on: ubuntu-latest
+    steps:
+    - uses: actions/checkout@v2
+      with:
+        ref: ${{ github.ref }}
+    - name: Deploys repo to cloud
+      env:
+        # These secrets can be configured in the GitHub repo to connect to
+        # your InfluxDB instance.
+ INFLUX_TOKEN: ${{ secrets.INFLUX_TOKEN }} + INFLUX_ORG: ${{ secrets.INFLUX_ORG }} + INFLUX_URL: ${{ secrets.INFLUX_URL }} + GITHUB_REPO: ${{ github.repository }} + GITHUB_BRANCH: ${{ github.ref }} + run: | + cd /tmp + wget https://dl.influxdata.com/platform/nightlies/influx_nightly_linux_amd64.tar.gz + tar xvfz influx_nightly_linux_amd64.tar.gz + sudo cp influx_nightly_linux_amd64/influx /usr/local/bin/ + cd $GITHUB_WORKSPACE + # This runs the script to set up your stacks + chmod +x ./setup.sh + ./setup.sh prod +``` + +For more information about using GitHub Actions in your project, check out the complete [Github Actions documentation](https://github.com/features/actions). + +### Apply updates from source-controlled templates + +You can use a variety of InfluxDB templates from many different sources including +[Community Templates](https://github.com/influxdata/community-templates/) or +self-built custom templates. +As templates are updated over time, stacks let you gracefully +apply updates without creating duplicate resources. + +### Apply template updates across multiple InfluxDB instances + +In many cases, you may have more than one instance of InfluxDB running and want to apply +the same template to each separate instance. +Using stacks, you can make changes to a stack on one instance, +[export the stack as a template](/influxdb/v2.6/influxdb-templates/create/#export-a-stack) +and then apply the changes to your other InfluxDB instances. + +### Develop templates + +InfluxDB stacks aid in developing and maintaining InfluxDB templates. +Stacks let you modify and update template manifests and apply those changes in +any stack that uses the template. 
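The multi-instance workflow described above can be sketched with CLI connection profiles. The profile names and the second stack ID here are assumptions for illustration; the `-c` (`--active-config`) flag, mentioned earlier, selects which configured instance each command targets:

```sh
# Apply the same template to two InfluxDB instances, each tracked by its
# own stack. Profile names (oss-local, cloud-prod) and stack IDs are
# illustrative placeholders.
influx apply -c oss-local \
  -o example-org \
  -f /path/to/template.yml \
  --stack-id=12ab34cd56ef

influx apply -c cloud-prod \
  -o example-org \
  -f /path/to/template.yml \
  --stack-id=98zy76xw54vu
```

Because stacks are per-instance, each instance gets its own stack ID even though both apply the same template file.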
diff --git a/content/influxdb/v2.6/influxdb-templates/stacks/update.md b/content/influxdb/v2.6/influxdb-templates/stacks/update.md new file mode 100644 index 000000000..8fd13ec3a --- /dev/null +++ b/content/influxdb/v2.6/influxdb-templates/stacks/update.md @@ -0,0 +1,56 @@ +--- +title: Update an InfluxDB stack +list_title: Update a stack +description: > + Use the [`influx apply` command](/influxdb/v2.6/reference/cli/influx/apply/) + to update a stack with a modified template. + When applying a template to an existing stack, InfluxDB checks to see if the + resources in the template match existing resources. + InfluxDB updates, adds, and removes resources to resolve differences between + the current state of the stack and the newly applied template. +menu: + influxdb_2_6: + parent: InfluxDB stacks + name: Update a stack +weight: 203 +related: + - /influxdb/v2.6/reference/cli/influx/apply + - /influxdb/v2.6/reference/cli/influx/stacks/update/ +list_code_example: | + ```sh + influx apply \ + -o example-org \ + -u http://example.com/template-1.yml \ + -u http://example.com/template-2.yml \ + --stack-id=12ab34cd56ef + ``` +--- + +Use the [`influx apply` command](/influxdb/v2.6/reference/cli/influx/apply/) +to update a stack with a modified template. +When applying a template to an existing stack, InfluxDB checks to see if the +resources in the template match existing resources. +InfluxDB updates, adds, and removes resources to resolve differences between +the current state of the stack and the newly applied template. + +Each stack is uniquely identified by a **stack ID**. +For information about retrieving your stack ID, see [View stacks](/influxdb/v2.6/influxdb-templates/stacks/view/). 
+
+**Provide the following:**
+
+- Organization name or ID
+- Stack ID
+- InfluxDB template URLs to apply
+
+```sh
+influx apply \
+  -o example-org \
+  -u http://example.com/template-1.yml \
+  -u http://example.com/template-2.yml \
+  --stack-id=12ab34cd56ef
+```
+
+Template resources are uniquely identified by their `metadata.name` field.
+If errors occur when applying changes to a stack, all applied changes are
+reversed and the stack is returned to its previous state.
diff --git a/content/influxdb/v2.6/influxdb-templates/stacks/view.md b/content/influxdb/v2.6/influxdb-templates/stacks/view.md
new file mode 100644
index 000000000..7c8abe135
--- /dev/null
+++ b/content/influxdb/v2.6/influxdb-templates/stacks/view.md
@@ -0,0 +1,69 @@
+---
+title: View InfluxDB stacks
+list_title: View stacks
+description: >
+  Use the [`influx stacks` command](/influxdb/v2.6/reference/cli/influx/stacks/)
+  to view installed InfluxDB stacks and their associated resources.
+menu:
+  influxdb_2_6:
+    parent: InfluxDB stacks
+    name: View stacks
+weight: 204
+related:
+  - /influxdb/v2.6/reference/cli/influx/stacks/
+list_code_example: |
+  ```sh
+  influx stacks -o example-org
+  ```
+---
+
+Use the [`influx stacks` command](/influxdb/v2.6/reference/cli/influx/stacks/)
+to view installed InfluxDB stacks and their associated resources.
+
+**Provide the following:**
+
+- Organization name or ID
+
+```sh
+# Syntax
+influx stacks -o <ORG_NAME>
+
+# Example
+influx stacks -o example-org
+```
+
+### Filter stacks
+
+To output information about specific stacks, use the `--stack-name` or `--stack-id`
+flags to filter output by stack names or stack IDs.
+
+##### Filter by stack name
+
+```sh
+# Syntax
+influx stacks \
+  -o <ORG_NAME> \
+  --stack-name=<STACK_NAME>
+
+# Example
+influx stacks \
+  -o example-org \
+  --stack-name=stack1 \
+  --stack-name=stack2
+```
+
+##### Filter by stack ID
+
+```sh
+# Syntax
+influx stacks \
+  -o <ORG_NAME> \
+  --stack-id=<STACK_ID>
+
+# Example
+influx stacks \
+  -o example-org \
+  --stack-id=12ab34cd56ef \
+  --stack-id=78gh910i11jk
+```
diff --git a/content/influxdb/v2.6/influxdb-templates/use.md b/content/influxdb/v2.6/influxdb-templates/use.md
new file mode 100644
index 000000000..a43d9161d
--- /dev/null
+++ b/content/influxdb/v2.6/influxdb-templates/use.md
@@ -0,0 +1,241 @@
+---
+title: Use InfluxDB templates
+description: >
+  Use the `influx` command line interface (CLI) to summarize, validate, and apply
+  templates from your local filesystem and from URLs.
+menu:
+  influxdb_2_6:
+    parent: InfluxDB templates
+    name: Use templates
+weight: 102
+influxdb/v2.6/tags: [templates]
+related:
+  - /influxdb/v2.6/reference/cli/influx/apply/
+  - /influxdb/v2.6/reference/cli/influx/template/
+  - /influxdb/v2.6/reference/cli/influx/template/validate/
+---
+
+Use the `influx` command line interface (CLI) to summarize, validate, and apply
+templates from your local filesystem and from URLs.
+
+- [Use InfluxDB community templates](#use-influxdb-community-templates)
+- [View a template summary](#view-a-template-summary)
+- [Validate a template](#validate-a-template)
+- [Apply templates](#apply-templates)
+
+## Use InfluxDB community templates
+The [InfluxDB community templates repository](https://github.com/influxdata/community-templates/)
+is home to a growing number of InfluxDB templates developed and maintained by
+others in the InfluxData community.
+Apply community templates directly from GitHub using a template's download URL
+or download the template.
+
+{{< youtube 2JjW4Rym9XE >}}
+
+{{% note %}}
+To access a community template by URL, use the following as the root of the URL:
+
+```sh
+https://raw.githubusercontent.com/influxdata/community-templates/master/
+```
+
+For example, the Docker community template can be accessed via:
+
+```sh
+https://raw.githubusercontent.com/influxdata/community-templates/master/docker/docker.yml
+```
+{{% /note %}}
+
+View InfluxDB Community Templates
+
+## View a template summary
+To view a summary of what's included in a template before applying the template,
+use the [`influx template` command](/influxdb/v2.6/reference/cli/influx/template/).
+View a summary of a template stored in your local filesystem or from a URL.
+
+{{< code-tabs-wrapper >}}
+{{% code-tabs %}}
+[From a file](#)
+[From a URL](#)
+{{% /code-tabs %}}
+{{% code-tab-content %}}
+```sh
+# Syntax
+influx template -f <TEMPLATE_FILE_PATH>
+
+# Example
+influx template -f /path/to/template.yml
+```
+{{% /code-tab-content %}}
+{{% code-tab-content %}}
+```sh
+# Syntax
+influx template -u <TEMPLATE_URL>
+
+# Example
+influx template -u https://raw.githubusercontent.com/influxdata/community-templates/master/linux_system/linux_system.yml
+```
+{{% /code-tab-content %}}
+{{< /code-tabs-wrapper >}}
+
+## Validate a template
+To validate a template before you install it or troubleshoot a template, use
+the [`influx template validate` command](/influxdb/v2.6/reference/cli/influx/template/validate/).
+Validate a template stored in your local filesystem or from a URL.
+
+{{< code-tabs-wrapper >}}
+{{% code-tabs %}}
+[From a file](#)
+[From a URL](#)
+{{% /code-tabs %}}
+{{% code-tab-content %}}
+```sh
+# Syntax
+influx template validate -f <TEMPLATE_FILE_PATH>
+
+# Example
+influx template validate -f /path/to/template.yml
+```
+{{% /code-tab-content %}}
+{{% code-tab-content %}}
+```sh
+# Syntax
+influx template validate -u <TEMPLATE_URL>
+
+# Example
+influx template validate -u https://raw.githubusercontent.com/influxdata/community-templates/master/linux_system/linux_system.yml
+```
+{{% /code-tab-content %}}
+{{< /code-tabs-wrapper >}}
+
+## Apply templates
+Use the [`influx apply` command](/influxdb/v2.6/reference/cli/influx/apply/) to install templates
+from your local filesystem or from URLs.
+
+- [Apply a template from a file](#apply-a-template-from-a-file)
+- [Apply all templates in a directory](#apply-all-templates-in-a-directory)
+- [Apply a template from a URL](#apply-a-template-from-a-url)
+- [Apply templates from both files and URLs](#apply-templates-from-both-files-and-urls)
+- [Define environment references](#define-environment-references)
+- [Include a secret when installing a template](#include-a-secret-when-installing-a-template)
+
+{{% note %}}
+#### Apply templates to an existing stack
+To apply a template to an existing stack, include the stack ID when applying the template.
+Any time you apply a template without a stack ID, InfluxDB initializes a new stack
+and all new resources.
+For more information, see [InfluxDB stacks](/influxdb/v2.6/influxdb-templates/stacks/).
+{{% /note %}}
+
+### Apply a template from a file
+To install templates stored on your local machine, use the `-f` or `--file` flag
+to provide the **file path** of the template manifest.
+ +```sh +# Syntax +influx apply -o <INFLUX_ORG> -f <FILE_PATH> + +# Examples +# Apply a single template +influx apply -o example-org -f /path/to/template.yml + +# Apply multiple templates +influx apply -o example-org \ + -f /path/to/this/template.yml \ + -f /path/to/that/template.yml +``` + +### Apply all templates in a directory +To apply all templates in a directory, use the `-f` or `--file` flag to provide +the path of the **directory** where template manifests are stored. +By default, this only applies templates stored in the specified directory. +To apply all templates stored in the specified directory and its subdirectories, +include the `-R`, `--recurse` flag. + +```sh +# Syntax +influx apply -o <INFLUX_ORG> -f <DIRECTORY_PATH> + +# Examples +# Apply all templates in a directory +influx apply -o example-org -f /path/to/template/dir/ + +# Apply all templates in a directory and its subdirectories +influx apply -o example-org -f /path/to/template/dir/ -R +``` + +### Apply a template from a URL +To apply templates from a URL, use the `-u` or `--template-url` flag to provide the URL +of the template manifest. + +```sh +# Syntax +influx apply -o <INFLUX_ORG> -u <FILE_URL> + +# Examples +# Apply a single template from a URL +influx apply -o example-org -u https://example.com/templates/template.yml + +# Apply multiple templates from URLs +influx apply -o example-org \ + -u https://example.com/templates/template1.yml \ + -u https://example.com/templates/template2.yml +``` + +### Apply templates from both files and URLs +To apply templates from both files and URLs in a single command, include multiple +file or directory paths and URLs, each with the appropriate `-f` or `-u` flag.
+ +```sh +# Syntax +influx apply -o <INFLUX_ORG> -u <FILE_URL> -f <FILE_PATH> + +# Example +influx apply -o example-org \ + -u https://example.com/templates/template1.yml \ + -u https://example.com/templates/template2.yml \ + -f ~/templates/custom-template.yml \ + -f ~/templates/iot/home/ \ + --recurse +``` + +### Define environment references +Some templates include [environment references](/influxdb/v2.6/influxdb-templates/create/#include-user-definable-resource-names) that let you provide custom resource names. +The `influx apply` command prompts you to provide a value for each environment +reference in the template. +You can also provide values for environment references by including an `--env-ref` +flag with a key-value pair composed of the environment reference key and the +value to replace it. + +```sh +influx apply -o example-org -f /path/to/template.yml \ + --env-ref=bucket-name-1=myBucket \ + --env-ref=label-name-1=Label1 \ + --env-ref=label-name-2=Label2 +``` + +### Include a secret when installing a template +Some templates use [secrets](/influxdb/v2.6/security/secrets/) in queries. +Secret values are not included in templates. +To define secret values when installing a template, include the `--secret` flag +with the secret key-value pair. + +```sh +# Syntax +influx apply -o <INFLUX_ORG> -f <FILE_PATH> \ + --secret=<KEY>=<VALUE> + +# Examples +# Define a single secret when applying a template +influx apply -o example-org -f /path/to/template.yml \ + --secret=FOO=BAR + +# Define multiple secrets when applying a template +influx apply -o example-org -f /path/to/template.yml \ + --secret=FOO=bar \ + --secret=BAZ=quz +``` + +_To add a secret after applying a template, see [Add secrets](/influxdb/v2.6/security/secrets/manage-secrets/add/)._ diff --git a/content/influxdb/v2.6/install.md b/content/influxdb/v2.6/install.md new file mode 100644 index 000000000..cc2278459 --- /dev/null +++ b/content/influxdb/v2.6/install.md @@ -0,0 +1,859 @@ +--- +title: Install InfluxDB +description: Download, install, and set up InfluxDB OSS.
+menu: influxdb_2_6 +weight: 2 +influxdb/v2.6/tags: [install] +related: +- /influxdb/v2.6/reference/cli/influx/auth/ +- /influxdb/v2.6/reference/cli/influx/config/ +- /influxdb/v2.6/reference/cli/influx/ +- /influxdb/v2.6/security/tokens/ +--- + +The InfluxDB {{< current-version >}} time series platform is purpose-built to collect, store, +process and visualize metrics and events. +Download, install, and set up InfluxDB OSS. + +{{< tabs-wrapper >}} +{{% tabs %}} +[macOS](#) +[Linux](#) +[Windows](#) +[Docker](#) +[Kubernetes](#) +[Raspberry Pi](#) +{{% /tabs %}} + + +{{% tab-content %}} +## Install InfluxDB v{{< current-version >}} + +Do one of the following: + +- [Use Homebrew](#use-homebrew) +- [Manually download and install](#manually-download-and-install) + +{{% note %}} +#### InfluxDB and the influx CLI are separate packages + +The InfluxDB server ([`influxd`](/influxdb/v2.6/reference/cli/influxd/)) and the +[`influx` CLI](/influxdb/v2.6/reference/cli/influx/) are packaged and +versioned separately. +For information about installing the `influx` CLI, see +[Install and use the influx CLI](/influxdb/v2.6/tools/influx-cli/). +{{% /note %}} + +### Use Homebrew + +We recommend using [Homebrew](https://brew.sh/) to install InfluxDB v{{< current-version >}} on macOS: + +```sh +brew update +brew install influxdb +``` + +{{% note %}} +Homebrew also installs `influxdb-cli` as a dependency. +For information about using the `influx` CLI, see the +[`influx` CLI reference documentation](/influxdb/v2.6/reference/cli/influx/). +{{% /note %}} + +### Manually download and install + +To download the InfluxDB v{{< current-version >}} binaries for macOS directly, +do the following: + +1. **Download the InfluxDB package.** + + InfluxDB v{{< current-version >}} (macOS) + + +2. **Unpackage the InfluxDB binary.** + + Do one of the following: + + - Double-click the downloaded package file in **Finder**. 
+ - Run the following command in a macOS command prompt application such as + **Terminal** or **[iTerm2](https://www.iterm2.com/)**: + + ```sh + # Unpackage contents to the current working directory + tar zxvf ~/Downloads/influxdb2-{{< latest-patch >}}-darwin-amd64.tar.gz + ``` + +3. **(Optional) Place the binary in your `$PATH`** + + ```sh + # (Optional) Copy the influxd binary to your $PATH + sudo cp influxdb2-{{< latest-patch >}}-darwin-amd64/influxd /usr/local/bin/ + ``` + + If you do not move the `influxd` binary into your `$PATH`, prefix the executable + with `./` to run it in place. + +{{< expand-wrapper >}} +{{% expand "Recommended – Set appropriate directory permissions" %}} + +To prevent unwanted access to data, we recommend setting the permissions on the influxdb `data-dir` to not be world readable. For server installs, we also recommend setting a umask of 0027 to properly permission all newly created files. + +Example: + +```shell +> chmod 0750 ~/.influxdbv2 +``` + +{{% /expand %}} +{{% expand "Recommended – Verify the authenticity of downloaded binary" %}} + +For added security, use `gpg` to verify the signature of your download. +(Most operating systems include the `gpg` command by default. +If `gpg` is not available, see the [GnuPG homepage](https://gnupg.org/download/) for installation instructions.) + +1. Download and import InfluxData's public key: + + ``` + curl -s https://repos.influxdata.com/influxdb2.key | gpg --import - + ``` + +2. Download the signature file for the release by adding `.asc` to the download URL. +For example: + + ``` + wget https://dl.influxdata.com/influxdb/releases/influxdb2-{{< latest-patch >}}-darwin-amd64.tar.gz.asc + ``` + +3.
Verify the signature with `gpg --verify`: + + ``` + gpg --verify influxdb2-{{< latest-patch >}}-darwin-amd64.tar.gz.asc influxdb2-{{< latest-patch >}}-darwin-amd64.tar.gz + ``` + + The output from this command should include the following: + + ``` + gpg: Good signature from "InfluxData <support@influxdata.com>" [unknown] + ``` +{{% /expand %}} +{{< /expand-wrapper >}} + +{{% note %}} +Both InfluxDB 1.x and 2.x have associated `influxd` and `influx` binaries. +If InfluxDB 1.x binaries are already in your `$PATH`, run the {{< current-version >}} binaries in place +or rename them before putting them in your `$PATH`. +If you rename the binaries, all references to `influxd` and `influx` in this documentation refer to your renamed binaries. +{{% /note %}} + +#### Networking ports + +By default, InfluxDB uses TCP port `8086` for client-server communication over +the [InfluxDB HTTP API](/influxdb/v2.6/reference/api/). + +### Start and configure InfluxDB + +To start InfluxDB, run the `influxd` daemon: + +```bash +influxd +``` + +{{% note %}} +#### Run InfluxDB on macOS Catalina + +macOS Catalina requires downloaded binaries to be signed by registered Apple developers. +Currently, when you first attempt to run `influxd`, macOS will prevent it from running. +To manually authorize the `influxd` binary: + +1. Attempt to run `influxd`. +2. Open **System Preferences** and click **Security & Privacy**. +3. Under the **General** tab, there is a message about `influxd` being blocked. + Click **Open Anyway**. + +We are in the process of updating our build process to ensure released binaries are signed by InfluxData. +{{% /note %}} + +{{% warn %}} +#### "too many open files" errors + +After running `influxd`, you might see an error in the log output like the +following: + +```sh +too many open files +``` + +To resolve this error, follow the +[recommended steps](https://unix.stackexchange.com/a/221988/471569) to increase +file and process limits for your operating system version, then restart `influxd`.
+ +{{% /warn %}} + +To configure InfluxDB, see [InfluxDB configuration options](/influxdb/v2.6/reference/config-options/). +_See the [`influxd` documentation](/influxdb/v2.6/reference/cli/influxd/) for information about +available flags and options._ + +{{% note %}} +#### InfluxDB "phone home" + +By default, InfluxDB sends telemetry data back to InfluxData. +The [InfluxData telemetry](https://www.influxdata.com/telemetry) page provides +information about what data is collected and how it is used. + +To opt-out of sending telemetry data back to InfluxData, include the +`--reporting-disabled` flag when starting `influxd`. + +```bash +influxd --reporting-disabled +``` +{{% /note %}} + +{{% /tab-content %}} + + + +{{% tab-content %}} +## Download and install InfluxDB v{{< current-version >}} + +Do one of the following: + +- [Install InfluxDB as a service with systemd](#install-influxdb-as-a-service-with-systemd) +- [Manually download and install the influxd binary](#manually-download-and-install-the-influxd-binary) + +{{% note %}} +#### InfluxDB and the influx CLI are separate packages + +The InfluxDB server ([`influxd`](/influxdb/v2.6/reference/cli/influxd/)) and the +[`influx` CLI](/influxdb/v2.6/reference/cli/influx/) are packaged and +versioned separately. +For information about installing the `influx` CLI, see +[Install and use the influx CLI](/influxdb/v2.6/tools/influx-cli/). +{{% /note %}} + +### Install InfluxDB as a service with systemd + +1.
Download and install the appropriate `.deb` or `.rpm` file using a URL from the + [InfluxData downloads page](https://portal.influxdata.com/downloads/) + with the following commands: + + ```sh + # Ubuntu/Debian + wget https://dl.influxdata.com/influxdb/releases/influxdb2-{{< latest-patch >}}-xxx.deb + sudo dpkg -i influxdb2-{{< latest-patch >}}-xxx.deb + + # Red Hat/CentOS/Fedora + wget https://dl.influxdata.com/influxdb/releases/influxdb2-{{< latest-patch >}}-xxx.rpm + sudo yum localinstall influxdb2-{{< latest-patch >}}-xxx.rpm + ``` + _Use the exact filename of the downloaded `.deb` or `.rpm` package (for example, `influxdb2-{{< latest-patch >}}-amd64.rpm`)._ + +2. Start the InfluxDB service: + + ```sh + sudo service influxdb start + ``` + + Installing the InfluxDB package creates a service file at `/lib/systemd/system/influxdb.service` + to start InfluxDB as a background service on startup. + +3. Restart your system and verify that the service is running correctly: + + ``` + $ sudo service influxdb status + ● influxdb.service - InfluxDB is an open-source, distributed, time series database + Loaded: loaded (/lib/systemd/system/influxdb.service; enabled; vendor preset: enable> + Active: active (running) + ``` + +For information about where InfluxDB stores data on disk when running as a service, +see [File system layout](/influxdb/v2.6/reference/internals/file-system-layout/?t=Linux#installed-as-a-package). + +To customize your InfluxDB configuration, use +[command line flags (arguments)](#pass-arguments-to-systemd), environment variables, or an InfluxDB configuration file. +See InfluxDB [configuration options](/influxdb/v2.6/reference/config-options/) for more information. + +#### Pass arguments to systemd + +1. Add one or more lines like the following containing arguments for `influxd` to `/etc/default/influxdb2`: + + ```sh + ARG1="--http-bind-address :8087" + ARG2="" + ``` + +2.
Edit the `/lib/systemd/system/influxdb.service` file as follows: + + ```sh + ExecStart=/usr/bin/influxd $ARG1 $ARG2 + ``` + +### Manually download and install the influxd binary + +1. **Download the InfluxDB binary.** + + Download the InfluxDB binary [from your browser](#download-from-your-browser) + or [from the command line](#download-from-the-command-line). + + #### Download from your browser + + InfluxDB v{{< current-version >}} (amd64) + InfluxDB v{{< current-version >}} (arm) + + #### Download from the command line + + ```sh + # amd64 + wget https://dl.influxdata.com/influxdb/releases/influxdb2-{{< latest-patch >}}-linux-amd64.tar.gz + + # arm + wget https://dl.influxdata.com/influxdb/releases/influxdb2-{{< latest-patch >}}-linux-arm64.tar.gz + ``` + +2. **Extract the downloaded binary.** + + _**Note:** The following commands are examples. Adjust the filenames, paths, and utilities if necessary._ + + ```sh + # amd64 + tar xvzf path/to/influxdb2-{{< latest-patch >}}-linux-amd64.tar.gz + + # arm + tar xvzf path/to/influxdb2-{{< latest-patch >}}-linux-arm64.tar.gz + ``` + +3. **(Optional) Place the extracted `influxd` executable binary in your system `$PATH`.** + + ```sh + # amd64 + sudo cp influxdb2-{{< latest-patch >}}-linux-amd64/influxd /usr/local/bin/ + + # arm + sudo cp influxdb2-{{< latest-patch >}}-linux-arm64/influxd /usr/local/bin/ + ``` + + If you do not move the `influxd` binary into your `$PATH`, prefix the executable + with `./` to run it in place. + +{{< expand-wrapper >}} +{{% expand "Recommended – Set appropriate directory permissions" %}} + +To prevent unwanted access to data, we recommend setting the permissions on the influxdb `data-dir` to not be world readable. For server installs, we also recommend setting a umask of 0027 to properly permission all newly created files. This can be done via the `UMask` directive in a systemd unit file, or by running influxdb under a specific user with the umask properly set.
+ +Example: + +```shell +> chmod 0750 ~/.influxdbv2 +``` + +{{% /expand %}} +{{% expand "Recommended – Verify the authenticity of downloaded binary" %}} + +For added security, use `gpg` to verify the signature of your download. +(Most operating systems include the `gpg` command by default. +If `gpg` is not available, see the [GnuPG homepage](https://gnupg.org/download/) for installation instructions.) + +1. Download and import InfluxData's public key: + + ``` + curl -s https://repos.influxdata.com/influxdb2.key | gpg --import - + ``` + +2. Download the signature file for the release by adding `.asc` to the download URL. + For example: + + ``` + wget https://dl.influxdata.com/influxdb/releases/influxdb2-{{< latest-patch >}}-linux-amd64.tar.gz.asc + ``` + +3. Verify the signature with `gpg --verify`: + + ``` + gpg --verify influxdb2-{{< latest-patch >}}-linux-amd64.tar.gz.asc influxdb2-{{< latest-patch >}}-linux-amd64.tar.gz + ``` + + The output from this command should include the following: + + ``` + gpg: Good signature from "InfluxData <support@influxdata.com>" [unknown] + ``` +{{% /expand %}} +{{< /expand-wrapper >}} + +## Start InfluxDB + +If InfluxDB was installed as a systemd service, systemd manages the `influxd` daemon and no further action is required. +If the binary was manually downloaded and added to the system `$PATH`, start the `influxd` daemon with the following command: + +```bash +influxd +``` + +_See the [`influxd` documentation](/influxdb/v2.6/reference/cli/influxd) for information about +available flags and options._ + +### Networking ports + +By default, InfluxDB uses TCP port `8086` for client-server communication over +the [InfluxDB HTTP API](/influxdb/v2.6/reference/api/). + +{{% note %}} +#### InfluxDB "phone home" + +By default, InfluxDB sends telemetry data back to InfluxData. +The [InfluxData telemetry](https://www.influxdata.com/telemetry) page provides +information about what data is collected and how it is used.
+ +To opt-out of sending telemetry data back to InfluxData, include the +`--reporting-disabled` flag when starting `influxd`. + +```bash +influxd --reporting-disabled +``` +{{% /note %}} + +{{% /tab-content %}} + + + +{{% tab-content %}} +{{% note %}} +#### System requirements +- Windows 10 +- 64-bit AMD architecture +- [Powershell](https://docs.microsoft.com/powershell/) or + [Windows Subsystem for Linux (WSL)](https://docs.microsoft.com/en-us/windows/wsl/) + +#### Command line examples +Use **Powershell** or **WSL** to execute `influx` and `influxd` commands. +The command line examples in this documentation use `influx` and `influxd` as if +installed on the system `PATH`. +If these binaries are not installed on your `PATH`, replace `influx` and `influxd` +in the provided examples with `./influx` and `./influxd` respectively. +{{% /note %}} + +## Download and install InfluxDB v{{< current-version >}} + +{{% note %}} +#### InfluxDB and the influx CLI are separate packages +The InfluxDB server ([`influxd`](/influxdb/v2.6/reference/cli/influxd/)) and the +[`influx` CLI](/influxdb/v2.6/reference/cli/influx/) are packaged and +versioned separately. +For information about installing the `influx` CLI, see +[Install and use the influx CLI](/influxdb/v2.6/tools/influx-cli/). +{{% /note %}} + +InfluxDB v{{< current-version >}} (Windows) + +Expand the downloaded archive into `C:\Program Files\InfluxData\` and rename the files if desired. + +```powershell +> Expand-Archive .\influxdb2-{{< latest-patch >}}-windows-amd64.zip -DestinationPath 'C:\Program Files\InfluxData\' +> mv 'C:\Program Files\InfluxData\influxdb2-{{< latest-patch >}}-windows-amd64' 'C:\Program Files\InfluxData\influxdb' +``` + +{{< expand-wrapper >}} +{{% expand "Recommended – Set appropriate directory permissions" %}} + +To prevent unwanted access to data, we recommend setting the permissions on the influxdb `data-dir` to not be world readable. 
+ +Example: + +```powershell +> $acl = Get-Acl "C:\Users\<username>\.influxdbv2" +> $accessRule = New-Object System.Security.AccessControl.FileSystemAccessRule("everyone","Read","Deny") +> $acl.SetAccessRule($accessRule) +> $acl | Set-Acl "C:\Users\<username>\.influxdbv2" +``` + +{{% /expand %}} +{{< /expand-wrapper >}} + + +## Networking ports +By default, InfluxDB uses TCP port `8086` for client-server communication over +the [InfluxDB HTTP API](/influxdb/v2.6/reference/api/). + +## Start InfluxDB +In **Powershell**, navigate into `C:\Program Files\InfluxData\influxdb` and start +InfluxDB by running the `influxd` daemon: + +```powershell +> cd -Path 'C:\Program Files\InfluxData\influxdb' +> ./influxd +``` + +_See the [`influxd` documentation](/influxdb/v2.6/reference/cli/influxd) for information about +available flags and options._ + +{{% note %}} +#### Grant network access +When starting InfluxDB for the first time, a **Windows Defender** dialog appears with +the following message: + +> Windows Defender Firewall has blocked some features of this app. + +1. Select **Private networks, such as my home or work network**. +2. Click **Allow access**. +{{% /note %}} + +{{% note %}} +#### InfluxDB "phone home" + +By default, InfluxDB sends telemetry data back to InfluxData. +The [InfluxData telemetry](https://www.influxdata.com/telemetry) page provides +information about what data is collected and how it is used. + +To opt-out of sending telemetry data back to InfluxData, include the +`--reporting-disabled` flag when starting `influxd`. + +```bash +./influxd --reporting-disabled +``` +{{% /note %}} + +{{% /tab-content %}} + + + +{{% tab-content %}} +## Download and run InfluxDB v{{< current-version >}} + +Use `docker run` to download and run the InfluxDB v{{< current-version >}} Docker image. +Expose port `8086`, which InfluxDB uses for client-server communication over +the [InfluxDB HTTP API](/influxdb/v2.6/reference/api/).
+ +```sh +docker run --name influxdb -p 8086:8086 influxdb:{{< latest-patch >}} +``` +_To run InfluxDB in [detached mode](https://docs.docker.com/engine/reference/run/#detached-vs-foreground), include the `-d` flag in the `docker run` command._ + +## Persist data outside the InfluxDB container + +1. Create a new directory to store your data in and navigate into the directory. + + ```sh + mkdir path/to/influxdb-docker-data-volume && cd $_ + ``` +2. From within your new directory, run the InfluxDB Docker container with the `--volume` flag to + persist data from `/var/lib/influxdb2` _inside_ the container to the current working directory in + the host file system. + + ```sh + docker run \ + --name influxdb \ + -p 8086:8086 \ + --volume $PWD:/var/lib/influxdb2 \ + influxdb:{{< latest-patch >}} + ``` + +## Configure InfluxDB with Docker + +To mount an InfluxDB configuration file and use it from within Docker: + +1. [Persist data outside the InfluxDB container](#persist-data-outside-the-influxdb-container). + +2. Use the following command to generate the default configuration file on the host file system: + + ```sh + docker run \ + --rm influxdb:{{< latest-patch >}} \ + influx server-config > config.yml + ``` + +3. Modify the default configuration, which will now be available under `$PWD`. + +4. Start the InfluxDB container: + + ```sh + docker run -p 8086:8086 \ + -v $PWD/config.yml:/etc/influxdb2/config.yml \ + influxdb:{{< latest-patch >}} + ``` + +For more information about configuring InfluxDB, see [InfluxDB configuration options](/influxdb/v2.6/reference/config-options/). + +## Open a shell in the InfluxDB container + +To use the `influx` command line interface, open a shell in the `influxdb` Docker container: + +```sh +docker exec -it influxdb /bin/bash +``` + +{{% note %}} +#### InfluxDB "phone home" + +By default, InfluxDB sends telemetry data back to InfluxData.
+The [InfluxData telemetry](https://www.influxdata.com/telemetry) page provides +information about what data is collected and how it is used. + +To opt-out of sending telemetry data back to InfluxData, include the +`--reporting-disabled` flag when starting the InfluxDB container. + +```sh +docker run -p 8086:8086 influxdb:{{< latest-patch >}} --reporting-disabled +``` +{{% /note %}} + +{{% /tab-content %}} + + + +{{% tab-content %}} + +## Install InfluxDB in a Kubernetes cluster + +The instructions below use **minikube** or **kind**, but the steps should be similar in any Kubernetes cluster. +InfluxData also makes [Helm charts](https://github.com/influxdata/helm-charts) available. + +1. Install [minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/) or + [kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installation). + +2. Start a local cluster: + + ```sh + # with minikube + minikube start + + # with kind + kind create cluster + ``` + +3. Apply the [sample InfluxDB configuration](https://github.com/influxdata/docs-v2/blob/master/static/downloads/influxdb-k8-minikube.yaml) by running: + + ```sh + kubectl apply -f https://raw.githubusercontent.com/influxdata/docs-v2/master/static/downloads/influxdb-k8-minikube.yaml + ``` + + This creates an `influxdb` Namespace, Service, and StatefulSet. + A PersistentVolumeClaim is also created to store data written to InfluxDB. + + **Important**: Always inspect YAML manifests before running `kubectl apply -f <url>`! + +4. Ensure the Pod is running: + + ```sh + kubectl get pods -n influxdb + ``` + +5. Ensure the Service is available: + + ```sh + kubectl describe service -n influxdb influxdb + ``` + + You should see an IP address after `Endpoints` in the command's output. + +6.
Forward port 8086 from inside the cluster to localhost: + + ```sh + kubectl port-forward -n influxdb service/influxdb 8086:8086 + ``` + +{{% /tab-content %}} + + +{{% tab-content %}} + +## Install InfluxDB v{{< current-version >}} on Raspberry Pi + +{{% note %}} +#### Requirements + +To run InfluxDB on Raspberry Pi, you need: + +- a Raspberry Pi 4+ or 400 +- a 64-bit operating system. + We recommend installing a [64-bit version of Ubuntu](https://ubuntu.com/download/raspberry-pi) + Desktop or Server compatible with 64-bit Raspberry Pi. +{{% /note %}} + +### Install Linux binaries + +Follow the [Linux installation instructions](/influxdb/v2.6/install/?t=Linux) +to install InfluxDB on a Raspberry Pi. + +### Monitor your Raspberry Pi +Use the [InfluxDB Raspberry Pi template](/influxdb/cloud/monitor-alert/templates/infrastructure/raspberry-pi/) +to easily configure collecting and visualizing system metrics for the Raspberry Pi. + +#### Monitor 32-bit Raspberry Pi systems +If you have a 32-bit Raspberry Pi, [use Telegraf](/{{< latest "telegraf" >}}/) +to collect and send data to: + +- [InfluxDB OSS](/influxdb/v2.6/), running on a 64-bit system +- InfluxDB Cloud with a [**Free Tier**](/influxdb/cloud/account-management/pricing-plans/#free-plan) account +- InfluxDB Cloud with a paid [**Usage-Based**](/influxdb/cloud/account-management/pricing-plans/#usage-based-plan) account with relaxed resource restrictions. + +{{% /tab-content %}} + + +{{< /tabs-wrapper >}} + +## Download and install the influx CLI +The [`influx` CLI](/influxdb/v2.6/reference/cli/influx/) lets you manage InfluxDB +from your command line. + +Download and install the influx CLI + +## Set up InfluxDB + +The initial setup process for an InfluxDB instance creates the following: +- An organization with the name you provide. +- A primary bucket with the name you provide.
+- An admin [authorization](/influxdb/v2.6/security/tokens/) with the following properties: + - The username and password that you provide. + - An API token (_[operator token](/influxdb/v2.6/security/tokens/#operator-token)_). + - Read-write permissions for all resources in the InfluxDB instance. + +To run an interactive setup that prompts you for the required information, +use the InfluxDB user interface (UI) or the `influx` command line interface (CLI). + +To automate the setup--for example, with a script that you write-- +use the `influx` command line interface (CLI) or the InfluxDB `/api/v2` API. + +{{< tabs-wrapper >}} +{{% tabs %}} +[Set up with the UI](#) +[Set up with the CLI](#) +{{% /tabs %}} + + +{{% tab-content %}} +### Set up InfluxDB through the UI + +1. With InfluxDB running, visit [http://localhost:8086](http://localhost:8086). +2. Click **Get Started** + +#### Set up your initial user + +1. Enter a **Username** for your initial user. +2. Enter a **Password** and **Confirm Password** for your user. +3. Enter your initial **Organization Name**. +4. Enter your initial **Bucket Name**. +5. Click **Continue**. + +Your InfluxDB instance is now initialized. + +### (Optional) Set up and use the influx CLI + +To avoid having to pass your InfluxDB +API token with each `influx` command, set up a configuration profile to store your credentials--for example, +enter the following code in your terminal: + + ```sh + # Set up a configuration profile + influx config create -n default \ + -u http://localhost:8086 \ + -o INFLUX_ORG \ + -t INFLUX_API_TOKEN \ + -a + ``` + +Replace the following: + +- **`INFLUX_ORG`**: [your organization name](/influxdb/v2.6/organizations/view-orgs/). +- **`INFLUX_API_TOKEN`**: [your API token](/influxdb/v2.6/security/tokens/view-tokens/). + +This configures a new profile named `default` and makes the profile active +so your `influx` CLI commands run against the specified InfluxDB instance. 
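For reference, `influx config` stores profiles in the `~/.influxdbv2/configs` file. Assuming the placeholder values from the command above, the stored profile is a TOML entry that might look like the following sketch (the exact layout can vary by CLI version):

```toml
# Sketch of ~/.influxdbv2/configs after running `influx config create`.
# "INFLUX_ORG" and "INFLUX_API_TOKEN" are the placeholders from the example,
# not real values.
[default]
  url = "http://localhost:8086"
  token = "INFLUX_API_TOKEN"
  org = "INFLUX_ORG"
  active = true
```

You can inspect the stored profiles at any time with `influx config list`.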
+For more detail about configuration profiles, see [`influx config`](/influxdb/v2.6/reference/cli/influx/config/). + +Once you have the `default` configuration profile, you're ready to [create All-Access tokens](#create-all-access-tokens) +or get started [collecting and writing data](/influxdb/v2.6/write-data). + +{{% /tab-content %}} + + + +{{% tab-content %}} +### Set up InfluxDB through the influx CLI + +Use the `influx setup` CLI command in interactive or non-interactive (_headless_) mode to initialize +your InfluxDB instance. + +Do one of the following: + +- [Run `influx setup` without user interaction](#run-influx-setup-without-user-interaction) +- [Run `influx setup` with user prompts](#run-influx-setup-with-user-prompts) + +#### Run `influx setup` without user interaction + +To run the InfluxDB setup process with your automation scripts, pass [flags](/influxdb/v2.6/reference/cli/influx/setup/#flags) +with the required information to the `influx setup` command. +Pass the `-f, --force` flag to bypass screen prompts. 
+ +The following example command shows how to set up InfluxDB in non-interactive +mode with an initial admin user, +_[operator token](/influxdb/v2.6/security/tokens/#operator-token)_, +and bucket: + +```sh +influx setup -u USERNAME -p PASSWORD -t TOKEN -o ORGANIZATION_NAME -b BUCKET_NAME -f +``` + +The output is the following: + +```sh +User Organization Bucket +USERNAME ORGANIZATION_NAME BUCKET_NAME +``` + +If you run `influx setup` without the `-t, --token` flag, then InfluxDB +automatically generates an API token for the initial authorization--for example, +the following setup command creates the initial authorization with an +auto-generated API token: + +```sh +influx setup -u USERNAME -p PASSWORD -o ORGANIZATION_NAME -b BUCKET_NAME -f +``` + +Once setup completes, InfluxDB is initialized with the [authorization](/influxdb/v2.6/security/tokens/), [user](/influxdb/v2.6/reference/glossary/#user), [organization](/influxdb/v2.6/reference/glossary/#organization), and [bucket](/influxdb/v2.6/reference/glossary/#bucket). + +InfluxDB creates a `default` configuration profile for you that provides your +InfluxDB URL, organization, and API token to `influx` CLI commands. +For more detail about configuration profiles, see [`influx config`](/influxdb/v2.6/reference/cli/influx/config/). + +Once you have the `default` configuration profile, you're ready to [create All-Access tokens](#create-all-access-tokens) +or get started [collecting and writing data](/influxdb/v2.6/write-data). + +#### Run `influx setup` with user prompts + +To run setup with prompts for the required information, enter the following +command in your terminal: + +```sh +influx setup +``` + +Complete the following steps as prompted by the CLI: + +1. Enter a **primary username**. +2. Enter a **password** for your user. +3. **Confirm your password** by entering it again. +4. Enter a name for your **primary organization**. +5. Enter a name for your **primary bucket**. +6. 
Enter a **retention period** for your primary bucket—valid units are + nanoseconds (`ns`), microseconds (`us` or `µs`), milliseconds (`ms`), + seconds (`s`), minutes (`m`), hours (`h`), days (`d`), and weeks (`w`). + Enter nothing for an infinite retention period. +7. Confirm the details for your primary user, organization, and bucket. + +Once setup completes, InfluxDB is initialized with the user, organization, bucket, +and _[operator token](/influxdb/v2.6/security/tokens/#operator-token)_. + +InfluxDB creates a `default` configuration profile for you that provides your +InfluxDB URL, organization, and API token to `influx` CLI commands. +For more detail about configuration profiles, see [`influx config`](/influxdb/v2.6/reference/cli/influx/config/). + +Once you have the `default` configuration profile, you're ready to [create All-Access tokens](#create-all-access-tokens) +or get started [collecting and writing data](/influxdb/v2.6/write-data). + +{{% /tab-content %}} + +{{< /tabs-wrapper >}} + +### Create All-Access tokens + +Because [Operator tokens](/influxdb/v2.6/security/tokens/#operator-token) +have full read and write access to all organizations in the database, +we recommend +[creating an All-Access token](/influxdb/v2.6/security/tokens/create-token/) +for each organization and using those tokens to manage InfluxDB. diff --git a/content/influxdb/v2.6/migrate-data/_index.md b/content/influxdb/v2.6/migrate-data/_index.md new file mode 100644 index 000000000..51364cc55 --- /dev/null +++ b/content/influxdb/v2.6/migrate-data/_index.md @@ -0,0 +1,15 @@ +--- +title: Migrate data to InfluxDB +description: > + Migrate data to InfluxDB from other InfluxDB instances including InfluxDB OSS + and InfluxDB Cloud. +menu: + influxdb_2_6: + name: Migrate data +weight: 9 +--- + +Migrate data to InfluxDB from other InfluxDB instances including InfluxDB OSS +and InfluxDB Cloud.
+ +{{< children >}} diff --git a/content/influxdb/v2.6/migrate-data/migrate-cloud-to-oss.md b/content/influxdb/v2.6/migrate-data/migrate-cloud-to-oss.md new file mode 100644 index 000000000..e3f683ea4 --- /dev/null +++ b/content/influxdb/v2.6/migrate-data/migrate-cloud-to-oss.md @@ -0,0 +1,372 @@ +--- +title: Migrate data from InfluxDB Cloud to InfluxDB OSS +description: > + To migrate data from InfluxDB Cloud to InfluxDB OSS, query the data from + InfluxDB Cloud in time-based batches and write the data to InfluxDB OSS. +menu: + influxdb_2_6: + name: Migrate from Cloud to OSS + parent: Migrate data +weight: 102 +--- + +To migrate data from InfluxDB Cloud to InfluxDB OSS, query the data +from InfluxDB Cloud and write the data to InfluxDB OSS. +Because full data migrations will likely exceed your organization's limits and +adjustable quotas, migrate your data in batches. + +The following guide provides instructions for setting up an InfluxDB OSS task +that queries data from an InfluxDB Cloud bucket in time-based batches and writes +each batch to an InfluxDB OSS bucket. + +{{% cloud %}} +All queries against data in InfluxDB Cloud are subject to your organization's +[rate limits and adjustable quotas](/influxdb/cloud/account-management/limits/). +{{% /cloud %}} + +- [Set up the migration](#set-up-the-migration) +- [Migration task](#migration-task) + - [Configure the migration](#configure-the-migration) + - [Migration Flux script](#migration-flux-script) + - [Configuration help](#configuration-help) +- [Monitor the migration progress](#monitor-the-migration-progress) +- [Troubleshoot migration task failures](#troubleshoot-migration-task-failures) + +## Set up the migration +1. [Install and set up InfluxDB OSS](/influxdb/{{< current-version-link >}}/install/). + +2. **In InfluxDB Cloud**, [create an API token](/influxdb/cloud/security/tokens/create-token/) + with **read access** to the bucket you want to migrate. + +3. **In InfluxDB OSS**: + 1. 
Add your **InfluxDB Cloud API token** as a secret using the key, + `INFLUXDB_CLOUD_TOKEN`. + _See [Add secrets](/influxdb/{{< current-version-link >}}/security/secrets/add/) for more information._ + 2. [Create a bucket](/influxdb/{{< current-version-link >}}/organizations/buckets/create-bucket/) + **to migrate data to**. + 3. [Create a bucket](/influxdb/{{< current-version-link >}}/organizations/buckets/create-bucket/) + **to store temporary migration metadata**. + 4. [Create a new task](/influxdb/{{< current-version-link >}}/process-data/manage-tasks/create-task/) + using the provided [migration task](#migration-task). + Update the necessary [migration configuration options](#configure-the-migration). + 5. _(Optional)_ Set up [migration monitoring](#monitor-the-migration-progress). + 6. Save the task. + + {{% note %}} +Newly-created tasks are enabled by default, so the data migration begins when you save the task. + {{% /note %}} + +**After the migration is complete**, each subsequent migration task execution +will fail with the following error: + +``` +error exhausting result iterator: error calling function "die" @41:9-41:86: +Batch range is beyond the migration range. Migration is complete. +``` + +## Migration task + +### Configure the migration +1. Specify how often you want the task to run using the `task.every` option. + _See [Determine your task interval](#determine-your-task-interval)._ + +2. Define the following properties in the `migration` + [record](/{{< latest "flux" >}}/data-types/composite/record/): + + ##### migration + - **start**: Earliest time to include in the migration. + _See [Determine your migration start time](#determine-your-migration-start-time)._ + - **stop**: Latest time to include in the migration. + - **batchInterval**: Duration of each time-based batch. + _See [Determine your batch interval](#determine-your-batch-interval)._ + - **batchBucket**: InfluxDB OSS bucket to store migration batch metadata in. 
+ - **sourceHost**: [InfluxDB Cloud region URL](/influxdb/cloud/reference/regions) + to migrate data from. + - **sourceOrg**: InfluxDB Cloud organization to migrate data from. + - **sourceToken**: InfluxDB Cloud API token. To keep the API token secure, store + it as a secret in InfluxDB OSS. + - **sourceBucket**: InfluxDB Cloud bucket to migrate data from. + - **destinationBucket**: InfluxDB OSS bucket to migrate data to. + +### Migration Flux script + +```js +import "array" +import "experimental" +import "influxdata/influxdb/secrets" + +// Configure the task +option task = {every: 5m, name: "Migrate data from InfluxDB Cloud"} + +// Configure the migration +migration = { + start: 2022-01-01T00:00:00Z, + stop: 2022-02-01T00:00:00Z, + batchInterval: 1h, + batchBucket: "migration", + sourceHost: "https://cloud2.influxdata.com", + sourceOrg: "example-cloud-org", + sourceToken: secrets.get(key: "INFLUXDB_CLOUD_TOKEN"), + sourceBucket: "example-cloud-bucket", + destinationBucket: "example-oss-bucket", +} + +// batchRange dynamically returns a record with start and stop properties for +// the current batch. It queries migration metadata stored in the +// `migration.batchBucket` to determine the stop time of the previous batch. +// It uses the previous stop time as the new start time for the current batch +// and adds the `migration.batchInterval` to determine the current batch stop time. 
+batchRange = () => { + _lastBatchStop = + (from(bucket: migration.batchBucket) + |> range(start: migration.start) + |> filter(fn: (r) => r._field == "batch_stop") + |> filter(fn: (r) => r.srcOrg == migration.sourceOrg) + |> filter(fn: (r) => r.srcBucket == migration.sourceBucket) + |> last() + |> findRecord(fn: (key) => true, idx: 0))._value + _batchStart = + if exists _lastBatchStop then + time(v: _lastBatchStop) + else + migration.start + + return {start: _batchStart, stop: experimental.addDuration(d: migration.batchInterval, to: _batchStart)} +} + +// Define a static record with batch start and stop time properties +batch = {start: batchRange().start, stop: batchRange().stop} + +// Check to see if the current batch start time is beyond the migration.stop +// time and exit with an error if it is. +finished = + if batch.start >= migration.stop then + die(msg: "Batch range is beyond the migration range. Migration is complete.") + else + "Migration in progress" + +// Query all data from the specified source bucket within the batch-defined time +// range. To limit migrated data by measurement, tag, or field, add a `filter()` +// function after `range()` with the appropriate predicate fn. +data = () => + from(host: migration.sourceHost, org: migration.sourceOrg, token: migration.sourceToken, bucket: migration.sourceBucket) + |> range(start: batch.start, stop: batch.stop) + +// rowCount is a stream of tables that contains the number of rows returned in +// the batch and is used to generate batch metadata. +rowCount = + data() + |> group(columns: ["_start", "_stop"]) + |> count() + +// emptyRange is a stream of tables that acts as filler data if the batch is +// empty. This is used to generate batch metadata for empty batches and is +// necessary to correctly increment the time range for the next batch. +emptyRange = array.from(rows: [{_start: batch.start, _stop: batch.stop, _value: 0}]) + +// metadata returns a stream of tables representing batch metadata. 
+metadata = () => { + _input = + if exists (rowCount |> findRecord(fn: (key) => true, idx: 0))._value then + rowCount + else + emptyRange + + return + _input + |> map( + fn: (r) => + ({ + _time: now(), + _measurement: "batches", + srcOrg: migration.sourceOrg, + srcBucket: migration.sourceBucket, + dstBucket: migration.destinationBucket, + batch_start: string(v: batch.start), + batch_stop: string(v: batch.stop), + rows: r._value, + percent_complete: + float(v: int(v: r._stop) - int(v: migration.start)) / float( + v: int(v: migration.stop) - int(v: migration.start), + ) * 100.0, + }), + ) + |> group(columns: ["_measurement", "srcOrg", "srcBucket", "dstBucket"]) +} + +// Write the queried data to the specified InfluxDB OSS bucket. +data() + |> to(bucket: migration.destinationBucket) + +// Generate and store batch metadata in the migration.batchBucket. +metadata() + |> experimental.to(bucket: migration.batchBucket) +``` + +### Configuration help + +{{< expand-wrapper >}} + + +{{% expand "Determine your task interval" %}} + +The task interval determines how often the migration task runs and is defined by +the [`task.every` option](/influxdb/v2.6/process-data/task-options/#every). +InfluxDB Cloud rate limits and quotas reset every five minutes, so +**we recommend a `5m` task interval**. + +You can do shorter task intervals and execute the migration task more often, +but you need to balance the task interval with your [batch interval](#determine-your-batch-interval) +and the amount of data returned in each batch. +If the total amount of data queried in each five-minute interval exceeds your +InfluxDB Cloud organization's [rate limits and quotas](/influxdb/cloud/account-management/limits/), +the batch will fail until rate limits and quotas reset. + +{{% /expand %}} + + + +{{% expand "Determine your migration start time" %}} + +The `migration.start` time should be at or near the same time as the earliest +data point you want to migrate. 
+
+All migration batches are determined using the `migration.start` time and
+`migration.batchInterval` settings.
+
+To find the time of the earliest point in your bucket, run the following query:
+
+```js
+from(bucket: "example-cloud-bucket")
+    |> range(start: 0)
+    |> group()
+    |> first()
+    |> keep(columns: ["_time"])
+```
+
+{{% /expand %}}
+
+
+
+{{% expand "Determine your batch interval" %}}
+
+The `migration.batchInterval` setting controls the time range queried by each batch.
+The "density" of the data in your InfluxDB Cloud bucket and your InfluxDB Cloud
+organization's [rate limits and quotas](/influxdb/cloud/account-management/limits/)
+determine what your batch interval should be.
+
+For example, if you're migrating data collected from hundreds of sensors with
+points recorded every second, your batch interval will need to be shorter.
+If you're migrating data collected from five sensors with points recorded every
+minute, your batch interval can be longer.
+It all depends on how much data gets returned in a single batch.
+
+If points occur at regular intervals, you can get a fairly accurate estimate of
+how much data will be returned in a given time range by using the `/api/v2/query`
+endpoint to execute a query for the time range duration and then measuring the
+size of the response body.
+
+The following `curl` command queries an InfluxDB Cloud bucket for the last day
+and returns the size of the response body in bytes.
+You can customize the range duration to match your specific use case and
+data density.
+ +```sh +INFLUXDB_CLOUD_ORG= +INFLUXDB_CLOUD_TOKEN= +INFLUXDB_CLOUD_BUCKET= + +curl -so /dev/null --request POST \ + https://cloud2.influxdata.com/api/v2/query?org=$INFLUXDB_CLOUD_ORG \ + --header "Authorization: Token $INFLUXDB_CLOUD_TOKEN" \ + --header "Accept: application/csv" \ + --header "Content-type: application/vnd.flux" \ + --data "from(bucket:\"$INFLUXDB_CLOUD_BUCKET\") |> range(start: -1d, stop: now())" \ + --write-out '%{size_download}' +``` + +{{% note %}} +You can also use other HTTP API tools like [Postman](https://www.postman.com/) +that provide the size of the response body. +{{% /note %}} + +Divide the output of this command by 1000000 to convert it to megabytes (MB). + +``` +batchInterval = (read-rate-limit-mb / response-body-size-mb) * range-duration +``` + +For example, if the response body of your query that returns data from one day +is 8 MB and you're using the InfluxDB Cloud Free Plan with a read limit of +300 MB per five minutes: + +```js +batchInterval = (300 / 8) * 1d +// batchInterval = 37d +``` + +You could query 37 days of data before hitting your read limit, but this is just an estimate. +We recommend setting the `batchInterval` slightly lower than the calculated interval +to allow for variation between batches. +So in this example, **it would be best to set your `batchInterval` to `35d`**. + +##### Important things to note +- This assumes no other queries are running in your InfluxDB Cloud organization. +- You should also consider your network speeds and whether a batch can be fully + downloaded within the [task interval](#determine-your-task-interval). + +{{% /expand %}} + +{{< /expand-wrapper >}} + +## Monitor the migration progress +The [InfluxDB Cloud Migration Community template](https://github.com/influxdata/community-templates/tree/master/influxdb-cloud-oss-migration/) +installs the migration task outlined in this guide as well as a dashboard +for monitoring running data migrations. 
+
+{{< img-hd src="/img/influxdb/2-1-migration-dashboard.png" alt="InfluxDB Cloud migration dashboard" />}}
+
+[Install the InfluxDB Cloud Migration template](https://github.com/influxdata/community-templates/tree/master/influxdb-cloud-oss-migration/)
+
+## Troubleshoot migration task failures
+If the migration task fails, [view your task logs](/influxdb/v2.6/process-data/manage-tasks/task-run-history/)
+to identify the specific error. Below are common causes of migration task failures.
+
+- [Exceeded rate limits](#exceeded-rate-limits)
+- [Invalid API token](#invalid-api-token)
+- [Query timeout](#query-timeout)
+
+### Exceeded rate limits
+If your data migration causes you to exceed your InfluxDB Cloud organization's
+limits and quotas, the task will return an error similar to:
+
+```
+too many requests
+```
+
+**Possible solutions**:
+- Update the `migration.batchInterval` setting in your migration task to use
+  a smaller interval. Each batch will then query less data.
+
+### Invalid API token
+If the API token stored in the `INFLUXDB_CLOUD_TOKEN` secret doesn't have read access to
+your InfluxDB Cloud bucket, the task will return an error similar to:
+
+```
+unauthorized access
+```
+
+**Possible solutions**:
+- Ensure the API token has read access to your InfluxDB Cloud bucket.
+- Generate a new InfluxDB Cloud API token with read access to the bucket you
+  want to migrate. Then, update the `INFLUXDB_CLOUD_TOKEN` secret in your
+  InfluxDB OSS instance with the new token.
+
+### Query timeout
+The InfluxDB Cloud query timeout is 90 seconds. If it takes longer than this to
+return the data from the batch interval, the query will time out and the
+task will fail.
+
+**Possible solutions**:
+- Update the `migration.batchInterval` setting in your migration task to use
+  a smaller interval. Each batch will then query less data and take less time
+  to return results.
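Both the rate-limit and timeout failures above are typically resolved by shrinking `migration.batchInterval`. When picking a new interval, you can redo the estimate from [Determine your batch interval](#determine-your-batch-interval) as simple shell arithmetic. The limit and sample-size values below are placeholder assumptions, not real measurements; substitute your own plan limit and measured response size:

```sh
# Placeholder assumptions (substitute your own numbers):
# - read limit: 300 MB per 5-minute window (Free Plan)
# - a sample 1-day query returned ~8 MB of data
READ_LIMIT_MB=300
SAMPLE_MB=8
RANGE_DAYS=1

# Integer division rounds down, which conveniently leaves some headroom.
BATCH_DAYS=$(( READ_LIMIT_MB * RANGE_DAYS / SAMPLE_MB ))
echo "batchInterval estimate: ${BATCH_DAYS}d"
```

As the guide recommends, set `migration.batchInterval` a little lower than the printed estimate (for example, `35d` instead of `37d`) to allow for variation between batches.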
diff --git a/content/influxdb/v2.6/migrate-data/migrate-oss.md b/content/influxdb/v2.6/migrate-data/migrate-oss.md
new file mode 100644
index 000000000..9721c9b51
--- /dev/null
+++ b/content/influxdb/v2.6/migrate-data/migrate-oss.md
@@ -0,0 +1,64 @@
+---
+title: Migrate data from InfluxDB OSS to other InfluxDB instances
+description: >
+  To migrate data from an InfluxDB OSS bucket to another InfluxDB OSS or InfluxDB
+  Cloud bucket, export your data as line protocol and write it to your other
+  InfluxDB bucket.
+menu:
+  influxdb_2_6:
+    name: Migrate data from OSS
+    parent: Migrate data
+weight: 101
+---
+
+To migrate data from an InfluxDB OSS bucket to another InfluxDB OSS or InfluxDB
+Cloud bucket, export your data as line protocol and write it to your other
+InfluxDB bucket.
+
+{{% cloud %}}
+#### InfluxDB Cloud write limits
+If migrating data from InfluxDB OSS to InfluxDB Cloud, you are subject to your
+[InfluxDB Cloud organization's rate limits and adjustable quotas](/influxdb/cloud/account-management/limits/).
+Consider exporting your data in time-based batches to limit the file size
+of exported line protocol to match your InfluxDB Cloud organization's limits.
+{{% /cloud %}}
+
+1. [Find the InfluxDB OSS bucket ID](/influxdb/{{< current-version-link >}}/organizations/buckets/view-buckets/)
+   that contains data you want to migrate.
+2. Use the `influxd inspect export-lp` command to export data in your bucket as
+   [line protocol](/influxdb/v2.6/reference/syntax/line-protocol/).
+   Provide the following:
+
+   - **bucket ID**: ({{< req >}}) ID of the bucket to migrate.
+   - **engine path**: ({{< req >}}) Path to the TSM storage files on disk.
+     The default engine path [depends on your operating system](/influxdb/{{< current-version-link >}}/reference/internals/file-system-layout/#file-system-layout).
+     If using a [custom engine path](/influxdb/{{< current-version-link >}}/reference/config-options/#engine-path),
+     provide your custom path.
+   - **output path**: ({{< req >}}) File path to output line protocol to.
+   - **start time**: Earliest time to export.
+   - **end time**: Latest time to export.
+   - **measurement**: Export a specific measurement. By default, the command
+     exports all measurements.
+   - **compression**: ({{< req text="Recommended" color="magenta" >}})
+     Use Gzip compression to compress the output line protocol file.
+
+   ```sh
+   influxd inspect export-lp \
+     --bucket-id 12ab34cd56ef \
+     --engine-path ~/.influxdbv2/engine \
+     --output-path path/to/export.lp \
+     --start 2022-01-01T00:00:00Z \
+     --end 2022-01-31T23:59:59Z \
+     --compress
+   ```
+
+3. Write the exported line protocol to your InfluxDB OSS or InfluxDB Cloud instance.
+
+   Do any of the following:
+
+   - Write line protocol in the **InfluxDB UI**:
+     - [InfluxDB Cloud UI](/influxdb/cloud/write-data/no-code/load-data/#load-csv-or-line-protocol-in-ui)
+     - [InfluxDB OSS {{< current-version >}} UI](/influxdb/{{< current-version-link >}}/write-data/no-code/load-data/#load-csv-or-line-protocol-in-ui)
+   - [Write line protocol using the `influx write` command](/influxdb/{{< current-version-link >}}/reference/cli/influx/write/)
+   - [Write line protocol using the InfluxDB API](/influxdb/{{< current-version-link >}}/write-data/developer-tools/api/)
+   - [Bulk ingest data (InfluxDB Cloud)](/influxdb/cloud/write-data/bulk-ingest-cloud/)
diff --git a/content/influxdb/v2.6/monitor-alert/_index.md b/content/influxdb/v2.6/monitor-alert/_index.md
new file mode 100644
index 000000000..7e709cfaf
--- /dev/null
+++ b/content/influxdb/v2.6/monitor-alert/_index.md
@@ -0,0 +1,38 @@
+---
+title: Monitor data and send alerts
+seotitle: Monitor data and send alerts
+description: >
+  Monitor your time series data and send alerts by creating checks, notification
+  rules, and notification endpoints. Or use community templates to monitor supported environments.
+menu:
+  influxdb_2_6:
+    name: Monitor & alert
+weight: 7
+influxdb/v2.6/tags: [monitor, alert, checks, notification, endpoints]
+---
+
+Monitor your time series data and send alerts by creating checks, notification
+rules, and notification endpoints. Or use [community templates to monitor](/influxdb/v2.6/monitor-alert/templates/) supported environments.
+
+## Overview
+
+1. A [check](/influxdb/v2.6/reference/glossary/#check) in InfluxDB queries data and assigns a status with a `_level` based on specific conditions.
+2. InfluxDB stores the output of a check in the `statuses` measurement in the `_monitoring` system bucket.
+3. [Notification rules](/influxdb/v2.6/reference/glossary/#notification-rule) check data in the `statuses`
+   measurement and, based on conditions set in the notification rule, send a message
+   to a [notification endpoint](/influxdb/v2.6/reference/glossary/#notification-endpoint).
+4. InfluxDB stores notifications in the `notifications` measurement in the `_monitoring` system bucket.
+
+## Create an alert
+
+To get started, do the following:
+
+1. [Create checks](/influxdb/v2.6/monitor-alert/checks/create/) to monitor data and assign a status.
+2. [Add notification endpoints](/influxdb/v2.6/monitor-alert/notification-endpoints/create/)
+   to send notifications to third parties.
+3. [Create notification rules](/influxdb/v2.6/monitor-alert/notification-rules/create) to check
+   statuses and send notifications to your notification endpoints.
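As a sketch of how the pieces above fit together, the following Flux query returns recent critical statuses from the `_monitoring` bucket. The `-1h` range and `crit` level are arbitrary example values; adjust them to your needs:

```js
from(bucket: "_monitoring")
    |> range(start: -1h)
    |> filter(fn: (r) => r._measurement == "statuses")
    |> filter(fn: (r) => r._level == "crit")
    |> keep(columns: ["_time", "_check_name", "_message"])
```

Notification rules run queries like this one on a schedule and forward matching rows to notification endpoints.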
+ +## Manage your monitoring and alerting pipeline + +{{< children >}} diff --git a/content/influxdb/v2.6/monitor-alert/checks/_index.md b/content/influxdb/v2.6/monitor-alert/checks/_index.md new file mode 100644 index 000000000..ac32c877e --- /dev/null +++ b/content/influxdb/v2.6/monitor-alert/checks/_index.md @@ -0,0 +1,19 @@ +--- +title: Manage checks +seotitle: Manage monitoring checks in InfluxDB +description: > + Checks in InfluxDB query data and apply a status or level to each data point based on specified conditions. +menu: + influxdb_2_6: + parent: Monitor & alert +weight: 101 +influxdb/v2.6/tags: [monitor, checks, notifications, alert] +related: + - /influxdb/v2.6/monitor-alert/notification-rules/ + - /influxdb/v2.6/monitor-alert/notification-endpoints/ +--- + +Checks in InfluxDB query data and apply a status or level to each data point based on specified conditions. +Learn how to create and manage checks: + +{{< children >}} diff --git a/content/influxdb/v2.6/monitor-alert/checks/create.md b/content/influxdb/v2.6/monitor-alert/checks/create.md new file mode 100644 index 000000000..29a94456a --- /dev/null +++ b/content/influxdb/v2.6/monitor-alert/checks/create.md @@ -0,0 +1,150 @@ +--- +title: Create checks +seotitle: Create monitoring checks in InfluxDB +description: > + Create a check in the InfluxDB UI. +menu: + influxdb_2_6: + parent: Manage checks +weight: 201 +related: + - /influxdb/v2.6/monitor-alert/notification-rules/ + - /influxdb/v2.6/monitor-alert/notification-endpoints/ +--- + +Create a check in the InfluxDB user interface (UI). +Checks query data and apply a status to each point based on specified conditions. + +## Parts of a check +A check consists of two parts – a query and check configuration. + +#### Check query +- Specifies the dataset to monitor. +- May include tags to narrow results. + +#### Check configuration +- Defines check properties, including the check interval and status message. 
+- Evaluates specified conditions and applies a status (if applicable) to each data point:
+  - `crit`
+  - `warn`
+  - `info`
+  - `ok`
+- Stores status in the `_level` column.
+
+## Check types
+There are two types of checks:
+
+- [threshold](#threshold-check)
+- [deadman](#deadman-check)
+
+#### Threshold check
+A threshold check assigns a status based on a value being above, below,
+inside, or outside of defined thresholds.
+
+#### Deadman check
+A deadman check assigns a status to data when a series or group doesn't report
+in a specified amount of time.
+
+## Create a check
+1. In the navigation menu on the left, select **Alerts > Alerts**.
+
+   {{< nav-icon "alerts" >}}
+
+2. Click **{{< caps >}}{{< icon "plus" >}} Create{{< /caps >}}** and select the [type of check](#check-types) to create.
+3. Click **Name this check** in the top left corner and provide a unique name for the check, and then do the following:
+   - [Configure the check query](#configure-the-check-query)
+   - [Configure the check](#configure-the-check)
+
+#### Configure the check query
+1. Select the **bucket**, **measurement**, **field** and **tag sets** to query.
+2. If creating a threshold check, select an **aggregate function**.
+   Aggregate functions aggregate data between the specified check intervals and
+   return a single value for the check to process.
+
+   In the **Aggregate functions** column, select an interval from the interval drop-down list
+   (for example, "Every 5 minutes") and an aggregate function from the list of functions.
+3. Click **{{< caps >}}Submit{{< /caps >}}** to run the query and preview the results.
+   To see the raw query results, click the **View Raw Data {{< icon "toggle" >}}** toggle.
+
+#### Configure the check
+1. Click **{{< caps >}}2. Configure Check{{< /caps >}}** near the top of the window.
+2. 
In the **{{< caps >}}Properties{{< /caps >}}** column, configure the following:
+
+   ##### Schedule Every
+   Select the interval to run the check (for example, "Every 5 minutes").
+   This interval matches the aggregate function interval for the check query.
+   _Changing the interval here will update the aggregate function interval._
+
+   ##### Offset
+   Delay the execution of a task to account for any late data.
+   Offset queries do not change the queried time range.
+
+   {{% note %}}Your offset must be shorter than your [check interval](#schedule-every).
+   {{% /note %}}
+
+   ##### Tags
+   Add custom tags to the query output.
+   Each custom tag appends a new column to each row in the query output.
+   The column label is the tag key and the column value is the tag value.
+
+   Use custom tags to associate additional metadata with the check.
+   Common metadata tags across different checks let you easily group and organize checks.
+   You can also use custom tags in [notification rules](/influxdb/v2.6/monitor-alert/notification-rules/create/).
+
+3. In the **{{< caps >}}Status Message Template{{< /caps >}}** column, enter
+   the status message template for the check.
+   Use [Flux string interpolation](/{{< latest "flux" >}}/data-types/basic/string/#interpolate-strings)
+   to populate the message with data from the query.
+
+   Check data is represented as a record, `r`.
+   Access specific column values using dot notation: `r.columnName`.
+
+   Use data from the following columns:
+
+   - columns included in the query output
+   - [custom tags](#tags) added to the query output
+   - `_check_id`
+   - `_check_name`
+   - `_level`
+   - `_source_measurement`
+   - `_type`
+
+   ###### Example status message template
+   ```
+   From ${r._check_name}:
+   ${r._field} is ${r._level}.
+   Its value is ${string(v: r.field_name)}.
+   ```
+
+   When a check generates a status, it stores the message in the `_message` column.
+
+4. Define check conditions that assign statuses to points.
+ Condition options depend on your check type. + + ##### Configure a threshold check + 1. In the **{{< caps >}}Thresholds{{< /caps >}}** column, click the status name (CRIT, WARN, INFO, or OK) + to define conditions for that specific status. + 2. From the **When value** drop-down list, select a threshold: is above, is below, + is inside of, is outside of. + 3. Enter a value or values for the threshold. + You can also use the threshold sliders in the data visualization to define threshold values. + + ##### Configure a deadman check + 1. In the **{{< caps >}}Deadman{{< /caps >}}** column, enter a duration for the deadman check in the **for** field. + For example, `90s`, `5m`, `2h30m`, etc. + 2. Use the **set status to** drop-down list to select a status to set on a dead series. + 3. In the **And stop checking after** field, enter the time to stop monitoring the series. + For example, `30m`, `2h`, `3h15m`, etc. + +5. Click the green **{{< icon "check" >}}** in the top right corner to save the check. + +## Clone a check +Create a new check by cloning an existing check. + +1. Go to **Alerts > Alerts** in the navigation on the left. + + {{< nav-icon "alerts" >}} + +2. Click the **{{< icon "gear" >}}** icon next to the check you want to clone + and then click **Clone**. diff --git a/content/influxdb/v2.6/monitor-alert/checks/delete.md b/content/influxdb/v2.6/monitor-alert/checks/delete.md new file mode 100644 index 000000000..69a4aa81a --- /dev/null +++ b/content/influxdb/v2.6/monitor-alert/checks/delete.md @@ -0,0 +1,33 @@ +--- +title: Delete checks +seotitle: Delete monitoring checks in InfluxDB +description: > + Delete checks in the InfluxDB UI. +menu: + influxdb_2_6: + parent: Manage checks +weight: 204 +related: + - /influxdb/v2.6/monitor-alert/notification-rules/ + - /influxdb/v2.6/monitor-alert/notification-endpoints/ +--- + +If you no longer need a check, use the InfluxDB user interface (UI) to delete it. + +{{% warn %}} +Deleting a check cannot be undone. 
+
+{{% /warn %}}
+
+1. In the navigation menu on the left, select **Alerts > Alerts**.
+
+   {{< nav-icon "alerts" >}}
+
+2. Click the **{{< icon "delete" >}}** icon next to the check you want to delete, and then click **{{< caps >}}Confirm{{< /caps >}}**.
+
+After a check is deleted, all statuses generated by the check remain in the `_monitoring`
+bucket until the retention period for the bucket expires.
+
+{{% note %}}
+You can also [disable a check](/influxdb/v2.6/monitor-alert/checks/update/#enable-or-disable-a-check)
+without having to delete it.
+{{% /note %}}
diff --git a/content/influxdb/v2.6/monitor-alert/checks/update.md b/content/influxdb/v2.6/monitor-alert/checks/update.md
new file mode 100644
index 000000000..e29c1dcdf
--- /dev/null
+++ b/content/influxdb/v2.6/monitor-alert/checks/update.md
@@ -0,0 +1,60 @@
+---
+title: Update checks
+seotitle: Update monitoring checks in InfluxDB
+description: >
+  Update, rename, enable or disable checks in the InfluxDB UI.
+menu:
+  influxdb_2_6:
+    parent: Manage checks
+weight: 203
+related:
+  - /influxdb/v2.6/monitor-alert/notification-rules/
+  - /influxdb/v2.6/monitor-alert/notification-endpoints/
+---
+
+Update checks in the InfluxDB user interface (UI).
+Common updates include:
+
+- [Update check queries and logic](#update-check-queries-and-logic)
+- [Enable or disable a check](#enable-or-disable-a-check)
+- [Rename a check](#rename-a-check)
+- [Add or update a check description](#add-or-update-a-check-description)
+- [Add a label to a check](#add-a-label-to-a-check)
+
+To update checks, select **Alerts > Alerts** in the navigation menu on the left.
+
+{{< nav-icon "alerts" >}}
+
+
+## Update check queries and logic
+1. Click the name of the check you want to update. The check builder appears.
+2. To edit the check query, click **{{< caps >}}1. Define Query{{< /caps >}}** at the top of the check builder window.
+3. To edit the check logic, click **{{< caps >}}2. Configure Check{{< /caps >}}** at the top of the check builder window.
+
+_For details about using the check builder, see [Create checks](/influxdb/v2.6/monitor-alert/checks/create/)._
+
+## Enable or disable a check
+Click the {{< icon "toggle" >}} toggle next to a check to enable or disable it.
+
+## Rename a check
+1. Hover over the name of the check you want to update.
+2. Click the **{{< icon "edit" >}}** icon that appears next to the check name.
+3. Enter a new name and click out of the name field or press enter to save.
+
+_You can also rename a check in the [check builder](#update-check-queries-and-logic)._
+
+## Add or update a check description
+1. Hover over the check description you want to update.
+2. Click the **{{< icon "edit" >}}** icon that appears next to the description.
+3. Enter a new description and click out of the description field or press enter to save.
+
+## Add a label to a check
+1. Click **{{< icon "add-label" >}} Add a label** next to the check you want to add a label to.
+   The **Add Labels** box appears.
+2. To add an existing label, select the label from the list.
+3. To create and add a new label:
+   - In the search field, enter the name of the new label. The **Create Label** box opens.
+   - In the **Description** field, enter an optional description for the label.
+   - Select a color for the label.
+   - Click **{{< caps >}}Create Label{{< /caps >}}**.
+4. To remove a label, click **{{< icon "x" >}}** on the label.
diff --git a/content/influxdb/v2.6/monitor-alert/checks/view.md b/content/influxdb/v2.6/monitor-alert/checks/view.md
new file mode 100644
index 000000000..942245070
--- /dev/null
+++ b/content/influxdb/v2.6/monitor-alert/checks/view.md
@@ -0,0 +1,37 @@
+---
+title: View checks
+seotitle: View monitoring checks in InfluxDB
+description: >
+  View check details and statuses and notifications generated by checks in the InfluxDB UI.
+menu:
+  influxdb_2_6:
+    parent: Manage checks
+weight: 202
+related:
+  - /influxdb/v2.6/monitor-alert/notification-rules/
+  - /influxdb/v2.6/monitor-alert/notification-endpoints/
+---
+
+View check details and statuses and notifications generated by checks in the InfluxDB user interface (UI).
+
+- [View a list of all checks](#view-a-list-of-all-checks)
+- [View check details](#view-check-details)
+- [View statuses generated by a check](#view-statuses-generated-by-a-check)
+- [View notifications triggered by a check](#view-notifications-triggered-by-a-check)
+
+To view checks, click **Alerts > Alerts** in the navigation menu on the left.
+
+{{< nav-icon "alerts" >}}
+
+## View a list of all checks
+The **{{< caps >}}Checks{{< /caps >}}** section of the Alerts landing page displays all existing checks.
+
+## View check details
+Click the name of the check you want to view.
+The check builder appears.
+Here you can view the check query and logic.
+
+## View statuses generated by a check
+1. Click the **{{< icon "view" >}}** icon on the check.
+2. Click **View History**.
+   The Statuses History page displays statuses generated by the selected check.
diff --git a/content/influxdb/v2.6/monitor-alert/custom-checks.md b/content/influxdb/v2.6/monitor-alert/custom-checks.md
new file mode 100644
index 000000000..f91f6ae73
--- /dev/null
+++ b/content/influxdb/v2.6/monitor-alert/custom-checks.md
@@ -0,0 +1,96 @@
+---
+title: Create custom checks
+seotitle: Custom checks
+description: >
+  Create custom checks with a Flux task.
+menu:
+  influxdb_2_6:
+    parent: Monitor & alert
+weight: 201
+influxdb/v2.6/tags: [alerts, checks, tasks, Flux]
+---
+
+In the UI, you can create two kinds of [checks](/influxdb/v2.6/reference/glossary/#check):
+[`threshold`](/influxdb/v2.6/monitor-alert/checks/create/#threshold-check) and
+[`deadman`](/influxdb/v2.6/monitor-alert/checks/create/#deadman-check).
+ +Using a Flux task, you can create a custom check that provides two advantages: + +- Customize and transform the data you would like to use for the check. +- Set up custom criteria for your alert (other than `threshold` and `deadman`). + +## Create a task + +1. In the InfluxDB UI, select **Tasks** in the navigation menu on the left. + + {{< nav-icon "tasks" >}} + +2. Click **{{< caps >}}{{< icon "plus" >}} Create Task{{< /caps >}}**. +3. In the **Name** field, enter a descriptive name, + and then enter how often to run the task in the **Every** field (for example, `10m`). + For more detail, such as using cron syntax or including an offset, see [Task configuration options](/influxdb/v2.6/process-data/task-options/). +4. Enter the Flux script for your custom check, including the [`monitor.check`](/{{< latest "flux" >}}/stdlib/influxdata/influxdb/monitor/check/) function. + +{{% note %}} +Use the [`/api/v2/checks/{checkID}/query` API endpoint](/influxdb/v2.6/api/#operation/GetChecksIDQuery) +to see the Flux code for a check built in the UI. +This can be useful for constructing custom checks. +{{% /note %}} + +### Example: Monitor failed tasks + +The script below is fairly complex and can serve as a framework for similar tasks. +It does the following: + +- Imports the `influxdata/influxdb/monitor` package and other packages for data processing. +- Queries the `_tasks` bucket to retrieve the logs and statuses of recent task runs. +- Flags failed runs by mapping the run status to `1` (failed) or `0` (otherwise). +- Creates a `check` object that specifies an ID, name, and type for the check. +- Defines the `ok` and `crit` statuses. +- Executes the `monitor` function on the `check` using the `task_data`.
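
Stripped to its essentials, the pattern the steps above describe is data piped into `monitor.check()` with a check record and level predicates. Here's a minimal sketch — the bucket, measurement, check ID, and predicates are placeholders, not values from the framework script that follows:

```js
import "influxdata/influxdb/monitor"
import "influxdata/influxdb/schema"

option task = {name: "Minimal custom check", every: 1h}

// Placeholder check metadata -- the ID and name here are illustrative only.
check = {_check_id: "000000000000000a", _check_name: "Minimal custom check", _type: "custom", tags: {}}

from(bucket: "example-bucket")
    |> range(start: -task.every)
    |> filter(fn: (r) => r._measurement == "example-measurement")
    |> schema["fieldsAsCols"]()
    |> monitor["check"](
        data: check,
        messageFn: (r) => "Value is ${r._value}",
        ok: (r) => r._value == 0,
        crit: (r) => r._value > 0,
    )
```

Any query that produces one row per series with the fields pivoted into columns can feed `monitor.check()` this way; the full example below builds a more involved `task_data` pipeline before that final call.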
+ +#### Example alert task script + +```js +import "strings" +import "regexp" +import "influxdata/influxdb/monitor" +import "influxdata/influxdb/schema" + +option task = {name: "Failed Tasks Check", every: 1h, offset: 4m} + +task_data = from(bucket: "_tasks") + |> range(start: -task.every) + |> filter(fn: (r) => r["_measurement"] == "runs") + |> filter(fn: (r) => r["_field"] == "logs") + |> map(fn: (r) => ({r with name: strings.split(v: regexp.findString(r: /option task = \{([^\}]+)/, v: r._value), t: "\\\\\\\"")[1]})) + |> drop(columns: ["_value", "_start", "_stop"]) + |> group(columns: ["name", "taskID", "status", "_measurement"]) + |> map(fn: (r) => ({r with _value: if r.status == "failed" then 1 else 0})) + |> last() + +check = { + // 16 characters, alphanumeric + _check_id: "0000000000000001", + // Name string + _check_name: "Failed Tasks Check", + // Check type (threshold, deadman, or custom) + _type: "custom", + tags: {}, +} +ok = (r) => r["logs"] == 0 +crit = (r) => r["logs"] == 1 +messageFn = (r) => "The task: ${r.taskID} - ${r.name} has a status of ${r.status}" + +task_data + |> schema["fieldsAsCols"]() + |> monitor["check"](data: check, messageFn: messageFn, ok: ok, crit: crit) +``` + +{{% note %}} +Creating a custom check does not send a notification email. 
+For information on how to create notification emails, see +[Create notification endpoints](/influxdb/v2.6/monitor-alert/notification-endpoints/create), +[Create notification rules](/influxdb/v2.6/monitor-alert/notification-rules/create), +and [Send alert email](/influxdb/v2.6/monitor-alert/send-email/). +{{% /note %}} diff --git a/content/influxdb/v2.6/monitor-alert/notification-endpoints/_index.md b/content/influxdb/v2.6/monitor-alert/notification-endpoints/_index.md new file mode 100644 index 000000000..72b0f3285 --- /dev/null +++ b/content/influxdb/v2.6/monitor-alert/notification-endpoints/_index.md @@ -0,0 +1,19 @@ +--- +title: Manage notification endpoints +list_title: Manage notification endpoints +description: > + Create, read, update, and delete endpoints in the InfluxDB UI. +influxdb/v2.6/tags: [monitor, endpoints, notifications, alert] +menu: + influxdb_2_6: + parent: Monitor & alert +weight: 102 +related: + - /influxdb/v2.6/monitor-alert/checks/ + - /influxdb/v2.6/monitor-alert/notification-rules/ +--- + +Notification endpoints store information to connect to a third-party service. +Create a connection to an HTTP, Slack, or PagerDuty endpoint. + +{{< children >}} diff --git a/content/influxdb/v2.6/monitor-alert/notification-endpoints/create.md b/content/influxdb/v2.6/monitor-alert/notification-endpoints/create.md new file mode 100644 index 000000000..0df553162 --- /dev/null +++ b/content/influxdb/v2.6/monitor-alert/notification-endpoints/create.md @@ -0,0 +1,67 @@ +--- +title: Create notification endpoints +description: > + Create notification endpoints to send alerts on your time series data. +menu: + influxdb_2_6: + name: Create endpoints + parent: Manage notification endpoints +weight: 201 +related: + - /influxdb/v2.6/monitor-alert/checks/ + - /influxdb/v2.6/monitor-alert/notification-rules/ +--- + +To send notifications about changes in your data, start by creating a notification endpoint to a third-party service.
After creating notification endpoints, [create notification rules](/influxdb/v2.6/monitor-alert/notification-rules/create) to send alerts to third-party services on [check statuses](/influxdb/v2.6/monitor-alert/checks/create). + +{{% cloud-only %}} + +#### Endpoints available in InfluxDB Cloud +The following endpoints are available for the InfluxDB Cloud Free Plan and Usage-based Plan: + +| Endpoint | Free Plan | Usage-based Plan | +|:-------- |:-------------------: |:----------------------------:| +| **Slack** | **{{< icon "check" >}}** | **{{< icon "check" >}}** | +| **PagerDuty** | | **{{< icon "check" >}}** | +| **HTTP** | | **{{< icon "check" >}}** | + +{{% /cloud-only %}} + +## Create a notification endpoint + +1. In the navigation menu on the left, select **Alerts > Alerts**. + + {{< nav-icon "alerts" >}} + +2. Select **{{< caps >}}Notification Endpoints{{< /caps >}}**. +3. Click **{{< caps >}}{{< icon "plus" >}} Create{{< /caps >}}**. +4. From the **Destination** drop-down list, select a destination endpoint to send notifications to. + {{% cloud-only %}}_See [available endpoints](#endpoints-available-in-influxdb-cloud)._{{% /cloud-only %}} +5. In the **Name** and **Description** fields, enter a name and description for the endpoint. +6. Enter information to connect to the endpoint: + + - **For HTTP**, enter the **URL** to send the notification. + Select the **auth method** to use: **None** for no authentication. + To authenticate with a username and password, select **Basic** and then + enter credentials in the **Username** and **Password** fields. + To authenticate with an API token, select **Bearer**, and then enter the + API token in the **Token** field. + + - **For Slack**, create an [Incoming WebHook](https://api.slack.com/incoming-webhooks#posting_with_webhooks) + in Slack, and then enter your webHook URL in the **Slack Incoming WebHook URL** field. 
+ + - **For PagerDuty**: + - [Create a new service](https://support.pagerduty.com/docs/services-and-integrations#section-create-a-new-service), + [add an integration for your service](https://support.pagerduty.com/docs/services-and-integrations#section-add-integrations-to-an-existing-service), + and then enter the PagerDuty integration key for your new service in the **Routing Key** field. + - The **Client URL** provides a useful link in your PagerDuty notification. + Enter any URL that you'd like to use to investigate issues. + This URL is sent as the `client_url` property in the PagerDuty trigger event. + By default, the **Client URL** is set to your Monitoring & Alerting History + page, and the following is included in the PagerDuty trigger event: + + ```json + "client_url": "http://localhost:8086/orgs//alert-history" + ``` + +7. Click **{{< caps >}}Create Notification Endpoint{{< /caps >}}**. diff --git a/content/influxdb/v2.6/monitor-alert/notification-endpoints/delete.md b/content/influxdb/v2.6/monitor-alert/notification-endpoints/delete.md new file mode 100644 index 000000000..ad91a705b --- /dev/null +++ b/content/influxdb/v2.6/monitor-alert/notification-endpoints/delete.md @@ -0,0 +1,28 @@ +--- +title: Delete notification endpoints +description: > + Delete a notification endpoint in the InfluxDB UI. +menu: + influxdb_2_6: + name: Delete endpoints + parent: Manage notification endpoints +weight: 204 +related: + - /influxdb/v2.6/monitor-alert/checks/ + - /influxdb/v2.6/monitor-alert/notification-rules/ +--- + +If notifications are no longer sent to an endpoint, complete the steps below to +delete the endpoint, and then [update notification rules](/influxdb/v2.6/monitor-alert/notification-rules/update) +with a new notification endpoint as needed. + +## Delete a notification endpoint + +1. In the navigation menu on the left, select **Alerts > Alerts**. + + {{< nav-icon "alerts" >}} + +2.
Select **{{< caps >}}Notification Endpoints{{< /caps >}}** and find the endpoint + you want to delete. +3. Click the **{{< icon "trash" >}}** icon on the notification endpoint you want to delete + and then click **{{< caps >}}Confirm{{< /caps >}}**. diff --git a/content/influxdb/v2.6/monitor-alert/notification-endpoints/update.md b/content/influxdb/v2.6/monitor-alert/notification-endpoints/update.md new file mode 100644 index 000000000..e791dc614 --- /dev/null +++ b/content/influxdb/v2.6/monitor-alert/notification-endpoints/update.md @@ -0,0 +1,55 @@ +--- +title: Update notification endpoints +description: > + Update notification endpoints in the InfluxDB UI. +menu: + influxdb_2_6: + name: Update endpoints + parent: Manage notification endpoints +weight: 203 +related: + - /influxdb/v2.6/monitor-alert/checks/ + - /influxdb/v2.6/monitor-alert/notification-rules/ +--- + +Complete the following steps to update notification endpoint details. +To update the notification endpoint selected for a notification rule, see [update notification rules](/influxdb/v2.6/monitor-alert/notification-rules/update/). + +**To update a notification endpoint** + +1. In the navigation menu on the left, select **Alerts > Alerts**. + + {{< nav-icon "alerts" >}} + +2. Select **{{< caps >}}Notification Endpoints{{< /caps >}}** and then do the following as needed: + + - [Update the name or description for notification endpoint](#update-the-name-or-description-for-notification-endpoint) + - [Change endpoint details](#change-endpoint-details) + - [Disable notification endpoint](#disable-notification-endpoint) + - [Add a label to notification endpoint](#add-a-label-to-notification-endpoint) + +## Update the name or description for notification endpoint +1. Hover over the name or description of the endpoint and click the pencil icon + (**{{< icon "edit" >}}**) to edit the field. +2. Click outside of the field to save your changes. + +## Change endpoint details +1. Click the name of the endpoint to update. +2.
Update details as needed, and then click **Edit Notification Endpoint**. + For details about each field, see [Create notification endpoints](/influxdb/v2.6/monitor-alert/notification-endpoints/create/). + +## Disable notification endpoint +Click the {{< icon "toggle" >}} toggle to disable the notification endpoint. + +## Add a label to notification endpoint +1. Click **{{< icon "add-label" >}} Add a label** next to the endpoint you want to add a label to. + The **Add Labels** box opens. +2. To add an existing label, select the label from the list. +3. To create and add a new label: + + - In the search field, enter the name of the new label. The **Create Label** box opens. + - In the **Description** field, enter an optional description for the label. + - Select a color for the label. + - Click **{{< caps >}}Create Label{{< /caps >}}**. + +4. To remove a label, click **{{< icon "x" >}}** on the label. diff --git a/content/influxdb/v2.6/monitor-alert/notification-endpoints/view.md b/content/influxdb/v2.6/monitor-alert/notification-endpoints/view.md new file mode 100644 index 000000000..bd87aeb28 --- /dev/null +++ b/content/influxdb/v2.6/monitor-alert/notification-endpoints/view.md @@ -0,0 +1,40 @@ +--- +title: View notification endpoint history +seotitle: View notification endpoint details and history +description: > + View notification endpoint details and history in the InfluxDB UI. +menu: + influxdb_2_6: + name: View endpoint history + parent: Manage notification endpoints +weight: 202 +related: + - /influxdb/v2.6/monitor-alert/checks/ + - /influxdb/v2.6/monitor-alert/notification-rules/ +--- + +View notification endpoint details and history in the InfluxDB user interface (UI). + + +1. In the navigation menu on the left, select **Alerts**. + + {{< nav-icon "alerts" >}} + +2. Select **{{< caps >}}Notification Endpoints{{< /caps >}}**. 
+ + - [View notification endpoint details](#view-notification-endpoint-details) + - [View notification endpoint history](#view-notification-endpoint-history), including statuses and notifications sent to the endpoint + +## View notification endpoint details +On the notification endpoints page: + +1. Click the name of the notification endpoint you want to view. +2. View the notification endpoint destination, name, and information to connect to the endpoint. + +## View notification endpoint history +On the notification endpoints page, click the **{{< icon "gear" >}}** icon, +and then click **View History**. +The Check Statuses History page displays: + +- Statuses generated for the selected notification endpoint +- Notifications sent to the selected notification endpoint diff --git a/content/influxdb/v2.6/monitor-alert/notification-rules/_index.md b/content/influxdb/v2.6/monitor-alert/notification-rules/_index.md new file mode 100644 index 000000000..827d903ab --- /dev/null +++ b/content/influxdb/v2.6/monitor-alert/notification-rules/_index.md @@ -0,0 +1,17 @@ +--- +title: Manage notification rules +description: > + Manage notification rules in InfluxDB. +weight: 103 +influxdb/v2.6/tags: [monitor, notifications, alert] +menu: + influxdb_2_6: + parent: Monitor & alert +related: + - /influxdb/v2.6/monitor-alert/checks/ + - /influxdb/v2.6/monitor-alert/notification-endpoints/ +--- + +The following articles provide information on managing your notification rules: + +{{< children >}} diff --git a/content/influxdb/v2.6/monitor-alert/notification-rules/create.md b/content/influxdb/v2.6/monitor-alert/notification-rules/create.md new file mode 100644 index 000000000..4e7517135 --- /dev/null +++ b/content/influxdb/v2.6/monitor-alert/notification-rules/create.md @@ -0,0 +1,44 @@ +--- +title: Create notification rules +description: > + Create notification rules to send alerts on your time series data.
+weight: 201 +menu: + influxdb_2_6: + parent: Manage notification rules +related: + - /influxdb/v2.6/monitor-alert/checks/ + - /influxdb/v2.6/monitor-alert/notification-endpoints/ +--- + +Once you've set up checks and notification endpoints, create notification rules to alert you. +_For details, see [Manage checks](/influxdb/v2.6/monitor-alert/checks/) and +[Manage notification endpoints](/influxdb/v2.6/monitor-alert/notification-endpoints/)._ + + +1. In the navigation menu on the left, select **Alerts > Alerts**. + + {{< nav-icon "alerts" >}} + +2. Select **{{< caps >}}Notification Rules{{< /caps >}}** near the top of the page. + + - [Create a new notification rule](#create-a-new-notification-rule) + - [Clone an existing notification rule](#clone-an-existing-notification-rule) + +## Create a new notification rule + +1. On the notification rules page, click **{{< caps >}}{{< icon "plus" >}} Create{{< /caps >}}**. +2. Complete the **About** section: + 1. In the **Name** field, enter a name for the notification rule. + 2. In the **Schedule Every** field, enter how frequently the rule should run. + 3. In the **Offset** field, enter an offset time. For example, if a task runs on the hour, a 10m offset delays the task to 10 minutes after the hour. Time ranges defined in the task are relative to the specified execution time. +3. In the **Conditions** section, build a condition using a combination of status and tag keys. + - Next to **When status is equal to**, select a status from the drop-down field. + - Next to **AND When**, enter one or more tag key-value pairs to filter by. +4. In the **Message** section, select an endpoint to notify. +5. Click **{{< caps >}}Create Notification Rule{{< /caps >}}**. + +## Clone an existing notification rule + +On the notification rules page, click the **{{< icon "gear" >}}** icon and select **Clone**. +The cloned rule appears.
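
Before wiring a condition to an endpoint, it can help to preview which statuses the condition would match. Statuses live in the `_monitoring` bucket, so a quick Flux query can show the rows a "status equal to `CRIT`" condition with one tag filter would act on. This is a sketch — the `host` tag key-value pair is a hypothetical example:

```js
// Preview statuses that a CRIT condition with a tag filter would match.
from(bucket: "_monitoring")
    |> range(start: -1h)
    |> filter(fn: (r) => r._measurement == "statuses")
    |> filter(fn: (r) => r._level == "crit")
    |> filter(fn: (r) => r.host == "server-01")  // hypothetical tag filter
```

Run this in the Data Explorer or `influx query`; if it returns no rows, the rule's condition would not have fired over that window.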
diff --git a/content/influxdb/v2.6/monitor-alert/notification-rules/delete.md b/content/influxdb/v2.6/monitor-alert/notification-rules/delete.md new file mode 100644 index 000000000..b69d83765 --- /dev/null +++ b/content/influxdb/v2.6/monitor-alert/notification-rules/delete.md @@ -0,0 +1,24 @@ +--- +title: Delete notification rules +description: > + If you no longer need to receive an alert, delete the associated notification rule. +weight: 204 +menu: + influxdb_2_6: + parent: Manage notification rules +related: + - /influxdb/v2.6/monitor-alert/checks/ + - /influxdb/v2.6/monitor-alert/notification-endpoints/ +--- + +If you no longer need to receive an alert, delete the associated notification rule. + +## Delete a notification rule + +1. In the navigation menu on the left, select **Alerts > Alerts**. + + {{< nav-icon "alerts" >}} + +2. Select **{{< caps >}}Notification Rules{{< /caps >}}** near the top of the page. +3. Click the **{{< icon "trash" >}}** icon on the notification rule you want to delete. +4. Click **{{< caps >}}Confirm{{< /caps >}}**. diff --git a/content/influxdb/v2.6/monitor-alert/notification-rules/update.md b/content/influxdb/v2.6/monitor-alert/notification-rules/update.md new file mode 100644 index 000000000..4805df17a --- /dev/null +++ b/content/influxdb/v2.6/monitor-alert/notification-rules/update.md @@ -0,0 +1,50 @@ +--- +title: Update notification rules +description: > + Update notification rules to change the notification message, schedule, or conditions. +weight: 203 +menu: + influxdb_2_6: + parent: Manage notification rules +related: + - /influxdb/v2.6/monitor-alert/checks/ + - /influxdb/v2.6/monitor-alert/notification-endpoints/ +--- + +Update notification rules to change the notification message, schedule, or conditions. + + +1. In the navigation menu on the left, select **Alerts > Alerts**. + + {{< nav-icon "alerts" >}} + +2. Select **{{< caps >}}Notification Rules{{< /caps >}}** near the top of the page.
+ +- [Update the name or description for notification rules](#update-the-name-or-description-for-notification-rules) +- [Enable or disable notification rules](#enable-or-disable-notification-rules) +- [Add a label to notification rules](#add-a-label-to-notification-rules) + +## Update the name or description for notification rules +On the Notification Rules page: + +1. Hover over the name or description of a rule and click the pencil icon + (**{{< icon "edit" >}}**) to edit the field. +2. Click outside of the field to save your changes. + +## Enable or disable notification rules +On the notification rules page, click the {{< icon "toggle" >}} toggle to +enable or disable the notification rule. + +## Add a label to notification rules +On the notification rules page: + +1. Click **{{< icon "add-label" >}} Add a label** + next to the rule you want to add a label to. + The **Add Labels** box opens. +2. To add an existing label, select the label from the list. +3. To create and add a new label: + - In the search field, enter the name of the new label. The **Create Label** box opens. + - In the **Description** field, enter an optional description for the label. + - Select a color for the label. + - Click **{{< caps >}}Create Label{{< /caps >}}**. +4. To remove a label, click **{{< icon "x" >}}** on the label. diff --git a/content/influxdb/v2.6/monitor-alert/notification-rules/view.md b/content/influxdb/v2.6/monitor-alert/notification-rules/view.md new file mode 100644 index 000000000..6da6d825d --- /dev/null +++ b/content/influxdb/v2.6/monitor-alert/notification-rules/view.md @@ -0,0 +1,44 @@ +--- +title: View notification rules +description: > + View notification rule details, statuses, and notifications generated by notification rules in the InfluxDB UI.
+weight: 202 +menu: + influxdb_2_6: + parent: Manage notification rules +related: + - /influxdb/v2.6/monitor-alert/checks/ + - /influxdb/v2.6/monitor-alert/notification-endpoints/ +--- + +View notification rule details and statuses and notifications generated by notification rules in the InfluxDB user interface (UI). + +- [View a list of all notification rules](#view-a-list-of-all-notification-rules) +- [View notification rule details](#view-notification-rule-details) +- [View statuses generated by a notification rule](#view-statuses-generated-by-a-notification-rule) +- [View notifications triggered by a notification rule](#view-notifications-triggered-by-a-notification-rule) + +**To view notification rules:** + +1. In the navigation menu on the left, select **Alerts**. + + {{< nav-icon "alerts" >}} + +2. Select **{{< caps >}}Notification Rules{{< /caps >}}** near the top of the page. + +## View a list of all notification rules +The **{{< caps >}}Notification Rules{{< /caps >}}** section of the Alerts landing page displays all existing notification rules. + +## View notification rule details +Click the name of the notification rule you want to view. +The notification rule builder appears. +Here you can view the rule conditions and message. + +## View statuses generated by a notification rule +Click the **{{< icon "gear" >}}** icon on the notification rule, and then **View History**. +The Statuses History page displays statuses generated by the selected notification rule. + +## View notifications triggered by a notification rule +1. Click the **{{< icon "gear" >}}** icon on the notification rule, and then **View History**. +2. In the top left corner, click **{{< caps >}}Notifications{{< /caps >}}**. + The Notifications History page displays notifications initiated by the selected notification rule.
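
The statuses and notifications behind these history pages are stored in the `_monitoring` bucket, so you can also query them directly with Flux. A sketch — the rule name is a hypothetical example:

```js
// List recent notifications written by a specific notification rule.
from(bucket: "_monitoring")
    |> range(start: -24h)
    |> filter(fn: (r) => r._measurement == "notifications")
    |> filter(fn: (r) => r._notification_rule_name == "My Rule")
```

Swap the measurement for `"statuses"` (and filter on `_check_name` instead) to inspect the statuses side of the same history.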
diff --git a/content/influxdb/v2.6/monitor-alert/send-email.md b/content/influxdb/v2.6/monitor-alert/send-email.md new file mode 100644 index 000000000..fd064e11e --- /dev/null +++ b/content/influxdb/v2.6/monitor-alert/send-email.md @@ -0,0 +1,295 @@ +--- +title: Send alert email +description: > + Send an alert email. +menu: + influxdb_2_6: + parent: Monitor & alert +weight: 104 +influxdb/v2.6/tags: [alert, email, notifications, check] +related: + - /influxdb/v2.6/monitor-alert/checks/ +--- + +Send an alert email using a third-party service, such as [SendGrid](https://sendgrid.com/), [Amazon Simple Email Service (SES)](https://aws.amazon.com/ses/), [Mailjet](https://www.mailjet.com/), or [Mailgun](https://www.mailgun.com/). To send an alert email, complete the following steps: + +1. [Create a check](/influxdb/v2.6/monitor-alert/checks/create/#create-a-check-in-the-influxdb-ui) to identify the data to monitor and the status to alert on. +2. Set up your preferred email service (sign up, retrieve API credentials, and send test email): + - **SendGrid**: See [Getting Started With the SendGrid API](https://sendgrid.com/docs/API_Reference/api_getting_started.html) + - **AWS Simple Email Service (SES)**: See [Using the Amazon SES API](https://docs.aws.amazon.com/ses/latest/DeveloperGuide/send-email.html). Your AWS SES request, including the `url` (endpoint), authentication, and the structure of the request may vary. For more information, see [Amazon SES API requests](https://docs.aws.amazon.com/ses/latest/DeveloperGuide/using-ses-api-requests.html) and [Authenticating requests to the Amazon SES API](https://docs.aws.amazon.com/ses/latest/DeveloperGuide/using-ses-api-authentication.html). + - **Mailjet**: See [Getting Started with Mailjet](https://dev.mailjet.com/email/guides/getting-started/) + - **Mailgun**: See [Mailgun Signup](https://signup.mailgun.com/new/signup) +3. 
[Create an alert email task](#create-an-alert-email-task) to call your email service and send an alert email. + + {{% note %}} + In the procedure below, we use the **Tasks** page in the InfluxDB UI (user interface) to create a task. Explore other ways to [create a task](/influxdb/v2.6/process-data/manage-tasks/create-task/). + {{% /note %}} + +### Create an alert email task + +1. In the InfluxDB UI, select **Tasks** in the navigation menu on the left. + + {{< nav-icon "tasks" >}} + +2. Click **{{< caps >}}{{< icon "plus" >}} Create Task{{< /caps >}}**. +3. In the **Name** field, enter a descriptive name, for example, **Send alert email**, + and then enter how often to run the task in the **Every** field, for example, `10m`. + For more detail, such as using cron syntax or including an offset, see [Task configuration options](/influxdb/v2.6/process-data/task-options/). + +4. In the right panel, enter the following details in your **task script** (see [examples below](#examples)): + - Import the [Flux HTTP package](/{{< latest "flux" >}}/stdlib/http/). + - (Optional) Store your API key as a secret for reuse. + First, [add your API key as a secret](/influxdb/v2.6/security/secrets/manage-secrets/add/), + and then import the [Flux InfluxDB Secrets package](/{{< latest "flux" >}}/stdlib/influxdata/influxdb/secrets/). + - Query the `statuses` measurement in the `_monitoring` bucket to retrieve all statuses generated by your check. + - Set the time range to monitor; use the same interval that the task is scheduled to run. For example, `range(start: -task.every)`. + - Filter for the `_level` to alert on, for example, `crit`, `warn`, `info`, or `ok`. + - Use the `map()` function to evaluate the alert criteria and send the alert with `http.post()`. + - Specify your email service `url` (endpoint), include applicable request `headers`, and verify your request `data` format follows the format specified for your email service.
+ +#### Examples + +{{< tabs-wrapper >}} +{{% tabs %}} +[SendGrid](#) +[AWS SES](#) +[Mailjet](#) +[Mailgun](#) +{{% /tabs %}} + + +{{% tab-content %}} + +The example below uses the SendGrid API to send an alert email when more than 3 critical statuses occur since the previous task run. + +```js +import "http" +import "json" +// Import the Secrets package if you store your API key as a secret. +// For detail on how to do this, see Step 4 above. +import "influxdata/influxdb/secrets" + +// Retrieve the secret if applicable. Otherwise, skip this line +// and add the API key as the Bearer token in the Authorization header. +SENDGRID_APIKEY = secrets.get(key: "SENDGRID_APIKEY") + +numberOfCrits = from(bucket: "_monitoring") + |> range(start: -task.every) + |> filter(fn: (r) => r._measurement == "statuses" and r._level == "crit") + |> count() + +numberOfCrits + |> map( + fn: (r) => if r._value > 3 then + {r with _value: http.post( + url: "https://api.sendgrid.com/v3/mail/send", + headers: {"Content-Type": "application/json", "Authorization": "Bearer ${SENDGRID_APIKEY}"}, + data: json.encode( + v: { + "personalizations": [ + { + "to": [ + { + "email": "jane.doe@example.com" + } + ] + } + ], + "from": { + "email": "john.doe@example.com" + }, + "subject": "InfluxDB critical alert", + "content": [ + { + "type": "text/plain", + "value": "There have been ${r._value} critical statuses." + } + ] + } + ) + )} + else + {r with _value: 0}, + ) +``` + +{{% /tab-content %}} + + +{{% tab-content %}} + +The example below uses the AWS SES API v2 to send an alert email when more than 3 critical statuses occur since the last task run. + +{{% note %}} +Your AWS SES request, including the `url` (endpoint), authentication, and the structure of the request may vary. 
For more information, see [Amazon SES API requests](https://docs.aws.amazon.com/ses/latest/DeveloperGuide/using-ses-api-requests.html) and [Authenticating requests to the Amazon SES API](https://docs.aws.amazon.com/ses/latest/DeveloperGuide/using-ses-api-authentication.html). We recommend signing your AWS API requests using the [Signature Version 4 signing process](https://docs.aws.amazon.com/general/latest/gr/signing_aws_api_requests.html). +{{% /note %}} + +```js +import "http" +import "json" +// Import the Secrets package if you store your API credentials as secrets. +// For detail on how to do this, see Step 4 above. +import "influxdata/influxdb/secrets" + +// Retrieve the secrets if applicable. Otherwise, skip this line +// and add the API key as the Bearer token in the Authorization header. +AWS_AUTH_ALGORITHM = secrets.get(key: "AWS_AUTH_ALGORITHM") +AWS_CREDENTIAL = secrets.get(key: "AWS_CREDENTIAL") +AWS_SIGNED_HEADERS = secrets.get(key: "AWS_SIGNED_HEADERS") +AWS_CALCULATED_SIGNATURE = secrets.get(key: "AWS_CALCULATED_SIGNATURE") + +numberOfCrits = from(bucket: "_monitoring") + |> range(start: -task.every) + |> filter(fn: (r) => r._measurement == "statuses" and r._level == "crit") + |> count() + +numberOfCrits + |> map( + fn: (r) => if r._value > 3 then + {r with _value: http.post( + url: "https://email.your-aws-region.amazonaws.com/sendemail/v2/email/outbound-emails", + headers: { + "Content-Type": "application/json", + "Authorization": "Bearer ${AWS_AUTH_ALGORITHM}${AWS_CREDENTIAL}${AWS_SIGNED_HEADERS}${AWS_CALCULATED_SIGNATURE}"}, + data: json.encode(v: { + "Content": { + "Simple": { + "Body": { + "Text": { + "Charset": "UTF-8", + "Data": "There have been ${r._value} critical statuses."
+ } + }, + "Subject": { + "Charset": "UTF-8", + "Data": "InfluxDB critical alert" + } + } + }, + "Destination": { + "ToAddresses": [ + "john.doe@example.com" + ] + } + } + ) + )} + else + {r with _value: 0}, + ) +``` + +For details on the request syntax, see [SendEmail API v2 reference](https://docs.aws.amazon.com/ses/latest/APIReference-V2/API_SendEmail.html). + +{{% /tab-content %}} + + +{{% tab-content %}} + +The example below uses the Mailjet Send API to send an alert email when more than 3 critical statuses occur since the last task run. + +{{% note %}} +To view your Mailjet API credentials, sign in to Mailjet and open the [API Key Management page](https://app.mailjet.com/account/api_keys). +{{% /note %}} + +```js +import "http" +import "json" +// Import the Secrets package if you store your API keys as secrets. +// For detail on how to do this, see Step 4 above. +import "influxdata/influxdb/secrets" + +// Retrieve the secrets if applicable. Otherwise, skip this line +// and add the API keys as Basic credentials in the Authorization header. +MAILJET_APIKEY = secrets.get(key: "MAILJET_APIKEY") +MAILJET_SECRET_APIKEY = secrets.get(key: "MAILJET_SECRET_APIKEY") + +numberOfCrits = from(bucket: "_monitoring") + |> range(start: -task.every) + |> filter(fn: (r) => r._measurement == "statuses" and r._level == "crit") + |> count() + +numberOfCrits + |> map( + fn: (r) => if r._value > 3 then + {r with + _value: http.post( + url: "https://api.mailjet.com/v3.1/send", + headers: { + "Content-type": "application/json", + "Authorization": "Basic ${MAILJET_APIKEY}:${MAILJET_SECRET_APIKEY}" + }, + data: json.encode( + v: { + "Messages": [ + { + "From": {"Email": "jane.doe@example.com"}, + "To": [{"Email": "john.doe@example.com"}], + "Subject": "InfluxDB critical alert", + "TextPart": "There have been ${r._value} critical statuses.", + "HTMLPart": "<h3>${r._value} critical statuses</h3><br />There have been ${r._value} critical statuses.", + }, + ], + }, + ), + ), + } + else + {r with _value: 0}, + ) +``` + +{{% /tab-content %}} + + + +{{% tab-content %}} + +The example below uses the Mailgun API to send an alert email when more than 3 critical statuses occur since the last task run. + +{{% note %}} +To view your Mailgun API keys, sign in to Mailgun and open [Account Security - API security](https://app.mailgun.com/app/account/security/api_keys). Mailgun requires that you specify a sending domain. A domain is automatically created for you when you first set up your account. You must include this domain in your `url` endpoint (for example, `https://api.mailgun.net/v3/YOUR_DOMAIN` or `https://api.eu.mailgun.net/v3/YOUR_DOMAIN`). If you're using a free version of Mailgun, you can set up a maximum of five authorized recipients (to receive email alerts) for your domain. To view your Mailgun domains, sign in to Mailgun and view the [Domains page](https://app.mailgun.com/app/sending/domains). +{{% /note %}} + +```js +import "http" +import "json" +// Import the Secrets package if you store your API key as a secret. +// For detail on how to do this, see Step 4 above. +import "influxdata/influxdb/secrets" + +// Retrieve the secret if applicable. Otherwise, skip this line +// and add the API key as the Bearer token in the Authorization header.
+MAILGUN_APIKEY = secrets.get(key: "MAILGUN_APIKEY") + +numberOfCrits = from(bucket: "_monitoring") + |> range(start: -task.every) + |> filter(fn: (r) => r["_measurement"] == "statuses") + |> filter(fn: (r) => r["_level"] == "crit") + |> count() + +numberOfCrits + |> map( + fn: (r) => if r._value > 3 then + {r with _value: http.post( + url: "https://api.mailgun.net/v3/YOUR_DOMAIN/messages", + headers: { + "Content-type": "application/json", + "Authorization": "Basic api:${MAILGUN_APIKEY}" + }, + data: json.encode(v: { + "from": "Username <mailgun@YOUR_DOMAIN>", + "to": "email@example.com", + "subject": "InfluxDB critical alert", + "text": "There have been ${r._value} critical statuses." + } + ) + )} + else + {r with _value: 0}, + ) +``` + +{{% /tab-content %}} + +{{< /tabs-wrapper >}} diff --git a/content/influxdb/v2.6/monitor-alert/templates/_index.md b/content/influxdb/v2.6/monitor-alert/templates/_index.md new file mode 100644 index 000000000..2ff21d463 --- /dev/null +++ b/content/influxdb/v2.6/monitor-alert/templates/_index.md @@ -0,0 +1,14 @@ +--- +title: Monitor with templates +description: > + Use community templates to monitor data in many supported environments. Monitor infrastructure, networking, IoT, software, security, TICK stack, and more. +menu: + influxdb_2_6: + parent: Monitor & alert +weight: 104 +influxdb/v2.6/tags: [monitor, templates] +--- + +Use one of our community templates to quickly set up InfluxDB (with a bucket and dashboard) to collect, analyze, and monitor data in supported environments.
+ +{{< children >}} diff --git a/content/influxdb/v2.6/monitor-alert/templates/infrastructure/_index.md b/content/influxdb/v2.6/monitor-alert/templates/infrastructure/_index.md new file mode 100644 index 000000000..080ff6263 --- /dev/null +++ b/content/influxdb/v2.6/monitor-alert/templates/infrastructure/_index.md @@ -0,0 +1,14 @@ +--- +title: Monitor infrastructure +description: > + Use one of our community templates to quickly set up InfluxDB (with a bucket and dashboard) to collect, analyze, and monitor your infrastructure. +menu: + influxdb_2_6: + parent: Monitor with templates +weight: 104 +influxdb/v2.6/tags: [monitor, templates, infrastructure] +--- + +Use one of our community templates to quickly set up InfluxDB (with a bucket and dashboard) to collect, analyze, and monitor your infrastructure. + +{{< children >}} diff --git a/content/influxdb/v2.6/monitor-alert/templates/infrastructure/aws.md b/content/influxdb/v2.6/monitor-alert/templates/infrastructure/aws.md new file mode 100644 index 000000000..f36b0c4ad --- /dev/null +++ b/content/influxdb/v2.6/monitor-alert/templates/infrastructure/aws.md @@ -0,0 +1,59 @@ +--- +title: Monitor Amazon Web Services (AWS) +description: > + Use the AWS CloudWatch Monitoring template to monitor data from Amazon Web Services (AWS), Amazon Elastic Compute Cloud (EC2), and Amazon Elastic Load Balancing (ELB) with the AWS CloudWatch Service. +menu: + influxdb_2_6: + parent: Monitor infrastructure + name: AWS CloudWatch +weight: 201 +--- + +Use the [AWS CloudWatch Monitoring template](https://github.com/influxdata/community-templates/tree/master/aws_cloudwatch) to monitor data from [Amazon Web Services (AWS)](https://aws.amazon.com/), [Amazon Elastic Compute Cloud (EC2)](https://aws.amazon.com/ec2/), and [Amazon Elastic Load Balancing (ELB)](https://aws.amazon.com/elasticloadbalancing/) with the [AWS CloudWatch Service](https://aws.amazon.com/cloudwatch/). 
+ +The AWS CloudWatch Monitoring template includes the following: + +- two [dashboards](/influxdb/v2.6/reference/glossary/#dashboard): + - **AWS CloudWatch NLB (Network Load Balancers) Monitoring**: Displays data from the `cloudwatch_aws_network_elb` measurement + - **AWS CloudWatch Instance Monitoring**: Displays data from the `cloudwatch_aws_ec2` measurement +- two [buckets](/influxdb/v2.6/reference/glossary/#bucket): `kubernetes` and `cloudwatch` +- two labels: `inputs.cloudwatch`, `AWS` +- one variable: `v.bucket` +- one [Telegraf configuration](/influxdb/v2.6/telegraf-configs/): [AWS CloudWatch input plugin](/{{< latest "telegraf" >}}/plugins//#cloudwatch) + +## Apply the template + +1. Use the [`influx` CLI](/influxdb/v2.6/reference/cli/influx/) to run the following command: + + ```sh + influx apply -f https://raw.githubusercontent.com/influxdata/community-templates/master/aws_cloudwatch/aws_cloudwatch.yml + ``` + For more information, see [influx apply](/influxdb/v2.6/reference/cli/influx/apply/). +2. [Install Telegraf](/{{< latest "telegraf" >}}/introduction/installation/) on a server with network access to both the CloudWatch API and [InfluxDB v2 API](/influxdb/v2.6/reference/api/). +3. In your Telegraf configuration file (`telegraf.conf`), find the following example `influxdb_v2` output plugins, and then **replace** the `urls` to specify the servers to monitor: + + ```sh + ## k8s + [[outputs.influxdb_v2]] + urls = ["http://influxdb.monitoring:8086"] + organization = "InfluxData" + bucket = "kubernetes" + token = "secret-token" + + ## cloudv2 sample + [[outputs.influxdb_v2]] + urls = ["$INFLUX_HOST"] + token = "$INFLUX_TOKEN" + organization = "$INFLUX_ORG" + bucket = "cloudwatch" + ``` +4. [Start Telegraf](/influxdb/v2.6/write-data/no-code/use-telegraf/auto-config/#start-telegraf). + +## View the incoming data + +1. In the InfluxDB user interface (UI), select **Dashboards** in the left navigation. + + {{< nav-icon "dashboards" >}} + +2.
Open your AWS dashboards, and then set the `v.bucket` variable to specify the + bucket to query data from (`kubernetes` or `cloudwatch`). diff --git a/content/influxdb/v2.6/monitor-alert/templates/infrastructure/docker.md b/content/influxdb/v2.6/monitor-alert/templates/infrastructure/docker.md new file mode 100644 index 000000000..b073252e7 --- /dev/null +++ b/content/influxdb/v2.6/monitor-alert/templates/infrastructure/docker.md @@ -0,0 +1,57 @@ +--- +title: Monitor Docker +description: > + Use the [Docker Monitoring template](https://github.com/influxdata/community-templates/tree/master/docker) to monitor your Docker containers. +menu: + influxdb_2_6: + parent: Monitor infrastructure + name: Docker +weight: 202 +--- + +Use the [Docker Monitoring template](https://github.com/influxdata/community-templates/tree/master/docker) to monitor your Docker containers. First, [apply the template](#apply-the-template), and then [view incoming data](#view-incoming-data). +This template uses the [Docker input plugin](/{{< latest "telegraf" >}}/plugins//#docker) to collect Docker metrics, store them in InfluxDB, and display these metrics in a dashboard. + +The Docker Monitoring template includes the following: + +- one [dashboard](/influxdb/v2.6/reference/glossary/#dashboard): **Docker** +- one [bucket](/influxdb/v2.6/reference/glossary/#bucket): `docker`, 7d retention +- labels: Docker input plugin labels +- one [Telegraf configuration](/influxdb/v2.6/telegraf-configs/): Docker input plugin +- one variable: `bucket` +- four [checks](/influxdb/v2.6/reference/glossary/#check): `Container cpu`, `mem`, `disk`, `non-zero exit` +- one [notification endpoint](/influxdb/v2.6/reference/glossary/#notification-endpoint): `Http Post` +- one [notification rule](/influxdb/v2.6/reference/glossary/#notification-rule): `Crit Alert` +For more information about how checks, notification endpoints, and notification rules work together, see [monitor data and send alerts](/influxdb/v2.6/monitor-alert/).
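Each check listed above maps a metric reading to a status level, and the notification rule forwards `crit` statuses to the endpoint. Conceptually, a check like `Container cpu` behaves like the following Python sketch (thresholds are illustrative assumptions, not the template's actual values):

```python
def container_cpu_status(cpu_percent, warn=80.0, crit=90.0):
    """Map a CPU usage reading to a status level.

    Thresholds are illustrative placeholders; the template defines its own.
    """
    if cpu_percent >= crit:
        return "crit"
    if cpu_percent >= warn:
        return "warn"
    return "ok"

print(container_cpu_status(50.0))  # ok
print(container_cpu_status(85.0))  # warn
print(container_cpu_status(95.0))  # crit
```

A notification rule then matches only the `crit` results and posts them to the configured `Http Post` endpoint.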
+ +## Apply the template + +1. Use the [`influx` CLI](/influxdb/v2.6/reference/cli/influx/) to run the following command: + + ```sh + influx apply -f https://raw.githubusercontent.com/influxdata/community-templates/master/docker/docker.yml + ``` + For more information, see [influx apply](/influxdb/v2.6/reference/cli/influx/apply/). + + {{% note %}} +Ensure your `influx` CLI is configured with your account credentials and that configuration is active. For more information, see [influx config](/influxdb/v2.6/reference/cli/influx/config/). + {{% /note %}} + +2. [Install Telegraf](/{{< latest "telegraf" >}}/introduction/installation/) on a server with network access to both the Docker containers and [InfluxDB v2 API](/influxdb/v2.6/reference/api/). +3. In your [Telegraf configuration file (`telegraf.conf`)](/influxdb/v2.6/telegraf-configs/), do the following: + - Depending on how you run Docker, you may need to customize the [Docker input plugin](/{{< latest "telegraf" >}}/plugins//#docker) configuration, for example, you may need to specify the `endpoint` value. + - Set the following environment variables: + - INFLUX_TOKEN: Token must have permissions to read Telegraf configurations and write data to the `telegraf` bucket. See how to [view tokens](/influxdb/v2.6/security/tokens/view-tokens/). + - INFLUX_ORG: Name of your organization. See how to [view your organization](/influxdb/v2.6/organizations/view-orgs/). + - INFLUX_HOST: Your InfluxDB host URL, for example, localhost, a remote instance, or InfluxDB Cloud. + +4. [Start Telegraf](/influxdb/v2.6/write-data/no-code/use-telegraf/auto-config/#start-telegraf). + +## View incoming data + +1. In the InfluxDB user interface (UI), select **Dashboards** in the left navigation. + + {{< nav-icon "dashboards" >}} + +2. Open the **Docker** dashboard to start monitoring. 
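Telegraf fails silently if the variables from step 3 are unset, so it can help to verify them before starting the service. A minimal Python sketch (variable names from this page; the values shown are placeholders, not real credentials):

```python
import os

# Variables the template's Telegraf configuration expects (see step 3).
REQUIRED = ("INFLUX_TOKEN", "INFLUX_ORG", "INFLUX_HOST")

def missing_vars(env=os.environ):
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED if not env.get(name)]

# Placeholder values for illustration only:
example_env = {
    "INFLUX_TOKEN": "mY5uP3rS3cr3T70keN",
    "INFLUX_ORG": "example-org",
    "INFLUX_HOST": "http://localhost:8086",
}

print(missing_vars(example_env))  # []
print(missing_vars({}))           # ['INFLUX_TOKEN', 'INFLUX_ORG', 'INFLUX_HOST']
```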
diff --git a/content/influxdb/v2.6/monitor-alert/templates/infrastructure/raspberry-pi.md b/content/influxdb/v2.6/monitor-alert/templates/infrastructure/raspberry-pi.md new file mode 100644 index 000000000..57abd0870 --- /dev/null +++ b/content/influxdb/v2.6/monitor-alert/templates/infrastructure/raspberry-pi.md @@ -0,0 +1,62 @@ +--- +title: Monitor Raspberry Pi +description: > + Use the Raspberry Pi system template to monitor your Raspberry Pi 4 or 400 Linux system. +menu: + influxdb_2_6: + parent: Monitor infrastructure + name: Raspberry Pi +weight: 201 +--- + +Use the [Raspberry Pi Monitoring template](https://github.com/influxdata/community-templates/tree/master/raspberry-pi) +to monitor your Raspberry Pi 4 or 400 Linux system. + +The Raspberry Pi template includes the following: + +- one [bucket](/influxdb/v2.6/reference/glossary/#bucket): `rasp-pi` (7d retention) +- labels: `raspberry-pi` + Telegraf plugin labels + - [Diskio input plugin](/{{< latest "telegraf" >}}/plugins//#diskio) + - [Mem input plugin](/{{< latest "telegraf" >}}/plugins//#mem) + - [Net input plugin](/{{< latest "telegraf" >}}/plugins//#net) + - [Processes input plugin](/{{< latest "telegraf" >}}/plugins//#processes) + - [Swap input plugin](/{{< latest "telegraf" >}}/plugins//#swap) + - [System input plugin](/{{< latest "telegraf" >}}/plugins//#system) +- one [Telegraf configuration](/influxdb/v2.6/telegraf-configs/) +- one [dashboard](/influxdb/v2.6/reference/glossary/#dashboard): Raspberry Pi System +- two variables: `bucket` and `linux_host` + +## Apply the template + +1. Use the [`influx` CLI](/influxdb/v2.6/reference/cli/influx/) to run the following command: + + ```sh + influx apply -f https://raw.githubusercontent.com/influxdata/community-templates/master/raspberry-pi/raspberry-pi-system.yml + ``` + For more information, see [influx apply](/influxdb/v2.6/reference/cli/influx/apply/). +2. 
[Install Telegraf](/{{< latest "telegraf" >}}/introduction/installation/) on + your Raspberry Pi and ensure your Raspberry Pi has network access to the + [InfluxDB {{% cloud-only %}}Cloud{{% /cloud-only %}} API](/influxdb/v2.6/reference/api/). +3. Add the following environment variables to your Telegraf environment: + + - `INFLUX_HOST`: {{% oss-only %}}Your [InfluxDB URL](/influxdb/v2.6/reference/urls/){{% /oss-only %}} + {{% cloud-only %}}Your [InfluxDB Cloud region URL](/influxdb/cloud/reference/regions/){{% /cloud-only %}} + - `INFLUX_TOKEN`: Your [InfluxDB {{% cloud-only %}}Cloud{{% /cloud-only %}} API token](/influxdb/v2.6/security/tokens/) + - `INFLUX_ORG`: Your InfluxDB {{% cloud-only %}}Cloud{{% /cloud-only %}} organization name. + + ```sh + export INFLUX_HOST=http://localhost:8086 + export INFLUX_TOKEN=mY5uP3rS3cr3T70keN + export INFLUX_ORG=example-org + ``` + +4. [Start Telegraf](/influxdb/v2.6/write-data/no-code/use-telegraf/auto-config/#start-telegraf). + +## View the incoming data + +1. In the InfluxDB user interface (UI), select **Dashboards** in the left navigation. + + {{< nav-icon "dashboards" >}} + +2. Click **Raspberry Pi System** to open your dashboard, and then select `rasp-pi` +as your bucket and select your `linux_host`. diff --git a/content/influxdb/v2.6/monitor-alert/templates/infrastructure/vshpere.md b/content/influxdb/v2.6/monitor-alert/templates/infrastructure/vshpere.md new file mode 100644 index 000000000..5e6bfe574 --- /dev/null +++ b/content/influxdb/v2.6/monitor-alert/templates/infrastructure/vshpere.md @@ -0,0 +1,58 @@ +--- +title: Monitor vSphere +description: > + Use the [vSphere Dashboard for InfluxDB v2 template](https://github.com/influxdata/community-templates/tree/master/vsphere) to monitor your vSphere host.
+menu: + influxdb_2_6: + parent: Monitor infrastructure + name: vSphere +weight: 206 +--- + +Use the [vSphere Dashboard for InfluxDB v2 template](https://github.com/influxdata/community-templates/tree/master/vsphere) to monitor your vSphere host. First, [apply the template](#apply-the-template), and then [view incoming data](#view-incoming-data). +This template uses the [vSphere input plugin](/{{< latest "telegraf" >}}/plugins//#vsphere) to collect vSphere metrics, store them in InfluxDB, and display these metrics in a dashboard. + +The vSphere Dashboard for InfluxDB v2 template includes the following: + +- one [dashboard](/influxdb/v2.6/reference/glossary/#dashboard): **vsphere** +- one [bucket](/influxdb/v2.6/reference/glossary/#bucket): `vsphere` +- label: vsphere +- one [Telegraf configuration](/influxdb/v2.6/telegraf-configs/): InfluxDB v2 output plugin, vSphere input plugin +- one variable: `bucket` + +## Apply the template + +1. Use the [`influx` CLI](/influxdb/v2.6/reference/cli/influx/) to run the following command: + + ```sh + influx apply -f https://raw.githubusercontent.com/influxdata/community-templates/master/vsphere/vsphere.yml + ``` + For more information, see [influx apply](/influxdb/v2.6/reference/cli/influx/apply/). + + {{% note %}} +Ensure your `influx` CLI is configured with your account credentials and that configuration is active. For more information, see [influx config](/influxdb/v2.6/reference/cli/influx/config/). + {{% /note %}} + +2. [Install Telegraf](/{{< latest "telegraf" >}}/introduction/installation/) on a server with network access to both the vSphere host and [InfluxDB v2 API](/influxdb/v2.6/reference/api/). +3. In your [Telegraf configuration file (`telegraf.conf`)](/influxdb/v2.6/telegraf-configs/), do the following: + - Set the following environment variables: + - INFLUX_TOKEN: Token must have permissions to read Telegraf configurations and write data to the `vsphere` bucket. See how to [view tokens](/influxdb/v2.6/security/tokens/view-tokens/).
+ - INFLUX_ORG: Name of your organization. See how to [view your organization](/influxdb/v2.6/organizations/view-orgs/). + - INFLUX_HOST: Your InfluxDB host URL, for example, localhost, a remote instance, or InfluxDB Cloud. + - INFLUX_BUCKET: Bucket to store data in. To use the bucket included, you must export the variable: `export INFLUX_BUCKET=vsphere` + - Set the vSphere host address and provide the `username` and `password` as variables: + ```sh + vcenters = [ "https://$VSPHERE_HOST/sdk" ] + username = "$vsphere-user" + password = "$vsphere-password" + ``` + +4. [Start Telegraf](/influxdb/v2.6/write-data/no-code/use-telegraf/auto-config/#start-telegraf). + +## View incoming data + +1. In the InfluxDB user interface (UI), select **Dashboards** in the left navigation. + + {{< nav-icon "dashboards" >}} + +2. Open the **vsphere** dashboard to start monitoring. diff --git a/content/influxdb/v2.6/monitor-alert/templates/infrastructure/windows.md b/content/influxdb/v2.6/monitor-alert/templates/infrastructure/windows.md new file mode 100644 index 000000000..bd590cfd7 --- /dev/null +++ b/content/influxdb/v2.6/monitor-alert/templates/infrastructure/windows.md @@ -0,0 +1,55 @@ +--- +title: Monitor Windows +description: > + Use the [Windows System Monitoring template](https://github.com/influxdata/community-templates/tree/master/windows_system) to monitor your Windows system.
+ +The Windows System Monitoring template includes the following: + +- one [dashboard](/influxdb/v2.6/reference/glossary/#dashboard): **Windows System** +- one [bucket](/influxdb/v2.6/reference/glossary/#bucket): `telegraf`, 7d retention +- label: `Windows System Template`, Telegraf plugin labels: `outputs.influxdb_v2` +- one [Telegraf configuration](/influxdb/v2.6/telegraf-configs/): InfluxDB v2 output plugin, Windows Performance Counters input plugin +- two variables: `bucket`, `windows_host` + +## Apply the template + +1. Use the [`influx` CLI](/influxdb/v2.6/reference/cli/influx/) to run the following command: + + ```sh + influx apply -f https://raw.githubusercontent.com/influxdata/community-templates/master/windows_system/windows_system.yml + ``` + For more information, see [influx apply](/influxdb/v2.6/reference/cli/influx/apply/). + + {{% note %}} +Ensure your `influx` CLI is configured with your account credentials and that configuration is active. For more information, see [influx config](/influxdb/v2.6/reference/cli/influx/config/). + {{% /note %}} + +2. [Install Telegraf](/{{< latest "telegraf" >}}/introduction/installation/) on a server with network access to both the Windows system and [InfluxDB v2 API](/influxdb/v2.6/reference/api/). +3. In your [Telegraf configuration file (`telegraf.conf`)](/influxdb/v2.6/telegraf-configs/), do the following: + - Set the following environment variables: + - INFLUX_TOKEN: Token must have permissions to read Telegraf configurations and write data to the `telegraf` bucket. See how to [view tokens](/influxdb/v2.6/security/tokens/view-tokens/). + - INFLUX_ORG: Name of your organization. See how to [view your organization](/influxdb/v2.6/organizations/view-orgs/). + - INFLUX_URL: Your InfluxDB host URL, for example, localhost, a remote instance, or InfluxDB Cloud. + +4. [Start Telegraf](/influxdb/v2.6/write-data/no-code/use-telegraf/auto-config/#start-telegraf). +5. 
To monitor multiple Windows systems, repeat steps 1-4 for each system. + +## View incoming data + +1. In the InfluxDB user interface (UI), select **Dashboards** in the left navigation. + + {{< nav-icon "dashboards" >}} + +2. Open the **Windows System** dashboard to start monitoring. + + {{% note %}} + If you're monitoring multiple Windows machines, switch between them using the `windows_host` filter at the top of the dashboard. + {{% /note %}} diff --git a/content/influxdb/v2.6/monitor-alert/templates/monitor.md b/content/influxdb/v2.6/monitor-alert/templates/monitor.md new file mode 100644 index 000000000..c3a3fcd9a --- /dev/null +++ b/content/influxdb/v2.6/monitor-alert/templates/monitor.md @@ -0,0 +1,175 @@ +--- +title: Monitor InfluxDB OSS using a template +description: > + Monitor your InfluxDB OSS instance using InfluxDB Cloud and + a pre-built InfluxDB template. +menu: + influxdb_2_6: + parent: Monitor with templates + name: Monitor InfluxDB OSS +weight: 102 +influxdb/v2.6/tags: [templates, monitor] +aliases: + - /influxdb/v2.6/influxdb-templates/monitor/ +related: + - /influxdb/v2.6/reference/cli/influx/apply/ + - /influxdb/v2.6/reference/cli/influx/template/ +--- + +Use [InfluxDB Cloud](/influxdb/cloud/), the [InfluxDB Open Source (OSS) Metrics template](https://github.com/influxdata/community-templates/tree/master/influxdb2_oss_metrics), +and [Telegraf](/{{< latest "telegraf" >}}/) to monitor one or more InfluxDB OSS instances. + +Do the following: + +1. [Review requirements](#review-requirements) +2. [Install the InfluxDB OSS Monitoring template](#install-the-influxdb-oss-monitoring-template) +3. [Set up InfluxDB OSS for monitoring](#set-up-influxdb-oss-for-monitoring) +4. [Set up Telegraf](#set-up-telegraf) +5. [View the Monitoring dashboard](#view-the-monitoring-dashboard) +6. (Optional) [Alert when metrics stop reporting](#alert-when-metrics-stop-reporting) +7. 
(Optional) [Create a notification endpoint and rule](#create-a-notification-endpoint-and-rule) + +## Review requirements + +Before you begin, make sure you have access to the following: + +- InfluxDB Cloud account ([sign up for free here](https://cloud2.influxdata.com/signup)) +- Command line access to a machine [running InfluxDB OSS 2.x](/influxdb/v2.6/install/) and permissions to install Telegraf on this machine +- Internet connectivity from the machine running InfluxDB OSS 2.x and Telegraf to InfluxDB Cloud +- Sufficient resource availability to install the template (InfluxDB Cloud Free + Plan accounts include [resource limits](/influxdb/cloud/account-management/pricing-plans/#resource-limits)) + +## Install the InfluxDB OSS Monitoring template + +The InfluxDB OSS Monitoring template includes a Telegraf configuration that sends +InfluxDB OSS metrics to an InfluxDB endpoint and a dashboard that visualizes the metrics. + +1. [Log into your InfluxDB Cloud account](https://cloud2.influxdata.com/). +2. Go to **Settings > Templates** in the navigation bar on the left. + + {{< nav-icon "Settings" >}} + +3. Under **Paste the URL of the Template's resource manifest file**, enter the + following template URL: + + ``` + https://raw.githubusercontent.com/influxdata/community-templates/master/influxdb2_oss_metrics/influxdb2_oss_metrics.yml + ``` + +4. Click **{{< caps >}}Lookup Template{{< /caps >}}**, and then click **{{< caps >}}Install Template{{< /caps >}}**.
+ InfluxDB Cloud imports the template, which includes the following resources: + + - Dashboard `InfluxDB OSS Metrics` + - Telegraf configuration `scrape-influxdb-oss-telegraf` + - Bucket `oss_metrics` + - Check `InfluxDB OSS Deadman` + - Labels `influxdb2` and `prometheus` + +## Set up InfluxDB OSS for monitoring + +By default, InfluxDB OSS 2.x has a `/metrics` endpoint available, which exports +internal InfluxDB metrics in [Prometheus format](https://prometheus.io/docs/concepts/data_model/). + +1. Ensure the `/metrics` endpoint is [enabled](/{{< latest "influxdb" >}}/reference/config-options/#metrics-disabled). + If you've changed the default settings to disable the `/metrics` endpoint, + [re-enable these settings](/{{< latest "influxdb" >}}/reference/config-options/#metrics-disabled). +2. Navigate to the `/metrics` endpoint of your InfluxDB OSS instance to view the InfluxDB OSS system metrics in your browser: + +## Set up Telegraf + +Set up Telegraf to scrape metrics from InfluxDB OSS to send to your InfluxDB Cloud account. + +On each InfluxDB OSS instance you want to monitor, do the following: + +1. [Install Telegraf](/{{< latest "telegraf" >}}/introduction/installation/). +2. Set the following environment variables in your Telegraf environment: + + - `INFLUX_URL`: Your [InfluxDB Cloud region URL](/influxdb/cloud/reference/regions/) + - `INFLUX_ORG`: Your InfluxDB Cloud organization name + +1. [In the InfluxDB Cloud UI](https://cloud2.influxdata.com/), go to **Load Data > Telegraf** in the left navigation. + + {{< nav-icon "load-data" >}} + +2. Click **Setup Instructions** under **Scrape InfluxDB OSS Metrics**. +3. Complete the Telegraf Setup instructions to start Telegraf using the Scrape InfluxDB OSS Metrics + Telegraf configuration stored in InfluxDB Cloud. + + {{% note %}} +For your API token, generate a new token or use an existing All Access token. 
If you run Telegraf as a service, edit your init script to set the environment variable and ensure it's available to the service. + {{% /note %}} + +Telegraf runs quietly in the background (no immediate output appears) and begins +pushing metrics to the `oss_metrics` bucket in your InfluxDB Cloud account. + +## View the Monitoring dashboard + +To see your data in real time, view the Monitoring dashboard. + +1. Select **Dashboards** in your **InfluxDB Cloud** account. + + {{< nav-icon "dashboards" >}} + +2. Click **InfluxDB OSS Metrics**. Metrics appear in your dashboard. +3. Customize your monitoring dashboard as needed. For example, send an alert in the following cases: + - Users create a new task or bucket + - You're testing machine limits + - [Metrics stop reporting](#alert-when-metrics-stop-reporting) + +## Alert when metrics stop reporting + +The Monitoring template includes a [deadman check](/influxdb/cloud/monitor-alert/checks/create/#deadman-check) to verify metrics are reported at regular intervals. + +To alert when data stops flowing from InfluxDB OSS instances to your InfluxDB Cloud account, do the following: + +1. [Customize the deadman check](#customize-the-deadman-check) to identify the fields you want to monitor. +2. [Create a notification endpoint and rule](#create-a-notification-endpoint-and-rule) to receive notifications when your deadman check is triggered. + +### Customize the deadman check + +1. To view the deadman check, click **Alerts** in the navigation bar of your **InfluxDB Cloud** account. + + {{< nav-icon "alerts" >}} + +2. Choose an InfluxDB OSS field or create a new OSS field for your deadman alert: + 1. Click **{{< caps >}}{{< icon "plus" >}} Create{{< /caps >}}** and select **Deadman Check** in the dropdown menu. + 2. Define your query with at least one field. + 3. Click **{{< caps >}}Submit{{< /caps >}}** and **{{< caps >}}Configure Check{{< /caps >}}**. + When metrics stop reporting, you'll receive an alert. +3.
Under **Schedule Every**, set the amount of time to check for data. +4. Set the amount of time to wait before switching to a critical alert. +5. Click **{{< icon "check" >}}** to save the check. + +## Create a notification endpoint and rule + +To receive a notification message when your deadman check is triggered, create a [notification endpoint](#create-a-notification-endpoint) and [rule](#create-a-notification-rule). + +### Create a notification endpoint + +InfluxData supports different endpoints: Slack, PagerDuty, and HTTP. Slack is free for all users, while PagerDuty and HTTP are exclusive to the Usage-Based Plan. + +#### Send a notification to Slack + +1. Create a [Slack webhook](https://api.slack.com/messaging/webhooks). +2. Go to **Alerts > Alerts** in the left navigation menu and then click **{{< caps >}}Notification Endpoints{{< /caps >}}**. + + {{< nav-icon "alerts" >}} + +3. Click **{{< caps >}}{{< icon "plus" >}} Create{{< /caps >}}**, and enter a name and description for your Slack endpoint. +4. Enter your Slack webhook under **Incoming Webhook URL** and click **{{< caps >}}Create Notification Endpoint{{< /caps >}}**. + +#### Send a notification to PagerDuty or HTTP + +Send a notification to PagerDuty or HTTP endpoints (other webhooks) by [upgrading your InfluxDB Cloud account](/influxdb/cloud/account-management/billing/#upgrade-to-usage-based-plan). + +### Create a notification rule + +[Create a notification rule](/influxdb/cloud/monitor-alert/notification-rules/create/) to set rules for when to send a deadman alert message to your notification endpoint. + +1. Go to **Alerts > Alerts** in the left navigation menu and then click **{{< caps >}}Notification Rules{{< /caps >}}**. + + {{< nav-icon "alerts" >}} + +2. Click **{{< caps >}}{{< icon "plus" >}} Create{{< /caps >}}**, and then provide + the required information. +3. Click **{{< caps >}}Create Notification Rule{{< /caps >}}**.
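The `/metrics` endpoint described above serves plain text in the Prometheus exposition format: one `name value` pair per line, with `#`-prefixed `HELP`/`TYPE` comment lines. A rough Python sketch of reading such output (the sample below is illustrative; real output contains many metrics):

```python
# Minimal parser for the Prometheus text format served at /metrics.
# Sample lines for illustration only -- fetch the real text from
# http://localhost:8086/metrics on a default OSS install.
sample = """\
# HELP boltdb_reads_total Total number of boltdb reads
# TYPE boltdb_reads_total counter
boltdb_reads_total 41
"""

def parse_metrics(text):
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and HELP/TYPE comment lines
        name, value = line.rsplit(maxsplit=1)
        metrics[name] = float(value)
    return metrics

print(parse_metrics(sample))  # {'boltdb_reads_total': 41.0}
```

This is the same format Telegraf's scraper consumes, so it can also help when debugging why a metric is missing from the `oss_metrics` bucket.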
diff --git a/content/influxdb/v2.6/monitor-alert/templates/networks/_index.md b/content/influxdb/v2.6/monitor-alert/templates/networks/_index.md new file mode 100644 index 000000000..2dd665bba --- /dev/null +++ b/content/influxdb/v2.6/monitor-alert/templates/networks/_index.md @@ -0,0 +1,14 @@ +--- +title: Monitor networks +description: > + Use one of our community templates to quickly set up InfluxDB (with a bucket and dashboard) to collect, analyze, and monitor your networks. +menu: + influxdb_2_6: + parent: Monitor with templates +weight: 104 +influxdb/v2.6/tags: [monitor, templates, networks, networking] +--- + +Use one of our community templates to quickly set up InfluxDB (with a bucket and dashboard) to collect, analyze, and monitor your networks. + +{{< children >}} \ No newline at end of file diff --git a/content/influxdb/v2.6/monitor-alert/templates/networks/haproxy.md b/content/influxdb/v2.6/monitor-alert/templates/networks/haproxy.md new file mode 100644 index 000000000..3cad8bb66 --- /dev/null +++ b/content/influxdb/v2.6/monitor-alert/templates/networks/haproxy.md @@ -0,0 +1,49 @@ +--- +title: Monitor HAProxy +description: > + Use the [HAProxy for InfluxDB v2 template](https://github.com/influxdata/community-templates/tree/master/haproxy) to monitor your HAProxy instance. +menu: + influxdb_2_6: + parent: Monitor networks + name: HAproxy +weight: 201 +--- + +Use the [HAProxy for InfluxDB v2 template](https://github.com/influxdata/community-templates/tree/master/haproxy) to monitor your HAProxy instances. First, [apply the template](#apply-the-template), and then [view incoming data](#view-incoming-data). +This template uses the [HAProxy input plugin](/{{< latest "telegraf" >}}/plugins//#haproxy) to collect metrics stored in an HAProxy instance and display these metrics in a dashboard. 
+ +The HAProxy for InfluxDB v2 template includes the following: + +- one [dashboard](/influxdb/v2.6/reference/glossary/#dashboard): **HAProxy** +- one [bucket](/influxdb/v2.6/reference/glossary/#bucket): `haproxy` +- label: `haproxy` +- one [Telegraf configuration](/influxdb/v2.6/telegraf-configs/): HAProxy input plugin, InfluxDB v2 output plugin +- one variable: `bucket` + +## Apply the template + +1. Use the [`influx` CLI](/influxdb/v2.6/reference/cli/influx/) to run the following command: + + ```sh + influx apply -f https://raw.githubusercontent.com/influxdata/community-templates/master/haproxy/haproxy.yml + ``` + For more information, see [influx apply](/influxdb/v2.6/reference/cli/influx/apply/). + + > **Note:** Ensure your `influx` CLI is configured with your account credentials and that configuration is active. For more information, see [influx config](/influxdb/v2.6/reference/cli/influx/config/). + +2. [Install Telegraf](/{{< latest "telegraf" >}}/introduction/installation/) on a server with network access to both the HAProxy instances and [InfluxDB v2 API](/influxdb/v2.6/reference/api/). +3. In your [Telegraf configuration file (`telegraf.conf`)](/influxdb/v2.6/telegraf-configs/), do the following: + - Set the following environment variables: + - INFLUX_TOKEN: Token must have permissions to read Telegraf configurations and write data to the `haproxy` bucket. See how to [view tokens](/influxdb/v2.6/security/tokens/view-tokens/). + - INFLUX_ORG: Name of your organization. See how to [view your organization](/influxdb/v2.6/organizations/view-orgs/). + - INFLUX_HOST: Your InfluxDB host URL, for example, localhost, a remote instance, or InfluxDB Cloud. + +4. [Start Telegraf](/influxdb/v2.6/write-data/no-code/use-telegraf/auto-config/#start-telegraf). + +## View incoming data + +1. In the InfluxDB user interface (UI), select **Dashboards** in the left navigation. + + {{< nav-icon "dashboards" >}} + +2. Open the **HAProxy** dashboard to start monitoring. 
diff --git a/content/influxdb/v2.6/notebooks/_index.md b/content/influxdb/v2.6/notebooks/_index.md new file mode 100644 index 000000000..ec9dd7e0b --- /dev/null +++ b/content/influxdb/v2.6/notebooks/_index.md @@ -0,0 +1,17 @@ +--- +title: Notebooks +seotitle: Build notebooks in InfluxDB +description: > + Use notebooks to build and annotate processes and data flows for time series data. +menu: + influxdb_2_6: + name: Notebooks +weight: 6 +influxdb/v2.6/tags: [notebooks] +--- + +Notebooks are a way to build and annotate processes and data flows for time series data. Notebooks include cells and controls to transform the data in your bucket, and offer countless other possibilities. + +To learn how to use notebooks, check out the following articles: + +{{< children >}} diff --git a/content/influxdb/v2.6/notebooks/clean-data.md b/content/influxdb/v2.6/notebooks/clean-data.md new file mode 100644 index 000000000..c736c88e8 --- /dev/null +++ b/content/influxdb/v2.6/notebooks/clean-data.md @@ -0,0 +1,163 @@ +--- +title: Normalize data with notebooks +description: > + Learn how to create a notebook that normalizes or cleans data to make it + easier to work with. +weight: 105 +influxdb/v2.6/tags: [notebooks] +menu: + influxdb_2_6: + name: Normalize data + parent: Notebooks +--- + +Learn how to create a notebook that normalizes data. +Data normalization is the process of modifying or cleaning data to make it easier to +work with. Examples include adjusting numeric values to a uniform scale and modifying strings. + +Walk through the following example to create a notebook that queries +[NOAA NDBC sample data](/influxdb/v2.6/reference/sample-data/#noaa-ndbc-data), +normalizes degree-based wind directions to cardinal directions, and then writes +the normalized data to a bucket. + +{{< cloud-only >}} +{{% cloud %}} +**Note**: Using sample data counts towards your total InfluxDB Cloud usage. +{{% /cloud %}} +{{< /cloud-only >}} + +1.
[Create a new notebook](/influxdb/v2.6/notebooks/create-notebook/). +2. In the **Build a Query** cell: + + 1. In the **FROM** column under **{{% caps %}}Sample{{% /caps %}}**, + select **NOAA National Buoy Data**. + 2. In the next **FILTER** column, select **_measurement** from the drop-down list + and select the **ndbc** measurement in the list of measurements. + 3. In the next **FILTER** column, select **_field** from the drop-down list, + and select the **wind\_dir\_degt** field from the list of fields. + +3. Click {{% icon "notebook-add-cell" %}} after your **Build a Query** cell to + add a new cell and select **{{% caps %}}Flux Script{{% /caps %}}**. + +4. In the Flux script cell: + + 1. Define a custom function (`cardinalDir()`) that converts a numeric degree + value to a cardinal direction (N, NNE, NE, etc.). + 2. Use `__PREVIOUS_RESULT__` to load the output of the previous notebook + cell into the Flux script. + 3. Use [`map()`](/{{< latest "flux" >}}/stdlib/universe/map/) to iterate + over each input row, update the field key to `wind_dir_cardinal`, and + normalize the `_value` column to a cardinal direction using the custom + `cardinalDir()` function. + 4. {{% cloud-only %}} + + Use [`to()`](/{{< latest "flux">}}/stdlib/influxdata/influxdb/to/) + to write the normalized data back to InfluxDB. + Specify an existing bucket to write to or + [create a new bucket](/influxdb/v2.6/organizations/buckets/create-bucket/). 
+
+     {{% /cloud-only %}}
+
+     {{% oss-only %}}
+
+     ```js
+     import "array"
+
+     cardinalDir = (d) => {
+         _cardinal = if d >= 348.75 or d < 11.25 then "N"
+             else if d >= 11.25 and d < 33.75 then "NNE"
+             else if d >= 33.75 and d < 56.25 then "NE"
+             else if d >= 56.25 and d < 78.75 then "ENE"
+             else if d >= 78.75 and d < 101.25 then "E"
+             else if d >= 101.25 and d < 123.75 then "ESE"
+             else if d >= 123.75 and d < 146.25 then "SE"
+             else if d >= 146.25 and d < 168.75 then "SSE"
+             else if d >= 168.75 and d < 191.25 then "S"
+             else if d >= 191.25 and d < 213.75 then "SSW"
+             else if d >= 213.75 and d < 236.25 then "SW"
+             else if d >= 236.25 and d < 258.75 then "WSW"
+             else if d >= 258.75 and d < 281.25 then "W"
+             else if d >= 281.25 and d < 303.75 then "WNW"
+             else if d >= 303.75 and d < 326.25 then "NW"
+             else if d >= 326.25 and d < 348.75 then "NNW"
+             else ""
+
+         return _cardinal
+     }
+
+     __PREVIOUS_RESULT__
+         |> map(fn: (r) => ({r with
+             _field: "wind_dir_cardinal",
+             _value: cardinalDir(d: r._value),
+         }))
+     ```
+     {{% /oss-only %}}
+
+     {{% cloud-only %}}
+
+     ```js
+     import "array"
+
+     cardinalDir = (d) => {
+         _cardinal = if d >= 348.75 or d < 11.25 then "N"
+             else if d >= 11.25 and d < 33.75 then "NNE"
+             else if d >= 33.75 and d < 56.25 then "NE"
+             else if d >= 56.25 and d < 78.75 then "ENE"
+             else if d >= 78.75 and d < 101.25 then "E"
+             else if d >= 101.25 and d < 123.75 then "ESE"
+             else if d >= 123.75 and d < 146.25 then "SE"
+             else if d >= 146.25 and d < 168.75 then "SSE"
+             else if d >= 168.75 and d < 191.25 then "S"
+             else if d >= 191.25 and d < 213.75 then "SSW"
+             else if d >= 213.75 and d < 236.25 then "SW"
+             else if d >= 236.25 and d < 258.75 then "WSW"
+             else if d >= 258.75 and d < 281.25 then "W"
+             else if d >= 281.25 and d < 303.75 then "WNW"
+             else if d >= 303.75 and d < 326.25 then "NW"
+             else if d >= 326.25 and d < 348.75 then "NNW"
+             else ""
+
+         return _cardinal
+     }
+
+     __PREVIOUS_RESULT__
+         |> map(fn: (r) => ({r with
+             _field: "wind_dir_cardinal",
+             _value:
cardinalDir(d: r._value),
+         }))
+         |> to(bucket: "example-bucket")
+     ```
+     {{% /cloud-only %}}
+
+5. {{% oss-only %}}
+
+   Click {{% icon "notebook-add-cell" %}} after your **Flux Script** cell to
+   add a new cell and select **{{% caps %}}Output to Bucket{{% /caps %}}**.
+   Select a bucket from the **{{% icon "bucket" %}} Choose a bucket**
+   drop-down list.
+
+   {{% /oss-only %}}
+
+6. _(Optional)_ Click {{% icon "notebook-add-cell" %}} and select **Note** to
+   add a cell containing notes about what this notebook does. For example, the
+   cell might say, "This notebook converts decimal degree wind direction values
+   to cardinal directions."
+7. {{% oss-only %}}
+
+   Click **Preview** in the upper left to verify that your notebook runs and previews the output.
+
+   {{% /oss-only %}}
+8. Click **Run** to run the notebook and write the normalized data to your bucket.
+
+## Continuously run a notebook
+To continuously run your notebook, export the notebook as a task:
+
+1. Click {{% icon "notebook-add-cell" %}} to add a new cell and then select
+   **{{% caps %}}Task{{% /caps %}}**.
+2. Provide the following:
+
+   - **Every**: Interval at which the task runs.
+   - **Offset**: _(Optional)_ Time to wait after the defined interval to execute the task.
+     This allows the task to capture late-arriving data.
+
+3. Click **{{% icon "export" %}} Export as Task**. diff --git a/content/influxdb/v2.6/notebooks/create-notebook.md b/content/influxdb/v2.6/notebooks/create-notebook.md new file mode 100644 index 000000000..9fd0f5676 --- /dev/null +++ b/content/influxdb/v2.6/notebooks/create-notebook.md @@ -0,0 +1,180 @@
+---
+title: Create a notebook
+description: >
+  Create a notebook to explore, visualize, and process your data.
+weight: 102
+influxdb/v2.6/tags: [notebooks]
+menu:
+  influxdb_2_6:
+    name: Create a notebook
+    parent: Notebooks
+---
+
+Create a notebook to explore, visualize, and process your data.
+Learn how to add and configure cells to customize your notebook.
+To learn the benefits and concepts of notebooks, see [Overview of Notebooks](/influxdb/v2.6/notebooks/overview/).
+
+- [Create a notebook from a preset](#create-a-notebook-from-a-preset)
+- [Use data source cells](#use-data-source-cells)
+- [Use visualization cells](#use-visualization-cells)
+- [Add a data source cell](#add-a-data-source-cell)
+- [Add a validation cell](#add-a-validation-cell)
+- [Add a visualization cell](#add-a-visualization-cell)
+- [Add an action cell](#add-an-action-cell)
+
+## Create a notebook from a preset
+
+To create a new notebook, do the following:
+
+1. In the navigation menu on the left, click **Notebooks**.
+
+   {{< nav-icon "notebooks" >}}
+2. In the **Notebooks** page, select one of the following options under **Create a Notebook**:
+   - **New Notebook**: includes a [query builder cell](#add-a-data-source-cell), a [validation cell](#add-a-validation-cell), and a [visualization cell](#add-a-visualization-cell).
+   - **Set an Alert**: includes a [query builder cell](#add-a-data-source-cell), a [validation cell](#add-a-validation-cell), a [visualization cell](#add-a-visualization-cell), and an [alert builder cell](#add-an-action-cell).
+   - **Schedule a Task**: includes a [Flux script editor cell](#add-a-data-source-cell), a [validation cell](#add-a-validation-cell), and a [task schedule cell](#add-an-action-cell).
+   - **Write a Flux Script**: includes a [Flux script editor cell](#add-a-data-source-cell) and a [validation cell](#add-a-validation-cell).
+
+3. Enter a name for your notebook in the **Untitled Notebook** field.
+4. Do the following at the top of the page:
+   - Select your local time zone or UTC.
+   - Choose a time [range](/{{< latest "flux" >}}/stdlib/universe/range/) for your data.
+5. Your notebook should have a **Data Source** cell as the first cell. **Data Source** cells provide data to subsequent cells. The presets (listed in step 2) include either a **Query Builder** or a **Flux Script** as the first cell.
+6.
To define your data source query, do one of the following: + - If your notebook uses a **Query Builder** cell, select your bucket and any additional filters for your query. + - If your notebook uses a **Flux Script** cell, enter or paste a [Flux script](/influxdb/v2.6/query-data/flux/). +7. {{% oss-only %}} + + Select and click **Preview** (or press **CTRL + Enter**) under the notebook title. + InfluxDB displays query results in **Validate the Data** and **Visualize the Result** *without writing data or + running actions*. + + {{% /oss-only %}} +8. (Optional) Change your visualization settings with the drop-down menu and the {{< icon "gear" >}} **Configure** button at the top of the **Visualize the Result** cell. +9. (Optional) Toggle the **Presentation** switch to display visualization cells and hide all other cells. +10. (Optional) Configure notebook actions {{% oss-only %}}(**Alert**, **Task**, or **Output to Bucket**){{% /oss-only %}}{{% cloud-only %}}(**Alert** or **Task**){{% /cloud-only %}}. +11. (Optional) To run your notebook actions, select and click **Run** under the notebook title. +12. (Optional) To add a new cell, follow the steps for one of the cell types: + + - [Add a data source cell](#add-a-data-source-cell) + - [Add a validation cell](#add-a-validation-cell) + - [Add a visualization cell](#add-a-visualization-cell) + - [Add an action cell](#add-an-action-cell) +13. (Optional) [Convert a query builder cell into raw Flux script](#convert-a-query-builder-to-flux) to view and edit the code. + +## Use Data Source cells + +### Convert a Query Builder to Flux +To edit the raw Flux script of a **Query Builder** cell, convert the cell to Flux. + +{{% warn %}} +You can't convert a **Flux Script** editor cell to a **Query Builder** cell. +Once you convert a **Query Builder** cell to a **Flux Script** editor cell, you can't convert it back. +{{% /warn %}} + +1. 
Click the {{% icon "more" %}} icon in the **Query Builder** cell you want to edit as Flux, and then select **Convert to |> Flux**.
+You won't be able to undo this step.
+
+   A **Flux Script** editor cell containing the raw Flux script replaces the **Query Builder** cell.
+
+2. View and edit the Flux script as needed.
+
+## Use visualization cells
+
+- To change your [visualization type](/influxdb/v2.6/visualize-data/visualization-types/), select a new type from the drop-down list at the top of the cell.
+- (For histogram only) To specify values, click **Select**.
+- To configure the visualization, click **Configure**.
+- To download results as an annotated CSV file, click the **CSV** button.
+- To export to the dashboard, click **Export to Dashboard**.
+
+## Add a data source cell
+
+Add a [data source cell](/influxdb/v2.6/notebooks/overview/#data-source) to pull information into your notebook.
+
+To add a data source cell, do the following:
+1. Click {{< icon "notebook-add-cell" >}}.
+2. Select **{{< caps >}}Flux Script{{< /caps >}}** or **{{< caps >}}Query Builder{{< /caps >}}** as your input, and then select or enter the bucket to pull data from.
+3. Select filters to narrow your data.
+4. Select {{% oss-only %}}**Preview** (**CTRL + Enter**) or {{% /oss-only %}}**Run** in the upper left drop-down list.
+
+## Add a validation cell
+
+A validation cell uses the **Table** [visualization type](/influxdb/v2.6/visualize-data/visualization-types/) to display query results from a data source cell.
+
+To add a **Table** visualization cell, do the following:
+
+1. Click {{< icon "notebook-add-cell" >}}.
+2. Under **Visualization**, click **{{< caps >}}Table{{< /caps >}}**.
+
+## Add a visualization cell
+
+Add a visualization cell to render query results as a [visualization type](/influxdb/v2.6/visualize-data/visualization-types/).
+
+To add a visualization cell, do the following:
+
+1. Click {{< icon "notebook-add-cell" >}}.
+2. Under **Visualization**, select one of the following visualization cell types:
+
+   - **{{< caps >}}Table{{< /caps >}}**: Display data in tabular format.
+   - **{{< caps >}}Graph{{< /caps >}}**: Visualize data using InfluxDB visualizations.
+   - **{{< caps >}}Note{{< /caps >}}**: Use Markdown to add notes or other information to your notebook.
+
+To modify a visualization cell, see [use visualization cells](#use-visualization-cells).
+For details on available visualization types and how to use them, see [Visualization types](/influxdb/v2.6/visualize-data/visualization-types/).
+
+## Add an action cell
+
+Add an [action cell](/influxdb/v2.6/notebooks/overview/#action) to create an [alert](/influxdb/v2.6/monitor-alert/)
+{{% cloud-only %}}or{{% /cloud-only %}}{{% oss-only %}},{{% /oss-only %}} process data with a [task](/influxdb/v2.6/process-data/manage-tasks/)
+{{% oss-only %}}, or output data to a bucket{{% /oss-only %}}.
+
+{{% oss-only %}}
+
+{{% warn %}}
+If your cell contains a custom script that uses any output function to write data to InfluxDB (for example: the `to()` function) or sends data to a third-party service, clicking **Preview** will write data.
+{{% /warn %}}
+
+{{% /oss-only %}}
+
+- [Add an Alert cell](#add-an-alert-cell)
+- {{% oss-only %}}[Add an Output to Bucket cell](#add-an-output-to-bucket-cell){{% /oss-only %}}
+- [Add a Task cell](#add-a-task-cell)
+
+### Add an Alert cell
+
+To add an [alert](/influxdb/v2.6/monitor-alert/) to your notebook, do the following:
+
+1. Enter a time range to automatically check the data, and enter your query offset.
+2. Customize the conditions to send an alert.
+3. Select an endpoint to receive an alert:
+   - Slack and a Slack Channel
+   - HTTP post
+   - PagerDuty
+4. (Optional) Personalize your message. By default, the message is:
+   ```
+   ${strings.title(v: r._type)} for ${r._source_measurement} triggered at ${time(v: r._source_timestamp)}!
+   ```
+5.
Click **{{< caps >}}Test Alert{{< /caps >}}** to send a test message to your configured **Endpoint**. The test will not schedule the new alert. +6. Click **{{< icon "export" >}} {{< caps >}}Export Alert Task{{< /caps >}}** to create your alert. + +{{% oss-only %}} + +### Add an Output to Bucket cell + +To write **Data Source** results to a bucket, do the following: + +1. Click {{% icon "notebook-add-cell" %}}. +2. Click **{{< caps >}}Output to Bucket{{< /caps >}}**. +3. In the **{{< icon "bucket" >}} Choose a bucket** drop-down list, select or create a bucket. +4. Click **Preview** to view the query result in validation cells. +5. Select and click **Run** in the upper left to write the query result to the bucket. + +{{% /oss-only %}} + +### Add a Task cell + +To add a [task](/influxdb/v2.6/process-data/manage-tasks/) to your notebook, do the following: + +1. Click {{% icon "notebook-add-cell" %}}. +2. Click **{{< caps >}}Task{{< /caps >}}**. +3. Enter a time and an offset to schedule the task. +4. Click **{{< icon "task" >}} {{< caps >}}Export as Task{{< /caps >}}** to save. diff --git a/content/influxdb/v2.6/notebooks/downsample.md b/content/influxdb/v2.6/notebooks/downsample.md new file mode 100644 index 000000000..907ce6b0b --- /dev/null +++ b/content/influxdb/v2.6/notebooks/downsample.md @@ -0,0 +1,111 @@ +--- +title: Downsample data with notebooks +description: > + Create a notebook to downsample data. Downsampling aggregates or summarizes data + within specified time intervals, reducing the overall disk usage as data + collects over time. +weight: 104 +influxdb/v2.6/tags: [notebooks] +menu: + influxdb_2_6: + name: Downsample data + identifier: notebooks-downsample + parent: Notebooks +--- + +Create a notebook to downsample data. Downsampling aggregates or summarizes data +within specified time intervals, reducing the overall disk usage as data +collects over time. 
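The key idea in this walkthrough is windowed aggregation. Conceptually, `aggregateWindow(every: 10m, fn: mean)` groups points into ten-minute windows and averages each window. The following Python sketch illustrates the idea only; it is not how InfluxDB implements it, and note that Flux labels each output window by its stop time, while this sketch keys by the window start for simplicity:

```python
from collections import defaultdict

# Sketch: group (timestamp_seconds, value) points into fixed windows and
# average each window, mirroring aggregateWindow(every: 10m, fn: mean).
# Illustration only; Flux labels each window by its stop time, this sketch
# keys windows by their start time.
def window_mean(points, every=600):
    windows = defaultdict(list)
    for ts, value in points:
        windows[ts - ts % every].append(value)  # start of the window the point falls in
    return {start: sum(vals) / len(vals) for start, vals in sorted(windows.items())}

points = [(0, 10.0), (300, 20.0), (600, 30.0), (900, 50.0)]
print(window_mean(points))
# {0: 15.0, 600: 40.0}
```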
+ +The following example creates a notebook that queries **Coinbase bitcoin price +sample data** from the last hour, downsamples the data into ten minute summaries, +and then writes the downsampled data to an InfluxDB bucket. + +1. If you do not have an existing bucket to write the downsampled data to, + [create a new bucket](/influxdb/v2.6/organizations/buckets/create-bucket/). +2. [Create a new notebook](/influxdb/v2.6/notebooks/create-notebook/). +3. Select **Past 1h** from the time range drop-down list at the top of your notebook. +4. In the **Build a Query** cell: + + 1. In the **FROM** column under **{{% caps %}}Sample{{% /caps %}}**, + select **Coinbase bitcoin price**. + 2. In the next **FILTER** column, select **_measurement** from the drop-down list + and select the **coindesk** measurement in the list of measurements. + 3. In the next **FILTER** column, select **_field** from the drop-down list, + and select the **price** field from the list of fields. + 4. In the next **FILTER** column, select **code** from the drop-down list, + and select a currency code. + +5. Click {{% icon "notebook-add-cell" %}} after your **Build a Query** cell to + add a new cell and select **{{% caps %}}Flux Script{{% /caps %}}**. + +6. In the Flux script cell: + + 1. Use `__PREVIOUS_RESULT__` to load the output of the previous notebook + cell into the Flux script. + 2. Use [`aggregateWindow()`](/{{< latest "flux" >}}/stdlib/universe/aggregatewindow/) + to window data into ten minute intervals and return the average of each interval. + Specify the following parameters: + + - **every**: Window interval _(should be less than or equal to the duration of the queried time range)_. + For this example, use `10m`. + - **fn**: [Aggregate](/{{< latest "flux" >}}/function-types/#aggregates) + or [selector](/{{< latest "flux" >}}/function-types/#selectors) function + to apply to each window. + For this example, use `mean`. + + 3. 
{{% cloud-only %}}
+
+      Use [`to()`](/{{< latest "flux" >}}/stdlib/influxdata/influxdb/to/)
+      to write the downsampled data back to an InfluxDB bucket.
+
+      {{% /cloud-only %}}
+
+      {{% oss-only %}}
+
+      ```js
+      __PREVIOUS_RESULT__
+          |> aggregateWindow(every: 10m, fn: mean)
+      ```
+      {{% /oss-only %}}
+
+      {{% cloud-only %}}
+
+      ```js
+      __PREVIOUS_RESULT__
+          |> aggregateWindow(every: 10m, fn: mean)
+          |> to(bucket: "example-bucket")
+      ```
+      {{% /cloud-only %}}
+
+7. {{% oss-only %}}
+
+   Click {{% icon "notebook-add-cell" %}} after your **Flux Script** cell to
+   add a new cell and select **{{% caps %}}Output to Bucket{{% /caps %}}**.
+   Select a bucket from the **{{% icon "bucket" %}} Choose a bucket**
+   drop-down list.
+
+   {{% /oss-only %}}
+
+8. _(Optional)_ Click {{% icon "notebook-add-cell" %}} and select **Note** to
+   add a note to describe your notebook, for example,
+   "Downsample Coinbase bitcoin prices into ten-minute averages."
+9. {{% oss-only %}}
+
+   Click **Preview** in the upper left to verify that your notebook runs and displays the output.
+
+   {{% /oss-only %}}
+10. Click **Run** to run the notebook and write the downsampled data to your bucket.
+
+## Continuously run a notebook
+To continuously run your notebook, export the notebook as a task:
+
+1. Click {{% icon "notebook-add-cell" %}} to add a new cell, and then select
+   **{{% caps %}}Task{{% /caps %}}**.
+2. Provide the following:
+
+   - **Every**: Interval at which the task runs.
+   - **Offset**: _(Optional)_ Time to wait after the defined interval to execute the task.
+     This allows the task to capture late-arriving data.
+
+3. Click **{{% icon "export" %}} Export as Task**. diff --git a/content/influxdb/v2.6/notebooks/manage-notebooks.md b/content/influxdb/v2.6/notebooks/manage-notebooks.md new file mode 100644 index 000000000..a11eed336 --- /dev/null +++ b/content/influxdb/v2.6/notebooks/manage-notebooks.md @@ -0,0 +1,56 @@
+---
+title: Manage notebooks
+description: View, update, and delete notebooks.
+weight: 103 +influxdb/v2.6/tags: [notebooks] +menu: + influxdb_2_6: + name: Manage notebooks + parent: Notebooks +--- + +Manage your notebooks in the UI: + +- [View or update a notebook](#view-or-update-notebooks) +- {{% cloud-only %}}[Share a notebook](#share-a-notebook){{% /cloud-only %}} +- {{% cloud-only %}}[Unshare a notebook](#unshare-a-notebook){{% /cloud-only %}} +- [Delete a notebook](#delete-a-notebook) + +## View or update notebooks + +1. In the navigation menu on the left, click **Notebooks**. + + {{< nav-icon "notebooks" >}} + + A list of notebooks appears. +2. Click a notebook to open it. +3. To update, edit the notebook's cells and content. Changes are saved automatically. + +{{% cloud-only %}} + +## Share a notebook + +1. In the navigation menu on the left, click **Notebooks**. + +{{< nav-icon "notebooks" >}} + +2. Click the notebook to open it, and then click the **{{< icon "share" >}}** icon. +3. Select an API token with read-access to all resources in the notebook, + and then click the **{{< icon "check" >}}** icon. +4. Share the generated notebook URL as needed. + +## Unshare a notebook + +To stop sharing a notebook, select **{{< icon "trash" >}}** next to the shared notebook URL. + +{{% /cloud-only %}} + +## Delete a notebook + +1. In the navigation menu on the left, click **Notebooks**. + + {{< nav-icon "notebooks" >}} + +2. Hover over a notebook in the list that appears. +3. Click **Delete Notebook**. +4. Click **Confirm**. diff --git a/content/influxdb/v2.6/notebooks/overview.md b/content/influxdb/v2.6/notebooks/overview.md new file mode 100644 index 000000000..895650eb5 --- /dev/null +++ b/content/influxdb/v2.6/notebooks/overview.md @@ -0,0 +1,97 @@ +--- +title: Overview of notebooks +description: > + Learn about the building blocks of a notebook. 
+weight: 101
+influxdb/v2.6/tags: [notebooks]
+menu:
+  influxdb_2_6:
+    name: Overview of notebooks
+    parent: Notebooks
+---
+
+Learn how notebooks can help streamline and simplify your day-to-day business processes.
+
+See an overview of [notebook concepts](/influxdb/v2.6/notebooks/overview/#notebook-concepts), [notebook controls](/influxdb/v2.6/notebooks/overview/#notebook-controls), and [notebook cell types](/influxdb/v2.6/notebooks/overview/#notebook-cell-types), also known as the basic building blocks of a notebook.
+
+## Notebook concepts
+
+You can think of an InfluxDB notebook as a collection of sequential data processing steps. Each step is represented by a "cell" that performs an action such as querying, visualizing, processing, or writing data to your buckets. Notebooks help you do the following:
+
+- Create snippets of live code, equations, visualizations, and explanatory notes.
+- Create alerts or scheduled tasks.
+- Downsample and normalize data.
+- Build runbooks to share with your teams.
+- Output data to buckets.
+
+## Notebook controls
+
+The following options appear at the top of each notebook.
+
+{{% oss-only %}}
+
+### Preview/Run mode
+
+- Select **Preview** (or press **Control+Enter**) to display results of each cell without writing data. This helps you verify that cells return expected results before writing data.
+
+{{% /oss-only %}}
+
+{{% cloud-only %}}
+
+### Run
+
+Select {{< caps >}}Run{{< /caps >}} (or press **Control+Enter**) to display results of each cell and write data to the selected bucket.
+
+{{% /cloud-only %}}
+
+### Save Notebook (appears before first save)
+
+Select {{< caps >}}Save Notebook{{< /caps >}} to save all notebook cells. Once you've saved the notebook, this button disappears and the notebook automatically saves as subsequent changes are made.
+
+{{% note %}}
+Saving the notebook does not save cell results. When you open a saved notebook, click {{< caps >}}Run{{< /caps >}} to update cell results.
+{{% /note %}} + +### Local or UTC timezone + +Click the timezone drop-down list to select a timezone to use for the notebook. Select either the local time (default) or UTC. + +### Time range + +Select from the options in the dropdown list or select **Custom Time Range** to enter a custom time range with precision up to nanoseconds, and then click **{{< caps >}}Apply Time Range{{< /caps >}}**. + +{{% cloud-only %}} + +### Share notebook + +To generate a URL for the notebook, click the **{{< icon "share" >}}** icon. +For more detail, see how to [share a notebook](/influxdb/cloud/notebooks/manage-notebooks/#share-a-notebook). + +{{% /cloud-only %}} + +## Notebook cell types + +The following cell types are available for your notebook: +- [Data source](#data-source) +- [Visualization](#visualization) +- [Action](#action) + +### Data source + +At least one data source (input) cell is required in a notebook for other cells to run. + +- **{{< caps >}}Query Builder{{< /caps >}}**: Build a query with the Flux query builder. +- **{{< caps >}}Flux Script{{< /caps >}}**: Enter a raw Flux script. + + Data source cells work like the **Query Builder** or **Script Editor** in Data Explorer. For more information, see how to [query data with Flux and the Data Explorer](/influxdb/v2.6/query-data/execute-queries/data-explorer/#query-data-with-flux-and-the-data-explorer). + +### Visualization + +- **{{< caps >}}Table{{< /caps >}}**: View your data in a table. +- **{{< caps >}}Graph{{< /caps >}}**: View your data in a graph. +- **{{< caps >}}Note{{< /caps >}}**: Create explanatory notes or other information for yourself or your team members. + +### Action + +- **{{< caps >}}Alert{{< /caps >}}**: Set up alerts. See how to [monitor data and send alerts](/influxdb/v2.6/monitor-alert/). +- **{{< caps >}}Tasks{{< /caps >}}**: Use the notebook to set up and export a task. See how to [manage tasks in InfluxDB](/influxdb/v2.6/process-data/manage-tasks/). 
diff --git a/content/influxdb/v2.6/notebooks/troubleshoot-notebooks.md b/content/influxdb/v2.6/notebooks/troubleshoot-notebooks.md new file mode 100644 index 000000000..2f414217d --- /dev/null +++ b/content/influxdb/v2.6/notebooks/troubleshoot-notebooks.md @@ -0,0 +1,14 @@ +--- +title: Troubleshoot notebooks +description: Common issues with the notebooks feature. +weight: 106 +influxdb/v2.6/tags: [notebooks] +menu: + influxdb_2_6: + name: Troubleshoot notebooks + parent: Notebooks +--- + +### No measurements appear in my bucket even though there's data in it. + +Try changing the time range. You might have measurements prior to the time range you selected. For example, if the selected time range is `Past 1h` and the last write happened 16 hours ago, you'd need to change the time range to `Past 24h` (or more) to see your data. diff --git a/content/influxdb/v2.6/organizations/_index.md b/content/influxdb/v2.6/organizations/_index.md new file mode 100644 index 000000000..88f9438f2 --- /dev/null +++ b/content/influxdb/v2.6/organizations/_index.md @@ -0,0 +1,17 @@ +--- +title: Manage organizations +seotitle: Manage organizations in InfluxDB +description: Manage organizations in InfluxDB using the InfluxDB UI or the influx CLI. +menu: + influxdb_2_6: + name: Manage organizations +weight: 10 +influxdb/v2.6/tags: [organizations] +--- + +An **organization** is a workspace for a group of users. +All dashboards, tasks, buckets, members, etc., belong to an organization. + +The following articles provide information about managing organizations: + +{{< children >}} diff --git a/content/influxdb/v2.6/organizations/buckets/_index.md b/content/influxdb/v2.6/organizations/buckets/_index.md new file mode 100644 index 000000000..d6adaaa32 --- /dev/null +++ b/content/influxdb/v2.6/organizations/buckets/_index.md @@ -0,0 +1,20 @@ +--- +title: Manage buckets +seotitle: Manage buckets in InfluxDB +description: Manage buckets in InfluxDB using the InfluxDB UI or the influx CLI. 
+menu: + influxdb_2_6: + name: Manage buckets + parent: Manage organizations +weight: 105 +influxdb/v2.6/tags: [buckets] +--- + +A **bucket** is a named location where time series data is stored. +All buckets have a **retention period**, a duration of time that each data point persists. +InfluxDB drops all points with timestamps older than the bucket's retention period. +A bucket belongs to an organization. + +The following articles provide information about managing buckets: + +{{< children >}} diff --git a/content/influxdb/v2.6/organizations/buckets/create-bucket.md b/content/influxdb/v2.6/organizations/buckets/create-bucket.md new file mode 100644 index 000000000..86bec8735 --- /dev/null +++ b/content/influxdb/v2.6/organizations/buckets/create-bucket.md @@ -0,0 +1,112 @@ +--- +title: Create a bucket +seotitle: Create a bucket in InfluxDB +description: Create buckets to store time series data in InfluxDB using the InfluxDB UI or the influx CLI. +menu: + influxdb_2_6: + name: Create a bucket + parent: Manage buckets +weight: 201 +--- + +Use the InfluxDB user interface (UI) or the `influx` command line interface (CLI) +to create a bucket. + +{{% note %}} +#### Bucket limits +A single InfluxDB {{< current-version >}} OSS instance supports approximately 20 buckets actively being +written to or queried across all organizations depending on the use case. +Any more than that can adversely affect performance. +{{% /note %}} + +## Create a bucket in the InfluxDB UI + +There are two places you can create a bucket in the UI. + +### Create a bucket from the Load Data menu + +1. In the navigation menu on the left, select **Data (Load Data)** > **Buckets**. + + {{< nav-icon "data" >}} + +2. Click **{{< icon "plus" >}} Create Bucket** in the upper right. +3. Enter a **Name** for the bucket. +4. Select when to **Delete Data**: + - **Never** to retain data forever. + - **Older than** to choose a specific retention period. +5. Click **Create** to create the bucket. 
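The retention period you select corresponds to the `retentionRules.everySeconds` value used by the InfluxDB API (see the API section below). As a hypothetical helper (not part of any InfluxDB client library; sub-second units such as `ns`, `us`, and `ms` are omitted for brevity), a duration string like the ones the CLI accepts can be converted to seconds:

```python
# Sketch: convert a retention-period duration string (such as "72h" or "30d")
# to seconds, the unit the API expects in retentionRules.everySeconds.
# Illustrative helper, not part of any InfluxDB client library;
# sub-second units (ns, us, ms) are omitted for brevity.
UNIT_SECONDS = {"s": 1, "m": 60, "h": 3600, "d": 86400, "w": 604800}

def retention_seconds(duration):
    value, unit = int(duration[:-1]), duration[-1]
    if unit not in UNIT_SECONDS:
        raise ValueError(f"unsupported unit: {unit}")
    return value * UNIT_SECONDS[unit]

print(retention_seconds("72h"))  # 259200
print(retention_seconds("30d"))  # 2592000
```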
+
+### Create a bucket in the Data Explorer
+
+1. In the navigation menu on the left, select **Explore** (**Data Explorer**).
+
+   {{< nav-icon "data-explorer" >}}
+
+2. In the **From** panel in the Flux Builder, select **+ Create Bucket**.
+3. Enter a **Name** for the bucket.
+4. Select when to **Delete Data**:
+   - **Never** to retain data forever.
+   - **Older than** to choose a specific retention period.
+5. Click **Create** to create the bucket.
+
+## Create a bucket using the influx CLI
+
+Use the [`influx bucket create` command](/influxdb/v2.6/reference/cli/influx/bucket/create)
+to create a new bucket. A bucket requires the following:
+
+- bucket name
+- organization name or ID
+- retention period (duration to keep data) in one of the following units:
+  - nanoseconds (`ns`)
+  - microseconds (`us` or `µs`)
+  - milliseconds (`ms`)
+  - seconds (`s`)
+  - minutes (`m`)
+  - hours (`h`)
+  - days (`d`)
+  - weeks (`w`)
+
+  {{% note %}}
+  The minimum retention period is **one hour**.
+  {{% /note %}}
+
+```sh
+# Syntax
+influx bucket create -n <bucket-name> -o <org-name> -r <retention-period>
+
+# Example
+influx bucket create -n my-bucket -o my-org -r 72h
+```
+
+## Create a bucket using the InfluxDB API
+
+Use the InfluxDB API to create a bucket.
+
+{{% note %}}
+#### Bucket limits
+A single InfluxDB {{< current-version >}} OSS instance supports approximately 20 buckets actively being
+written to or queried across all organizations depending on the use case.
+Any more than that can adversely affect performance.
+{{% /note %}}
+
+Create a bucket in InfluxDB using an HTTP request to the InfluxDB API `/buckets` endpoint.
+Use the `POST` request method and include the following in your request:
+
+| Requirement | Include by |
+|:----------- |:---------- |
+| Organization | Use `orgID` in the JSON payload. |
+| Bucket | Use `name` in the JSON payload. |
+| Retention Rules | Use `retentionRules` in the JSON payload. |
+| API token | Use the `Authorization: Token` header.
|
+
+#### Example
+
+The URL depends on the version and location of your InfluxDB {{< current-version >}}
+instance _(see [InfluxDB URLs](/influxdb/v2.6/reference/urls/))_.
+
+```sh
+{{% get-shared-text "api/v2.0/buckets/oss/create.sh" %}}
+```
+
+_For information about **InfluxDB API options and response codes**, see
+[InfluxDB API Buckets documentation](/influxdb/v2.6/api/#operation/PostBuckets)._ diff --git a/content/influxdb/v2.6/organizations/buckets/delete-bucket.md b/content/influxdb/v2.6/organizations/buckets/delete-bucket.md new file mode 100644 index 000000000..5db34a2b8 --- /dev/null +++ b/content/influxdb/v2.6/organizations/buckets/delete-bucket.md @@ -0,0 +1,72 @@
+---
+title: Delete a bucket
+seotitle: Delete a bucket from InfluxDB
+description: Delete a bucket from InfluxDB using the InfluxDB UI or the influx CLI.
+menu:
+  influxdb_2_6:
+    name: Delete a bucket
+    parent: Manage buckets
+weight: 203
+---
+
+Use the InfluxDB user interface (UI) or the `influx` command line interface (CLI)
+to delete a bucket.
+
+## Delete a bucket in the InfluxDB UI
+
+{{% oss-only %}}
+
+1. In the navigation menu on the left, select **Data (Load Data)** > **Buckets**.
+
+{{< nav-icon "data" >}}
+
+2. Hover over the bucket you would like to delete.
+3. Click the **{{< icon "delete" >}}** icon to the far right of the bucket name.
+4. Click **Delete** to delete the bucket.
+{{% /oss-only %}}
+
+{{% cloud-only %}}
+
+1. In the navigation menu on the left, select **Load Data** > **Buckets**.
+
+{{< nav-icon "data" >}}
+
+2. Find the bucket that you would like to delete.
+3. Click the **{{< icon "delete" >}}** icon to the far right of the bucket name.
+4. Click **{{< caps >}}Confirm{{< /caps >}}** to delete the bucket.
+
+{{% /cloud-only %}}
+
+## Delete a bucket using the influx CLI
+
+Use the [`influx bucket delete` command](/influxdb/v2.6/reference/cli/influx/bucket/delete)
+to delete a bucket by name or ID.
+ +### Delete a bucket by name +**To delete a bucket by name, you need:** + +- Bucket name +- Bucket's organization name or ID + + +```sh +# Syntax +influx bucket delete -n <bucket-name> -o <org-name> + +# Example +influx bucket delete -n my-bucket -o my-org +``` + +### Delete a bucket by ID +**To delete a bucket by ID, you need:** + +- Bucket ID _(provided in the output of `influx bucket list`)_ + + +```sh +# Syntax +influx bucket delete -i <bucket-id> + +# Example +influx bucket delete -i 034ad714fdd6f000 +``` diff --git a/content/influxdb/v2.6/organizations/buckets/update-bucket.md b/content/influxdb/v2.6/organizations/buckets/update-bucket.md new file mode 100644 index 000000000..529107ffa --- /dev/null +++ b/content/influxdb/v2.6/organizations/buckets/update-bucket.md @@ -0,0 +1,88 @@ +--- +title: Update a bucket +seotitle: Update a bucket in InfluxDB +description: Update a bucket's name or retention period in InfluxDB using the InfluxDB UI or the influx CLI. +menu: + influxdb_2_6: + name: Update a bucket + parent: Manage buckets +weight: 202 +--- + +Use the `influx` command line interface (CLI) or the InfluxDB user interface (UI) to update a bucket. + +Note that updating a bucket's name will affect any assets that reference the bucket by name, including the following: + + - Queries + - Dashboards + - Tasks + - Telegraf configurations + - Templates + +If you change a bucket name, be sure to update the bucket in the above places as well. + +## Update a bucket's name in the InfluxDB UI + +1. In the navigation menu on the left, select **Data (Load Data)** > **Buckets**. + + {{< nav-icon "data" >}} + +2. Click **Settings** under the bucket you want to rename. +3. Click **Rename**. +4. Review the information in the window that appears and click **I understand, let's rename my bucket**. +5. Update the bucket's name and click **Change Bucket Name**. + +## Update a bucket's retention period in the InfluxDB UI + +1. In the navigation menu on the left, select **Data (Load Data)** > **Buckets**. 
+ + {{< nav-icon "data" >}} + +2. Click **Settings** next to the bucket you want to update. +3. In the window that appears, edit the bucket's retention period. +4. Click **Save Changes**. + +## Update a bucket using the influx CLI + +Use the [`influx bucket update` command](/influxdb/v2.6/reference/cli/influx/bucket/update) +to update a bucket. Updating a bucket requires the following: + +- The bucket ID _(provided in the output of `influx bucket list`)_ +- The name or ID of the organization the bucket belongs to. + +{{< cli/influx-creds-note >}} + +##### Update the name of a bucket + +```sh +# Syntax +influx bucket update -i <bucket-id> -n <new-bucket-name> + +# Example +influx bucket update -i 034ad714fdd6f000 -n my-new-bucket +``` + +##### Update a bucket's retention period + +Valid retention period duration units: + +- nanoseconds (`ns`) +- microseconds (`us` or `µs`) +- milliseconds (`ms`) +- seconds (`s`) +- minutes (`m`) +- hours (`h`) +- days (`d`) +- weeks (`w`) + +{{% note %}} +The minimum retention period is **one hour**. +{{% /note %}} + +```sh +# Syntax +influx bucket update -i <bucket-id> -r <retention-period> + +# Example +influx bucket update -i 034ad714fdd6f000 -r 1209600000000000ns +``` diff --git a/content/influxdb/v2.6/organizations/buckets/view-buckets.md b/content/influxdb/v2.6/organizations/buckets/view-buckets.md new file mode 100644 index 000000000..9704d2345 --- /dev/null +++ b/content/influxdb/v2.6/organizations/buckets/view-buckets.md @@ -0,0 +1,34 @@ +--- +title: View buckets +seotitle: View buckets in InfluxDB +description: View a list of all the buckets for an organization in InfluxDB using the InfluxDB UI or the influx CLI. +menu: + influxdb_2_6: + name: View buckets + parent: Manage buckets +weight: 202 +--- + +## View buckets in the InfluxDB UI + +1. In the navigation menu on the left, select **Data (Load Data)** > **Buckets**. + + {{< nav-icon "data" >}} + + A list of buckets with their retention policies and IDs appears. + +2. Click a bucket to open it in the **Data Explorer**. +3. 
Click the **bucket ID** to copy it to the clipboard. + +## View buckets using the influx CLI + +Use the [`influx bucket list` command](/influxdb/v2.6/reference/cli/influx/bucket/list) +to view buckets in an organization. + +```sh +influx bucket list +``` + +Other filtering options such as filtering by organization, name, or ID are available. +See the [`influx bucket list` documentation](/influxdb/v2.6/reference/cli/influx/bucket/list) +for information about other available flags. diff --git a/content/influxdb/v2.6/organizations/create-org.md b/content/influxdb/v2.6/organizations/create-org.md new file mode 100644 index 000000000..39c8e2e6a --- /dev/null +++ b/content/influxdb/v2.6/organizations/create-org.md @@ -0,0 +1,47 @@ +--- +title: Create an organization +seotitle: Create an organization in InfluxDB +description: Create an organization in InfluxDB using the InfluxDB UI or the influx CLI. +menu: + influxdb_2_6: + name: Create an organization + parent: Manage organizations +weight: 101 +products: [oss] +--- + +Use the InfluxDB user interface (UI) or the `influx` command line interface (CLI) +to create an organization. + +{{% note %}} +#### Organization and bucket limits +A single InfluxDB {{< current-version >}} OSS instance supports approximately 20 buckets actively being +written to or queried across all organizations depending on the use case. +Any more than that can adversely affect performance. +Because each organization is created with a bucket, we do not recommend more than +20 organizations in a single InfluxDB OSS instance. +{{% /note %}} + +## Create an organization in the InfluxDB UI + +1. In the navigation menu on the left, click the **Account dropdown**. + + {{< nav-icon "account" >}} + +2. Select **Create Organization**. +3. In the window that appears, enter an **Organization Name** and **Bucket Name** and click **Create**. 
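
You can also create an organization with a direct call to the InfluxDB API `/api/v2/orgs` endpoint. The following is a minimal sketch, not a documented request verbatim--it assumes a local OSS instance at `localhost:8086` and an [operator token](/influxdb/v2.6/security/tokens/#operator-token) exported in the `INFLUX_TOKEN` environment variable:

```sh
# Create an organization via the API
# (assumes a local OSS instance and INFLUX_TOKEN set in the environment)
curl --request POST "http://localhost:8086/api/v2/orgs" \
  --header "Authorization: Token $INFLUX_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{"name": "my-org"}'
```

On success, the response body includes the new organization's `id`, which you can use in later API requests.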
+ +## Create an organization using the influx CLI + +Use the [`influx org create` command](/influxdb/v2.6/reference/cli/influx/org/create) +to create a new organization. A new organization requires the following: + +- A name for the organization + +```sh +# Syntax +influx org create -n <org-name> + +# Example +influx org create -n my-org +``` diff --git a/content/influxdb/v2.6/organizations/delete-org.md b/content/influxdb/v2.6/organizations/delete-org.md new file mode 100644 index 000000000..8ba31af76 --- /dev/null +++ b/content/influxdb/v2.6/organizations/delete-org.md @@ -0,0 +1,41 @@ +--- +title: Delete an organization +seotitle: Delete an organization from InfluxDB +description: Delete an existing organization from InfluxDB using the influx CLI. +menu: + influxdb_2_6: + name: Delete an organization + parent: Manage organizations +weight: 104 +products: [oss] +--- + +Use the `influx` command line interface (CLI) +to delete an organization. + + + +## Delete an organization using the influx CLI + +Use the [`influx org delete` command](/influxdb/v2.6/reference/cli/influx/org/delete) +to delete an organization. Deleting an organization requires the following: + +- The organization ID _(provided in the output of `influx org list`)_ + +```sh +# Syntax +influx org delete -i <org-id> + +# Example +influx org delete -i 034ad714fdd6f000 +``` diff --git a/content/influxdb/v2.6/organizations/members/_index.md b/content/influxdb/v2.6/organizations/members/_index.md new file mode 100644 index 000000000..414f8d54f --- /dev/null +++ b/content/influxdb/v2.6/organizations/members/_index.md @@ -0,0 +1,16 @@ +--- +title: Manage organization members +seotitle: Manage members of an organization in InfluxDB +description: Manage members of an organization in InfluxDB using the InfluxDB UI or CLI. +menu: + influxdb_2_6: + name: Manage members + parent: Manage organizations +weight: 106 +influxdb/v2.6/tags: [members] +--- + +A **member** is a user that belongs to an organization. 
+The following articles provide information about managing users: + +{{< children >}} diff --git a/content/influxdb/v2.6/organizations/members/add-member.md b/content/influxdb/v2.6/organizations/members/add-member.md new file mode 100644 index 000000000..7cfbebb99 --- /dev/null +++ b/content/influxdb/v2.6/organizations/members/add-member.md @@ -0,0 +1,55 @@ +--- +title: Add a member +seotitle: Add a member to an organization in InfluxDB +description: > + Use the `influx` command line interface (CLI) to add a member to an organization + and optionally make that member an owner across all organizations. +menu: + influxdb_2_6: + name: Add a member + parent: Manage members +weight: 201 +--- + +Use the `influx` command line interface (CLI) to add a member to an organization +and optionally make that member an owner across all organizations. + +## Add a member to an organization using the influx CLI + +1. Get a list of users and their IDs by running the following: + + ```sh + influx user list + ``` + +2. To add a user as a member of an organization, use the `influx org members add` command. + Provide the following: + + - Organization name + - User ID + - _(Optional)_ `--owner` flag to add the user as an owner + _(requires an [operator token](/influxdb/v2.6/security/tokens/#operator-token))_ + + {{< code-tabs-wrapper >}} +{{% code-tabs %}} +[Add member](#) +[Add owner](#) +{{% /code-tabs %}} +{{% code-tab-content %}} +```sh +influx org members add \ + -n <org-name> \ + -m <user-id> +``` +{{% /code-tab-content %}} +{{% code-tab-content %}} +```sh +influx org members add \ + -n <org-name> \ + -m <user-id> \ + --owner +``` +{{% /code-tab-content %}} + {{< /code-tabs-wrapper >}} + +For more information, see the [`influx org members add` command](/influxdb/v2.6/reference/cli/influx/org/members/add). 
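
The CLI steps above can also be performed against the InfluxDB API `/api/v2/orgs/{orgID}/members` endpoint. The following is a minimal sketch rather than a documented request verbatim--it assumes a local OSS instance, an operator token in `INFLUX_TOKEN`, and hypothetical `ORG_ID` and `USER_ID` values taken from `influx org list` and `influx user list`:

```sh
# Add a user as a member of an organization via the API
# (ORG_ID and USER_ID are illustrative placeholders)
curl --request POST "http://localhost:8086/api/v2/orgs/$ORG_ID/members" \
  --header "Authorization: Token $INFLUX_TOKEN" \
  --header "Content-Type: application/json" \
  --data "{\"id\": \"$USER_ID\"}"
```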
diff --git a/content/influxdb/v2.6/organizations/members/remove-member.md b/content/influxdb/v2.6/organizations/members/remove-member.md new file mode 100644 index 000000000..a68c2fc69 --- /dev/null +++ b/content/influxdb/v2.6/organizations/members/remove-member.md @@ -0,0 +1,44 @@ +--- +title: Remove a member +seotitle: Remove a member from an organization in InfluxDB +description: Remove a member from an organization. +menu: + influxdb_2_6: + name: Remove a member + parent: Manage members +weight: 203 +--- + +Use the InfluxDB user interface (UI) or the `influx` command line interface (CLI) +to remove a member from an organization. + +{{% note %}} +Removing a member from an organization removes all permissions associated with the organization, +but it does not delete the user from the system entirely. +For information about deleting a user from InfluxDB, see [Delete a user](/influxdb/v2.6/users/delete-user/). +{{% /note %}} + +## Remove a member from an organization in the InfluxDB UI + +1. In the navigation menu on the left, click your **Account avatar** and select **Members**. + + {{< nav-icon "account" >}} + +2. Click the **{{< icon "delete" >}}** icon next to the member you want to delete. +3. Click **Delete** to confirm and remove the user from the organization. + +## Remove a member from an organization using the influx CLI + +Use the [`influx org members remove` command](/influxdb/v2.6/reference/cli/influx/org/members/remove) +to remove a member from an organization. 
Removing a member requires the following: + +- The organization name or ID _(provided in the output of [`influx org list`](/influxdb/v2.6/reference/cli/influx/org/list/))_ +- The member ID _(provided in the output of [`influx org members list`](/influxdb/v2.6/reference/cli/influx/org/members/list/))_ + +```sh +# Syntax +influx org members remove -o <org-id> -i <member-id> + +# Example +influx org members remove -o 00xXx0x00xXX0000 -i x0xXXXx00x0x000X +``` diff --git a/content/influxdb/v2.6/organizations/members/view-members.md b/content/influxdb/v2.6/organizations/members/view-members.md new file mode 100644 index 000000000..1c5065353 --- /dev/null +++ b/content/influxdb/v2.6/organizations/members/view-members.md @@ -0,0 +1,35 @@ +--- +title: View members +seotitle: View members of an organization in InfluxDB +description: Review a list of members for an organization. +menu: + influxdb_2_6: + name: View members + parent: Manage members +weight: 202 +--- + +Use the InfluxDB user interface (UI) or the `influx` command line interface (CLI) +to view members of an organization. + +## View members of an organization in the InfluxDB UI + +In the navigation menu on the left, click your **Account avatar** and select **Members**. + +{{< nav-icon "account" >}} + + +## View members of an organization using the influx CLI + +Use the [`influx org members list` command](/influxdb/v2.6/reference/cli/influx/org/members/list) +to list members of an organization. 
Listing an organization's members requires the following: + +- The name or ID of the organization + +```sh +# Syntax +influx org members list -n <org-name> + +# Example +influx org members list -n my-org +``` diff --git a/content/influxdb/v2.6/organizations/switch-org.md b/content/influxdb/v2.6/organizations/switch-org.md new file mode 100644 index 000000000..db961f3f7 --- /dev/null +++ b/content/influxdb/v2.6/organizations/switch-org.md @@ -0,0 +1,22 @@ +--- +title: Switch organizations +seotitle: Switch organizations in InfluxDB +description: Switch from one organization to another in the InfluxDB UI +menu: + influxdb_2_6: + name: Switch organizations + parent: Manage organizations +weight: 105 +products: [oss] +--- + +Use the InfluxDB user interface (UI) to switch from one organization to another. The organization you're currently viewing determines what dashboards, tasks, buckets, members, and other assets you can access. + +## Switch organizations in the InfluxDB UI + +1. In the navigation menu on the left, click the **Account dropdown**. + + {{< nav-icon "account" >}} + +2. Select **Switch Organizations**. +3. Click the organization you want to switch to. diff --git a/content/influxdb/v2.6/organizations/update-org.md b/content/influxdb/v2.6/organizations/update-org.md new file mode 100644 index 000000000..c3b9b53ab --- /dev/null +++ b/content/influxdb/v2.6/organizations/update-org.md @@ -0,0 +1,49 @@ +--- +title: Update an organization +seotitle: Update an organization in InfluxDB +description: Update an organization's name and assets in InfluxDB using the InfluxDB UI or the influx CLI. +menu: + influxdb_2_6: + name: Update an organization + parent: Manage organizations +weight: 103 +--- + +Use the `influx` command line interface (CLI) or the InfluxDB user interface (UI) to update an organization. 
+ +Note that updating an organization's name will affect any assets that reference the organization by name, including the following: + + - Queries + - Dashboards + - Tasks + - Telegraf configurations + - Templates + +If you change an organization name, be sure to update the organization in the above places as well. + +## Update an organization in the InfluxDB UI + +1. In the navigation menu on the left, click the user icon > **About**. + + {{< img-hd src="/img/influxdb/user-icon.png" alt="User Icon" />}} + +2. Click **{{< icon "edit" >}} Rename**. A verification window appears. +3. Review the information, and then click **I understand, let's rename my organization**. +4. Enter a new name for your organization, and then click **Change organization name**. + +## Update an organization using the influx CLI + +Use the [`influx org update` command](/influxdb/v2.6/reference/cli/influx/org/update) +to update an organization. Updating an organization requires the following: + +- The org ID _(provided in the output of `influx org list`)_ + +##### Update the name of an organization + +```sh +# Syntax +influx org update -i <org-id> -n <new-org-name> + +# Example +influx org update -i 034ad714fdd6f000 -n my-new-org +``` diff --git a/content/influxdb/v2.6/organizations/view-orgs.md b/content/influxdb/v2.6/organizations/view-orgs.md new file mode 100644 index 000000000..f64fcaf78 --- /dev/null +++ b/content/influxdb/v2.6/organizations/view-orgs.md @@ -0,0 +1,61 @@ +--- +title: View organizations +seotitle: View organizations in InfluxDB +description: Review a list of organizations in InfluxDB using the InfluxDB UI or the influx CLI. +menu: + influxdb_2_6: + name: View organizations + parent: Manage organizations +weight: 102 +--- + +Use the InfluxDB user interface (UI) or the `influx` command line interface (CLI) +to view organizations. + +## View organizations in the InfluxDB UI + +1. In the navigation menu on the left, click the **Account dropdown**. + + {{< nav-icon "account" >}} + +2. 
Select **Switch Organizations**. The list of organizations appears. + +## View organizations using the influx CLI + +Use the [`influx org list` command](/influxdb/v2.6/reference/cli/influx/org/list) +to view organizations. + +```sh +influx org list +``` + +Filtering options such as filtering by name or ID are available. +See the [`influx org list` documentation](/influxdb/v2.6/reference/cli/influx/org/list) +for information about other available flags. + +## View your organization ID + +Use the InfluxDB UI or `influx` CLI to view your organization ID. + +### Organization ID in the UI + +After logging in to the InfluxDB UI, your organization ID appears in the URL. + +{{< code-callout "03a2bbf46249a000" >}} +```sh +http://localhost:8086/orgs/03a2bbf46249a000/... +``` +{{< /code-callout >}} + + +### Organization ID in the CLI + +Use [`influx org list`](#view-organizations-using-the-influx-cli) to view your organization ID. + +```sh +> influx org list + +ID Name +03a2bbf46249a000 org-1 +03ace3a859669000 org-2 +``` diff --git a/content/influxdb/v2.6/process-data/_index.md b/content/influxdb/v2.6/process-data/_index.md new file mode 100644 index 000000000..461404f0a --- /dev/null +++ b/content/influxdb/v2.6/process-data/_index.md @@ -0,0 +1,28 @@ +--- +title: Process data with InfluxDB tasks +seotitle: Process data with InfluxDB tasks +description: > + InfluxDB's task engine runs scheduled Flux tasks that process and analyze data. + This collection of articles provides information about creating and managing InfluxDB tasks. +menu: + influxdb_2_6: + name: Process data +weight: 6 +influxdb/v2.6/tags: [tasks] +related: + - /resources/videos/influxdb-tasks/ +--- + +Process and analyze your data with tasks in the InfluxDB **task engine**. +Use tasks (scheduled Flux queries) +to input a data stream and then analyze, modify, and act on the data accordingly. 
+ +Discover how to create and manage tasks using the InfluxDB user interface (UI), +the `influx` command line interface (CLI), and the InfluxDB `/api/v2` API. +Find examples of data downsampling and other common tasks. + +{{% note %}} +Tasks replace InfluxDB v1.x continuous queries. +{{% /note %}} + +{{< children >}} diff --git a/content/influxdb/v2.6/process-data/common-tasks/_index.md b/content/influxdb/v2.6/process-data/common-tasks/_index.md new file mode 100644 index 000000000..0d5ac6ba9 --- /dev/null +++ b/content/influxdb/v2.6/process-data/common-tasks/_index.md @@ -0,0 +1,17 @@ +--- +title: Common data processing tasks +seotitle: Common data processing tasks performed with InfluxDB +description: > + InfluxDB Tasks process data on specified schedules. + This collection of articles walks through common use cases for InfluxDB tasks. +influxdb/v2.6/tags: [tasks] +menu: + influxdb_2_6: + name: Common tasks + parent: Process data +weight: 104 +--- + +The following articles walk through common task use cases. + +{{< children >}} diff --git a/content/influxdb/v2.6/process-data/common-tasks/calculate_weekly_mean.md b/content/influxdb/v2.6/process-data/common-tasks/calculate_weekly_mean.md new file mode 100644 index 000000000..ce0ca05e5 --- /dev/null +++ b/content/influxdb/v2.6/process-data/common-tasks/calculate_weekly_mean.md @@ -0,0 +1,51 @@ +--- +title: Calculate a weekly mean +description: > + Calculate a weekly mean and add it to a new bucket. +menu: + influxdb_2_6: + name: Calculate a weekly mean + parent: Common tasks +weight: 202 +influxdb/v2.6/tags: [tasks] +--- + +{{% note %}} +This example uses [NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data). +{{% /note %}} + +This example calculates a weekly mean temperature and stores it in a separate bucket. 
+ +The sample query performs the following operations: + +- Uses [`filter()`](/{{< latest "flux" >}}/stdlib/universe/filter/) to select records with the `average_temperature` measurement. +- Uses [`range()`](/{{< latest "flux" >}}/stdlib/universe/range/) to define the start time. +- Uses [`aggregateWindow()`](/{{< latest "flux" >}}/stdlib/universe/aggregatewindow/) to group records by week and compute the mean. +- Sends the weekly mean to a new bucket (`weekly_means`). + +```js +option task = { + name: "weekly-means", + every: 1w, +} + +from(bucket: "noaa") + |> filter(fn: (r) => r._measurement == "average_temperature") + |> range(start: 2019-09-01T11:24:00Z) + |> aggregateWindow(every: 1w, fn: mean) + |> to(bucket: "weekly_means") +``` + +### Example results + +| _start | _stop | _field | _measurement | location | _value | _time | +|:------ |:----- |:------ |:------------ |:-------- | ------: |:----- | +| 2019-09-01T11:24:00Z | 2020-10-19T20:39:49Z | degrees | average_temperature | coyote_creek | 80.31005917159763 | 2019-09-05T00:00:00Z | +| 2019-09-01T11:24:00Z | 2020-10-19T20:39:49Z | degrees | average_temperature | coyote_creek | 79.8422619047619 | 2019-09-12T00:00:00Z | +| 2019-09-01T11:24:00Z | 2020-10-19T20:39:49Z | degrees | average_temperature | coyote_creek | 79.82710622710623 | 2019-09-19T00:00:00Z | + +| _start | _stop | _field | _measurement | location | _value | _time | +|:------ |:----- |:------ |:------------ |:-------- | ------: |:----- | +| 2019-09-01T11:24:00Z | 2020-10-19T20:39:49Z | degrees | average_temperature | santa_monica | 80.19952494061758 | 2019-09-05T00:00:00Z | +| 2019-09-01T11:24:00Z | 2020-10-19T20:39:49Z | degrees | average_temperature | santa_monica | 80.01964285714286 | 2019-09-12T00:00:00Z | +| 2019-09-01T11:24:00Z | 2020-10-19T20:39:49Z | degrees | average_temperature | santa_monica | 80.20451 diff --git a/content/influxdb/v2.6/process-data/common-tasks/convert_results_to_json.md 
b/content/influxdb/v2.6/process-data/common-tasks/convert_results_to_json.md new file mode 100644 index 000000000..8de9efbc5 --- /dev/null +++ b/content/influxdb/v2.6/process-data/common-tasks/convert_results_to_json.md @@ -0,0 +1,45 @@ +--- +title: Convert results to JSON +seotitle: Convert results to JSON and send them to a URL +description: > + Use `json.encode()` to convert query results to JSON and `http.post()` to send them + to a URL endpoint. +menu: + influxdb_2_6: + name: Convert results to JSON + parent: Common tasks +weight: 203 +influxdb/v2.6/tags: [tasks] +--- +{{% note %}} +This example uses [NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data). +{{% /note %}} + +Send each record to a URL endpoint using the HTTP POST method. This example uses [`json.encode()`](/{{< latest "flux" >}}/stdlib/json/encode/) to convert a value into JSON bytes, then uses [`http.post()`](/{{< latest "flux" >}}/stdlib/http/post/) to send them to a URL endpoint. + +The following query: + - Uses [`filter()`](/{{< latest "flux" >}}/stdlib/universe/filter/) to filter the `average_temperature` measurement. + - Uses [`mean()`](/{{< latest "flux" >}}/stdlib/universe/mean/) to calculate the average value from results. + - Uses [`map()`](/{{< latest "flux" >}}/stdlib/universe/map/) to create a new column, `jsonStr`, and build a JSON object using column values from the query. It then byte-encodes the JSON object and stores it as a string in the `jsonStr` column. + - Uses [`http.post()`](/{{< latest "flux" >}}/stdlib/http/post/) to send the `jsonStr` value from each record to an HTTP endpoint. 
+ + +```js +import "http" +import "json" + +from(bucket: "noaa") + |> filter(fn: (r) => r._measurement == "average_temperature") + |> mean() + |> map(fn: (r) => ({r with jsonStr: string(v: json.encode(v: {"location": r.location, "mean": r._value}))})) + |> map( + fn: (r) => ({ + r with + status_code: http.post( + url: "http://somehost.com/", + headers: {x: "a", y: "b"}, + data: bytes(v: r.jsonStr) + ) + }) + ) +``` diff --git a/content/influxdb/v2.6/process-data/common-tasks/downsample-data.md b/content/influxdb/v2.6/process-data/common-tasks/downsample-data.md new file mode 100644 index 000000000..6116796a1 --- /dev/null +++ b/content/influxdb/v2.6/process-data/common-tasks/downsample-data.md @@ -0,0 +1,76 @@ +--- +title: Downsample data with InfluxDB +seotitle: Downsample data in an InfluxDB task +description: > + How to create a task that downsamples data much like continuous queries + in previous versions of InfluxDB. +menu: + influxdb_2_6: + name: Downsample data + parent: Common tasks +weight: 201 +influxdb/v2.6/tags: [tasks] +--- + +One of the most common use cases for InfluxDB tasks is downsampling data to reduce +the overall disk usage as data collects over time. +In previous versions of InfluxDB, continuous queries filled this role. + +This article walks through creating a continuous-query-like task that downsamples +data by aggregating data within windows of time, then storing the aggregate value in a new bucket. + +### Requirements +To perform a downsampling task, you need the following: + +##### A "source" bucket +The bucket from which data is queried. + +##### A "destination" bucket +A separate bucket where aggregated, downsampled data is stored. + +##### Some type of aggregation +To downsample data, it must be aggregated in some way. +What specific method of aggregation you use depends on your specific use case, +but examples include mean, median, top, bottom, etc. 
+View [Flux's aggregate functions](/{{< latest "flux" >}}/function-types/#aggregates) +for more information and ideas. + +## Example downsampling task script +The example task script below is a very basic form of data downsampling that does the following: + +1. Defines a task named "cq-mem-data-1w" that runs once a week. +2. Defines a `data` variable that represents all data from the last 2 weeks in the + `mem` measurement of the `system-data` bucket. +3. Uses the [`aggregateWindow()` function](/{{< latest "flux" >}}/stdlib/universe/aggregatewindow/) + to window the data into 1 hour intervals and calculate the average of each interval. +4. Stores the aggregated data in the `system-data-downsampled` bucket under the + `my-org` organization. + +```js +// Task Options +option task = {name: "cq-mem-data-1w", every: 1w} + +// Defines a data source +data = from(bucket: "system-data") + |> range(start: -duration(v: int(v: task.every) * 2)) + |> filter(fn: (r) => r._measurement == "mem") + +data + // Windows and aggregates the data into 1h averages + |> aggregateWindow(fn: mean, every: 1h) + // Stores the aggregated data in a new bucket + |> to(bucket: "system-data-downsampled", org: "my-org") +``` + +Again, this is a very basic example, but it should provide you with a foundation +to build more complex downsampling tasks. + +## Add your task +Once your task is ready, see [Create a task](/influxdb/v2.6/process-data/manage-tasks/create-task) for information about adding it to InfluxDB. + +## Things to consider +- If there is a chance that data may arrive late, specify an `offset` in your + task options long enough to account for late data. +- If running a task against a bucket with a finite retention period, + schedule tasks to run prior to the end of the retention period to let + downsampling tasks complete before data outside of the retention period is dropped. 
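
For example, the late-data consideration above only requires one extra property in the task options. The following sketch modifies the example task--the `10m` value is illustrative, and should be long enough for your slowest data source:

```js
// Same task options as the example above, plus an offset:
// the task still queries the same time range, but starts 10 minutes
// late so late-arriving points are included in the aggregation
option task = {name: "cq-mem-data-1w", every: 1w, offset: 10m}
```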
diff --git a/content/influxdb/v2.6/process-data/get-started.md b/content/influxdb/v2.6/process-data/get-started.md new file mode 100644 index 000000000..df233d9de --- /dev/null +++ b/content/influxdb/v2.6/process-data/get-started.md @@ -0,0 +1,277 @@ +--- +title: Get started with InfluxDB tasks +list_title: Get started with tasks +description: > + Learn the basics of writing an InfluxDB task that processes data, and then performs an action, + such as storing the modified data in a new bucket or sending an alert. +aliases: + - /influxdb/v2.6/process-data/write-a-task/ +influxdb/v2.6/tags: [tasks] +menu: + influxdb_2_6: + name: Get started with tasks + parent: Process data +weight: 101 +related: + - /influxdb/v2.6/process-data/manage-tasks/ + - /influxdb/v2.6/process-data/manage-tasks/create-task/ + - /resources/videos/influxdb-tasks/ +--- + +An **InfluxDB task** is a scheduled Flux script that takes a stream of input data, +modifies or analyzes it in some way, then writes the modified data back to InfluxDB +or performs other actions. + +This article walks through writing a basic InfluxDB task that downsamples +data and stores it in a new bucket. + +## Components of a task + +Every InfluxDB task needs the following components. +Their form and order can vary, but they are all essential parts of a task. + +- [Task options](#define-task-options) +- [A data source](#define-a-data-source) +- [Data processing or transformation](#process-or-transform-your-data) +- [A destination](#define-a-destination) + +_[Skip to the full example task script](#full-example-flux-task-script)_ + +## Define task options + +Task options define the schedule, name, and other information about the task. 
+The following example shows how to set task options in a Flux script: + +```js +option task = {name: "downsample_5m_precision", every: 1h, offset: 0m} +``` + +_See [Task configuration options](/influxdb/v2.6/process-data/task-options) for detailed information +about each option._ + +_Note that InfluxDB doesn't guarantee that a task will run at the scheduled time. +See [View task run logs for a task](/influxdb/v2.6/process-data/manage-tasks/task-run-history) +for detailed information on task service-level agreements (SLAs)._ + +{{% note %}} +The InfluxDB UI provides a form for defining task options. +{{% /note %}} + + +{{% cloud-only %}} + +### Task options for invokable scripts + +Use the InfluxDB Cloud API to create tasks that reference and run [invokable scripts](/influxdb/cloud/api-guide/api-invokable-scripts/). +When you create or update the task, pass task options as properties in the request body--for example: + +```json + { + "name": "30-day-avg-temp", + "description": "IoT Center 30d environment average.", + "every": "1d", + "offset": "0m" + ... + } +``` + +To learn more about creating tasks that run invokable scripts, see how to [create a task that references a script](/influxdb/cloud/process-data/manage-tasks/create-task/#create-a-task-that-references-a-script). + +{{% /cloud-only %}} + +## Retrieve and filter data + +A minimal Flux script uses the following functions to retrieve a specified amount +of data from a data source +and then filter the data based on time or column values: + +1. [`from()`](/{{< latest "flux" >}}/stdlib/influxdata/influxdb/from/): + queries data from InfluxDB {{% cloud-only %}}Cloud{{% /cloud-only %}}. +2. [`range()`](/{{< latest "flux" >}}/stdlib/universe/range/): defines the time + range to return data from. +3. [`filter()`](/{{< latest "flux" >}}/stdlib/universe/filter/): filters + data based on column values. 
+ +The following sample Flux retrieves data from an InfluxDB bucket and then filters by +the `_measurement` and `host` columns: + +```js +from(bucket: "example-bucket") + |> range(start: -task.every) + |> filter(fn: (r) => r._measurement == "mem" and r.host == "myHost") +``` + +_To retrieve data from other sources, see [Flux input functions](/{{< latest "flux" >}}/function-types/#inputs)._ + +{{% note %}} + +#### Use task options in your Flux script + +InfluxDB stores options in a `task` option record that you can reference in your Flux script. +The following sample Flux uses the time range `-task.every`: + +```js +from(bucket: "example-bucket") + |> range(start: -task.every) + |> filter(fn: (r) => r._measurement == "mem" and r.host == "myHost") +``` + +`task.every` is dot notation that references the `every` property of the `task` option record. +`every` is defined as `1h`, therefore `-task.every` equates to `-1h`. + +Using task options to define values in your Flux script can make reusing your task easier. +{{% /note %}} + +## Process or transform your data + +Tasks run scripts automatically at regular intervals. +Scripts process or transform data in some way--for example: downsampling, detecting +anomalies, or sending notifications. + +Consider a task that runs hourly and downsamples data by calculating the average of set intervals. +It uses [`aggregateWindow()`](/{{< latest "flux" >}}/stdlib/universe/aggregatewindow/) +to group points into 5-minute (`5m`) windows and calculate the average of each +window with [`mean()`](/{{< latest "flux" >}}/stdlib/universe/mean/). 
+
+The following sample code shows the Flux script with task options:
+
+```js
+option task = {name: "downsample_5m_precision", every: 1h, offset: 0m}
+
+from(bucket: "example-bucket")
+    |> range(start: -task.every)
+    |> filter(fn: (r) => r._measurement == "mem" and r.host == "myHost")
+    |> aggregateWindow(every: 5m, fn: mean)
+```
+
+{{% note %}}
+#### Use offset to account for latent data
+
+Use the `offset` task option to account for potentially latent data (like data from edge devices).
+A task that runs at one hour intervals (`every: 1h`) with an offset of five minutes (`offset: 5m`)
+executes 5 minutes after the hour, but queries data from the original one-hour interval.
+{{% /note %}}
+
+_See [Common tasks](/influxdb/v2.6/process-data/common-tasks) for examples of tasks commonly used with InfluxDB._
+
+{{% cloud-only %}}
+
+### Process data with invokable scripts
+
+In InfluxDB Cloud, you can create tasks that run invokable scripts.
+Invokable scripts let you manage and reuse scripts across your organization,
+and tasks let you schedule script runs with options and parameters.
+
+The following sample `POST /api/v2/scripts` request body defines a new invokable script with the Flux from the previous example:
+
+```json
+{
+  "name": "aggregate-intervals",
+  "description": "Group points into 5 minute windows and calculate the average of each window.",
+  "script": "from(bucket: \"example-bucket\")\
+    |> range(start: -task.every)\
+    |> filter(fn: (r) => r._measurement == \"mem\" and r.host == \"myHost\")\
+    |> aggregateWindow(every: 5m, fn: mean)",
+  "language": "flux"
+}
+```
+
+Note that the script doesn't contain task options.
+Once you create the invokable script, you can use `POST /api/v2/tasks` to create a task that runs the script.
+The following sample request body defines a task with the script ID and options:
+
+```json
+{
+  "every": "1h",
+  "description": "Downsample host with 5 min precision.",
+  "name": "downsample_5m_precision",
+  "scriptID": "09b2136232083000"
+}
+```
+
+To create a script and a task that use parameters, see how to [create a task to run an invokable script](/influxdb/cloud/process-data/manage-tasks/create-task/).
+
+{{% /cloud-only %}}
+
+## Define a destination
+
+In most cases, you'll want to send and store data after the task has transformed it.
+The destination could be a separate InfluxDB measurement or bucket.
+
+The example below uses [`to()`](/{{< latest "flux" >}}/stdlib/universe/to/)
+to write the transformed data back to another InfluxDB bucket:
+
+```js
+// ...
+    |> to(bucket: "example-downsampled", org: "my-org")
+```
+
+To write data into InfluxDB, `to()` requires the following columns:
+
+- `_time`
+- `_measurement`
+- `_field`
+- `_value`
+
+_To write data to other destinations, see
+[Flux output functions](/{{< latest "flux" >}}/function-types/#outputs)._
+
+## Full example Flux task script
+
+The following sample Flux combines all the components described in this guide:
+
+```js
+// Task options
+option task = {name: "downsample_5m_precision", every: 1h, offset: 0m}
+
+// Data source
+from(bucket: "example-bucket")
+    |> range(start: -task.every)
+    |> filter(fn: (r) => r._measurement == "mem" and r.host == "myHost")
+    // Data processing
+    |> aggregateWindow(every: 5m, fn: mean)
+    // Data destination
+    |> to(bucket: "example-downsampled")
+```
+
+{{% cloud-only %}}
+
+## Full example task with invokable script
+
+The following sample code shows a `POST /api/v2/scripts` request body that
+combines the components described in this guide:
+
+```json
+{
+  "name": "aggregate-intervals-and-export",
+  "description": "Group points into 5 minute windows and calculate the average of each window.",
+  "script": "from(bucket: \"example-bucket\")\
+    |> range(start: -task.every)\
+    |> filter(fn: (r) => r._measurement == \"mem\" and r.host == \"myHost\")\
+    // Data processing\
+    |> aggregateWindow(every: 5m, fn: mean)\
+    // Data destination\
+    |> to(bucket: \"example-downsampled\")",
+  "language": "flux"
+}
+```
+
+The following sample code shows a `POST /api/v2/tasks` request body to
+schedule the script:
+
+```json
+{
+  "every": "1h",
+  "description": "Downsample host with 5 min precision.",
+  "name": "downsample_5m_precision",
+  "scriptID": "SCRIPT_ID"
+}
+```
+
+{{% /cloud-only %}}
+
+To learn more about InfluxDB tasks and how they work, watch the following video:
+
+{{< youtube zgCmdtZaH9M >}}
diff --git a/content/influxdb/v2.6/process-data/manage-tasks/_index.md b/content/influxdb/v2.6/process-data/manage-tasks/_index.md
new file mode 100644
index 000000000..6b810ece6
--- /dev/null
+++ b/content/influxdb/v2.6/process-data/manage-tasks/_index.md
@@ -0,0 +1,20 @@
+---
+title: Manage tasks in InfluxDB
+seotitle: Manage data processing tasks in InfluxDB
+list_title: Manage tasks
+description: >
+  InfluxDB provides options for creating, reading, updating, and deleting tasks
+  using the `influx` CLI, the InfluxDB UI, and the InfluxDB API.
+influxdb/v2.6/tags: [tasks]
+menu:
+  influxdb_2_6:
+    name: Manage tasks
+    parent: Process data
+weight: 102
+---
+
+InfluxDB provides multiple options for creating, reading, updating, and deleting (CRUD) tasks.
+The following articles walk through managing tasks with the
+InfluxDB user interface (UI), the `influx` command line interface (CLI), and the InfluxDB API.
+ +{{< children >}} diff --git a/content/influxdb/v2.6/process-data/manage-tasks/create-task.md b/content/influxdb/v2.6/process-data/manage-tasks/create-task.md new file mode 100644 index 000000000..e0f0c8283 --- /dev/null +++ b/content/influxdb/v2.6/process-data/manage-tasks/create-task.md @@ -0,0 +1,305 @@ +--- +title: Create a task +seotitle: Create a task for processing data in InfluxDB +description: > + Create a data processing task in InfluxDB using the InfluxDB UI or the `influx` CLI. +menu: + influxdb_2_6: + name: Create a task + parent: Manage tasks +weight: 201 +related: + - /influxdb/v2.6/reference/cli/influx/task/create +--- + +Create tasks with the InfluxDB user interface (UI), `influx` command line interface (CLI), or `/api/v2` API. + +_Before creating a task, review the [basics for writing a task](/influxdb/v2.6/process-data/get-started)._ + +- [InfluxDB UI](#create-a-task-in-the-influxdb-ui) +- [`influx` CLI](#create-a-task-using-the-influx-cli) +- [InfluxDB API](#create-a-task-using-the-influxdb-api) + +## Create a task in the InfluxDB UI + +The InfluxDB UI provides multiple ways to create a task: + +- [Create a task from the Data Explorer](#create-a-task-from-the-data-explorer) +- [Create a task in the Task UI](#create-a-task-in-the-task-ui) +- [Import a task](#import-a-task) +- [Create a task from a template](#create-a-task-from-a-template) +- [Clone a task](#clone-a-task) + +### Create a task from the Data Explorer + +1. In the navigation menu on the left, select **Data Explorer**. + + {{< nav-icon "data-explorer" >}} + +2. Build a query and click **Save As** in the upper right. +3. Select the **{{< caps >}}Task{{< /caps >}}** heading. +4. Specify the task options. See [Task options](/influxdb/v2.6/process-data/task-options) + for detailed information about each option. +5. Click **{{< caps >}}Save as Task{{< /caps >}}**. + +### Create a task in the Task UI + +1. In the navigation menu on the left, select **Tasks**. 
+
+   {{< nav-icon "tasks" >}}
+
+2. Click **{{< caps >}}{{< icon "plus" >}} Create Task{{< /caps >}}** in the upper right.
+3. In the left panel, specify the task options.
+   See [Task options](/influxdb/v2.6/process-data/task-options) for detailed information about each option.
+4. In the right panel, enter your task script.
+
+   {{% note %}}
+
+##### Leave out the task option assignment
+
+When creating a _new_ task in the InfluxDB Task UI, leave the code editor empty.
+When you save the task, the Task UI uses the [task options](/influxdb/v2.6/process-data/task-options/) you specify in the **Task options** form to populate `option task = {task_options}` for you.
+
+When you edit the saved task, you'll see the injected `option task = {task_options}`.
+   {{% /note %}}
+
+5. Click **Save** in the upper right.
+
+### Import a task
+
+1. In the navigation menu on the left, select **Tasks**.
+
+   {{< nav-icon "tasks" >}}
+
+2. Click **{{< caps >}}{{< icon "plus" >}} Create Task{{< /caps >}}** in the upper right.
+3. In the left panel, specify the task options.
+   See [Task options](/influxdb/v2.6/process-data/task-options) for detailed information about each option.
+4. Paste a raw Flux task in the code editor to the right of the task options fields.
+5. Click **{{< caps >}}Save{{< /caps >}}** in the upper right.
+
+### Create a task from a template
+
+1. In the navigation menu on the left, select **Settings** > **Templates**.
+
+   {{< nav-icon "settings" >}}
+
+2. Find the template you want to use and click its **Resources** list to expand the list of resources.
+3. In the **Resources** list, click the task you want to use.
+
+### Clone a task
+
+1. In the navigation menu on the left, select **Tasks**.
+
+   {{< nav-icon "tasks" >}}
+
+2. Find the task you would like to clone and click the **{{< icon "settings" >}}** icon located on the far right of the task name.
+3. Click **Clone**.
+
+## Create a task using the influx CLI
+
+Use the `influx task create` command to create a new task.
+It accepts either a file path or raw Flux.
+
+### Create a task using a file
+
+```sh
+# Syntax
+influx task create --org <org-name> -f <path-to-task-script>
+
+# Example
+influx task create --org my-org -f /tasks/cq-mean-1h.flux
+```
+
+### Create a task using raw Flux
+
+```sh
+influx task create --org my-org - # <return> to open stdin pipe
+
+option task = {
+  name: "task-name",
+  every: 6h
+}
+
+# ... Task script ...
+
+# Linux & macOS: <ctrl-d> to close the pipe and submit the command
+# Windows: <enter>, then <ctrl-z>, then <enter> to close the pipe and submit the command
+```
+
+## Create a task using the InfluxDB API
+
+{{% oss-only %}}
+
+Use the [`/api/v2/tasks` InfluxDB API endpoint](/influxdb/v2.6/api/#operation/PostTasks) to create a task.
+
+{{< api-endpoint method="POST" endpoint="http://localhost:8086/api/v2/tasks/" >}}
+
+Provide the following in your API request:
+
+#### Request headers
+
+- **Content-Type**: application/json
+- **Authorization**: Token *`INFLUX_API_TOKEN`*
+
+#### Request body
+
+JSON object with the following fields:
+
+- **flux**: raw Flux task string that contains a [`task` option](/flux/v0.x/spec/options/) and a query.
+- **orgID**: your [InfluxDB organization ID](/influxdb/v2.6/organizations/view-orgs/#view-your-organization-id) +- **status**: task status ("active" or "inactive") +- **description**: task description + +```sh +curl --request POST 'http://localhost:8086/api/v2/tasks' \ + --header 'Content-Type: application/json' \ + --header 'Authorization: Token INFLUX_API_TOKEN' \ + --data-raw '{ + "flux": "option task = {name: \"CPU Total 1 Hour New\", every: 1h}\n\nfrom(bucket: \"telegraf\")\n\t|> range(start: -1h)\n\t|> filter(fn: (r) =>\n\t\t(r._measurement == \"cpu\"))\n\t|> filter(fn: (r) =>\n\t\t(r._field == \"usage_system\"))\n\t|> filter(fn: (r) =>\n\t\t(r.cpu == \"cpu-total\"))\n\t|> aggregateWindow(every: 1h, fn: max)\n\t|> to(bucket: \"cpu_usage_user_total_1h\", org: \"INFLUX_ORG\")", + "orgID": "INFLUX_ORG_ID", + "status": "active", + "description": "This task downsamples CPU data every hour" +}' +``` + +{{% /oss-only %}} + +{{% cloud-only %}} + +An InfluxDB Cloud task can run either an [invokable script](/influxdb/cloud/api-guide/api-invokable-scripts/) or raw Flux stored in the task. + +- [Create a task that references a script](#create-a-task-that-references-a-script) +- [Create a task that contains a Flux script](#create-a-task-that-contains-a-flux-script) + +### Create a task that references a script + +With InfluxDB Cloud invokable scripts, you can manage, reuse, and invoke scripts as API endpoints. +You can use tasks to pass script parameters and schedule runs. + +Use the [`/api/v2/tasks` InfluxDB API endpoint](/influxdb/cloud/api/#operation/PostTasks) to create a task +that references a script ID. 
+
+{{< api-endpoint method="POST" endpoint="https://cloud2.influxdata.com/api/v2/tasks/" >}}
+
+Provide the following in your API request:
+
+#### Request headers
+
+- **Content-Type**: application/json
+- **Authorization**: Token *`INFLUX_API_TOKEN`*
+
+#### Request body
+
+JSON object with the following fields:
+
+- **cron** or **every**: task schedule
+- **name**: task name
+- **scriptID**: [invokable script](/influxdb/cloud/api-guide/api-invokable-scripts/) ID
+
+```sh
+curl --request POST 'https://cloud2.influxdata.com/api/v2/tasks' \
+  --header 'Content-Type: application/json' \
+  --header 'Authorization: Token INFLUX_API_TOKEN' \
+  --data '{
+    "cron": "0 * * * *",
+    "name": "downsample cpu",
+    "scriptID": "085a2960eaa20000",
+    "description": "This task downsamples CPU data every hour"
+}'
+```
+
+To create a task that passes parameters when invoking the script, pass the _`scriptParameters`_
+property in the request body.
+The following sample code creates a script with parameters, and then creates a
+task to run the new script daily:
+
+```sh
+SCRIPT_ID=$(
+curl https://cloud2.influxdata.com/api/v2/scripts \
+  --header "Authorization: Token INFLUX_API_TOKEN" \
+  --header 'Accept: application/json' \
+  --header 'Content-Type: application/json' \
+  --data-binary @- << EOF | jq -r '.id'
+  {
+    "name": "filter-and-group19",
+    "description": "Returns filtered and grouped points from a bucket.",
+    "script": "from(bucket: params.bucket)\
+      |> range(start: duration(v: params.rangeStart))\
+      |> filter(fn: (r) => r._field == params.filterField)\
+      |> group(columns: [params.groupColumn])",
+    "language": "flux"
+  }
+EOF
+)
+
+echo $SCRIPT_ID
+
+curl https://cloud2.influxdata.com/api/v2/tasks \
+--header "Content-type: application/json" \
+--header "Authorization: Token INFLUX_API_TOKEN" \
+--data @- << EOF
+  {
+    "name": "30-day-avg-temp",
+    "description": "IoT Center 30d temperature average.",
+    "every": "1d",
+    "scriptID": "${SCRIPT_ID}",
+    "scriptParameters":
+    {
+      "rangeStart": "-30d",
+      "bucket": "air_sensor",
+      "filterField": "temperature",
+      "groupColumn": "_time"
+    }
+  }
+EOF
+```
+
+Replace **`INFLUX_API_TOKEN`** with your InfluxDB API token.
+
+### Create a task that contains a Flux script
+
+Use the [`/api/v2/tasks` InfluxDB API endpoint](/influxdb/cloud/api/#operation/PostTasks) to create a task that contains a Flux script with task options.
+
+{{< api-endpoint method="POST" endpoint="https://cloud2.influxdata.com/api/v2/tasks/" >}}
+
+Provide the following in your API request:
+
+#### Request headers
+
+- **Content-Type**: application/json
+- **Authorization**: Token **`INFLUX_API_TOKEN`**
+
+#### Request body
+
+JSON object with the following fields:
+
+- **flux**: raw Flux task string that contains [`options`](/flux/v0.x/spec/options/) and the query.
+- **status**: task status ("active" or "inactive")
+- **description**: task description
+
+```sh
+curl --request POST 'https://cloud2.influxdata.com/api/v2/tasks' \
+  --header 'Content-Type: application/json' \
+  --header 'Authorization: Token INFLUX_API_TOKEN' \
+  --data-binary @- << EOF
+  {
+    "flux": "option task = {name: \"CPU Total 1 Hour New\", every: 1h}\
+      from(bucket: \"telegraf\")\
+      |> range(start: -1h)\
+      |> filter(fn: (r) => (r._measurement == \"cpu\"))\
+      |> filter(fn: (r) => (r._field == \"usage_system\"))\
+      |> filter(fn: (r) => (r.cpu == \"cpu-total\"))\
+      |> aggregateWindow(every: 1h, fn: max)\
+      |> to(bucket: \"cpu_usage_user_total_1h\", org: \"INFLUX_ORG\")",
+    "orgID": "INFLUX_ORG_ID",
+    "status": "active",
+    "description": "This task downsamples CPU data every hour"
+  }
+EOF
+```
+
+Replace the following:
+
+- **`INFLUX_API_TOKEN`**: your InfluxDB [API token](/influxdb/cloud/security/tokens/view-tokens/)
+- **`INFLUX_ORG`**: your InfluxDB organization name
+- **`INFLUX_ORG_ID`**: your InfluxDB organization ID
+
+{{% /cloud-only %}}
diff --git a/content/influxdb/v2.6/process-data/manage-tasks/delete-task.md b/content/influxdb/v2.6/process-data/manage-tasks/delete-task.md
new file mode 100644
index 000000000..fcfa06752
--- /dev/null
+++ b/content/influxdb/v2.6/process-data/manage-tasks/delete-task.md
@@ -0,0 +1,48 @@
+---
+title: Delete a task
+seotitle: Delete a task for processing data in InfluxDB
+description: >
+  Delete a task from InfluxDB using the InfluxDB UI or the `influx` CLI.
+menu:
+  influxdb_2_6:
+    name: Delete a task
+    parent: Manage tasks
+weight: 206
+related:
+  - /influxdb/v2.6/reference/cli/influx/task/delete
+---
+
+## Delete a task in the InfluxDB UI
+1. In the navigation menu on the left, select **Tasks**.
+
+   {{< nav-icon "tasks" >}}
+
+2. In the list of tasks, hover over the task you want to delete.
+3. Click **Delete** on the far right.
+4. Click **Confirm**.
+
+## Delete a task with the influx CLI
+Use the `influx task delete` command to delete a task.
+
+```sh
+# Syntax
+influx task delete -i <task-id>
+
+# Example
+influx task delete -i 0343698431c35000
+```
+
+_To find the task ID, see [how to view tasks](/influxdb/v2.6/process-data/manage-tasks/view-tasks/)._
+
+## Delete a task using the InfluxDB API
+
+Use the [`/tasks/TASK_ID` InfluxDB API endpoint](/influxdb/v2.6/api/#operation/DeleteTasksID) to delete a task and all associated records (task runs, logs, and labels).
+
+{{< api-endpoint method="DELETE" endpoint="http://localhost:8086/api/v2/tasks/TASK_ID" >}}
+
+_To find the task ID, see [how to view tasks](/influxdb/v2.6/process-data/manage-tasks/view-tasks/)._
+
+Once the task is deleted, InfluxDB cancels all scheduled runs of the task.
+
+If you want to disable a task instead of deleting it, see how to
+[update the task status](/influxdb/v2.6/process-data/manage-tasks/update-task/) to `inactive`.
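+
+For reference, a complete delete request might look like the following sketch.
+The host, API token, and task ID are placeholder assumptions--replace them with your own values.
+
+```sh
+# Delete a task and its runs, logs, and labels (placeholder values).
+curl --request DELETE 'http://localhost:8086/api/v2/tasks/TASK_ID' \
+  --header 'Authorization: Token INFLUX_API_TOKEN'
+```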
diff --git a/content/influxdb/v2.6/process-data/manage-tasks/export-task.md b/content/influxdb/v2.6/process-data/manage-tasks/export-task.md
new file mode 100644
index 000000000..98a288661
--- /dev/null
+++ b/content/influxdb/v2.6/process-data/manage-tasks/export-task.md
@@ -0,0 +1,26 @@
+---
+title: Export a task
+seotitle: Export an InfluxDB task
+description: Export a data processing task from InfluxDB using the InfluxDB UI.
+menu:
+  influxdb_2_6:
+    name: Export a task
+    parent: Manage tasks
+weight: 205
+---
+
+InfluxDB lets you export tasks from the InfluxDB user interface (UI).
+Tasks are exported as downloadable JSON files.
+
+## Export a task in the InfluxDB UI
+1. In the navigation menu on the left, select **Tasks**.
+
+   {{< nav-icon "tasks" >}}
+
+2. In the list of tasks, hover over the task you would like to export and click
+   the **{{< icon "gear" >}}** icon that appears.
+3. Select **Export**.
+4. Download or save the task export file using one of the following options:
+   - Click **Download JSON** to download the exported JSON file.
+   - Click **Save as template** to save the export file as a task template.
+   - Click **Copy to Clipboard** to copy the raw JSON content to your machine's clipboard.
diff --git a/content/influxdb/v2.6/process-data/manage-tasks/run-task.md b/content/influxdb/v2.6/process-data/manage-tasks/run-task.md
new file mode 100644
index 000000000..9b3171fab
--- /dev/null
+++ b/content/influxdb/v2.6/process-data/manage-tasks/run-task.md
@@ -0,0 +1,81 @@
+---
+title: Run a task
+seotitle: Run an InfluxDB task
+description: >
+  Run a data processing task using the InfluxDB UI or the `influx` CLI.
+menu:
+  influxdb_2_6:
+    name: Run a task
+    parent: Manage tasks
+weight: 203
+related:
+  - /influxdb/v2.6/reference/cli/influx/task/run
+  - /influxdb/v2.6/reference/cli/influx/task/run/retry
+  - /influxdb/v2.6/reference/cli/influx/task/retry-failed
+  - /influxdb/v2.6/api/#operation/PostTasksIDRuns
+  - /influxdb/v2.6/api/#operation/PostTasksIDRunsIDRetry
+---
+
+InfluxDB data processing tasks generally run at defined intervals or at a specific time.
+However, you can also manually run a task from the InfluxDB user interface (UI),
+the `influx` command line interface (CLI),
+or the InfluxDB `/api/v2` API.
+
+## Run a task from the InfluxDB UI
+1. In the navigation menu on the left, select **Tasks**.
+
+   {{< nav-icon "tasks" >}}
+
+2. Hover over the task you want to run and click the **{{< icon "gear" >}}** icon.
+3. Select **Run Task**.
+
+## Run a task with the influx CLI
+Use the `influx task run retry` command to run a task.
+
+{{% note %}}
+To run a task from the `influx` CLI, the task must have already run at least once.
+{{% /note %}}
+
+{{< cli/influx-creds-note >}}
+
+```sh
+# List all tasks to find the ID of the task to run
+influx task list
+
+# Use the task ID to list previous runs of the task
+influx task run list --task-id=0000000000000000
+
+# Use the task ID and run ID to retry a run
+influx task run retry --task-id=0000000000000000 --run-id=0000000000000000
+```
+
+### Retry failed task runs
+Use the [`influx task retry-failed` command](/influxdb/v2.6/reference/cli/influx/task/retry-failed/)
+to retry failed task runs.
+ +```sh +# Retry failed tasks for a specific task +influx task retry-failed \ + --id 0000000000000000 + +# Print information about runs that will be retried +influx task retry-failed \ + --dry-run + +# Retry failed task runs that occurred in a specific time range +influx task retry-failed \ + --after 2021-01-01T00:00:00Z \ + --before 2021-01-01T23:59:59Z +``` + +## Run a task with the InfluxDB API +Use the [`/tasks/TASK_ID/runs` +InfluxDB API endpoint](/influxdb/v2.6/api/#operation/PostTasksIDRuns) to manually start a task run. + +{{< api-endpoint method="POST" endpoint="http://localhost:8086/api/v2/tasks/TASK_ID/runs" >}} + +### Retry failed task runs +Use the [`/tasks/TASK_ID/runs/RUN_ID/retry` +InfluxDB API endpoint](/influxdb/v2.6/api/#operation/PostTasksIDRunsIDRetry) to retry a task run. + +{{< api-endpoint method="POST" endpoint="http://localhost:8086/api/v2/tasks/TASK_ID/runs/RUN_ID/retry" >}} diff --git a/content/influxdb/v2.6/process-data/manage-tasks/task-run-history.md b/content/influxdb/v2.6/process-data/manage-tasks/task-run-history.md new file mode 100644 index 000000000..a31338eca --- /dev/null +++ b/content/influxdb/v2.6/process-data/manage-tasks/task-run-history.md @@ -0,0 +1,86 @@ +--- +title: View task run history and logs +description: > + View task run histories and logs using the InfluxDB UI or the `influx` CLI. +menu: + influxdb_2_6: + name: View run history + parent: Manage tasks +weight: 203 +related: + - /influxdb/v2.6/reference/cli/influx/task/list + - /influxdb/v2.6/reference/cli/influx/task/run/list + - /influxdb/v2.6/reference/cli/influx/task/retry-failed +--- + +When an InfluxDB task runs, a _run_ record is created in the task's history. +Logs associated with each run provide relevant log messages, timestamps, +and the exit status of the run attempt. + +Use the InfluxDB user interface (UI), the `influx` command line interface (CLI), +or the InfluxDB `/api/v2` API to view task run histories and associated logs. 
+
+{{% warn %}}
+InfluxDB doesn't guarantee that a task will run at the scheduled time. During busy
+periods, tasks are added to the run queue and processed in order of submission.
+The scheduled start time and actual start time can be viewed in the logs under
+`scheduledFor` and `startedAt`.
+
+Task execution time doesn't affect the time range queried. Tasks query
+the configured time range as if they had executed on schedule, regardless of delay.
+{{% /warn %}}
+
+## View a task's run history in the InfluxDB UI
+
+1. In the navigation menu on the left, select **Tasks**.
+
+   {{< nav-icon "tasks" >}}
+
+2. Hover over the task you want to view and click the **{{< icon "gear" >}}** icon.
+3. Select **View Task Runs**.
+
+### View task run logs
+
+To view logs associated with a run, click **View Logs** next to the run in the task's run history.
+
+## View a task's run history with the influx CLI
+
+Use the `influx task run list` command to view a task's run history.
+
+```sh
+# List all tasks to find the ID of the task
+influx task list
+
+# Use the task ID to view the run history of a task
+influx task run list --task-id=0000000000000000
+```
+
+{{% note %}}
+Detailed run logs are not currently available in the `influx` CLI.
+{{% /note %}}
+
+To retry failed task runs, see how to [run tasks](/influxdb/v2.6/process-data/manage-tasks/run-task/).
+
+## View logs for a task with the InfluxDB API
+
+Use the [`/api/v2/tasks/TASK_ID/logs`
+InfluxDB API endpoint](/influxdb/v2.6/api/#operation/GetTasksIDLogs) to view the log events for a task and exclude additional task metadata.
+
+{{< api-endpoint method="GET" endpoint="http://localhost:8086/api/v2/tasks/TASK_ID/logs" >}}
+
+## View a task's run history with the InfluxDB API
+
+Use the [`/tasks/TASK_ID/runs`
+InfluxDB API endpoint](/influxdb/v2.6/api/#operation/GetTasksIDRuns) to view a task's run history.
+
+{{< api-endpoint method="GET" endpoint="http://localhost:8086/api/v2/tasks/TASK_ID/runs" >}}
+
+### View task run logs with the InfluxDB API
+
+To view logs associated with a run, use the
+[`/api/v2/tasks/TASK_ID/runs/RUN_ID/logs` InfluxDB API
+endpoint](/influxdb/v2.6/api/#operation/GetTasksIDRunsIDLogs).
+
+{{< api-endpoint method="GET" endpoint="http://localhost:8086/api/v2/tasks/TASK_ID/runs/RUN_ID/logs" >}}
+
+To retry failed task runs, see how to [run tasks](/influxdb/v2.6/process-data/manage-tasks/run-task/).
diff --git a/content/influxdb/v2.6/process-data/manage-tasks/update-task.md b/content/influxdb/v2.6/process-data/manage-tasks/update-task.md
new file mode 100644
index 000000000..85b9c6639
--- /dev/null
+++ b/content/influxdb/v2.6/process-data/manage-tasks/update-task.md
@@ -0,0 +1,87 @@
+---
+title: Update a task
+seotitle: Update a task for processing data in InfluxDB
+description: >
+  Update a data processing task in InfluxDB using the InfluxDB UI or the `influx` CLI.
+menu:
+  influxdb_2_6:
+    name: Update a task
+    parent: Manage tasks
+weight: 204
+related:
+  - /influxdb/v2.6/reference/cli/influx/task/update
+---
+
+## Update a task in the InfluxDB UI
+1. In the navigation menu on the left, select **Tasks**.
+
+   {{< nav-icon "tasks" >}}
+
+2. Find the task you would like to edit and click the **{{< icon "settings" >}}** icon located on the far right of the task name.
+3. Click **Edit**.
+4. Click **{{< caps >}}Save{{< /caps >}}** in the upper right.
+
+#### Update a task Flux script
+1. In the list of tasks, click the **Name** of the task you want to update.
+2. In the left panel, modify the task options.
+3. In the right panel, modify the task script.
+4. Click **{{< caps >}}Save{{< /caps >}}** in the upper right.
+
+#### Update the status of a task
+In the list of tasks, click the {{< icon "toggle" >}} toggle to the left of the
+task you want to activate or inactivate.
+
+#### Update a task description
+1. In the list of tasks, hover over the name of the task you want to update.
+2. Click the pencil icon {{< icon "pencil" >}}.
+3. Click outside of the field or press `RETURN` to update.
+
+## Update a task with the influx CLI
+Use the `influx task update` command to update or change the status of an existing task.
+
+_This command requires a task ID, which is available in the output of `influx task list`._
+
+#### Update a task Flux script
+Pass the file path of your updated Flux script to the `influx task update` command
+with the ID of the task you want to update.
+Modified [task options](/influxdb/v2.6/process-data/task-options) defined in the Flux
+script are also updated.
+
+```sh
+# Syntax
+influx task update -i <task-id> -f <path-to-flux-script>
+```
+
+```sh
+# Example
+influx task update -i 0343698431c35000 -f /tasks/cq-mean-1h.flux
+```
+
+#### Update the status of a task
+Pass the ID of the task you want to update to the `influx task update`
+command with the `--status` flag.
+
+_Possible arguments of the `--status` flag are `active` or `inactive`._
+
+```sh
+# Syntax
+influx task update -i <task-id> --status < active | inactive >
+```
+
+```sh
+# Example
+influx task update -i 0343698431c35000 --status inactive
+```
+
+## Update a task with the InfluxDB API
+Use the [`/tasks/TASK_ID`
+InfluxDB API endpoint](/influxdb/v2.6/api/#operation/PatchTasksID) to update properties of a task.
+
+{{< api-endpoint method="PATCH" endpoint="http://localhost:8086/api/v2/tasks/TASK_ID" >}}
+
+In your request, pass the task ID and an object that contains the updated key-value pairs.
+To activate or inactivate a task, set the `status` property.
+`"status": "inactive"` cancels scheduled runs and prevents manual runs of the task.
+_To find the task ID, see [how to view tasks](/influxdb/v2.6/process-data/manage-tasks/view-tasks/)._
+
+Once InfluxDB applies the update, it cancels all previously scheduled runs of the task.
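+
+For example, the following sketch deactivates a task with a PATCH request.
+The host, API token, and task ID are placeholder assumptions--replace them with your own values.
+
+```sh
+# Set a task's status to "inactive" (placeholder values).
+# Scheduled runs are canceled while the task is inactive.
+curl --request PATCH 'http://localhost:8086/api/v2/tasks/TASK_ID' \
+  --header 'Content-Type: application/json' \
+  --header 'Authorization: Token INFLUX_API_TOKEN' \
+  --data '{"status": "inactive"}'
+```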
diff --git a/content/influxdb/v2.6/process-data/manage-tasks/view-tasks.md b/content/influxdb/v2.6/process-data/manage-tasks/view-tasks.md
new file mode 100644
index 000000000..8ee294787
--- /dev/null
+++ b/content/influxdb/v2.6/process-data/manage-tasks/view-tasks.md
@@ -0,0 +1,44 @@
+---
+title: View tasks
+seotitle: View created tasks that process data in InfluxDB
+description: >
+  View existing data processing tasks using the InfluxDB UI or the `influx` CLI.
+menu:
+  influxdb_2_6:
+    name: View tasks
+    parent: Manage tasks
+weight: 202
+related:
+  - /influxdb/v2.6/reference/cli/influx/task/list
+---
+
+## View tasks in the InfluxDB UI
+Click the **Tasks** icon in the left navigation to view the list of tasks.
+
+{{< nav-icon "tasks" >}}
+
+### Filter the list of tasks
+
+1. Click the **Show Inactive** {{< icon "toggle" >}} toggle to include or exclude
+   inactive tasks in the list.
+2. Enter text in the **Filter tasks** field to search for tasks by name or label.
+3. Click the heading of any column to sort by that field.
+
+## View tasks with the influx CLI
+Use the `influx task list` command to return a list of tasks.
+
+```sh
+influx task list
+```
+
+#### Filter tasks using the CLI
+Other filtering options, such as filtering by organization or user or limiting
+the number of tasks returned, are also available.
+See the [`influx task list` documentation](/influxdb/v2.6/reference/cli/influx/task/list)
+for information about other available flags.
+
+## View tasks with the InfluxDB API
+Use the [`/tasks` InfluxDB API endpoint](/influxdb/v2.6/api/#operation/GetTasks)
+to return a list of tasks.
+
+{{< api-endpoint method="GET" endpoint="http://localhost:8086/api/v2/tasks" >}}
\ No newline at end of file
diff --git a/content/influxdb/v2.6/process-data/task-options.md b/content/influxdb/v2.6/process-data/task-options.md
new file mode 100644
index 000000000..206fcef29
--- /dev/null
+++ b/content/influxdb/v2.6/process-data/task-options.md
@@ -0,0 +1,156 @@
+---
+title: Task configuration options
+seotitle: InfluxDB task configuration options
+description: >
+  Task options define specific information about a task such as its name,
+  the schedule on which it runs, execution delays, and others.
+menu:
+  influxdb_2_6:
+    name: Task options
+    parent: Process data
+weight: 105
+influxdb/v2.6/tags: [tasks, flux]
+---
+
+Task options define specific information about a task.
+They are set in a Flux script{{% cloud-only %}}, in the InfluxDB API,{{% /cloud-only %}} or in the InfluxDB user interface (UI).
+The following task options are available:
+
+- [name](#name)
+- [every](#every)
+- [cron](#cron)
+- [offset](#offset)
+
+{{% note %}}
+`every` and `cron` are mutually exclusive, but at least one is required.
+{{% /note %}}
+
+## name
+
+The name of the task. _**Required**_.
+
+_**Data type:** String_
+
+In Flux:
+
+```js
+option task = {
+    name: "taskName",
+    // ...
+}
+```
+
+{{% cloud-only %}}
+In a `/api/v2/tasks` request body with `scriptID`:
+
+```json
+{
+  "scriptID": "SCRIPT_ID",
+  "name": "TASK_NAME",
+  ...
+}
+```
+
+Replace `SCRIPT_ID` with the ID of your InfluxDB invokable script.
+{{% /cloud-only %}}
+
+## every
+
+The interval at which the task runs.
+This option also determines when the task first runs, based on the specified interval
+(a [duration literal](/{{< latest "flux" >}}/spec/lexical-elements/#duration-literals)).
+
+_**Data type:** Duration_
+
+For example, if you save or schedule a task at 2:30pm and run the task every hour (`1h`):
+
+`option task = {name: "aggregation", every: 1h}`
+
+The task first executes at 3:00pm, and subsequently every hour after that.
+
+In Flux:
+
+```js
+option task = {
+    // ...
+    every: 1h,
+}
+```
+
+{{% cloud-only %}}
+In a `/api/v2/tasks` request body with `scriptID`:
+
+```json
+{
+    "scriptID": "SCRIPT_ID",
+    "every": "1h"
+    ...
+}
+```
+
+{{% /cloud-only %}}
+
+{{% note %}}
+In the InfluxDB UI, use the **Interval** field to set this option.
+{{% /note %}}
+
+## cron
+
+The [cron expression](https://en.wikipedia.org/wiki/Cron#Overview) that
+defines the schedule on which the task runs.
+Cron scheduling is based on system time.
+
+_**Data type:** String_
+
+In Flux:
+
+```js
+option task = {
+    // ...
+    cron: "0 * * * *",
+}
+```
+
+{{% cloud-only %}}
+In a `/api/v2/tasks` request body with `scriptID`:
+
+```json
+{
+    "scriptID": "SCRIPT_ID",
+    "cron": "0 * * * *",
+    ...
+}
+```
+
+{{% /cloud-only %}}
+
+## offset
+
+Delays the execution of the task but preserves the original time range.
+For example, if a task is to run on the hour, a `10m` offset will delay it to 10
+minutes after the hour, but all time ranges defined in the task are relative to
+the specified execution time.
+A common use case is offsetting execution to account for data that may arrive late.
+
+_**Data type:** Duration_
+
+In Flux:
+
+```js
+option task = {
+    // ...
+    offset: 10m,
+}
+```
+
+{{% cloud-only %}}
+
+In a `/api/v2/tasks` request body with `scriptID`:
+
+```json
+{
+    "scriptID": "SCRIPT_ID",
+    "offset": "10m",
+    ...
+}
+```
+
+{{% /cloud-only %}}
\ No newline at end of file
diff --git a/content/influxdb/v2.6/query-data/_index.md b/content/influxdb/v2.6/query-data/_index.md
new file mode 100644
index 000000000..0f96264ec
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/_index.md
@@ -0,0 +1,21 @@
+---
+title: Query data in InfluxDB
+seotitle: Query data stored in InfluxDB
+description: >
+  Learn to query data stored in InfluxDB using Flux and tools such as the InfluxDB
+  user interface and the `influx` command line interface.
+aliases:
+  - /influxdb/v2.6/query_language/data_exploration/
+menu:
+  influxdb_2_6:
+    name: Query data
+weight: 5
+influxdb/v2.6/tags: [query, flux]
+---
+
+Learn to query data stored in InfluxDB using Flux and tools such as the InfluxDB
+user interface and the `influx` command line interface.
+
+{{< children >}}
+
+{{< influxdbu "influxdb-101" >}}
diff --git a/content/influxdb/v2.6/query-data/common-queries/_index.md b/content/influxdb/v2.6/query-data/common-queries/_index.md
new file mode 100644
index 000000000..4366764db
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/common-queries/_index.md
@@ -0,0 +1,22 @@
+---
+title: Common queries
+seotitle: Common queries with Flux
+description: >
+  This collection of articles walks through common use cases for Flux queries.
+influxdb/v2.6/tags: [queries]
+menu:
+  influxdb_2_6:
+    name: Common queries
+    parent: Query data
+weight: 104
+---
+
+The following articles walk through common queries using the
+[NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data).
+
+{{< children >}}
+
+{{% note %}}
+This list will continue to grow.
+If you have suggestions, please [submit them to the InfluxData Community](https://community.influxdata.com/c/influxdb2).
+{{% /note %}}
diff --git a/content/influxdb/v2.6/query-data/common-queries/compare-values.md b/content/influxdb/v2.6/query-data/common-queries/compare-values.md
new file mode 100644
index 000000000..a1578228b
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/common-queries/compare-values.md
@@ -0,0 +1,48 @@
+---
+title: Compare values from different buckets
+seotitle: Compare the last measurement to a mean stored in another bucket
+description: >
+  Compare the value from the latest point to an average value stored in another bucket. This is useful when using the average value to calculate a threshold check.
+influxdb/v2.6/tags: [queries]
+menu:
+  influxdb_2_6:
+    name: Compare values from different buckets
+    parent: Common queries
+weight: 104
+---
+
+{{% note %}}
+This example uses [NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data).
+{{% /note %}}
+
+This example compares the value from the latest point to an average value stored in another bucket. This is useful when using the average value to calculate a [threshold check](/influxdb/v2.6/monitor-alert/checks/create/#threshold-check).
+
+The following query:
+
+- Uses [`range()`](/{{< latest "flux" >}}/stdlib/universe/range/) to define a time range.
+- Uses [`last()`](/{{< latest "flux" >}}/stdlib/universe/last/) to get the last value in the `weekly_means` bucket and the last value in the `noaa` bucket.
+- Uses [`join()`](/{{< latest "flux" >}}/stdlib/universe/join/) to combine the results.
+- Uses [`map()`](/{{< latest "flux" >}}/stdlib/universe/map/) to calculate the differences.
+
+```js
+means = from(bucket: "weekly_means")
+    |> range(start: 2019-09-01T00:00:00Z)
+    |> last()
+    |> keep(columns: ["_value", "location"])
+
+latest = from(bucket: "noaa")
+    |> range(start: 2019-09-01T00:00:00Z)
+    |> filter(fn: (r) => r._measurement == "average_temperature")
+    |> last()
+    |> keep(columns: ["_value", "location"])
+
+join(tables: {mean: means, reading: latest}, on: ["location"])
+    |> map(fn: (r) => ({r with deviation: r._value_reading - r._value_mean}))
+```
+
+### Example results
+
+| location | _value_mean | _value_reading | deviation |
+|:-------- | -----------: | --------------:| ---------: |
+| coyote_creek | 79.82710622710623 | 89 | 9.172893772893772 |
+| santa_monica | 80.20451339915374 | 85 | 4.79548660084626 |
diff --git a/content/influxdb/v2.6/query-data/common-queries/iot-common-queries.md b/content/influxdb/v2.6/query-data/common-queries/iot-common-queries.md
new file mode 100644
index 000000000..c8c81eafc
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/common-queries/iot-common-queries.md
@@ -0,0 +1,213 @@
+---
+title: IoT sensor common queries
+description: >
+  Use Flux to address common IoT use cases that query data collected from sensors.
+influxdb/v2.6/tags: [queries]
+menu:
+  influxdb_2_6:
+    name: IoT common queries
+    parent: Common queries
+weight: 205
+---
+
+The following scenarios illustrate common queries used to extract information from IoT sensor data:
+
+- [Calculate time in state](#calculate-time-in-state)
+- [Calculate time-weighted average](#calculate-time-weighted-average)
+- [Calculate value between events](#calculate-value-between-events)
+- [Determine a state with existing values](#determine-a-state-with-existing-values)
+
+All scenarios below use the `machineProduction` sample dataset provided by the [InfluxDB `sample` package](/{{< latest "flux" >}}/stdlib/influxdata/influxdb/sample/).
+For more information, see [Sample data](/influxdb/v2.6/reference/sample-data/).
+
+## Calculate time in state
+
+In this scenario, we look at whether a production line is running smoothly (`state`=`OK`) or not (`state`=`NOK`), and calculate the percentage of time the production line spends in each state. If no points are recorded during the interval (`state`=`NaN`), you may opt to retrieve the last state prior to the interval.
+
+To visualize the time in state, see the [Mosaic visualization](#mosaic-visualization).
+
+**To calculate the percentage of time a machine spends in each state**
+
+1. Import the [`contrib/tomhollingworth/events` package](/{{< latest "flux" >}}/stdlib/contrib/tomhollingworth/events/).
+2. Query the `state` field.
+3. Use `events.duration()` to return the amount of time (in a specified unit) between each data point, and store the interval in the `duration` column.
+4. Group data by the status value column (in this case, `_value`), `_start`, `_stop`, and other relevant dimensions.
+5. Sum the `duration` column to calculate the total amount of time spent in each state.
+6. Pivot the summed durations into the `_value` column.
+7. Use `map()` to calculate the percentage of time spent in each state.
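Before looking at the full Flux query, the grouping and percentage arithmetic in the final steps can be sketched outside Flux. This is a Python illustration only; the durations below are hypothetical stand-ins for the summed `duration` values the query produces:

```python
# Hypothetical summed durations (in hours) per state, standing in for the
# output of the events.duration() -> group() -> sum() steps.
durations = {"OK": 172, "NOK": 22}

# Equivalent of the final map() step: each state's share of total time.
total = sum(durations.values())
percentages = {state: hours / total * 100.0 for state, hours in durations.items()}

print(round(percentages["NOK"], 2))  # 11.34
print(round(percentages["OK"], 2))   # 88.66
```

The Flux query that follows performs the same arithmetic directly on the queried data.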
+ +```js +import "contrib/tomhollingworth/events" + +from(bucket: "machine") + |> range(start: 2021-08-01T00:00:00Z, stop: 2021-08-02T00:30:00Z) + |> filter(fn: (r) => r["_measurement"] == "machinery") + |> filter(fn: (r) => r["_field"] == "state") + |> events.duration(unit: 1h, columnName: "duration") + |> group(columns: ["_value", "_start", "_stop", "station_id"]) + |> sum(column: "duration") + |> pivot(rowKey: ["_stop"], columnKey: ["_value"], valueColumn: "duration") + |> map( + fn: (r) => { + totalTime = float(v: r.NOK + r.OK) + + return {r with NOK: float(v: r.NOK) / totalTime * 100.0, OK: float(v: r.OK) / totalTime * 100.0} + }, + ) +``` + +The query above focuses on a specific time range of state changes reported in the production line. + +- `range()` defines the time range to query. +- `filter()` defines the field (`state`) and measurement (`machinery`) to filter by. +- `events.duration()` calculates the time between points. +- `group()` regroups the data by the field value, so points with `OK` and `NOK` field values are grouped into separate tables. +- `sum()` returns the sum of durations spent in each state. + +The output of the query at this point is: + +| _value | duration | +| ------ | -------: | +| NOK | 22 | + +| _value | duration | +| ------ | -------: | +| OK | 172 | + +`pivot()` creates columns for each unique value in the `_value` column, and then assigns the associated duration as the column value. +The output of the pivot operation is: + +| NOK | OK | +| :-- | :-- | +| 22 | 172 | + +Given the output above, `map()` does the following: + +1. Adds the `NOK` and `OK` values to calculate `totalTime`. +2. Divides `NOK` by `totalTime`, and then multiplies the quotient by 100. +3. Divides `OK` by `totalTime`, and then multiplies the quotient by 100. 
+
+This returns:
+
+| NOK | OK |
+| :---------------- | :----------------- |
+| 11.34020618556701 | 88.65979381443299 |
+
+The result shows that production is in the `OK` state 88.66% of the time and in the `NOK` state 11.34% of the time.
+
+#### Mosaic visualization
+
+The [mosaic visualization](/influxdb/v2.6/visualize-data/visualization-types/mosaic/) displays state changes over time. In this example, the mosaic visualization displays different colored tiles based on the `state` field.
+
+```js
+from(bucket: "machine")
+    |> range(start: 2021-08-01T00:00:00Z, stop: 2021-08-02T00:30:00Z)
+    |> filter(fn: (r) => r._measurement == "machinery")
+    |> filter(fn: (r) => r._field == "state")
+    |> aggregateWindow(every: v.windowPeriod, fn: last, createEmpty: false)
+```
+
+When visualizing data, it is possible to have more data points than available pixels. To divide data into time windows that span a single pixel, use `aggregateWindow()` with the `every` parameter set to `v.windowPeriod`.
+Use `last` as the aggregate `fn` to return the last value in each time window.
+Set `createEmpty` to `false` so results don't include empty time windows.
+
+## Calculate time-weighted average
+
+To calculate the time-weighted average of data points, use the [`timeWeightedAvg()` function](/{{< latest "flux" >}}/stdlib/universe/timeweightedavg/).
+
+The example below queries the `oil_temp` field in the `machinery` measurement. The `timeWeightedAvg()` function returns the time-weighted average of oil temperatures based on 5-second intervals.
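As a rough illustration of what a time-weighted average means, the sketch below weights each value by how long it remains current within the window. This is a Python sketch under a last-observation-carried-forward assumption with hypothetical readings; the details of Flux's `timeWeightedAvg()` implementation may differ:

```python
# Time-weighted average over irregularly spaced points.
# points: list of (timestamp_in_seconds, value); stop: window end time.
# Each value is assumed to hold until the next point (or the window stop).
def time_weighted_avg(points, stop):
    total = 0.0
    duration = 0.0
    for (t, value), (t_next, _) in zip(points, points[1:] + [(stop, None)]):
        span = t_next - t
        total += value * span   # value weighted by how long it was current
        duration += span
    return total / duration

# Hypothetical oil_temp readings over a 30-second window.
readings = [(0, 40.0), (10, 41.0), (25, 40.5)]
print(round(time_weighted_avg(readings, stop=30), 4))  # 40.5833
```

A plain `mean()` of the same readings would weight each point equally, regardless of how long each value was in effect.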
+```js
+from(bucket: "machine")
+    |> range(start: 2021-08-01T00:00:00Z, stop: 2021-08-01T00:00:30Z)
+    |> filter(fn: (r) => r._measurement == "machinery" and r._field == "oil_temp")
+    |> timeWeightedAvg(unit: 5s)
+```
+
+##### Output data
+
+| stationID | _start | _stop | _value |
+|:----- | ----- | ----- | ------:|
+| g1 | 2021-08-01T01:00:00.000Z | 2021-08-01T00:00:30.000Z | 40.25396118491921 |
+| g2 | 2021-08-01T01:00:00.000Z | 2021-08-01T00:00:30.000Z | 40.6 |
+| g3 | 2021-08-01T01:00:00.000Z | 2021-08-01T00:00:30.000Z | 41.384505595567866 |
+| g4 | 2021-08-01T01:00:00.000Z | 2021-08-01T00:00:30.000Z | 41.26735518634935 |
+
+## Calculate value between events
+
+Calculate the value between events by getting the average value during a specific time range.
+
+The following scenario queries data collected between the start and end of a batch run on four production lines.
+The query calculates the average oil temperature for each grinding station during that period.
+
+```js
+batchStart = 2021-08-01T00:00:00Z
+batchStop = 2021-08-01T00:00:20Z
+
+from(bucket: "machine")
+    |> range(start: batchStart, stop: batchStop)
+    |> filter(fn: (r) => r._measurement == "machinery" and r._field == "oil_temp")
+    |> mean()
+```
+
+##### Output
+
+| stationID | _start | _stop | _value |
+|:----- | ----- | ----- | ------:|
+| g1 | 2021-08-01T01:00:00.000Z | 2021-08-02T00:00:00.000Z | 40 |
+| g2 | 2021-08-01T01:00:00.000Z | 2021-08-02T00:00:00.000Z | 40.6 |
+| g3 | 2021-08-01T01:00:00.000Z | 2021-08-02T00:00:00.000Z | 41.379999999999995 |
+| g4 | 2021-08-01T01:00:00.000Z | 2021-08-02T00:00:00.000Z | 41.2 |
+
+## Determine a state with existing values
+
+Use multiple existing values to determine a state.
+The following example calculates a state based on the difference between the `pressure` and `pressure_target` fields in the machine-production sample data.
+To determine a state by comparing existing fields:
+
+1. Query the fields to compare (in this case, `pressure` and `pressure_target`).
+2. (Optional) Use `aggregateWindow()` to window data into time-based windows and
+   apply an aggregate function (like `mean()`) to return values that represent larger windows of time.
+3. Use `pivot()` to shift field values into columns.
+4. Use `map()` to compare or operate on the different field column values.
+5. Use `map()` to assign a status (in this case, `needsMaintenance`) based on the relationship of the field column values.
+
+```js
+import "math"
+
+from(bucket: "machine")
+    |> range(start: 2021-08-01T00:00:00Z, stop: 2021-08-02T00:00:00Z)
+    |> filter(fn: (r) => r["_measurement"] == "machinery")
+    |> filter(fn: (r) => r["_field"] == "pressure" or r["_field"] == "pressure_target")
+    |> aggregateWindow(every: 12h, fn: mean)
+    |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
+    |> map(fn: (r) => ({r with pressureDiff: r.pressure - r.pressure_target}))
+    |> map(fn: (r) => ({r with needsMaintenance: if math.abs(x: r.pressureDiff) >= 15.0 then true else false}))
+```
+
+##### Output
+
+| _time | needsMaintenance | pressure | pressure_target | pressureDiff | stationID |
+| :----------------------- | :--------------- | -----------------: | -----------------: | ------------------: | --------: |
+| 2021-08-01T12:00:00.000Z | false | 101.83929080014092 | 104.37786394078252 | -2.5385731406416028 | g1 |
+| 2021-08-02T00:00:00.000Z | false | 96.04368008245874 | 102.27698650674662 | -6.233306424287889 | g1 |
+
+| _time | needsMaintenance | pressure | pressure_target | pressureDiff | stationID |
+| :----------------------- | :--------------- | -----------------: | -----------------: | ------------------: | --------: |
+| 2021-08-01T12:00:00.000Z | false | 101.62490431541765 | 104.83915260886623 | -3.214248293448577 | g2 |
+| 2021-08-02T00:00:00.000Z | false | 94.52039415465273 | 105.90869375273046 | -11.388299598077722 | g2 |
+
+| _time | needsMaintenance | pressure | pressure_target | pressureDiff | stationID |
+| :----------------------- | :--------------- | -----------------: | -----------------: | ------------------: | --------: |
+| 2021-08-01T12:00:00.000Z | false | 92.23774168403503 | 104.81867444768653 | -12.580932763651504 | g3 |
+| 2021-08-02T00:00:00.000Z | true | 89.20867846153847 | 108.2579185520362 | -19.049240090497733 | g3 |
+
+| _time | needsMaintenance | pressure | pressure_target | pressureDiff | stationID |
+| :----------------------- | :--------------- | -----------------: | -----------------: | ------------------: | --------: |
+| 2021-08-01T12:00:00.000Z | false | 94.40834093349847 | 107.6827757125155 | -13.274434779017028 | g4 |
+| 2021-08-02T00:00:00.000Z | true | 88.61785638936534 | 108.25471698113208 | -19.636860591766734 | g4 |
+
+The tables show that the `pressureDiff` values `-19.049240090497733` from station g3 and `-19.636860591766734` from station g4 have absolute values greater than 15. For those rows, `needsMaintenance` is set to `true`, indicating that the station requires maintenance before the value can return to `false`.
\ No newline at end of file
diff --git a/content/influxdb/v2.6/query-data/common-queries/multiple-fields-in-calculations.md b/content/influxdb/v2.6/query-data/common-queries/multiple-fields-in-calculations.md
new file mode 100644
index 000000000..f9b67c56f
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/common-queries/multiple-fields-in-calculations.md
@@ -0,0 +1,106 @@
+---
+title: Use multiple fields in a calculation
+description: >
+  Query multiple fields, pivot results, and use multiple field values to
+  calculate new values in query results.
+influxdb/v2.6/tags: [queries]
+menu:
+  influxdb_2_6:
+    parent: Common queries
+weight: 103
+---
+
+To use values from multiple fields in a mathematical calculation, complete the following steps:
+
+1. [Filter by fields required in your calculation](#filter-by-fields)
+2. [Pivot fields into columns](#pivot-fields-into-columns)
+3. [Perform the mathematical calculation](#perform-the-calculation)
+
+## Filter by fields
+Use [`filter()`](/{{< latest "flux" >}}/stdlib/universe/filter/)
+to return only the fields necessary for your calculation.
+Use the [`or` logical operator](/{{< latest "flux" >}}/spec/operators/#logical-operators)
+to filter by multiple fields.
+
+The following example queries two fields, `A` and `B`:
+
+```js
+from(bucket: "example-bucket")
+    |> range(start: -1m)
+    |> filter(fn: (r) => r._field == "A" or r._field == "B")
+```
+
+This query returns one or more tables for each field. For example:
+
+{{< flex >}}
+{{% flex-content %}}
+| _time | _field | _value |
+|:----- |:------:| ------:|
+| 2021-01-01T00:00:00Z | A | 12.4 |
+| 2021-01-01T00:00:15Z | A | 12.2 |
+| 2021-01-01T00:00:30Z | A | 11.6 |
+| 2021-01-01T00:00:45Z | A | 11.9 |
+{{% /flex-content %}}
+{{% flex-content %}}
+| _time | _field | _value |
+|:----- |:------:| ------:|
+| 2021-01-01T00:00:00Z | B | 3.1 |
+| 2021-01-01T00:00:15Z | B | 4.8 |
+| 2021-01-01T00:00:30Z | B | 2.2 |
+| 2021-01-01T00:00:45Z | B | 3.3 |
+{{% /flex-content %}}
+{{< /flex >}}
+
+## Pivot fields into columns
+Use [`pivot()`](/{{< latest "flux" >}}/stdlib/universe/pivot/)
+to align multiple fields by time.
+
+{{% note %}}
+To correctly pivot on `_time`, points for each field must have identical timestamps.
+If timestamps are irregular or do not align perfectly, see
+[Normalize irregular timestamps](/influxdb/v2.6/query-data/flux/manipulate-timestamps/#normalize-irregular-timestamps).
+{{% /note %}}
+
+```js
+// ...
+    |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
+```
+
+Using the queried data [above](#filter-by-fields), this `pivot()` function returns:
+
+| _time | A | B |
+|:----- | ------:| ------:|
+| 2021-01-01T00:00:00Z | 12.4 | 3.1 |
+| 2021-01-01T00:00:15Z | 12.2 | 4.8 |
+| 2021-01-01T00:00:30Z | 11.6 | 2.2 |
+| 2021-01-01T00:00:45Z | 11.9 | 3.3 |
+
+## Perform the calculation
+Use [`map()`](/{{< latest "flux" >}}/stdlib/universe/map/)
+to perform the mathematical operation using column values as operands.
+
+The following example uses values in the `A` and `B` columns to calculate a new `_value` column:
+
+```js
+// ...
+    |> map(fn: (r) => ({ r with _value: r.A * r.B }))
+```
+
+Using the pivoted data above, this `map()` function returns:
+
+| _time | A | B | _value |
+|:----- | ------:| ------:| ------:|
+| 2021-01-01T00:00:00Z | 12.4 | 3.1 | 38.44 |
+| 2021-01-01T00:00:15Z | 12.2 | 4.8 | 58.56 |
+| 2021-01-01T00:00:30Z | 11.6 | 2.2 | 25.52 |
+| 2021-01-01T00:00:45Z | 11.9 | 3.3 | 39.27 |
+
+## Full example query
+
+```js
+from(bucket: "example-bucket")
+    |> range(start: -1m)
+    |> filter(fn: (r) => r._field == "A" or r._field == "B")
+    |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
+    |> map(fn: (r) => ({r with _value: r.A * r.B}))
+```
diff --git a/content/influxdb/v2.6/query-data/common-queries/operate-on-columns.md b/content/influxdb/v2.6/query-data/common-queries/operate-on-columns.md
new file mode 100644
index 000000000..b449b33ad
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/common-queries/operate-on-columns.md
@@ -0,0 +1,137 @@
+---
+title: Operate on columns
+description: >
+  Find and count unique values, recalculate the `_value` column, and use values to calculate a new column.
+influxdb/v2.6/tags: [queries]
+aliases:
+  - /influxdb/v2.6/query-data/common-queries/count_unique_values_for_column/
+  - /influxdb/v2.6/query-data/common-queries/recalculate_value_column/
+  - /influxdb/v2.6/query-data/common-queries/calculate_new_column/
+menu:
+  influxdb_2_6:
+    name: Operate on columns
+    parent: Common queries
+weight: 100
+---
+
+Use the following common queries to operate on columns:
+
+- [Find and count unique values in a column](#find-and-count-unique-values-in-a-column)
+- [Recalculate the _values column](#recalculate-the-_values-column)
+- [Calculate a new column](#calculate-a-new-column)
+
+{{% note %}}
+These examples use [NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data).
+{{% /note %}}
+
+## Find and count unique values in a column
+
+Find and count the number of unique values in a specified column.
+The following examples find and count unique locations where data was collected.
+
+### Find unique values
+
+This query:
+
+- Uses [`group()`](/{{< latest "flux" >}}/stdlib/universe/group/) to ungroup data and return results in a single table.
+- Uses [`keep()`](/{{< latest "flux" >}}/stdlib/universe/keep/) and [`unique()`](/{{< latest "flux" >}}/stdlib/universe/unique/) to return unique values in the specified column.
+
+```js
+from(bucket: "noaa")
+    |> range(start: -30d)
+    |> group()
+    |> keep(columns: ["location"])
+    |> unique(column: "location")
+```
+
+#### Example results
+| location |
+|:-------- |
+| coyote_creek |
+| santa_monica |
+
+### Count unique values
+
+This query:
+
+- Uses [`group()`](/{{< latest "flux" >}}/stdlib/universe/group/) to ungroup data and return results in a single table.
+- Uses [`keep()`](/{{< latest "flux" >}}/stdlib/universe/keep/), [`unique()`](/{{< latest "flux" >}}/stdlib/universe/unique/), and then [`count()`](/{{< latest "flux" >}}/stdlib/universe/count/) to count the number of unique values.
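The unique-then-count pattern can be mimicked outside Flux with ordinary set operations. This is a Python illustration using hypothetical `location` values, not the actual query output:

```python
# Hypothetical "location" values across all rows, as if group() had ungrouped
# the data into a single table and keep() had kept only one column.
locations = ["coyote_creek", "santa_monica", "coyote_creek", "santa_monica"]

# unique(column: "location") keeps one row per distinct value.
unique_locations = sorted(set(locations))
print(unique_locations)  # ['coyote_creek', 'santa_monica']

# count(column: "location") then counts those rows.
print(len(unique_locations))  # 2
```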
+
+```js
+from(bucket: "noaa")
+    |> range(start: -30d)
+    |> group()
+    |> keep(columns: ["location"])
+    |> unique(column: "location")
+    |> count(column: "location")
+```
+
+#### Example results
+
+| location |
+| ---------:|
+| 2 |
+
+## Recalculate the _values column
+
+To recalculate the `_value` column, use the `with` operator in [`map()`](/{{< latest "flux" >}}/stdlib/universe/map/) to overwrite the existing `_value` column.
+
+The following query:
+
+- Uses [`filter()`](/{{< latest "flux" >}}/stdlib/universe/filter/) to filter the `average_temperature` measurement.
+- Uses [`map()`](/{{< latest "flux" >}}/stdlib/universe/map/) to convert Fahrenheit temperature values into Celsius.
+
+```js
+from(bucket: "noaa")
+    |> range(start: -30d)
+    |> filter(fn: (r) => r._measurement == "average_temperature")
+    |> map(fn: (r) => ({r with _value: (float(v: r._value) - 32.0) * 5.0 / 9.0}))
+```
+
+| _field | _measurement | _start | _stop | _time | location | _value |
+|:------ |:------------ |:------ |:----- |:----- |:-------- | ------: |
+| degrees | average_temperature | 1920-03-05T22:10:01Z | 2020-03-05T22:10:01Z | 2019-08-17T00:00:00Z | coyote_creek | 27.77777777777778 |
+| degrees | average_temperature | 1920-03-05T22:10:01Z | 2020-03-05T22:10:01Z | 2019-08-17T00:06:00Z | coyote_creek | 22.77777777777778 |
+| degrees | average_temperature | 1920-03-05T22:10:01Z | 2020-03-05T22:10:01Z | 2019-08-17T00:12:00Z | coyote_creek | 30 |
+| degrees | average_temperature | 1920-03-05T22:10:01Z | 2020-03-05T22:10:01Z | 2019-08-17T00:18:00Z | coyote_creek | 31.666666666666668 |
+| degrees | average_temperature | 1920-03-05T22:10:01Z | 2020-03-05T22:10:01Z | 2019-08-17T00:24:00Z | coyote_creek | 25 |
+| degrees | average_temperature | 1920-03-05T22:10:01Z | 2020-03-05T22:10:01Z | 2019-08-17T00:30:00Z | coyote_creek | 21.11111111111111 |
+| degrees | average_temperature | 1920-03-05T22:10:01Z | 2020-03-05T22:10:01Z | 2019-08-17T00:36:00Z | coyote_creek | 28.88888888888889 |
+| degrees | average_temperature | 1920-03-05T22:10:01Z | 2020-03-05T22:10:01Z | 2019-08-17T00:42:00Z | coyote_creek | 24.444444444444443 |
+| degrees | average_temperature | 1920-03-05T22:10:01Z | 2020-03-05T22:10:01Z | 2019-08-17T00:48:00Z | coyote_creek | 29.444444444444443 |
+| degrees | average_temperature | 1920-03-05T22:10:01Z | 2020-03-05T22:10:01Z | 2019-08-17T00:54:00Z | coyote_creek | 26.666666666666668 |
+| degrees | average_temperature | 1920-03-05T22:10:01Z | 2020-03-05T22:10:01Z | 2019-08-17T01:00:00Z | coyote_creek | 21.11111111111111 |
+| ••• | ••• | ••• | ••• | ••• | ••• | ••• |
+
+## Calculate a new column
+
+To use values in a row to calculate and add a new column, use `map()`.
+The example below converts temperature from Fahrenheit to Celsius and maps the Celsius value to a new `celsius` column.
+
+The following query:
+
+- Uses [`filter()`](/{{< latest "flux" >}}/stdlib/universe/filter/) to filter the `average_temperature` measurement.
+- Uses [`map()`](/{{< latest "flux" >}}/stdlib/universe/map/) to create a new column calculated from existing values in each row.
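The `map()` call keeps every existing column and adds a computed one. The same pattern, sketched in Python on hypothetical rows for illustration only:

```python
# Hypothetical rows with Fahrenheit readings in _value, mirroring the
# queried table structure.
rows = [
    {"_time": "2019-08-17T00:00:00Z", "_value": 82},
    {"_time": "2019-08-17T00:06:00Z", "_value": 73},
]

# Like Flux's ({r with celsius: ...}): copy each row, add a celsius column.
rows = [{**r, "celsius": (r["_value"] - 32.0) * 5.0 / 9.0} for r in rows]

print(round(rows[0]["celsius"], 2))  # 27.78
print(round(rows[1]["celsius"], 2))  # 22.78
```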
+```js
+from(bucket: "noaa")
+    |> range(start: -30d)
+    |> filter(fn: (r) => r._measurement == "average_temperature")
+    |> map(fn: (r) => ({r with celsius: (r._value - 32.0) * 5.0 / 9.0}))
+```
+
+#### Example results
+
+| _start | _stop | _field | _measurement | location | _time | _value | celsius |
+|:------ |:----- |:------: |:------------: |:--------: |:----- | ------:| -------:|
+| 1920-03-05T22:10:01Z | 2020-03-05T22:10:01Z | degrees | average_temperature | coyote_creek | 2019-08-17T00:00:00Z | 82 | 27.78 |
+| 1920-03-05T22:10:01Z | 2020-03-05T22:10:01Z | degrees | average_temperature | coyote_creek | 2019-08-17T00:06:00Z | 73 | 22.78 |
+| 1920-03-05T22:10:01Z | 2020-03-05T22:10:01Z | degrees | average_temperature | coyote_creek | 2019-08-17T00:12:00Z | 86 | 30.00 |
+| 1920-03-05T22:10:01Z | 2020-03-05T22:10:01Z | degrees | average_temperature | coyote_creek | 2019-08-17T00:18:00Z | 89 | 31.67 |
+| 1920-03-05T22:10:01Z | 2020-03-05T22:10:01Z | degrees | average_temperature | coyote_creek | 2019-08-17T00:24:00Z | 77 | 25.00 |
+| 1920-03-05T22:10:01Z | 2020-03-05T22:10:01Z | degrees | average_temperature | coyote_creek | 2019-08-17T00:30:00Z | 70 | 21.11 |
+| 1920-03-05T22:10:01Z | 2020-03-05T22:10:01Z | degrees | average_temperature | coyote_creek | 2019-08-17T00:36:00Z | 84 | 28.89 |
+| 1920-03-05T22:10:01Z | 2020-03-05T22:10:01Z | degrees | average_temperature | coyote_creek | 2019-08-17T00:42:00Z | 76 | 24.44 |
+| 1920-03-05T22:10:01Z | 2020-03-05T22:10:01Z | degrees | average_temperature | coyote_creek | 2019-08-17T00:48:00Z | 85 | 29.44 |
+| 1920-03-05T22:10:01Z | 2020-03-05T22:10:01Z | degrees | average_temperature | coyote_creek | 2019-08-17T00:54:00Z | 80 | 26.67 |
+| ••• | ••• | ••• | ••• | ••• | ••• | ••• | ••• |
diff --git a/content/influxdb/v2.6/query-data/execute-queries/_index.md b/content/influxdb/v2.6/query-data/execute-queries/_index.md
new file mode 100644
index 000000000..45a2f70e2
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/execute-queries/_index.md
@@ -0,0 +1,17 @@
+---
+title: Execute queries
+seotitle: Different ways to query InfluxDB
+description: There are multiple ways to query data from InfluxDB including the InfluxDB UI, CLI, and API.
+weight: 103
+menu:
+  influxdb_2_6:
+    name: Execute queries
+    parent: Query data
+influxdb/v2.6/tags: [query]
+---
+
+There are multiple ways to execute queries with InfluxDB. Choose from the following options:
+
+{{< children >}}
diff --git a/content/influxdb/v2.6/query-data/execute-queries/data-explorer.md b/content/influxdb/v2.6/query-data/execute-queries/data-explorer.md
new file mode 100644
index 000000000..dfd1cb1a8
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/execute-queries/data-explorer.md
@@ -0,0 +1,101 @@
+---
+title: Query in Data Explorer
+description: >
+  Query InfluxDB using the InfluxDB user interface (UI) Data Explorer. Discover how to query data in InfluxDB 2.6 using the InfluxDB UI.
+aliases:
+  - /influxdb/v2.6/visualize-data/explore-metrics/
+weight: 201
+menu:
+  influxdb_2_6:
+    name: Query with Data Explorer
+    parent: Execute queries
+influxdb/v2.6/tags: [query]
+---
+
+Build, execute, and visualize your queries in InfluxDB UI's **Data Explorer**.
+
+![Data Explorer with Flux](/img/influxdb/2-0-data-explorer.png)
+
+Move seamlessly between using the Flux builder or templates and manually editing the query.
+Choose between [visualization types](/influxdb/v2.6/visualize-data/visualization-types/) for your query.
+
+## Query data with Flux and the Data Explorer
+
+Flux is a functional data scripting language designed for querying,
+analyzing, and acting on time series data.
+See [Get started with Flux](/influxdb/v2.6/query-data/get-started) to learn more about Flux.
+
+1. In the navigation menu on the left, click **Data Explorer**.
+
+    {{< nav-icon "data-explorer" >}}
+
+2. Use the Flux builder in the bottom panel to create a Flux query:
+   - Select a bucket to define your data source or select `+ Create Bucket` to add a new bucket.
+   - Edit your time range with the [time range option](#select-time-range) in the dropdown menu.
+   - Add filters to narrow your data by selecting attributes or columns in the dropdown menu.
+   - Select **Group** from the **Filter** dropdown menu to group data into tables. For more about how grouping data in Flux works, see [Group data](/influxdb/v2.6/query-data/flux/group-data/).
+3. Alternatively, click **Script Editor** to manually edit the query.
+   To switch back to the query builder, click **Query Builder**. Note that your updates from the Script Editor will not be saved.
+4. Use the **Functions** list to review the available Flux functions.
+   Click a function from the list to add it to your query.
+5. Click **Submit** (or press `Control+Enter`) to run your query. You can then preview your graph in the pane above.
+   To cancel your query while it's running, click **Cancel**.
+6. To work on multiple queries at once, click the {{< icon "plus" >}} icon to add another tab.
+   - Click the eye icon on a tab to hide or show a query's visualization.
+   - Click the name of the query in the tab to rename it.
+
+## Visualize your query
+
+- Select an available [visualization type](/influxdb/v2.6/visualize-data/visualization-types/) from the dropdown menu in the upper-left:
+
+    {{< img-hd src="/img/influxdb/2-0-visualizations-dropdown.png" title="Visualization dropdown" />}}
+
+## Control your dashboard cell
+
+To open the cell editor overlay, click the gear icon in the upper right of a cell and select **Configure**.
+
+### View raw data
+
+Toggle the **View Raw Data** {{< icon "toggle" >}} option to see your data in table format instead of a graph. Scroll through raw data using arrows, or click page numbers to find specific tables. [Group keys](/influxdb/v2.6/reference/glossary/#group-key) and [data types](/influxdb/v2.6/reference/glossary/#data-type) are easily identifiable at the top of each column underneath the headings. Use this option when data can't be visualized using a visualization type.
+
+    {{< img-hd src="/img/influxdb/cloud-controls-view-raw-data.png" alt="View raw data" />}}
+
+### Save as CSV
+
+Click the CSV icon to save the cell's contents as a CSV file.
+
+### Manually refresh dashboard
+
+Click the refresh button ({{< icon "refresh" >}}) to manually refresh the dashboard's data.
+
+### Select time range
+
+1. Select from the time range options in the dropdown menu.
+
+    {{< img-hd src="/img/influxdb/2-0-controls-time-range.png" alt="Select time range" />}}
+
+2. Select **Custom Time Range** to enter a custom time range with precision up to nanoseconds.
+   The default time range is 5m.
+
+> The custom time range uses the selected timezone (local time or UTC).
+
+### Query Builder or Script Editor
+
+Click **Query Builder** to use the builder to create a Flux query. Click **Script Editor** to manually edit the query.
+
+#### Keyboard shortcuts
+
+In **Script Editor** mode, the following keyboard shortcuts are available:
+
+| Key | Description |
+|--------------------------------|---------------------------------------------|
+| `Control + /` (`⌘ + /` on Mac) | Comment/uncomment current or selected lines |
+| `Control + Enter` | Submit query |
+
+## Save your query as a dashboard cell or task
+
+- Click **Save as** in the upper right, and then:
+  - To add your query to a dashboard, click **Dashboard Cell**.
+  - To save your query as a task, click **Task**.
+  - To save your query as a variable, click **Variable**.
diff --git a/content/influxdb/v2.6/query-data/execute-queries/flux-repl.md b/content/influxdb/v2.6/query-data/execute-queries/flux-repl.md
new file mode 100644
index 000000000..a1658ba45
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/execute-queries/flux-repl.md
@@ -0,0 +1,19 @@
+---
+title: Query in the Flux REPL
+description: Query InfluxDB using the Flux REPL. Discover how to query data in InfluxDB 2.6 using the Flux REPL.
+weight: 203
+menu:
+  influxdb_2_6:
+    name: Query in the Flux REPL
+    parent: Execute queries
+influxdb/v2.6/tags: [query]
+---
+
+The [Flux REPL](/influxdb/v2.6/tools/flux-repl/) starts an interactive
+Read-Eval-Print Loop (REPL) where you can write and execute Flux queries.
+
+```sh
+./flux repl
+```
+
+For more information, see [Use the Flux REPL](/influxdb/v2.6/tools/flux-repl/).
diff --git a/content/influxdb/v2.6/query-data/execute-queries/influx-api.md b/content/influxdb/v2.6/query-data/execute-queries/influx-api.md
new file mode 100644
index 000000000..8d67079e1
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/execute-queries/influx-api.md
@@ -0,0 +1,111 @@
+---
+title: Query with the InfluxDB API
+description: Query InfluxDB with the InfluxDB API. Discover how to query data in InfluxDB 2.6 using the InfluxDB API.
+weight: 202
+menu:
+  influxdb_2_6:
+    name: Query with the InfluxDB API
+    parent: Execute queries
+influxdb/v2.6/tags: [query]
+---
+
+The [InfluxDB v2 API](/influxdb/v2.6/reference/api) provides a programmatic interface for all interactions with InfluxDB.
+To query InfluxDB {{< current-version >}}, do one of the following:
+
+- Send a Flux query request to the [`/api/v2/query`](/influxdb/v2.6/api/#operation/PostQuery) endpoint.
+- Send an InfluxQL query request to the [/query 1.x compatibility API](/influxdb/v2.6/reference/api/influxdb-1x/query/).
+
+In your request, set the following:
+
+- Your organization via the `org` or `orgID` URL parameters.
+- `Authorization` header to `Token ` + your API token.
+- `Accept` header to `application/csv`.
+- `Content-type` header to `application/vnd.flux` (Flux only) or `application/json` (Flux or InfluxQL).
+- Your Flux or InfluxQL query as the request's raw data.
+
+{{% note %}}
+#### Use gzip to compress the query response
+
+To compress the query response, set the `Accept-Encoding` header to `gzip`.
+This saves network bandwidth, but increases server-side load.
+
+We recommend only using gzip compression on responses that are larger than 1.4 KB.
+If the response is smaller than 1.4 KB, gzip encoding still produces a response of
+about 1.4 KB, regardless of the uncompressed response size.
+1500 bytes (~1.4 KB) is the maximum transmission unit (MTU) size for the public
+network and is the largest packet size allowed at the network layer.
+{{% /note %}}

+#### Flux - Example query request
+
+Below is an example `curl` request that sends a Flux query to InfluxDB {{< current-version >}}:
+
+{{< code-tabs-wrapper >}}
+{{% code-tabs %}}
+[Without compression](#)
+[With compression](#)
+{{% /code-tabs %}}
+
+{{% code-tab-content %}}
+```bash
+curl --request POST \
+  http://localhost:8086/api/v2/query?orgID=INFLUX_ORG_ID \
+  --header 'Authorization: Token INFLUX_TOKEN' \
+  --header 'Accept: application/csv' \
+  --header 'Content-type: application/vnd.flux' \
+  --data 'from(bucket:"example-bucket")
+    |> range(start: -12h)
+    |> filter(fn: (r) => r._measurement == "example-measurement")
+    |> aggregateWindow(every: 1h, fn: mean)'
+```
+{{% /code-tab-content %}}
+
+{{% code-tab-content %}}
+```bash
+curl --request POST \
+  http://localhost:8086/api/v2/query?orgID=INFLUX_ORG_ID \
+  --header 'Authorization: Token INFLUX_TOKEN' \
+  --header 'Accept: application/csv' \
+  --header 'Content-type: application/vnd.flux' \
+  --header 'Accept-Encoding: gzip' \
+  --data 'from(bucket:"example-bucket")
+    |> range(start: -12h)
+    |> filter(fn: (r) => r._measurement == "example-measurement")
+    |> aggregateWindow(every: 1h, fn: mean)'
+```
+{{% /code-tab-content
%}}
+{{< /code-tabs-wrapper >}}
+
+#### InfluxQL - Example query request
+
+Below is an example `curl` request that sends an InfluxQL query to InfluxDB {{< current-version >}}:
+
+{{< code-tabs-wrapper >}}
+{{% code-tabs %}}
+[Without compression](#)
+[With compression](#)
+{{% /code-tabs %}}
+
+{{% code-tab-content %}}
+```bash
+curl --get "http://localhost:8086/query?orgID=INFLUX_ORG_ID&db=example-db&rp=example-rp" \
+  --header 'Authorization: Token INFLUX_TOKEN' \
+  --header 'Accept: application/csv' \
+  --header 'Content-type: application/json' \
+  --data-urlencode "q=SELECT used_percent FROM example-db.example-rp.example-measurement WHERE \"host\"='host1'"
```
+{{% /code-tab-content %}}
+
+{{% code-tab-content %}}
+```bash
+curl --get "http://localhost:8086/query?orgID=INFLUX_ORG_ID&db=example-db&rp=example-rp" \
+  --header 'Authorization: Token INFLUX_TOKEN' \
+  --header 'Accept: application/csv' \
+  --header 'Content-type: application/json' \
+  --header 'Accept-Encoding: gzip' \
+  --data-urlencode "q=SELECT used_percent FROM example-db.example-rp.example-measurement WHERE \"host\"='host1'"
+```
+{{% /code-tab-content %}}
+{{< /code-tabs-wrapper >}}
+
+InfluxDB returns the query results in [annotated CSV](/influxdb/v2.6/reference/syntax/annotated-csv/).
diff --git a/content/influxdb/v2.6/query-data/execute-queries/influx-query.md b/content/influxdb/v2.6/query-data/execute-queries/influx-query.md
new file mode 100644
index 000000000..5a50cd16e
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/execute-queries/influx-query.md
@@ -0,0 +1,42 @@
+---
+title: Use the influx query command
+description: Query InfluxDB using the influx CLI. Discover how to query data in InfluxDB 2.6 using `influx query`.
+weight: 204
+menu:
+  influxdb_2_6:
+    name: Use the influx CLI
+    parent: Execute queries
+influxdb/v2.6/tags: [query]
+related:
+  - /influxdb/v2.6/reference/cli/influx/query/
+---
+
+Use the [`influx query` command](/influxdb/v2.6/reference/cli/influx/query) to query data in InfluxDB using Flux.
+Pass Flux queries to the command as either a file or via stdin.
+
+###### Run a query from a file
+
+```bash
+influx query --file /path/to/query.flux
+```
+
+###### Pass raw Flux via stdin pipe
+
+```bash
+influx query - # <return> to open the pipe
+
+data = from(bucket: "example-bucket") |> range(start: -10m) # ...
+# Linux & macOS: <ctrl-d> to close the pipe and submit the command
+# Windows: <enter>, then <ctrl-d>, then <enter> to close the pipe and submit the command
+```
+
+{{% note %}}
+#### Remove unnecessary columns in large datasets
+When using the `influx query` command to query and download large datasets,
+drop columns such as `_start` and `_stop` to optimize the download file size.
+
+```js
+// ...
+    |> drop(columns: ["_start", "_stop"])
+```
+{{% /note %}}
diff --git a/content/influxdb/v2.6/query-data/execute-queries/query-sample-data.md b/content/influxdb/v2.6/query-data/execute-queries/query-sample-data.md
new file mode 100644
index 000000000..1b4f86c50
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/execute-queries/query-sample-data.md
@@ -0,0 +1,88 @@
+---
+title: Query sample data
+description: >
+  Explore InfluxDB OSS with our sample data buckets.
+menu:
+  influxdb_2_6:
+    name: Query with sample data
+    parent: Execute queries
+weight: 210
+---
+
+Use **InfluxDB OSS** sample datasets to quickly access data that lets you explore and familiarize yourself with InfluxDB OSS without writing or loading your own data.
+
+- [Choose sample data](#choose-sample-data)
+- [Explore sample data](#explore-sample-data)
+- [Create sample data dashboards](#create-sample-data-dashboards)
+
+{{% note %}}
+#### Network bandwidth
+
+Each execution of `sample.data()` downloads the specified dataset from **Amazon S3**.
+If using [InfluxDB Cloud](/influxdb/cloud/) or a hosted InfluxDB OSS instance,
+you may see additional network bandwidth costs when using this function.
+Approximate sample dataset sizes are listed for each [sample dataset](/influxdb/v2.6/reference/sample-data/#sample-datasets) and in the output of [`sample.list()`](/influxdb/v2.6/reference/flux/stdlib/influxdb-sample/list/).
+
+{{% /note %}}
+
+## Choose sample data
+
+1. Choose from the following sample datasets:
+   - **Air sensor sample data**: Explore, visualize, and monitor humidity, temperature, and carbon monoxide levels in the air.
+   - **Bird migration sample data**: Explore, visualize, and monitor the latitude and longitude of bird migration patterns.
+   - **NOAA NDBC sample data**: Explore, visualize, and monitor NDBC's observations from their buoys. This data includes air temperature, wind speed, and more from specific locations.
+   - **NOAA water sample data**: Explore, visualize, and monitor temperature, water level, pH, and quality from specific locations.
+   - **USGS Earthquake data**: Explore, visualize, and monitor earthquake monitoring data. This data includes alerts, cdi, quarry blast, magnitude, and more.
+2. Do one of the following to download sample data:
+   - [Add sample data with community templates](#add-sample-data-with-community-templates)
+   - [Add sample data using the InfluxDB UI](#add-sample-data)
+
+### Add sample data with community templates
+
+1. Visit the **InfluxDB templates page** in the InfluxDB OSS UI. Click **Settings** > **Templates** in the navigation menu on the left.
+
+   {{< nav-icon "settings" >}}
+
+2. 
Paste the Sample Data community template URL in the **resource manifest file** field:
+
+   ```
+   https://github.com/influxdata/community-templates/blob/master/sample-data/sample-data.yml
+   ```
+
+## Explore sample data
+Use the [Data Explorer](/influxdb/v2.6/visualize-data/explore-metrics/)
+to query and visualize data in sample data buckets.
+
+In the navigation menu on the left, click **Data Explorer**.
+
+{{< nav-icon "explore" >}}
+
+### Add sample data
+
+1. In the navigation menu on the left, click **Data (Load Data)** > **Buckets**.
+
+   {{< nav-icon "data" >}}
+
+2. Click **{{< icon "plus" >}} Create bucket**, and then name your bucket. The bucket will appear in your list of buckets.
+3. View the [sample datasets document](/influxdb/v2.6/reference/sample-data/#sample-datasets) and choose a sample dataset to query.
+4. Copy the `sample.data()` function listed underneath.
+5. Click **Explore** in the navigation menu on the left, click your bucket, and then click **Script Editor**.
+6. Paste the `sample.data()` function.
+7. Click **Submit** to run the query.
+
+For more information about querying in the Script Editor, see how to [Query data with Flux and the Data Explorer](/influxdb/v2.6/query-data/execute-queries/data-explorer/#query-data-with-flux-and-the-data-explorer).
+
+## Create sample data dashboards
+
+After adding a sample data bucket, create a dashboard specific to the sample dataset:
+
+1. Click **Boards (Dashboards)** in the navigation menu on the left.
+
+   {{< nav-icon "dashboards" >}}
+
+2. Click **Create Dashboard > New Dashboard**, and name the dashboard after your bucket.
+3. Click **Add Cell**, and select your sample data bucket.
+4. Click **Script Editor**.
+5. Copy and paste the `sample.data()` function into the script editor.
+6. Click **Submit** to run the query.
+7. Define the variables of your sample data.
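
+The `sample.data()` function you copy and paste in the steps above comes from the Flux `influxdata/influxdb/sample` package. As a minimal sketch, assuming the `airSensor` dataset and the `co` field (substitute the set and field you chose), the query might look like:
+
+```js
+import "influxdata/influxdb/sample"
+
+// Download the air sensor sample dataset and filter it to one field
+sample.data(set: "airSensor")
+    |> filter(fn: (r) => r._field == "co")
+```
+
+To list all available set names and their approximate download sizes, run `sample.list()` in the Script Editor.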
diff --git a/content/influxdb/v2.6/query-data/flux/_index.md b/content/influxdb/v2.6/query-data/flux/_index.md
new file mode 100644
index 000000000..0803a150b
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/flux/_index.md
@@ -0,0 +1,35 @@
+---
+title: Query data with Flux
+description: Guides that walk through both common and complex queries and use cases for Flux.
+weight: 102
+influxdb/v2.6/tags: [flux, query]
+menu:
+  influxdb_2_6:
+    name: Query with Flux
+    parent: Query data
+aliases:
+  - /influxdb/v2.6/query-data/guides/
+---
+
+The following guides walk through both common and complex queries and use cases for Flux.
+
+{{% note %}}
+#### Example data variable
+Many of the examples provided in the following guides use a `data` variable,
+which represents a basic query that filters data by measurement and field.
+`data` is defined as:
+
+```js
+data = from(bucket: "example-bucket")
+    |> range(start: -1h)
+    |> filter(fn: (r) => r._measurement == "example-measurement" and r._field == "example-field")
+```
+{{% /note %}}
+
+## Flux query guides
+
+{{< children type="anchored-list" pages="all" >}}
+
+---
+
+{{< children pages="all" readmore=true hr=true >}}
diff --git a/content/influxdb/v2.6/query-data/flux/calculate-percentages.md b/content/influxdb/v2.6/query-data/flux/calculate-percentages.md
new file mode 100644
index 000000000..b69021793
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/flux/calculate-percentages.md
@@ -0,0 +1,211 @@
+---
+title: Calculate percentages with Flux
+list_title: Calculate percentages
+description: >
+  Use [`pivot()` or `join()`](/influxdb/v2.6/query-data/flux/mathematic-operations/#pivot-vs-join)
+  and the `map()` function to align operand values into rows and calculate a percentage.
+menu: + influxdb_2_6: + name: Calculate percentages + parent: Query with Flux +weight: 209 +aliases: + - /influxdb/v2.6/query-data/guides/calculate-percentages/ +related: + - /influxdb/v2.6/query-data/flux/mathematic-operations + - /{{< latest "flux" >}}/stdlib/universe/map + - /{{< latest "flux" >}}/stdlib/universe/pivot + - /{{< latest "flux" >}}/stdlib/universe/join +list_query_example: percentages +--- + +Calculating percentages from queried data is a common use case for time series data. +To calculate a percentage in Flux, operands must be in each row. +Use `map()` to re-map values in the row and calculate a percentage. + +**To calculate percentages** + +1. Use [`from()`](/{{< latest "flux" >}}/stdlib/influxdata/influxdb/from/), + [`range()`](/{{< latest "flux" >}}/stdlib/universe/range/) and + [`filter()`](/{{< latest "flux" >}}/stdlib/universe/filter/) to query operands. +2. Use [`pivot()` or `join()`](/influxdb/v2.6/query-data/flux/mathematic-operations/#pivot-vs-join) + to align operand values into rows. +3. Use [`map()`](/{{< latest "flux" >}}/stdlib/universe/map/) + to divide the numerator operand value by the denominator operand value and multiply by 100. + +{{% note %}} +The following examples use `pivot()` to align operands into rows because +`pivot()` works in most cases and is more performant than `join()`. +_See [Pivot vs join](/influxdb/v2.6/query-data/flux/mathematic-operations/#pivot-vs-join)._ +{{% /note %}} + +```js +from(bucket: "example-bucket") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "m1" and r._field =~ /field[1-2]/ ) + |> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value") + |> map(fn: (r) => ({ r with _value: r.field1 / r.field2 * 100.0 })) +``` + +## GPU monitoring example +The following example queries data from the gpu-monitor bucket and calculates the +percentage of GPU memory used over time. 
+Data includes the following: + +- **`gpu` measurement** +- **`mem_used` field**: used GPU memory in bytes +- **`mem_total` field**: total GPU memory in bytes + +### Query mem_used and mem_total fields +```js +from(bucket: "gpu-monitor") + |> range(start: 2020-01-01T00:00:00Z) + |> filter(fn: (r) => r._measurement == "gpu" and r._field =~ /mem_/) +``` + +###### Returns the following stream of tables: + +| _time | _measurement | _field | _value | +|:----- |:------------:|:------: | ------: | +| 2020-01-01T00:00:00Z | gpu | mem_used | 2517924577 | +| 2020-01-01T00:00:10Z | gpu | mem_used | 2695091978 | +| 2020-01-01T00:00:20Z | gpu | mem_used | 2576980377 | +| 2020-01-01T00:00:30Z | gpu | mem_used | 3006477107 | +| 2020-01-01T00:00:40Z | gpu | mem_used | 3543348019 | +| 2020-01-01T00:00:50Z | gpu | mem_used | 4402341478 | + +

+ +| _time | _measurement | _field | _value | +|:----- |:------------:|:------: | ------: | +| 2020-01-01T00:00:00Z | gpu | mem_total | 8589934592 | +| 2020-01-01T00:00:10Z | gpu | mem_total | 8589934592 | +| 2020-01-01T00:00:20Z | gpu | mem_total | 8589934592 | +| 2020-01-01T00:00:30Z | gpu | mem_total | 8589934592 | +| 2020-01-01T00:00:40Z | gpu | mem_total | 8589934592 | +| 2020-01-01T00:00:50Z | gpu | mem_total | 8589934592 | + +### Pivot fields into columns +Use `pivot()` to pivot the `mem_used` and `mem_total` fields into columns. +Output includes `mem_used` and `mem_total` columns with values for each corresponding `_time`. + +```js +// ... + |> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value") +``` + +###### Returns the following: + +| _time | _measurement | mem_used | mem_total | +|:----- |:------------:| --------: | ---------: | +| 2020-01-01T00:00:00Z | gpu | 2517924577 | 8589934592 | +| 2020-01-01T00:00:10Z | gpu | 2695091978 | 8589934592 | +| 2020-01-01T00:00:20Z | gpu | 2576980377 | 8589934592 | +| 2020-01-01T00:00:30Z | gpu | 3006477107 | 8589934592 | +| 2020-01-01T00:00:40Z | gpu | 3543348019 | 8589934592 | +| 2020-01-01T00:00:50Z | gpu | 4402341478 | 8589934592 | + +### Map new values +Each row now contains the values necessary to calculate a percentage. +Use `map()` to re-map values in each row. +Divide `mem_used` by `mem_total` and multiply by 100 to return the percentage. + +{{% note %}} +To return a precise float percentage value that includes decimal points, the example +below casts integer field values to floats and multiplies by a float value (`100.0`). +{{% /note %}} + +```js +// ... 
+ |> map( + fn: (r) => ({ + _time: r._time, + _measurement: r._measurement, + _field: "mem_used_percent", + _value: float(v: r.mem_used) / float(v: r.mem_total) * 100.0 + }), + ) +``` +##### Query results: + +| _time | _measurement | _field | _value | +|:----- |:------------:|:------: | ------: | +| 2020-01-01T00:00:00Z | gpu | mem_used_percent | 29.31 | +| 2020-01-01T00:00:10Z | gpu | mem_used_percent | 31.37 | +| 2020-01-01T00:00:20Z | gpu | mem_used_percent | 30.00 | +| 2020-01-01T00:00:30Z | gpu | mem_used_percent | 35.00 | +| 2020-01-01T00:00:40Z | gpu | mem_used_percent | 41.25 | +| 2020-01-01T00:00:50Z | gpu | mem_used_percent | 51.25 | + +### Full query +```js +from(bucket: "gpu-monitor") + |> range(start: 2020-01-01T00:00:00Z) + |> filter(fn: (r) => r._measurement == "gpu" and r._field =~ /mem_/ ) + |> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value") + |> map( + fn: (r) => ({ + _time: r._time, + _measurement: r._measurement, + _field: "mem_used_percent", + _value: float(v: r.mem_used) / float(v: r.mem_total) * 100.0 + }), + ) +``` + +## Examples + +#### Calculate percentages using multiple fields +```js +from(bucket: "example-bucket") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "example-measurement") + |> filter(fn: (r) => r._field == "used_system" or r._field == "used_user" or r._field == "total") + |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value") + |> map( + fn: (r) => ({ + r with _value: float(v: r.used_system + r.used_user) / float(v: r.total) * 100.0 + }), + ) +``` + +#### Calculate percentages using multiple measurements + +1. Ensure measurements are in the same [bucket](/influxdb/v2.6/reference/glossary/#bucket). +2. Use `filter()` to include data from both measurements. +3. Use `group()` to ungroup data and return a single table. +4. Use `pivot()` to pivot fields into columns. +5. Use `map()` to re-map rows and perform the percentage calculation. 
+ + +```js +from(bucket: "example-bucket") + |> range(start: -1h) + |> filter(fn: (r) => (r._measurement == "m1" or r._measurement == "m2") and (r._field == "field1" or r._field == "field2")) + |> group() + |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value") + |> map(fn: (r) => ({r with _value: r.field1 / r.field2 * 100.0})) +``` + +#### Calculate percentages using multiple data sources +```js +import "sql" +import "influxdata/influxdb/secrets" + +pgUser = secrets.get(key: "POSTGRES_USER") +pgPass = secrets.get(key: "POSTGRES_PASSWORD") +pgHost = secrets.get(key: "POSTGRES_HOST") + +t1 = sql.from( + driverName: "postgres", + dataSourceName: "postgresql://${pgUser}:${pgPass}@${pgHost}", + query: "SELECT id, name, available FROM example_table", +) + +t2 = from(bucket: "example-bucket") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "example-measurement" and r._field == "example-field") + +join(tables: {t1: t1, t2: t2}, on: ["id"]) + |> map(fn: (r) => ({r with _value: r._value_t2 / r.available_t1 * 100.0})) +``` diff --git a/content/influxdb/v2.6/query-data/flux/conditional-logic.md b/content/influxdb/v2.6/query-data/flux/conditional-logic.md new file mode 100644 index 000000000..a00f5730d --- /dev/null +++ b/content/influxdb/v2.6/query-data/flux/conditional-logic.md @@ -0,0 +1,224 @@ +--- +title: Query using conditional logic +seotitle: Query using conditional logic in Flux +list_title: Conditional logic +description: > + This guide describes how to use Flux conditional expressions, such as `if`, + `else`, and `then`, to query and transform data. 
**Flux evaluates statements from left to right and stops evaluating once a condition matches.**
+influxdb/v2.6/tags: [conditionals, flux]
+menu:
+  influxdb_2_6:
+    name: Conditional logic
+    parent: Query with Flux
+weight: 220
+aliases:
+  - /influxdb/v2.6/query-data/guides/conditional-logic/
+related:
+  - /influxdb/v2.6/query-data/flux/query-fields/
+  - /{{< latest "flux" >}}/stdlib/universe/filter/
+  - /{{< latest "flux" >}}/stdlib/universe/map/
+  - /{{< latest "flux" >}}/stdlib/universe/reduce/
+list_code_example: |
+  ```js
+  if color == "green" then "008000" else "ffffff"
+  ```
+---
+
+Flux provides `if`, `then`, and `else` conditional expressions that allow for powerful and flexible Flux queries.
+
+If you're just getting started with Flux queries, check out the following:
+
+- [Get started with Flux](/{{< latest "flux" >}}/get-started/) for a conceptual overview of Flux and parts of a Flux query.
+- [Execute queries](/influxdb/v2.6/query-data/execute-queries/) to discover a variety of ways to run your queries.
+
+##### Conditional expression syntax
+```js
+// Pattern
+if <condition> then <action> else <alternative>
+
+// Example
+if color == "green" then "008000" else "ffffff"
+```
+
+Conditional expressions are most useful in the following contexts:
+
+- When defining variables.
+- When using functions that operate on a single row at a time
+  ([`filter()`](/{{< latest "flux" >}}/stdlib/universe/filter/),
+  [`map()`](/{{< latest "flux" >}}/stdlib/universe/map/),
+  [`reduce()`](/{{< latest "flux" >}}/stdlib/universe/reduce)).
+
+## Evaluating conditional expressions
+
+Flux evaluates statements in order and stops evaluating once a condition matches.
+ +For example, given the following statement: + +```js +if r._value > 95.0000001 and r._value <= 100.0 then + "critical" +else if r._value > 85.0000001 and r._value <= 95.0 then + "warning" +else if r._value > 70.0000001 and r._value <= 85.0 then + "high" +else + "normal" +``` + +When `r._value` is 96, the output is "critical" and the remaining conditions are not evaluated. + +## Examples + +- [Conditionally set the value of a variable](#conditionally-set-the-value-of-a-variable) +- [Create conditional filters](#create-conditional-filters) +- [Conditionally transform column values with map()](#conditionally-transform-column-values-with-map) +- [Conditionally increment a count with reduce()](#conditionally-increment-a-count-with-reduce) + +### Conditionally set the value of a variable +The following example sets the `overdue` variable based on the +`dueDate` variable's relation to `now()`. + +```js +dueDate = 2019-05-01 +overdue = if dueDate < now() then true else false +``` + +### Create conditional filters +The following example uses an example `metric` [dashboard variable](/influxdb/v2.6/visualize-data/variables/) +to change how the query filters data. +`metric` has three possible values: + +- Memory +- CPU +- Disk + +```js +from(bucket: "example-bucket") + |> range(start: -1h) + |> filter( + fn: (r) => if v.metric == "Memory" then + r._measurement == "mem" and r._field == "used_percent" + else if v.metric == "CPU" then + r._measurement == "cpu" and r._field == "usage_user" + else if v.metric == "Disk" then + r._measurement == "disk" and r._field == "used_percent" + else + r._measurement != "", + ) +``` + + +### Conditionally transform column values with map() +The following example uses the [`map()` function](/{{< latest "flux" >}}/stdlib/universe/map/) +to conditionally transform column values. +It sets the `level` column to a specific string based on `_value` column. 
+ +{{< code-tabs-wrapper >}} +{{% code-tabs %}} +[No Comments](#) +[Comments](#) +{{% /code-tabs %}} +{{% code-tab-content %}} +```js +from(bucket: "example-bucket") + |> range(start: -5m) + |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent") + |> map( + fn: (r) => ({r with + level: if r._value >= 95.0000001 and r._value <= 100.0 then + "critical" + else if r._value >= 85.0000001 and r._value <= 95.0 then + "warning" + else if r._value >= 70.0000001 and r._value <= 85.0 then + "high" + else + "normal", + }), + ) +``` +{{% /code-tab-content %}} +{{% code-tab-content %}} +```js +from(bucket: "example-bucket") + |> range(start: -5m) + |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent") + |> map( + fn: (r) => ({ + // Retain all existing columns in the mapped row + r with + // Set the level column value based on the _value column + level: if r._value >= 95.0000001 and r._value <= 100.0 then + "critical" + else if r._value >= 85.0000001 and r._value <= 95.0 then + "warning" + else if r._value >= 70.0000001 and r._value <= 85.0 then + "high" + else + "normal", + }), + ) +``` + +{{% /code-tab-content %}} +{{< /code-tabs-wrapper >}} + +### Conditionally increment a count with reduce() +The following example uses the [`aggregateWindow()`](/{{< latest "flux" >}}/stdlib/universe/aggregatewindow/) +and [`reduce()`](/{{< latest "flux" >}}/stdlib/universe/reduce/) +functions to count the number of records in every five minute window that exceed a defined threshold. 
+ +{{< code-tabs-wrapper >}} +{{% code-tabs %}} +[No Comments](#) +[Comments](#) +{{% /code-tabs %}} +{{% code-tab-content %}} +```js +threshold = 65.0 + +data = from(bucket: "example-bucket") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent") + |> aggregateWindow( + every: 5m, + fn: (column, tables=<-) => tables + |> reduce( + identity: {above_threshold_count: 0.0}, + fn: (r, accumulator) => ({ + above_threshold_count: if r._value >= threshold then + accumulator.above_threshold_count + 1.0 + else + accumulator.above_threshold_count + 0.0, + }), + ), + ) +``` +{{% /code-tab-content %}} +{{% code-tab-content %}} +```js +threshold = 65.0 + +from(bucket: "example-bucket") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent") + // Aggregate data into 5 minute windows using a custom reduce() function + |> aggregateWindow( + every: 5m, + // Use a custom function in the fn parameter. + // The aggregateWindow fn parameter requires 'column' and 'tables' parameters. + fn: (column, tables=<-) => tables + |> reduce( + identity: {above_threshold_count: 0.0}, + fn: (r, accumulator) => ({ + // Conditionally increment above_threshold_count if + // r.value exceeds the threshold + above_threshold_count: if r._value >= threshold then + accumulator.above_threshold_count + 1.0 + else + accumulator.above_threshold_count + 0.0, + }), + ), + ) +``` +{{% /code-tab-content %}} +{{< /code-tabs-wrapper >}} diff --git a/content/influxdb/v2.6/query-data/flux/cumulativesum.md b/content/influxdb/v2.6/query-data/flux/cumulativesum.md new file mode 100644 index 000000000..d121b4de7 --- /dev/null +++ b/content/influxdb/v2.6/query-data/flux/cumulativesum.md @@ -0,0 +1,70 @@ +--- +title: Query cumulative sum +seotitle: Query cumulative sum in Flux +list_title: Cumulative sum +description: > + Use the `cumulativeSum()` function to calculate a running total of values. 
+weight: 210 +menu: + influxdb_2_6: + parent: Query with Flux + name: Cumulative sum +influxdb/v2.6/tags: [query, cumulative sum] +related: + - /{{< latest "flux" >}}/stdlib/universe/cumulativesum/ +list_query_example: cumulative_sum +--- + +Use the [`cumulativeSum()` function](/{{< latest "flux" >}}/stdlib/universe/cumulativesum/) +to calculate a running total of values. +`cumulativeSum` sums the values of subsequent records and returns each row updated with the summed total. + +{{< flex >}} +{{% flex-content "half" %}} +**Given the following input table:** + +| _time | _value | +| ----- |:------:| +| 0001 | 1 | +| 0002 | 2 | +| 0003 | 1 | +| 0004 | 3 | +{{% /flex-content %}} +{{% flex-content "half" %}} +**`cumulativeSum()` returns:** + +| _time | _value | +| ----- |:------:| +| 0001 | 1 | +| 0002 | 3 | +| 0003 | 4 | +| 0004 | 7 | +{{% /flex-content %}} +{{< /flex >}} + +{{% note %}} +The examples below use the [example data variable](/influxdb/v2.6/query-data/flux/#example-data-variable). +{{% /note %}} + +##### Calculate the running total of values +```js +data + |> cumulativeSum() +``` + +## Use cumulativeSum() with aggregateWindow() +[`aggregateWindow()`](/{{< latest "flux" >}}/stdlib/universe/aggregatewindow/) +segments data into windows of time, aggregates data in each window into a single +point, then removes the time-based segmentation. +It is primarily used to [downsample data](/influxdb/v2.6/process-data/common-tasks/downsample-data/). + +`aggregateWindow()` expects an aggregate function that returns a single row for each time window. +To use `cumulativeSum()` with `aggregateWindow`, use `sum` in `aggregateWindow()`, +then calculate the running total of the aggregate values with `cumulativeSum()`. 
+ + +```js +data + |> aggregateWindow(every: 5m, fn: sum) + |> cumulativeSum() +``` diff --git a/content/influxdb/v2.6/query-data/flux/custom-functions/_index.md b/content/influxdb/v2.6/query-data/flux/custom-functions/_index.md new file mode 100644 index 000000000..859ac3deb --- /dev/null +++ b/content/influxdb/v2.6/query-data/flux/custom-functions/_index.md @@ -0,0 +1,223 @@ +--- +title: Create custom Flux functions +description: Create your own custom Flux functions to transform and operate on data. +list_title: Custom functions +influxdb/v2.6/tags: [functions, custom, flux] +menu: + influxdb_2_6: + name: Custom functions + parent: Query with Flux +weight: 220 +aliases: + - /influxdb/v2.6/query-data/guides/custom-functions/ +list_code_example: | + ```js + multByX = (tables=<-, x) => tables + |> map(fn: (r) => ({r with _value: r._value * x})) + + data + |> multByX(x: 2.0) + ``` +--- + +Flux's functional syntax lets you create custom functions. +This guide walks through the basics of creating your own function. + +- [Function definition syntax](#function-definition-syntax) +- [Use piped-forward data in a custom function](#use-piped-forward-data-in-a-custom-function) +- [Define parameter defaults](#define-parameter-defaults) +- [Define functions with scoped variables](#define-functions-with-scoped-variables) + +## Function definition syntax +The basic syntax for defining functions in Flux is as follows: + +```js +// Basic function definition syntax +functionName = (functionParameters) => functionOperations +``` + +##### functionName +The name used to call the function in your Flux script. + +##### functionParameters +A comma-separated list of parameters passed into the function and used in its operations. +[Parameter defaults](#define-parameter-defaults) can be defined for each. + +##### functionOperations +Operations and functions that manipulate the input into the desired output. 
+ +#### Basic function examples + +###### Example square function +```js +// Function definition +square = (n) => n * n + +// Function usage +> square(n:3) +9 +``` + +###### Example multiply function +```js +// Function definition +multiply = (x, y) => x * y + +// Function usage +> multiply(x: 2, y: 15) +30 +``` + +## Use piped-forward data in a custom function +Most Flux functions process piped-forward data. +To process piped-forward data, one of the function +parameters must capture the input tables using the `<-` pipe-receive expression. + +In the example below, the `tables` parameter is assigned to the `<-` expression, +which represents all data piped-forward into the function. +`tables` is then piped-forward into other operations in the function definition. + +```js +functionName = (tables=<-) => tables |> functionOperations +``` + +#### Pipe-forwardable function example + +###### Multiply row values by x +The example below defines a `multByX` function that multiplies the `_value` column +of each row in the input table by the `x` parameter. +It uses the [`map()` function](/{{< latest "flux" >}}/stdlib/universe/map) +to modify each `_value`. + +```js +// Function definition +multByX = (tables=<-, x) => tables + |> map(fn: (r) => ({r with _value: r._value * x})) + +// Function usage +from(bucket: "example-bucket") + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent") + |> multByX(x: 2.0) +``` + +## Define parameter defaults +Use the `=` assignment operator to assign a default value to function parameters +in your function definition: + +```js +functionName = (param1=defaultValue1, param2=defaultValue2) => functionOperation +``` + +Defaults are overridden by explicitly defining the parameter in the function call. + +### Example functions with defaults + +#### Get a list of leaders +The example below defines a `leaderBoard` function that returns a limited number +of records sorted by values in specified columns. 
+It uses the [`sort()` function](/{{< latest "flux" >}}/stdlib/universe/sort) +to sort records in either descending or ascending order. +It then uses the [`limit()` function](/{{< latest "flux" >}}/stdlib/universe/limit) +to return a specified number of records from the sorted table. + +```js +// Function definition +leaderBoard = (tables=<-, limit=4, columns=["_value"], desc=true) => tables + |> sort(columns: columns, desc: desc) + |> limit(n: limit) + +// Function usage +// Get the 4 highest scoring players +from(bucket: "example-bucket") + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "player-stats" and r._field == "total-points") + |> leaderBoard() + +// Get the 10 shortest race times +from(bucket: "example-bucket") + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "race-times" and r._field == "elapsed-time") + |> leaderBoard(limit: 10, desc: false) +``` + +## Define functions with scoped variables +To create custom functions with variables scoped to the function, place your +function operations and variables inside of a [block (`{}`)](/influxdb/v2.6/reference/flux/language/blocks/) +and use a `return` statement to return a specific variable. 
+
+```js
+functionName = (functionParameters) => {
+    exampleVar = "foo"
+
+    return exampleVar
+}
+```
+
+### Example functions with scoped variables
+
+- [Return an alert level based on a value](#return-an-alert-level-based-on-a-value)
+- [Convert a HEX color code to a name](#convert-a-hex-color-code-to-a-name)
+
+#### Return an alert level based on a value
+The following function uses conditional logic to return an alert level based on
+a numeric input value:
+
+```js
+alertLevel = (v) => {
+    level = if float(v: v) >= 90.0 then
+        "crit"
+    else if float(v: v) >= 80.0 then
+        "warn"
+    else if float(v: v) >= 65.0 then
+        "info"
+    else
+        "ok"
+
+    return level
+}
+
+alertLevel(v: 87.3)
+// Returns "warn"
+```
+
+#### Convert a HEX color code to a name
+The following function converts a hexadecimal (HEX) color code to the equivalent HTML color name.
+The function uses the [Flux dictionary package](/{{< latest "flux" >}}/stdlib/dict/)
+to create a dictionary of HEX codes and their corresponding names.
+
+```js
+import "dict"
+
+hexName = (hex) => {
+    hexNames = dict.fromList(
+        pairs: [
+            {key: "#00ffff", value: "Aqua"},
+            {key: "#000000", value: "Black"},
+            {key: "#0000ff", value: "Blue"},
+            {key: "#ff00ff", value: "Fuchsia"},
+            {key: "#808080", value: "Gray"},
+            {key: "#008000", value: "Green"},
+            {key: "#00ff00", value: "Lime"},
+            {key: "#800000", value: "Maroon"},
+            {key: "#000080", value: "Navy"},
+            {key: "#808000", value: "Olive"},
+            {key: "#800080", value: "Purple"},
+            {key: "#ff0000", value: "Red"},
+            {key: "#c0c0c0", value: "Silver"},
+            {key: "#008080", value: "Teal"},
+            {key: "#ffffff", value: "White"},
+            {key: "#ffff00", value: "Yellow"},
+        ],
+    )
+    name = dict.get(dict: hexNames, key: hex, default: "No known name")
+
+    return name
+}
+
+hexName(hex: "#000000")
+// Returns "Black"
+
+hexName(hex: "#8b8b8b")
+// Returns "No known name"
+```
diff --git a/content/influxdb/v2.6/query-data/flux/custom-functions/custom-aggregate.md b/content/influxdb/v2.6/query-data/flux/custom-functions/custom-aggregate.md
new file mode 100644
index 000000000..4813f5f5c
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/flux/custom-functions/custom-aggregate.md
@@ -0,0 +1,245 @@
+---
+title: Create custom aggregate functions
+description: Create your own custom aggregate functions in Flux using the `reduce()` function.
+influxdb/v2.6/tags: [functions, custom, flux, aggregates]
+menu:
+  influxdb_2_6:
+    name: Custom aggregate functions
+    parent: Custom functions
+weight: 301
+aliases:
+  - /influxdb/v2.6/query-data/guides/custom-functions/custom-aggregate/
+related:
+  - /{{< latest "flux" >}}/stdlib/universe/reduce/
+---
+
+To aggregate your data, use the Flux
+[aggregate functions](/{{< latest "flux" >}}/function-types#aggregates)
+or create custom aggregate functions using the
+[`reduce()` function](/{{< latest "flux" >}}/stdlib/universe/reduce/).
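+For example, the behavior of the built-in `sum()` aggregate can be sketched as a
+custom `reduce()`-based function (a minimal illustration only, assuming float
+values in the `_value` column):
+
+```js
+// Minimal sketch: reproduce the behavior of sum() with reduce()
+// (assumes float values in the _value column)
+customSum = (tables=<-) => tables
+    |> reduce(fn: (r, accumulator) => ({sum: r._value + accumulator.sum}), identity: {sum: 0.0})
+
+data
+    |> customSum()
+```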
+
+## Aggregate function characteristics
+Aggregate functions all have the same basic characteristics:
+
+- They operate on individual input tables and transform all records into a single record.
+- The output table has the same [group key](/{{< latest "flux" >}}/get-started/data-model/#group-key) as the input table.
+
+## How reduce() works
+The `reduce()` function operates on one row at a time using the function defined in
+the [`fn` parameter](/{{< latest "flux" >}}/stdlib/universe/reduce/#fn).
+The `fn` function maps keys to specific values using two [records](/{{< latest "flux" >}}/data-types/composite/record/)
+specified by the following parameters:
+
+| Parameter | Description |
+|:---------: |:----------- |
+| `r` | A record that represents the current row. |
+| `accumulator` | A record that contains values used in each row's aggregate calculation. |
+
+{{% note %}}
+The `reduce()` function's [`identity` parameter](/{{< latest "flux" >}}/stdlib/universe/reduce/#identity)
+defines the initial `accumulator` record.
+{{% /note %}}
+
+### Example reduce() function
+The following example `reduce()` function produces a sum and product of all values
+in an input table.
+
+```js
+|> reduce(
+    fn: (r, accumulator) => ({
+        sum: r._value + accumulator.sum,
+        product: r._value * accumulator.product
+    }),
+    identity: {sum: 0.0, product: 1.0},
+)
+```
+
+To illustrate how this function works, take this simplified table for example:
+
+| _time | _value |
+|:----- | ------:|
+| 2019-04-23T16:10:49Z | 1.6 |
+| 2019-04-23T16:10:59Z | 2.3 |
+| 2019-04-23T16:11:09Z | 0.7 |
+| 2019-04-23T16:11:19Z | 1.2 |
+| 2019-04-23T16:11:29Z | 3.8 |
+
+###### Input records
+The `fn` function uses the data in the first row to define the `r` record.
+It defines the `accumulator` record using the `identity` parameter.
+
+```js
+r = { _time: 2019-04-23T16:10:49.00Z, _value: 1.6 }
+accumulator = { sum : 0.0, product : 1.0 }
+```
+
+###### Key mappings
+It then uses the `r` and `accumulator` records to populate values in the key mappings:
+```js
+// sum: r._value + accumulator.sum
+sum: 1.6 + 0.0
+
+// product: r._value * accumulator.product
+product: 1.6 * 1.0
+```
+
+###### Output record
+This produces an output record with the following key-value pairs:
+
+```js
+{ sum: 1.6, product: 1.6 }
+```
+
+The function then processes the next row using this **output record** as the `accumulator`.
+
+{{% note %}}
+Because `reduce()` uses the output record as the `accumulator` when processing the next row,
+keys mapped in the `fn` function must match keys in the `identity` and `accumulator` records.
+{{% /note %}}
+
+###### Processing the next row
+```js
+// Input records for the second row
+r = { _time: 2019-04-23T16:10:59.00Z, _value: 2.3 }
+accumulator = { sum : 1.6, product : 1.6 }
+
+// Key mappings for the second row
+sum: 2.3 + 1.6
+product: 2.3 * 1.6
+
+// Output record of the second row
+{ sum: 3.9, product: 3.68 }
+```
+
+It then uses the new output record as the `accumulator` for the next row.
+This cycle continues until all rows in the table are processed.
+
+##### Final output record and table
+After all records in the table are processed, `reduce()` uses the final output record
+to create a transformed table with one row and columns for each mapped key.
+
+###### Final output record
+```js
+{ sum: 9.6, product: 11.74656 }
+```
+
+###### Output table
+| sum | product |
+| --- | -------- |
+| 9.6 | 11.74656 |
+
+{{% note %}}
+#### What happened to the \_time column?
+The `reduce()` function only keeps columns that:
+
+1. Are part of the input table's [group key](/{{< latest "flux" >}}/get-started/data-model/#group-key).
+2. Are explicitly mapped in the `fn` function.
+
+It drops all other columns.
+Because `_time` is not part of the group key and is not mapped in the `fn` function,
+it isn't included in the output table.
+{{% /note %}}
+
+## Custom aggregate function examples
+To create custom aggregate functions, use principles outlined in
+[Creating custom functions](/influxdb/v2.6/query-data/flux/custom-functions)
+and the `reduce()` function to aggregate rows in each input table.
+
+### Create a custom average function
+This example illustrates how to create a function that averages values in a table.
+_This is meant for demonstration purposes only.
+The built-in [`mean()` function](/{{< latest "flux" >}}/stdlib/universe/mean/)
+does the same thing and is much more performant._
+
+{{< code-tabs-wrapper >}}
+{{% code-tabs %}}
+[Comments](#)
+[No Comments](#)
+{{% /code-tabs %}}
+
+{{% code-tab-content %}}
+
+```js
+average = (tables=<-, outputField="average") => tables
+    |> reduce(
+        // Define the initial accumulator record
+        identity: {count: 0.0, sum: 0.0, avg: 0.0},
+        fn: (r, accumulator) => ({
+            // Increment the counter on each reduce loop
+            count: accumulator.count + 1.0,
+            // Add the _value to the existing sum
+            sum: accumulator.sum + r._value,
+            // Divide the existing sum by the existing count for a new average
+            avg: (accumulator.sum + r._value) / (accumulator.count + 1.0),
+        }),
+    )
+    // Drop the sum and the count columns since they are no longer needed
+    |> drop(columns: ["sum", "count"])
+    // Set the _field column of the output table to the value
+    // provided in the outputField parameter
+    |> set(key: "_field", value: outputField)
+    // Rename avg column to _value
+    |> rename(columns: {avg: "_value"})
+```
+{{% /code-tab-content %}}
+
+{{% code-tab-content %}}
+```js
+average = (tables=<-, outputField="average") => tables
+    |> reduce(
+        identity: {count: 0.0, sum: 0.0, avg: 0.0},
+        fn: (r, accumulator) => ({
+            count: accumulator.count + 1.0,
+            sum: accumulator.sum + r._value,
+            avg: (accumulator.sum + r._value) / (accumulator.count + 1.0),
+        }),
+    )
+    |> drop(columns: ["sum", "count"])
+    |> set(key: "_field", value: outputField)
+    |> rename(columns: {avg: "_value"})
+```
+{{% /code-tab-content %}}
+{{< /code-tabs-wrapper >}}
+
+### Aggregate multiple columns
+Built-in aggregate functions only operate on one column.
+Use `reduce()` to create a custom aggregate function that aggregates multiple columns.
+
+The following function expects input tables to have `c1_value` and `c2_value`
+columns and generates an average for each.
+
+```js
+multiAvg = (tables=<-) => tables
+    |> reduce(
+        identity: {
+            count: 1.0,
+            c1_sum: 0.0,
+            c1_avg: 0.0,
+            c2_sum: 0.0,
+            c2_avg: 0.0,
+        },
+        fn: (r, accumulator) => ({
+            count: accumulator.count + 1.0,
+            c1_sum: accumulator.c1_sum + r.c1_value,
+            c1_avg: (accumulator.c1_sum + r.c1_value) / accumulator.count,
+            c2_sum: accumulator.c2_sum + r.c2_value,
+            c2_avg: (accumulator.c2_sum + r.c2_value) / accumulator.count,
+        }),
+    )
+```
+
+### Aggregate gross and net profit
+Use `reduce()` to create a function that aggregates gross and net profit.
+This example expects `profit` and `expenses` columns in the input tables.
+
+```js
+profitSummary = (tables=<-) => tables
+    |> reduce(
+        identity: {gross: 0.0, net: 0.0},
+        fn: (r, accumulator) => ({
+            gross: accumulator.gross + r.profit,
+            net: accumulator.net + r.profit - r.expenses,
+        }),
+    )
+```
diff --git a/content/influxdb/v2.6/query-data/flux/exists.md b/content/influxdb/v2.6/query-data/flux/exists.md
new file mode 100644
index 000000000..4067f10be
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/flux/exists.md
@@ -0,0 +1,115 @@
+---
+title: Check if a value exists
+seotitle: Use Flux to check if a value exists
+list_title: Exists
+description: >
+  Use the Flux `exists` operator to check if a row record contains a column or if
+  that column's value is `null`.
+influxdb/v2.6/tags: [exists] +menu: + influxdb_2_6: + name: Exists + parent: Query with Flux +weight: 220 +aliases: + - /influxdb/v2.6/query-data/guides/exists/ +related: + - /influxdb/v2.6/query-data/flux/query-fields/ + - /{{< latest "flux" >}}/stdlib/universe/filter/ +list_code_example: | + ##### Filter null values + ```js + data + |> filter(fn: (r) => exists r._value) + ``` +--- + +Use the `exists` operator to check if a row record contains a column or if a +column's value is _null_. + +```js +(r) => exists r.column +``` + +If you're just getting started with Flux queries, check out the following: + +- [Get started with Flux](/{{< latest "flux" >}}/get-started/) for a conceptual overview of Flux and parts of a Flux query. +- [Execute queries](/influxdb/v2.6/query-data/execute-queries/) to discover a variety of ways to run your queries. + +Use `exists` with row functions ( +[`filter()`](/{{< latest "flux" >}}/stdlib/universe/filter/), +[`map()`](/{{< latest "flux" >}}/stdlib/universe/map/), +[`reduce()`](/{{< latest "flux" >}}/stdlib/universe/reduce/)) +to check if a row includes a column or if the value for that column is _null_. + +#### Filter null values + +```js +from(bucket: "example-bucket") + |> range(start: -5m) + |> filter(fn: (r) => exists r._value) +``` + +#### Map values based on existence + +```js +from(bucket: "default") + |> range(start: -30s) + |> map( + fn: (r) => ({r with + human_readable: if exists r._value then + "${r._field} is ${string(v: r._value)}." 
+            else
+                "${r._field} has no value.",
+        }),
+    )
+```
+
+#### Ignore null values in a custom aggregate function
+
+```js
+customSumProduct = (tables=<-) => tables
+    |> reduce(
+        identity: {sum: 0.0, product: 1.0},
+        fn: (r, accumulator) => ({r with
+            sum: if exists r._value then
+                r._value + accumulator.sum
+            else
+                accumulator.sum,
+            product: if exists r._value then
+                r._value * accumulator.product
+            else
+                accumulator.product,
+        }),
+    )
+```
+
+#### Check if a statically defined record contains a key
+
+When you use the [record literal syntax](/flux/v0.x/data-types/composite/record/#record-syntax)
+to statically define a record, Flux knows the record type and what keys to expect.
+
+- If the key exists in the static record, `exists` returns `true`.
+- If the key exists in the static record, but has a _null_ value, `exists` returns `false`.
+- If the key does not exist in the static record, because the record type is
+  statically known, `exists` returns an error.
+
+```js
+import "internal/debug"
+
+p = {
+    firstName: "John",
+    lastName: "Doe",
+    age: 42,
+    height: debug.null(type: "int"),
+}
+
+exists p.firstName
+// Returns true
+
+exists p.height
+// Returns false
+
+exists p.hairColor
+// Returns "error: record is missing label hairColor"
+```
diff --git a/content/influxdb/v2.6/query-data/flux/explore-schema.md b/content/influxdb/v2.6/query-data/flux/explore-schema.md
new file mode 100644
index 000000000..6ec848a87
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/flux/explore-schema.md
@@ -0,0 +1,291 @@
+---
+title: Explore your data schema with Flux
+list_title: Explore your schema
+description: >
+  Flux provides functions that let you explore the structure and schema of your
+  data stored in InfluxDB.
+influxdb/v2.6/tags: [schema] +menu: + influxdb_2_6: + name: Explore your schema + parent: Query with Flux +weight: 206 +related: + - /{{< latest "flux" >}}/stdlib/universe/buckets/ + - /{{< latest "flux" >}}/stdlib/schema/measurements + - /{{< latest "flux" >}}/stdlib/schema/fieldkeys + - /{{< latest "flux" >}}/stdlib/schema/measurementfieldkeys + - /{{< latest "flux" >}}/stdlib/schema/tagkeys + - /{{< latest "flux" >}}/stdlib/schema/measurementtagkeys + - /{{< latest "flux" >}}/stdlib/schema/tagvalues + - /{{< latest "flux" >}}/stdlib/schema/measurementtagvalues +list_code_example: | + ```js + import "influxdata/influxdb/schema" + + // List buckets + buckets() + + // List measurements + schema.measurements(bucket: "example-bucket") + + // List field keys + schema.fieldKeys(bucket: "example-bucket") + + // List tag keys + schema.tagKeys(bucket: "example-bucket") + + // List tag values + schema.tagValues(bucket: "example-bucket", tag: "example-tag") + ``` +--- + +Flux provides functions that let you explore the structure and schema of your +data stored in InfluxDB. + +- [List buckets](#list-buckets) +- [List measurements](#list-measurements) +- [List field keys](#list-field-keys) +- [List tag keys](#list-tag-keys) +- [List tag values](#list-tag-values) + +{{% warn %}} +Functions in the `schema` package are not supported in the [Flux REPL](/influxdb/v2.6/tools/repl/). +{{% /warn %}} + +## List buckets +Use [`buckets()`](/{{< latest "flux" >}}/stdlib/universe/buckets/) +to list **buckets in your organization**. 
+ +```js +buckets() +``` + +{{< expand-wrapper >}} +{{% expand "View example `buckets()` output" %}} + +`buckets()` returns a single table with the following columns: + +- **organizationID**: Organization ID +- **name**: Bucket name +- **id**: Bucket ID +- **retentionPolicy**: Retention policy associated with the bucket +- **retentionPeriod**: Retention period in nanoseconds + +| organizationID | name | id | retentionPolicy | retentionPeriod | +| :------------- | :--------------- | :------ | :-------------- | --------------: | +| XooX0x0 | _monitoring | XooX0x1 | | 604800000000000 | +| XooX0x0 | _tasks | XooX0x2 | | 259200000000000 | +| XooX0x0 | example-bucket-1 | XooX0x3 | | 0 | +| XooX0x0 | example-bucket-2 | XooX0x4 | | 0 | +| XooX0x0 | example-bucket-3 | XooX0x5 | | 172800000000000 | + +{{% /expand %}} +{{< /expand-wrapper >}} + +## List measurements +Use [`schema.measurements()`](/{{< latest "flux" >}}/stdlib/influxdata/influxdb/schema/measurements) +to list **measurements in a bucket**. +_By default, this function returns results from the last 30 days._ + +```js +import "influxdata/influxdb/schema" + +schema.measurements(bucket: "example-bucket") +``` + +{{< expand-wrapper >}} +{{% expand "View example `schema.measurements()` output" %}} + +`schema.measurements()` returns a single table with a `_value` column. +Each row contains the name of a measurement. + +| _value | +| :----- | +| m1 | +| m2 | +| m3 | +| m4 | +| m5 | + +{{% /expand %}} +{{< /expand-wrapper >}} + +## List field keys +Use [`schema.fieldKeys`](/{{< latest "flux" >}}/stdlib/influxdata/influxdb/schema/fieldkeys) +to list **field keys in a bucket**. +_By default, this function returns results from the last 30 days._ + +```js +import "influxdata/influxdb/schema" + +schema.fieldKeys(bucket: "example-bucket") +``` + +{{< expand-wrapper >}} +{{% expand "View example `schema.fieldKeys()` output" %}} + +`schema.fieldKeys()` returns a single table with a `_value` column. 
+Each row contains a unique field key from the specified bucket.
+
+| _value |
+| :----- |
+| field1 |
+| field2 |
+| field3 |
+| field4 |
+| field5 |
+
+{{% /expand %}}
+{{< /expand-wrapper >}}
+
+### List fields in a measurement
+Use [`schema.measurementFieldKeys`](/{{< latest "flux" >}}/stdlib/influxdata/influxdb/schema/measurementfieldkeys)
+to list **field keys in a measurement**.
+_By default, this function returns results from the last 30 days._
+
+```js
+import "influxdata/influxdb/schema"
+
+schema.measurementFieldKeys(
+    bucket: "example-bucket",
+    measurement: "example-measurement",
+)
+```
+
+{{< expand-wrapper >}}
+{{% expand "View example `schema.measurementFieldKeys()` output" %}}
+
+`schema.measurementFieldKeys()` returns a single table with a `_value` column.
+Each row contains the name of a unique field key in the specified bucket and measurement.
+
+| _value |
+| :----- |
+| field1 |
+| field2 |
+| field3 |
+| field4 |
+| field5 |
+
+{{% /expand %}}
+{{< /expand-wrapper >}}
+
+## List tag keys
+Use [`schema.tagKeys()`](/{{< latest "flux" >}}/stdlib/influxdata/influxdb/schema/tagkeys)
+to list **tag keys in a bucket**.
+_By default, this function returns results from the last 30 days._
+
+```js
+import "influxdata/influxdb/schema"
+
+schema.tagKeys(bucket: "example-bucket")
+```
+
+{{< expand-wrapper >}}
+{{% expand "View example `schema.tagKeys()` output" %}}
+
+`schema.tagKeys()` returns a single table with a `_value` column.
+Each row contains a unique tag key from the specified bucket.
+
+| _value |
+| :----- |
+| tag1 |
+| tag2 |
+| tag3 |
+| tag4 |
+| tag5 |
+
+{{% /expand %}}
+{{< /expand-wrapper >}}
+
+### List tag keys in a measurement
+Use [`schema.measurementTagKeys`](/{{< latest "flux" >}}/stdlib/influxdata/influxdb/schema/measurementtagkeys)
+to list **tag keys in a measurement**.
+_By default, this function returns results from the last 30 days._ + +```js +import "influxdata/influxdb/schema" + +schema.measurementTagKeys( + bucket: "example-bucket", + measurement: "example-measurement", +) +``` + +{{< expand-wrapper >}} +{{% expand "View example `schema.measurementTagKeys()` output" %}} + +`schema.measurementTagKeys()` returns a single table with a `_value` column. +Each row contains a unique tag key from the specified bucket and measurement. + +| _value | +| :----- | +| tag1 | +| tag2 | +| tag3 | +| tag4 | +| tag5 | + +{{% /expand %}} +{{< /expand-wrapper >}} + +## List tag values +Use [`schema.tagValues()`](/{{< latest "flux" >}}/stdlib/influxdata/influxdb/schema/tagvalues) +to list **tag values for a given tag in a bucket**. +_By default, this function returns results from the last 30 days._ + +```js +import "influxdata/influxdb/schema" + +schema.tagValues(bucket: "example-bucket", tag: "example-tag") +``` + +{{< expand-wrapper >}} +{{% expand "View example `schema.tagValues()` output" %}} + +`schema.tagValues()` returns a single table with a `_value` column. +Each row contains a unique tag value from the specified bucket and tag key. + +| _value | +| :-------- | +| tagValue1 | +| tagValue2 | +| tagValue3 | +| tagValue4 | +| tagValue5 | + +{{% /expand %}} +{{< /expand-wrapper >}} + +### List tag values in a measurement +Use [`schema.measurementTagValues`](/{{< latest "flux" >}}/stdlib/influxdata/influxdb/schema/measurementtagvalues) +to list **tag values for a given tag in a measurement**. +_By default, this function returns results from the last 30 days._ + +```js +import "influxdata/influxdb/schema" + +schema.measurementTagValues( + bucket: "example-bucket", + tag: "example-tag", + measurement: "example-measurement", +) +``` + +{{< expand-wrapper >}} +{{% expand "View example `schema.measurementTagValues()` output" %}} + +`schema.measurementTagValues()` returns a single table with a `_value` column. 
+Each row contains a unique tag value from the specified bucket, measurement,
+and tag key.
+
+| _value    |
+| :-------- |
+| tagValue1 |
+| tagValue2 |
+| tagValue3 |
+| tagValue4 |
+| tagValue5 |
+
+{{% /expand %}}
+{{< /expand-wrapper >}}
diff --git a/content/influxdb/v2.6/query-data/flux/fill.md b/content/influxdb/v2.6/query-data/flux/fill.md
new file mode 100644
index 000000000..72c5edbf9
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/flux/fill.md
@@ -0,0 +1,113 @@
+---
+title: Fill null values in data
+seotitle: Fill null values in data
+list_title: Fill
+description: >
+  Use the `fill()` function to replace _null_ values.
+weight: 210
+menu:
+  influxdb_2_6:
+    parent: Query with Flux
+    name: Fill
+influxdb/v2.6/tags: [query, fill]
+related:
+  - /{{< latest "flux" >}}/stdlib/universe/fill/
+list_query_example: fill_null
+---
+
+Use [`fill()`](/{{< latest "flux" >}}/stdlib/universe/fill/)
+to replace _null_ values with:
+
+- [the previous non-null value](#fill-with-the-previous-value)
+- [a specified value](#fill-with-a-specified-value)
+
+
+```js
+data
+    |> fill(usePrevious: true)
+
+// OR
+
+data
+    |> fill(value: 0.0)
+```
+
+{{% note %}}
+#### Fill empty windows of time
+The `fill()` function **does not** fill empty windows of time.
+It only replaces _null_ values in existing data.
+Filling empty windows of time requires time interpolation
+_(see [influxdata/flux#2428](https://github.com/influxdata/flux/issues/2428))_.
+{{% /note %}}
+
+## Fill with the previous value
+To fill _null_ values with the previous **non-null** value, set the `usePrevious` parameter to `true`.
+
+{{% note %}}
+Values remain _null_ if there is no previous non-null value in the table.
+{{% /note %}} + +```js +data + |> fill(usePrevious: true) +``` + +{{< flex >}} +{{% flex-content %}} +**Given the following input:** + +| _time | _value | +|:----- | ------:| +| 2020-01-01T00:01:00Z | null | +| 2020-01-01T00:02:00Z | 0.8 | +| 2020-01-01T00:03:00Z | null | +| 2020-01-01T00:04:00Z | null | +| 2020-01-01T00:05:00Z | 1.4 | +{{% /flex-content %}} +{{% flex-content %}} +**`fill(usePrevious: true)` returns:** + +| _time | _value | +|:----- | ------:| +| 2020-01-01T00:01:00Z | null | +| 2020-01-01T00:02:00Z | 0.8 | +| 2020-01-01T00:03:00Z | 0.8 | +| 2020-01-01T00:04:00Z | 0.8 | +| 2020-01-01T00:05:00Z | 1.4 | +{{% /flex-content %}} +{{< /flex >}} + +## Fill with a specified value +To fill _null_ values with a specified value, use the `value` parameter to specify the fill value. +_The fill value must match the [data type](/{{< latest "flux" >}}/spec/types/#basic-types) +of the [column](/{{< latest "flux" >}}/stdlib/universe/fill/#column)._ + +```js +data + |> fill(value: 0.0) +``` + +{{< flex >}} +{{% flex-content %}} +**Given the following input:** + +| _time | _value | +|:----- | ------:| +| 2020-01-01T00:01:00Z | null | +| 2020-01-01T00:02:00Z | 0.8 | +| 2020-01-01T00:03:00Z | null | +| 2020-01-01T00:04:00Z | null | +| 2020-01-01T00:05:00Z | 1.4 | +{{% /flex-content %}} +{{% flex-content %}} +**`fill(value: 0.0)` returns:** + +| _time | _value | +|:----- | ------:| +| 2020-01-01T00:01:00Z | 0.0 | +| 2020-01-01T00:02:00Z | 0.8 | +| 2020-01-01T00:03:00Z | 0.0 | +| 2020-01-01T00:04:00Z | 0.0 | +| 2020-01-01T00:05:00Z | 1.4 | +{{% /flex-content %}} +{{< /flex >}} diff --git a/content/influxdb/v2.6/query-data/flux/first-last.md b/content/influxdb/v2.6/query-data/flux/first-last.md new file mode 100644 index 000000000..08c7edb6d --- /dev/null +++ b/content/influxdb/v2.6/query-data/flux/first-last.md @@ -0,0 +1,145 @@ +--- +title: Query first and last values +seotitle: Query first and last values in Flux +list_title: First and last +description: > + Use 
`first()` or `last()` to return the first or last point in an input table.
+weight: 210
+menu:
+  influxdb_2_6:
+    parent: Query with Flux
+    name: First & last
+influxdb/v2.6/tags: [query]
+related:
+  - /{{< latest "flux" >}}/stdlib/universe/first/
+  - /{{< latest "flux" >}}/stdlib/universe/last/
+list_query_example: first_last
+---
+
+Use [`first()`](/{{< latest "flux" >}}/stdlib/universe/first/) or
+[`last()`](/{{< latest "flux" >}}/stdlib/universe/last/) to return the first or
+last record in an input table.
+
+```js
+data
+    |> first()
+
+// OR
+
+data
+    |> last()
+```
+
+{{% note %}}
+By default, InfluxDB returns results sorted by time. However, you can use the
+[`sort()` function](/{{< latest "flux" >}}/stdlib/universe/sort/)
+to change how results are sorted.
+`first()` and `last()` respect the sort order of input data and return records
+based on the order they are received in.
+{{% /note %}}
+
+### first
+`first()` returns the first non-null record in an input table.
+
+{{< flex >}}
+{{% flex-content %}}
+**Given the following input:**
+
+| _time | _value |
+|:----- | ------:|
+| 2020-01-01T00:01:00Z | 1.0 |
+| 2020-01-01T00:02:00Z | 1.0 |
+| 2020-01-01T00:03:00Z | 2.0 |
+| 2020-01-01T00:04:00Z | 3.0 |
+{{% /flex-content %}}
+{{% flex-content %}}
+**The following function returns:**
+```js
+|> first()
+```
+
+| _time | _value |
+|:----- | ------:|
+| 2020-01-01T00:01:00Z | 1.0 |
+{{% /flex-content %}}
+{{< /flex >}}
+
+### last
+`last()` returns the last non-null record in an input table.
+
+{{< flex >}}
+{{% flex-content %}}
+**Given the following input:**
+
+| _time | _value |
+|:----- | ------:|
+| 2020-01-01T00:01:00Z | 1.0 |
+| 2020-01-01T00:02:00Z | 1.0 |
+| 2020-01-01T00:03:00Z | 2.0 |
+| 2020-01-01T00:04:00Z | 3.0 |
+{{% /flex-content %}}
+{{% flex-content %}}
+**The following function returns:**
+
+```js
+|> last()
+```
+
+| _time | _value |
+|:----- | ------:|
+| 2020-01-01T00:04:00Z | 3.0 |
+{{% /flex-content %}}
+{{< /flex >}}
+
+## Use first() or last() with aggregateWindow()
+Use `first()` and `last()` with [`aggregateWindow()`](/{{< latest "flux" >}}/stdlib/universe/aggregatewindow/)
+to select the first or last records in time-based groups.
+`aggregateWindow()` segments data into windows of time, aggregates data in each window into a single
+point using aggregate or selector functions, and then removes the time-based segmentation.
+
+
+{{< flex >}}
+{{% flex-content %}}
+**Given the following input:**
+
+| _time | _value |
+|:----- | ------:|
+| 2020-01-01T00:00:00Z | 10 |
+| 2020-01-01T00:00:15Z | 12 |
+| 2020-01-01T00:00:45Z | 9 |
+| 2020-01-01T00:01:05Z | 9 |
+| 2020-01-01T00:01:10Z | 15 |
+| 2020-01-01T00:02:30Z | 11 |
+{{% /flex-content %}}
+
+{{% flex-content %}}
+**The following function returns:**
+{{< code-tabs-wrapper >}}
+{{% code-tabs %}}
+[first](#)
+[last](#)
+{{% /code-tabs %}}
+{{% code-tab-content %}}
+```js
+|> aggregateWindow(every: 1m, fn: first)
+```
+| _time | _value |
+|:----- | ------:|
+| 2020-01-01T00:00:59Z | 10 |
+| 2020-01-01T00:01:59Z | 9 |
+| 2020-01-01T00:02:59Z | 11 |
+{{% /code-tab-content %}}
+{{% code-tab-content %}}
+```js
+|> aggregateWindow(every: 1m, fn: last)
+```
+
+| _time | _value |
+|:----- | ------:|
+| 2020-01-01T00:00:59Z | 9 |
+| 2020-01-01T00:01:59Z | 15 |
+| 2020-01-01T00:02:59Z | 11 |
+{{% /code-tab-content %}}
+{{< /code-tabs-wrapper >}}
+{{% /flex-content %}}
+{{< /flex >}}
diff --git a/content/influxdb/v2.6/query-data/flux/flux-version.md
b/content/influxdb/v2.6/query-data/flux/flux-version.md
new file mode 100644
index 000000000..0287a7648
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/flux/flux-version.md
@@ -0,0 +1,145 @@
+---
+title: Query the Flux version
+seotitle: Query the version of Flux installed in InfluxDB
+list_title: Query the Flux version
+description: >
+  Use `runtime.version()` to return the version of Flux installed in InfluxDB.
+weight: 221
+menu:
+  influxdb_2_6:
+    parent: Query with Flux
+    name: Flux version
+influxdb/v2.6/tags: [query]
+related:
+  - /{{< latest "flux" >}}/stdlib/runtime/version/
+list_code_example: |
+  ```js
+  import "array"
+  import "runtime"
+
+  array.from(rows: [{version: runtime.version()}])
+  ```
+---
+
+InfluxDB {{< current-version >}} includes a specific version of Flux that may or
+may not support documented Flux functionality.
+It's important to know what version of Flux you're currently using and what
+functions are supported in that specific version.
+
+To query the version of Flux installed with InfluxDB, use `array.from()` to
+create an ad hoc stream of tables and `runtime.version()` to populate a column
+with the Flux version.
+
+{{% note %}}
+Because the InfluxDB `/api/v2/query` endpoint can only return a stream of tables
+and not single scalar values, you must use `array.from()` to create a stream of tables.
+{{% /note %}}
+
+Run the following query in the **InfluxDB user interface**, with the **`influx` CLI**,
+or **InfluxDB API**:
+
+```js
+import "array"
+import "runtime"
+
+array.from(rows: [{version: runtime.version()}])
+```
+
+{{< tabs-wrapper >}}
+{{% tabs %}}
+[InfluxDB UI](#)
+[influx CLI](#)
+[InfluxDB API](#)
+{{% /tabs %}}
+{{% tab-content %}}
+
+To return the version of Flux installed with InfluxDB using the InfluxDB UI:
+
+1. Click **Data Explorer** in the left navigation bar.
+
+{{< nav-icon "data-explorer" >}}
+
+2. Click **{{% caps %}}Script Editor{{% /caps %}}** to manually create and
+   edit a Flux query.
+3.
Enable the **View Raw Data {{< icon "toggle" >}}** toggle or select one of the + following visualization types: + + - [Single Stat](/influxdb/v2.6/visualize-data/visualization-types/single-stat/) + - [Table](/influxdb/v2.6/visualize-data/visualization-types/table/) + +4. Enter and run the following query: + + ```js + import "array" + import "runtime" + + array.from(rows: [{version: runtime.version()}]) + ``` + +{{% /tab-content %}} +{{% tab-content %}} + +To return the version of Flux installed with InfluxDB using the `influx` CLI, +use the `influx query` command. Provide the following: + +- InfluxDB **host**, **organization**, and **API token** + _(the example below assumes that a + [CLI configuration](/influxdb/v2.6/reference/cli/influx/#provide-required-authentication-credentials) + is set up and active)_ +- Query to execute + +```sh +$ influx query \ + 'import "array" + import "runtime" + + array.from(rows: [{version: runtime.version()}])' + +# Output +Result: _result +Table: keys: [] + version:string +---------------------- + v0.161.0 +``` +{{% /tab-content %}} + +{{% tab-content %}} + +To return the version of Flux installed with InfluxDB using the InfluxDB API, +use the [`/api/v2/query` endpoint](/influxdb/v2.6/api/#tag/Query). 
+ +{{< api-endpoint method="POST" endpoint="http://localhost:8086/api/v2/query" >}} +Provide the following: + +- InfluxDB {{% cloud-only %}}Cloud{{% /cloud-only %}} host +- InfluxDB organization name or ID as a query parameter +- `Authorization` header with the `Token` scheme and your API token +- `Accept: application/csv` header +- `Content-type: application/vnd.flux` header +- Query to execute as the request body + +```sh +curl --request POST \ + http://localhost:8086/api/v2/query?orgID=INFLUX_ORG_ID \ + --header 'Authorization: Token INFLUX_TOKEN' \ + --header 'Accept: application/csv' \ + --header 'Content-type: application/vnd.flux' \ + --data 'import "array" + import "runtime" + + array.from(rows: [{version: runtime.version()}])' + +# Output +,result,table,version +,_result,0,v0.161.0 +``` + +{{% /tab-content %}} + +{{% warn %}} +#### Flux version in the Flux REPL +When you run `runtime.version()` in the [Flux REPL](/influxdb/v2.6/tools/flux-repl/), +the function returns the version of Flux the REPL was built with, not the version +of Flux installed in the instance of InfluxDB you're querying. +{{% /warn %}} \ No newline at end of file diff --git a/content/influxdb/v2.6/query-data/flux/geo/_index.md b/content/influxdb/v2.6/query-data/flux/geo/_index.md new file mode 100644 index 000000000..aff79c5b3 --- /dev/null +++ b/content/influxdb/v2.6/query-data/flux/geo/_index.md @@ -0,0 +1,69 @@ +--- +title: Work with geo-temporal data +list_title: Geo-temporal data +description: > + Use the Flux Geo package to filter geo-temporal data and group by geographic location or track. 
+menu: + influxdb_2_6: + name: Geo-temporal data + parent: Query with Flux +weight: 220 +list_code_example: | + ```js + import "experimental/geo" + + sampleGeoData + |> geo.filterRows(region: {lat: 30.04, lon: 31.23, radius: 200.0}) + |> geo.groupByArea(newColumn: "geoArea", level: 5) + ``` +--- + +Use the [Flux Geo package](/{{< latest "flux" >}}/stdlib/experimental/geo) to +filter geo-temporal data and group by geographic location or track. + +{{% warn %}} +The Geo package is experimental and subject to change at any time. +By using it, you agree to the [risks of experimental functions](/{{< latest "flux" >}}/stdlib/experimental/to/#experimental-functions-are-subject-to-change). +{{% /warn %}} + +**To work with geo-temporal data:** + +1. Import the `experimental/geo` package. + + ```js + import "experimental/geo" + ``` + +2. Load geo-temporal data. _See below for [sample geo-temporal data](#sample-data)._ +3. Do one or more of the following: + + - [Shape data to work with the Geo package](#shape-data-to-work-with-the-geo-package) + - [Filter data by region](#filter-geo-temporal-data-by-region) (using strict or non-strict filters) + - [Group data by area or by track](#group-geo-temporal-data) + +{{< children >}} + +--- + +## Sample data +Many of the examples in this section use a `sampleGeoData` variable that represents +a sample set of geo-temporal data. +The [Bird Migration Sample Data](/influxdb/v2.6/reference/sample-data/#bird-migration-sample-data) +provides sample geo-temporal data that meets the +[requirements of the Flux Geo package](/{{< latest "flux" >}}/stdlib/experimental/geo/#geo-schema-requirements). 
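The circular region used throughout these pages (`geo.filterRows(region: {lat: 30.04, lon: 31.23, radius: 200.0})`) keeps points within a 200 km great-circle radius of Cairo. As a conceptual illustration of that distance test only (not the Geo package's actual S2-cell-based implementation), here is a minimal Python sketch with hypothetical points:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two WGS 84 points, in kilometers
    r = 6371.0  # mean Earth radius (km)
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical sample points: Alexandria (~180 km from Cairo), Luxor (~500 km)
cairo = (30.04, 31.23)
points = {"alexandria": (31.20, 29.92), "luxor": (25.69, 32.64)}

within_200km = {
    name: haversine_km(*cairo, *latlon) <= 200.0 for name, latlon in points.items()
}
print(within_200km)  # → {'alexandria': True, 'luxor': False}
```

Keep in mind that non-strict filtering in the Geo package can also return points slightly outside the radius when their S2 grid cell only partially overlaps the region.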
+ +### Load bird migration sample data +Use the [`sample.data()` function](/{{< latest "flux" >}}/stdlib/influxdata/influxdb/sample/data/) +to load the sample bird migration data: + +```js +import "influxdata/influxdb/sample" + +sampleGeoData = sample.data(set: "birdMigration") +``` + +{{% note %}} +`sample.data()` downloads sample data each time you execute the query **(~1.3 MB)**. +If bandwidth is a concern, use the [`to()` function](/{{< latest "flux" >}}/stdlib/influxdata/influxdb/to/) +to write the data to a bucket, and then query the bucket with [`from()`](/{{< latest "flux" >}}/stdlib/influxdata/influxdb/from/). +{{% /note %}} diff --git a/content/influxdb/v2.6/query-data/flux/geo/filter-by-region.md b/content/influxdb/v2.6/query-data/flux/geo/filter-by-region.md new file mode 100644 index 000000000..cf23a3f37 --- /dev/null +++ b/content/influxdb/v2.6/query-data/flux/geo/filter-by-region.md @@ -0,0 +1,124 @@ +--- +title: Filter geo-temporal data by region +description: > + Use the `geo.filterRows` function to filter geo-temporal data by box-shaped, circular, or polygonal geographic regions. +menu: + influxdb_2_6: + name: Filter by region + parent: Geo-temporal data +weight: 302 +related: + - /{{< latest "flux" >}}/stdlib/experimental/geo/ + - /{{< latest "flux" >}}/stdlib/experimental/geo/filterrows/ +list_code_example: | + ```js + import "experimental/geo" + + sampleGeoData + |> geo.filterRows(region: {lat: 30.04, lon: 31.23, radius: 200.0}, strict: true) + ``` +--- + +Use the [`geo.filterRows` function](/{{< latest "flux" >}}/stdlib/experimental/geo/filterrows/) +to filter geo-temporal data by geographic region: + +1. [Define a geographic region](#define-a-geographic-region) +2. 
[Use strict or non-strict filtering](#strict-and-non-strict-filtering) + +The following example uses the [sample bird migration data](/influxdb/v2.6/query-data/flux/geo/#sample-data) +and queries data points **within 200km of Cairo, Egypt**: + +```js +import "experimental/geo" + +sampleGeoData + |> geo.filterRows(region: {lat: 30.04, lon: 31.23, radius: 200.0}, strict: true) +``` + +## Define a geographic region +Many functions in the Geo package filter data based on geographic region. +Define a geographic region using one of the following shapes: + +- [box](#box) +- [circle](#circle) +- [polygon](#polygon) + +### box +Define a box-shaped region by specifying a record containing the following properties: + +- **minLat:** minimum latitude in decimal degrees (WGS 84) _(Float)_ +- **maxLat:** maximum latitude in decimal degrees (WGS 84) _(Float)_ +- **minLon:** minimum longitude in decimal degrees (WGS 84) _(Float)_ +- **maxLon:** maximum longitude in decimal degrees (WGS 84) _(Float)_ + +##### Example box-shaped region +```js +{ + minLat: 40.51757813, + maxLat: 40.86914063, + minLon: -73.65234375, + maxLon: -72.94921875, +} +``` + +### circle +Define a circular region by specifying a record containing the following properties: + +- **lat**: latitude of the circle center in decimal degrees (WGS 84) _(Float)_ +- **lon**: longitude of the circle center in decimal degrees (WGS 84) _(Float)_ +- **radius**: radius of the circle in kilometers (km) _(Float)_ + +##### Example circular region +```js +{ + lat: 40.69335938, + lon: -73.30078125, + radius: 20.0, +} +``` + +### polygon +Define a polygonal region with a record containing the latitude and longitude for +each point in the polygon: + +- **points**: points that define the custom polygon _(Array of records)_ + + Define each point with a record containing the following properties: + + - **lat**: latitude in decimal degrees (WGS 84) _(Float)_ + - **lon**: longitude in decimal degrees (WGS 84) _(Float)_ + +##### Example
polygonal region +```js +{ + points: [ + {lat: 40.671659, lon: -73.936631}, + {lat: 40.706543, lon: -73.749177}, + {lat: 40.791333, lon: -73.880327}, + ] +} +``` + +## Strict and non-strict filtering +In most cases, the specified geographic region does not perfectly align with S2 grid cells. + +- **Non-strict filtering** returns points that may be outside of the specified region but + inside S2 grid cells partially covered by the region. +- **Strict filtering** returns only points inside the specified region. + +_Strict filtering is less performant, but more accurate than non-strict filtering._ + +_The following diagrams show the S2 grid cells, the filter region, and the returned points for each filtering mode._ + +{{< flex >}} +{{% flex-content %}} +**Strict filtering** +{{< svg "/static/svgs/geo-strict.svg" >}} +{{% /flex-content %}} +{{% flex-content %}} +**Non-strict filtering** +{{< svg "/static/svgs/geo-non-strict.svg" >}} +{{% /flex-content %}} +{{< /flex >}} diff --git a/content/influxdb/v2.6/query-data/flux/geo/group-geo-data.md new file mode 100644 index 000000000..30645590a --- /dev/null +++ b/content/influxdb/v2.6/query-data/flux/geo/group-geo-data.md @@ -0,0 +1,73 @@ +--- +title: Group geo-temporal data +description: > + Use `geo.groupByArea()` to group geo-temporal data by area and `geo.asTracks()` + to group data into tracks or routes. +menu: + influxdb_2_6: + parent: Geo-temporal data +weight: 302 +related: + - /{{< latest "flux" >}}/stdlib/experimental/geo/ + - /{{< latest "flux" >}}/stdlib/experimental/geo/groupbyarea/ + - /{{< latest "flux" >}}/stdlib/experimental/geo/astracks/ +list_code_example: | + ```js + import "experimental/geo" + + sampleGeoData + |> geo.groupByArea(newColumn: "geoArea", level: 5) + |> geo.asTracks(groupBy: ["id"], orderBy: ["_time"]) + ``` +--- + +Use `geo.groupByArea()` to group geo-temporal data by area and `geo.asTracks()` +to group data into tracks or routes.
+ +- [Group data by area](#group-data-by-area) +- [Group data into tracks or routes](#group-data-by-track-or-route) + +{{% note %}} +For example results, use the [bird migration sample data](/influxdb/v2.6/reference/sample-data/#bird-migration-sample-data) +to populate the `sampleGeoData` variable in the queries below. +{{% /note %}} + +### Group data by area +Use the [`geo.groupByArea()` function](/{{< latest "flux" >}}/stdlib/experimental/geo/groupbyarea/) +to group geo-temporal data points by geographic area. +Areas are determined by [S2 grid cells](https://s2geometry.io/devguide/s2cell_hierarchy.html#s2cellid-numbering). + +- Specify a new column to store the unique area identifier for each point with the `newColumn` parameter. +- Specify the [S2 cell level](https://s2geometry.io/resources/s2cell_statistics) + to use when calculating geographic areas with the `level` parameter. + +The following example uses the [sample bird migration data](/influxdb/v2.6/query-data/flux/geo/#sample-data) +to query data points within 200km of Cairo, Egypt and group them by geographic area: + +```js +import "experimental/geo" + +sampleGeoData + |> geo.filterRows(region: {lat: 30.04, lon: 31.23, radius: 200.0}) + |> geo.groupByArea(newColumn: "geoArea", level: 5) +``` + +### Group data by track or route +Use the [`geo.asTracks()` function](/{{< latest "flux" >}}/stdlib/experimental/geo/astracks/) +to group data points into tracks or routes and order them by time or other columns. +Data must contain a unique identifier for each track. For example: `id` or `tid`. + +- Specify columns that uniquely identify each track or route with the `groupBy` parameter. +- Specify which columns to sort by with the `orderBy` parameter. Default is `["_time"]`.
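Conceptually, grouping into tracks means partitioning rows by the track identifier and then ordering each partition by time. The following Python sketch illustrates that partition-and-sort logic with hypothetical rows; it is not the Geo package's implementation:

```python
from collections import defaultdict

# Hypothetical rows: track id plus timestamp and position, arriving out of order
rows = [
    {"id": "B2", "_time": "2023-01-01T00:10:00Z", "lat": 30.1, "lon": 31.3},
    {"id": "B1", "_time": "2023-01-01T00:05:00Z", "lat": 30.0, "lon": 31.2},
    {"id": "B1", "_time": "2023-01-01T00:00:00Z", "lat": 29.9, "lon": 31.1},
]

# Partition by the track identifier (the groupBy column), then sort each
# partition by time (the orderBy column). RFC 3339 timestamps sort
# chronologically as strings.
tracks = defaultdict(list)
for row in rows:
    tracks[row["id"]].append(row)
for track in tracks.values():
    track.sort(key=lambda r: r["_time"])

print({tid: [r["_time"] for r in track] for tid, track in tracks.items()})
```

Each resulting partition corresponds to one output table in Flux: a single track ordered by time.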
+ +The following example uses the [sample bird migration data](/influxdb/v2.6/query-data/flux/geo/#sample-data) +to query data points within 200km of Cairo, Egypt and group them into routes unique +to each bird: + +```js +import "experimental/geo" + +sampleGeoData + |> geo.filterRows(region: {lat: 30.04, lon: 31.23, radius: 200.0}) + |> geo.asTracks(groupBy: ["id"], orderBy: ["_time"]) +``` diff --git a/content/influxdb/v2.6/query-data/flux/geo/shape-geo-data.md new file mode 100644 index 000000000..a7591eb9b --- /dev/null +++ b/content/influxdb/v2.6/query-data/flux/geo/shape-geo-data.md @@ -0,0 +1,120 @@ +--- +title: Shape data to work with the Geo package +description: > + Functions in the Flux Geo package require **lat** and **lon** fields and an **s2_cell_id** tag. + Rename latitude and longitude fields and generate S2 cell ID tokens. +menu: + influxdb_2_6: + name: Shape geo-temporal data + parent: Geo-temporal data +weight: 301 +related: + - /{{< latest "flux" >}}/stdlib/experimental/geo/ + - /{{< latest "flux" >}}/stdlib/experimental/geo/shapedata/ +list_code_example: | + ```js + import "experimental/geo" + + sampleGeoData + |> geo.shapeData(latField: "latitude", lonField: "longitude", level: 10) + ``` +--- + +Functions in the Geo package require the following data schema: + +- an **s2_cell_id** tag containing the [S2 Cell ID](https://s2geometry.io/devguide/s2cell_hierarchy.html#s2cellid-numbering) + **as a token** +- a **`lat`** field containing the **latitude in decimal degrees** (WGS 84) +- a **`lon`** field containing the **longitude in decimal degrees** (WGS 84) + +## Shape geo-temporal data +If your data already contains latitude and longitude fields, use the +[`geo.shapeData()` function](/{{< latest "flux" >}}/stdlib/experimental/geo/shapedata/) +to rename the fields to match the requirements of the Geo package, pivot the data +into row-wise sets, and generate S2 cell ID tokens
for each point. + +```js +import "experimental/geo" + +from(bucket: "example-bucket") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "example-measurement") + |> geo.shapeData(latField: "latitude", lonField: "longitude", level: 10) +``` + +## Generate S2 cell ID tokens +The Geo package uses the [S2 Geometry Library](https://s2geometry.io/) to represent +geographic coordinates on a three-dimensional sphere. +The sphere is divided into [cells](https://s2geometry.io/devguide/s2cell_hierarchy), +each with a unique 64-bit identifier (S2 cell ID). +Grid and S2 cell ID accuracy are defined by a [level](https://s2geometry.io/resources/s2cell_statistics). + +{{% note %}} +To filter more quickly, use higher S2 cell ID levels, +but know that higher levels increase [series cardinality](/influxdb/v2.6/reference/glossary/#series-cardinality). +{{% /note %}} + +The Geo package requires S2 cell IDs as tokens. +To generate and add S2 cell ID tokens to your data, use one of the following options: + +- [Generate S2 cell ID tokens with Telegraf](#generate-s2-cell-id-tokens-with-telegraf) +- [Generate S2 cell ID tokens with language-specific libraries](#generate-s2-cell-id-tokens-with-language-specific-libraries) +- [Generate S2 cell ID tokens with Flux](#generate-s2-cell-id-tokens-with-flux) + +### Generate S2 cell ID tokens with Telegraf +Enable the [Telegraf S2 Geo (`s2geo`) processor](https://github.com/influxdata/telegraf/tree/master/plugins/processors/s2geo) +to generate S2 cell ID tokens at a specified `cell_level` using `lat` and `lon` field values. + +Add the `processors.s2geo` configuration to your Telegraf configuration file (`telegraf.conf`): + +```toml +[[processors.s2geo]] + ## The name of the lat and lon fields containing WGS-84 latitude and + ## longitude in decimal degrees.
+ lat_field = "lat" + lon_field = "lon" + + ## New tag to create + tag_key = "s2_cell_id" + + ## Cell level (see https://s2geometry.io/resources/s2cell_statistics.html) + cell_level = 9 +``` + +Telegraf stores the S2 cell ID token in the `s2_cell_id` tag. + +### Generate S2 cell ID tokens with language-specific libraries +Many programming languages offer S2 libraries with methods for generating S2 cell ID tokens. +Use latitude and longitude with the `s2.CellID.ToToken` method of the S2 Geometry +Library to generate `s2_cell_id` tags. For example: + +- **Go:** [s2.CellID.ToToken()](https://godoc.org/github.com/golang/geo/s2#CellID.ToToken) +- **Python:** [s2sphere.CellId.to_token()](https://s2sphere.readthedocs.io/en/latest/api.html#s2sphere.CellId) +- **Crystal:** [cell.to_token(level)](https://github.com/spider-gazelle/s2_cells#usage) +- **JavaScript:** [s2.cellid.toToken()](https://github.com/mapbox/node-s2/blob/master/API.md#cellidtotoken---string) + +### Generate S2 cell ID tokens with Flux +Use the [`geo.s2CellIDToken()` function](/{{< latest "flux" >}}/stdlib/experimental/geo/s2cellidtoken/) +with existing longitude (`lon`) and latitude (`lat`) field values to generate and add the S2 cell ID token. +First, use the [`geo.toRows()` function](/{{< latest "flux" >}}/stdlib/experimental/geo/torows/) +to pivot **lat** and **lon** fields into row-wise sets: + +```js +import "experimental/geo" + +from(bucket: "example-bucket") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "example-measurement") + |> geo.toRows() + |> map( + fn: (r) => ({ + r with + s2_cell_id: geo.s2CellIDToken(point: {lon: r.lon, lat: r.lat}, level: 10) + }) + ) +``` + +{{% note %}} +The [`geo.shapeData()` function](/{{< latest "flux" >}}/stdlib/experimental/geo/shapedata/) +generates S2 cell ID tokens as well.
+{{% /note %}} diff --git a/content/influxdb/v2.6/query-data/flux/group-data.md b/content/influxdb/v2.6/query-data/flux/group-data.md new file mode 100644 index 000000000..ba255f7e4 --- /dev/null +++ b/content/influxdb/v2.6/query-data/flux/group-data.md @@ -0,0 +1,676 @@ +--- +title: Group data in InfluxDB with Flux +list_title: Group +description: > + Use `group()` to group data with common values in specific columns. +influxdb/v2.6/tags: [group] +menu: + influxdb_2_6: + name: Group + parent: Query with Flux +weight: 202 +aliases: + - /influxdb/v2.6/query-data/guides/group-data/ + - /influxdb/v2.6/query-data/flux/grouping-data/ +related: + - /{{< latest "flux" >}}/stdlib/universe/group + - /{{< latest "flux" >}}/stdlib/experimental/group +list_query_example: group +--- + +With Flux, you can group data by any column in your queried data set. +"Grouping" partitions data into tables in which each row shares a common value for specified columns. +This guide walks through grouping data in Flux and provides examples of how data is shaped in the process. + +If you're just getting started with Flux queries, check out the following: + +- [Get started with Flux](/{{< latest "flux" >}}/get-started/) for a conceptual overview of Flux and parts of a Flux query. +- [Execute queries](/influxdb/v2.6/query-data/execute-queries/) to discover a variety of ways to run your queries. + +## Group keys +Every table has a **group key** – a list of columns for which every row in the table has the same value. + +###### Example group key +```js +[_start, _stop, _field, _measurement, host] +``` + +Grouping data in Flux is essentially defining the group key of output tables. +Understanding how modifying group keys shapes output data is key to successfully +grouping and transforming data into your desired output. + +## group() Function +Flux's [`group()` function](/{{< latest "flux" >}}/stdlib/universe/group) defines the +group key for output tables, i.e. 
grouping records based on values for specific columns. + +###### group() example +```js +dataStream + |> group(columns: ["cpu", "host"]) +``` + +###### Resulting group key +```js +[cpu, host] +``` + +The `group()` function has the following parameters: + +### columns +The list of columns to include or exclude (depending on the [mode](#mode)) in the grouping operation. + +### mode +The method used to define the group and resulting group key. +Possible values include `by` and `except`. + + +## Example grouping operations +To illustrate how grouping works, define a `dataSet` variable that queries system +CPU usage from the `example-bucket` bucket. +Filter the `cpu` tag so it only returns results for each numbered CPU core. + +### Data set +CPU used by system operations for all numbered CPU cores. +It uses a regular expression to filter only numbered cores. + +```js +dataSet = from(bucket: "example-bucket") + |> range(start: -2m) + |> filter(fn: (r) => r._field == "usage_system" and r.cpu =~ /cpu[0-9]+/) + |> drop(columns: ["host"]) +``` + +{{% note %}} +This example drops the `host` column from the returned data since the CPU data +is only tracked for a single host and it simplifies the output tables. +Don't drop the `host` column if monitoring multiple hosts.
+{{% /note %}} + +{{% truncate %}} +``` +Table: keys: [_start, _stop, _field, _measurement, cpu] + _start:time _stop:time _field:string _measurement:string cpu:string _time:time _value:float +------------------------------ ------------------------------ ---------------------- ---------------------- ---------------------- ------------------------------ ---------------------------- +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu0 2018-11-05T21:34:00.000000000Z 7.892107892107892 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu0 2018-11-05T21:34:10.000000000Z 7.2 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu0 2018-11-05T21:34:20.000000000Z 7.4 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu0 2018-11-05T21:34:30.000000000Z 5.5 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu0 2018-11-05T21:34:40.000000000Z 7.4 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu0 2018-11-05T21:34:50.000000000Z 7.5 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu0 2018-11-05T21:35:00.000000000Z 10.3 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu0 2018-11-05T21:35:10.000000000Z 9.2 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu0 2018-11-05T21:35:20.000000000Z 8.4 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu0 2018-11-05T21:35:30.000000000Z 8.5 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu0 2018-11-05T21:35:40.000000000Z 8.6 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu0 2018-11-05T21:35:50.000000000Z 10.2 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu0 2018-11-05T21:36:00.000000000Z 10.6 + +Table: keys: [_start, _stop, _field, 
_measurement, cpu] + _start:time _stop:time _field:string _measurement:string cpu:string _time:time _value:float +------------------------------ ------------------------------ ---------------------- ---------------------- ---------------------- ------------------------------ ---------------------------- +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu1 2018-11-05T21:34:00.000000000Z 0.7992007992007992 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu1 2018-11-05T21:34:10.000000000Z 0.7 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu1 2018-11-05T21:34:20.000000000Z 0.7 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu1 2018-11-05T21:34:30.000000000Z 0.4 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu1 2018-11-05T21:34:40.000000000Z 0.7 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu1 2018-11-05T21:34:50.000000000Z 0.7 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu1 2018-11-05T21:35:00.000000000Z 1.4 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu1 2018-11-05T21:35:10.000000000Z 1.2 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu1 2018-11-05T21:35:20.000000000Z 0.8 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu1 2018-11-05T21:35:30.000000000Z 0.8991008991008991 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu1 2018-11-05T21:35:40.000000000Z 0.8008008008008008 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu1 2018-11-05T21:35:50.000000000Z 0.999000999000999 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu1 2018-11-05T21:36:00.000000000Z 1.1022044088176353 + +Table: keys: [_start, _stop, _field, _measurement, cpu] + 
_start:time _stop:time _field:string _measurement:string cpu:string _time:time _value:float +------------------------------ ------------------------------ ---------------------- ---------------------- ---------------------- ------------------------------ ---------------------------- +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu2 2018-11-05T21:34:00.000000000Z 4.1 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu2 2018-11-05T21:34:10.000000000Z 3.6 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu2 2018-11-05T21:34:20.000000000Z 3.5 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu2 2018-11-05T21:34:30.000000000Z 2.6 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu2 2018-11-05T21:34:40.000000000Z 4.5 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu2 2018-11-05T21:34:50.000000000Z 4.895104895104895 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu2 2018-11-05T21:35:00.000000000Z 6.906906906906907 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu2 2018-11-05T21:35:10.000000000Z 5.7 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu2 2018-11-05T21:35:20.000000000Z 5.1 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu2 2018-11-05T21:35:30.000000000Z 4.7 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu2 2018-11-05T21:35:40.000000000Z 5.1 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu2 2018-11-05T21:35:50.000000000Z 5.9 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu2 2018-11-05T21:36:00.000000000Z 6.4935064935064934 + +Table: keys: [_start, _stop, _field, _measurement, cpu] + _start:time _stop:time _field:string 
_measurement:string cpu:string _time:time _value:float +------------------------------ ------------------------------ ---------------------- ---------------------- ---------------------- ------------------------------ ---------------------------- +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu3 2018-11-05T21:34:00.000000000Z 0.5005005005005005 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu3 2018-11-05T21:34:10.000000000Z 0.5 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu3 2018-11-05T21:34:20.000000000Z 0.5 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu3 2018-11-05T21:34:30.000000000Z 0.3 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu3 2018-11-05T21:34:40.000000000Z 0.6 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu3 2018-11-05T21:34:50.000000000Z 0.6 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu3 2018-11-05T21:35:00.000000000Z 1.3986013986013985 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu3 2018-11-05T21:35:10.000000000Z 0.9 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu3 2018-11-05T21:35:20.000000000Z 0.5005005005005005 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu3 2018-11-05T21:35:30.000000000Z 0.7 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu3 2018-11-05T21:35:40.000000000Z 0.6 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu3 2018-11-05T21:35:50.000000000Z 0.8 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z usage_system cpu cpu3 2018-11-05T21:36:00.000000000Z 0.9 +``` +{{% /truncate %}} + +**Note that the group key is output with each table: `Table: keys: `.** + +![Group example data 
set](/img/flux/grouping-data-set.png) + +### Group by CPU +Group the `dataSet` stream by the `cpu` column. + +```js +dataSet + |> group(columns: ["cpu"]) +``` + +This won't actually change the structure of the data since it already has `cpu` +in the group key and is therefore grouped by `cpu`. +However, notice that it does change the group key: + +{{% truncate %}} +###### Group by CPU output tables +``` +Table: keys: [cpu] + cpu:string _stop:time _time:time _value:float _field:string _measurement:string _start:time +---------------------- ------------------------------ ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ + cpu0 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:00.000000000Z 7.892107892107892 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu0 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:10.000000000Z 7.2 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu0 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:20.000000000Z 7.4 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu0 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:30.000000000Z 5.5 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu0 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:40.000000000Z 7.4 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu0 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:50.000000000Z 7.5 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu0 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:00.000000000Z 10.3 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu0 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:10.000000000Z 9.2 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu0 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:20.000000000Z 8.4 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu0 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:30.000000000Z 8.5 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu0 2018-11-05T21:36:00.000000000Z 
2018-11-05T21:35:40.000000000Z 8.6 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu0 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:50.000000000Z 10.2 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu0 2018-11-05T21:36:00.000000000Z 2018-11-05T21:36:00.000000000Z 10.6 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [cpu] + cpu:string _stop:time _time:time _value:float _field:string _measurement:string _start:time +---------------------- ------------------------------ ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ + cpu1 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:00.000000000Z 0.7992007992007992 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu1 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:10.000000000Z 0.7 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu1 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:20.000000000Z 0.7 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu1 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:30.000000000Z 0.4 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu1 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:40.000000000Z 0.7 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu1 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:50.000000000Z 0.7 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu1 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:00.000000000Z 1.4 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu1 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:10.000000000Z 1.2 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu1 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:20.000000000Z 0.8 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu1 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:30.000000000Z 0.8991008991008991 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu1 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:40.000000000Z 0.8008008008008008 usage_system cpu 
2018-11-05T21:34:00.000000000Z + cpu1 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:50.000000000Z 0.999000999000999 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu1 2018-11-05T21:36:00.000000000Z 2018-11-05T21:36:00.000000000Z 1.1022044088176353 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [cpu] + cpu:string _stop:time _time:time _value:float _field:string _measurement:string _start:time +---------------------- ------------------------------ ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ + cpu2 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:00.000000000Z 4.1 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu2 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:10.000000000Z 3.6 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu2 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:20.000000000Z 3.5 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu2 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:30.000000000Z 2.6 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu2 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:40.000000000Z 4.5 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu2 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:50.000000000Z 4.895104895104895 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu2 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:00.000000000Z 6.906906906906907 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu2 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:10.000000000Z 5.7 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu2 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:20.000000000Z 5.1 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu2 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:30.000000000Z 4.7 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu2 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:40.000000000Z 5.1 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu2 
2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:50.000000000Z 5.9 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu2 2018-11-05T21:36:00.000000000Z 2018-11-05T21:36:00.000000000Z 6.4935064935064934 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [cpu] + cpu:string _stop:time _time:time _value:float _field:string _measurement:string _start:time +---------------------- ------------------------------ ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ + cpu3 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:00.000000000Z 0.5005005005005005 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu3 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:10.000000000Z 0.5 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu3 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:20.000000000Z 0.5 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu3 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:30.000000000Z 0.3 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu3 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:40.000000000Z 0.6 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu3 2018-11-05T21:36:00.000000000Z 2018-11-05T21:34:50.000000000Z 0.6 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu3 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:00.000000000Z 1.3986013986013985 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu3 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:10.000000000Z 0.9 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu3 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:20.000000000Z 0.5005005005005005 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu3 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:30.000000000Z 0.7 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu3 2018-11-05T21:36:00.000000000Z 2018-11-05T21:35:40.000000000Z 0.6 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu3 2018-11-05T21:36:00.000000000Z 
2018-11-05T21:35:50.000000000Z 0.8 usage_system cpu 2018-11-05T21:34:00.000000000Z + cpu3 2018-11-05T21:36:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.9 usage_system cpu 2018-11-05T21:34:00.000000000Z +``` +{{% /truncate %}} + +The visualization remains the same. + +![Group by CPU](/img/flux/grouping-data-set.png) + +### Group by time +Grouping data by the `_time` column is a good illustration of how grouping changes the structure of your data. + +```js +dataSet + |> group(columns: ["_time"]) +``` + +When grouping by `_time`, all records that share a common `_time` value are grouped into individual tables. +So each output table represents a single point in time. + +{{% truncate %}} +###### Group by time output tables +``` +Table: keys: [_time] + _time:time _start:time _stop:time _value:float _field:string _measurement:string cpu:string +------------------------------ ------------------------------ ------------------------------ ---------------------------- ---------------------- ---------------------- ---------------------- +2018-11-05T21:34:00.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 7.892107892107892 usage_system cpu cpu0 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.7992007992007992 usage_system cpu cpu1 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 4.1 usage_system cpu cpu2 +2018-11-05T21:34:00.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.5005005005005005 usage_system cpu cpu3 + +Table: keys: [_time] + _time:time _start:time _stop:time _value:float _field:string _measurement:string cpu:string +------------------------------ ------------------------------ ------------------------------ ---------------------------- ---------------------- ---------------------- ---------------------- +2018-11-05T21:34:10.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 7.2 usage_system cpu cpu0 
+2018-11-05T21:34:10.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.7 usage_system cpu cpu1 +2018-11-05T21:34:10.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 3.6 usage_system cpu cpu2 +2018-11-05T21:34:10.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.5 usage_system cpu cpu3 + +Table: keys: [_time] + _time:time _start:time _stop:time _value:float _field:string _measurement:string cpu:string +------------------------------ ------------------------------ ------------------------------ ---------------------------- ---------------------- ---------------------- ---------------------- +2018-11-05T21:34:20.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 7.4 usage_system cpu cpu0 +2018-11-05T21:34:20.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.7 usage_system cpu cpu1 +2018-11-05T21:34:20.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 3.5 usage_system cpu cpu2 +2018-11-05T21:34:20.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.5 usage_system cpu cpu3 + +Table: keys: [_time] + _time:time _start:time _stop:time _value:float _field:string _measurement:string cpu:string +------------------------------ ------------------------------ ------------------------------ ---------------------------- ---------------------- ---------------------- ---------------------- +2018-11-05T21:34:30.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 5.5 usage_system cpu cpu0 +2018-11-05T21:34:30.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.4 usage_system cpu cpu1 +2018-11-05T21:34:30.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 2.6 usage_system cpu cpu2 +2018-11-05T21:34:30.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.3 usage_system cpu cpu3 + +Table: keys: [_time] + _time:time _start:time _stop:time 
_value:float _field:string _measurement:string cpu:string +------------------------------ ------------------------------ ------------------------------ ---------------------------- ---------------------- ---------------------- ---------------------- +2018-11-05T21:34:40.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 7.4 usage_system cpu cpu0 +2018-11-05T21:34:40.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.7 usage_system cpu cpu1 +2018-11-05T21:34:40.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 4.5 usage_system cpu cpu2 +2018-11-05T21:34:40.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.6 usage_system cpu cpu3 + +Table: keys: [_time] + _time:time _start:time _stop:time _value:float _field:string _measurement:string cpu:string +------------------------------ ------------------------------ ------------------------------ ---------------------------- ---------------------- ---------------------- ---------------------- +2018-11-05T21:34:50.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 7.5 usage_system cpu cpu0 +2018-11-05T21:34:50.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.7 usage_system cpu cpu1 +2018-11-05T21:34:50.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 4.895104895104895 usage_system cpu cpu2 +2018-11-05T21:34:50.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.6 usage_system cpu cpu3 + +Table: keys: [_time] + _time:time _start:time _stop:time _value:float _field:string _measurement:string cpu:string +------------------------------ ------------------------------ ------------------------------ ---------------------------- ---------------------- ---------------------- ---------------------- +2018-11-05T21:35:00.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 10.3 usage_system cpu cpu0 +2018-11-05T21:35:00.000000000Z 
2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 1.4 usage_system cpu cpu1 +2018-11-05T21:35:00.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 6.906906906906907 usage_system cpu cpu2 +2018-11-05T21:35:00.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 1.3986013986013985 usage_system cpu cpu3 + +Table: keys: [_time] + _time:time _start:time _stop:time _value:float _field:string _measurement:string cpu:string +------------------------------ ------------------------------ ------------------------------ ---------------------------- ---------------------- ---------------------- ---------------------- +2018-11-05T21:35:10.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 9.2 usage_system cpu cpu0 +2018-11-05T21:35:10.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 1.2 usage_system cpu cpu1 +2018-11-05T21:35:10.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 5.7 usage_system cpu cpu2 +2018-11-05T21:35:10.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.9 usage_system cpu cpu3 + +Table: keys: [_time] + _time:time _start:time _stop:time _value:float _field:string _measurement:string cpu:string +------------------------------ ------------------------------ ------------------------------ ---------------------------- ---------------------- ---------------------- ---------------------- +2018-11-05T21:35:20.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 8.4 usage_system cpu cpu0 +2018-11-05T21:35:20.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.8 usage_system cpu cpu1 +2018-11-05T21:35:20.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 5.1 usage_system cpu cpu2 +2018-11-05T21:35:20.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.5005005005005005 usage_system cpu cpu3 + +Table: keys: [_time] + _time:time 
_start:time _stop:time _value:float _field:string _measurement:string cpu:string +------------------------------ ------------------------------ ------------------------------ ---------------------------- ---------------------- ---------------------- ---------------------- +2018-11-05T21:35:30.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 8.5 usage_system cpu cpu0 +2018-11-05T21:35:30.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.8991008991008991 usage_system cpu cpu1 +2018-11-05T21:35:30.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 4.7 usage_system cpu cpu2 +2018-11-05T21:35:30.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.7 usage_system cpu cpu3 + +Table: keys: [_time] + _time:time _start:time _stop:time _value:float _field:string _measurement:string cpu:string +------------------------------ ------------------------------ ------------------------------ ---------------------------- ---------------------- ---------------------- ---------------------- +2018-11-05T21:35:40.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 8.6 usage_system cpu cpu0 +2018-11-05T21:35:40.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.8008008008008008 usage_system cpu cpu1 +2018-11-05T21:35:40.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 5.1 usage_system cpu cpu2 +2018-11-05T21:35:40.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 0.6 usage_system cpu cpu3 + +Table: keys: [_time] + _time:time _start:time _stop:time _value:float _field:string _measurement:string cpu:string +------------------------------ ------------------------------ ------------------------------ ---------------------------- ---------------------- ---------------------- ---------------------- +2018-11-05T21:35:50.000000000Z 2018-11-05T21:34:00.000000000Z 2018-11-05T21:36:00.000000000Z 10.2 usage_system cpu 
cpu0
+2018-11-05T21:35:50.000000000Z  2018-11-05T21:34:00.000000000Z  2018-11-05T21:36:00.000000000Z             0.999000999000999            usage_system                     cpu                    cpu1
+2018-11-05T21:35:50.000000000Z  2018-11-05T21:34:00.000000000Z  2018-11-05T21:36:00.000000000Z                           5.9            usage_system                     cpu                    cpu2
+2018-11-05T21:35:50.000000000Z  2018-11-05T21:34:00.000000000Z  2018-11-05T21:36:00.000000000Z                           0.8            usage_system                     cpu                    cpu3
+
+Table: keys: [_time]
+                    _time:time                     _start:time                      _stop:time                  _value:float           _field:string     _measurement:string              cpu:string
+------------------------------  ------------------------------  ------------------------------  ----------------------------  ----------------------  ----------------------  ----------------------
+2018-11-05T21:36:00.000000000Z  2018-11-05T21:34:00.000000000Z  2018-11-05T21:36:00.000000000Z                          10.6            usage_system                     cpu                    cpu0
+2018-11-05T21:36:00.000000000Z  2018-11-05T21:34:00.000000000Z  2018-11-05T21:36:00.000000000Z            1.1022044088176353            usage_system                     cpu                    cpu1
+2018-11-05T21:36:00.000000000Z  2018-11-05T21:34:00.000000000Z  2018-11-05T21:36:00.000000000Z            6.4935064935064934            usage_system                     cpu                    cpu2
+2018-11-05T21:36:00.000000000Z  2018-11-05T21:34:00.000000000Z  2018-11-05T21:36:00.000000000Z                           0.9            usage_system                     cpu                    cpu3
+```
+{{% /truncate %}}
+
+Because the records for each timestamp are grouped into their own table, when visualized, all
+points that share the same timestamp appear connected.
+
+![Group by time](/img/flux/grouping-by-time.png)
+
+{{% note %}}
+With some further processing, you could calculate the average CPU usage across all CPUs at each
+point in time and group the results into a single table, but we won't cover that in this example.
+If you're interested in running and visualizing this yourself, here's what the query would look like:
+
+```js
+dataSet
+    |> group(columns: ["_time"])
+    |> mean()
+    |> group(columns: ["_value", "_time"], mode: "except")
+```
+{{% /note %}}
+
+### Group by CPU and time
+Group by the `cpu` and `_time` columns.
+ +```js +dataSet + |> group(columns: ["cpu", "_time"]) +``` + +This outputs a table for every unique `cpu` and `_time` combination: + +{{% truncate %}} +###### Group by CPU and time output tables +``` +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:34:00.000000000Z cpu0 2018-11-05T21:36:00.000000000Z 7.892107892107892 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:34:00.000000000Z cpu1 2018-11-05T21:36:00.000000000Z 0.7992007992007992 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:34:00.000000000Z cpu2 2018-11-05T21:36:00.000000000Z 4.1 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:34:00.000000000Z cpu3 2018-11-05T21:36:00.000000000Z 0.5005005005005005 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: 
[_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:34:10.000000000Z cpu0 2018-11-05T21:36:00.000000000Z 7.2 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:34:10.000000000Z cpu1 2018-11-05T21:36:00.000000000Z 0.7 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:34:10.000000000Z cpu2 2018-11-05T21:36:00.000000000Z 3.6 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:34:10.000000000Z cpu3 2018-11-05T21:36:00.000000000Z 0.5 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- 
---------------------- ------------------------------ +2018-11-05T21:34:20.000000000Z cpu0 2018-11-05T21:36:00.000000000Z 7.4 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:34:20.000000000Z cpu1 2018-11-05T21:36:00.000000000Z 0.7 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:34:20.000000000Z cpu2 2018-11-05T21:36:00.000000000Z 3.5 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:34:20.000000000Z cpu3 2018-11-05T21:36:00.000000000Z 0.5 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:34:30.000000000Z cpu0 2018-11-05T21:36:00.000000000Z 5.5 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float 
_field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:34:30.000000000Z cpu1 2018-11-05T21:36:00.000000000Z 0.4 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:34:30.000000000Z cpu2 2018-11-05T21:36:00.000000000Z 2.6 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:34:30.000000000Z cpu3 2018-11-05T21:36:00.000000000Z 0.3 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:34:40.000000000Z cpu0 2018-11-05T21:36:00.000000000Z 7.4 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ 
+2018-11-05T21:34:40.000000000Z cpu1 2018-11-05T21:36:00.000000000Z 0.7 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:34:40.000000000Z cpu2 2018-11-05T21:36:00.000000000Z 4.5 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:34:40.000000000Z cpu3 2018-11-05T21:36:00.000000000Z 0.6 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:34:50.000000000Z cpu0 2018-11-05T21:36:00.000000000Z 7.5 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:34:50.000000000Z cpu1 2018-11-05T21:36:00.000000000Z 0.7 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time 
+------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:34:50.000000000Z cpu2 2018-11-05T21:36:00.000000000Z 4.895104895104895 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:34:50.000000000Z cpu3 2018-11-05T21:36:00.000000000Z 0.6 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:35:00.000000000Z cpu0 2018-11-05T21:36:00.000000000Z 10.3 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:35:00.000000000Z cpu1 2018-11-05T21:36:00.000000000Z 1.4 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:35:00.000000000Z cpu2 
2018-11-05T21:36:00.000000000Z 6.906906906906907 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:35:00.000000000Z cpu3 2018-11-05T21:36:00.000000000Z 1.3986013986013985 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:35:10.000000000Z cpu0 2018-11-05T21:36:00.000000000Z 9.2 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:35:10.000000000Z cpu1 2018-11-05T21:36:00.000000000Z 1.2 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:35:10.000000000Z cpu2 2018-11-05T21:36:00.000000000Z 5.7 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time 
+------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:35:10.000000000Z cpu3 2018-11-05T21:36:00.000000000Z 0.9 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:35:20.000000000Z cpu0 2018-11-05T21:36:00.000000000Z 8.4 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:35:20.000000000Z cpu1 2018-11-05T21:36:00.000000000Z 0.8 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:35:20.000000000Z cpu2 2018-11-05T21:36:00.000000000Z 5.1 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:35:20.000000000Z cpu3 2018-11-05T21:36:00.000000000Z 
0.5005005005005005 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:35:30.000000000Z cpu0 2018-11-05T21:36:00.000000000Z 8.5 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:35:30.000000000Z cpu1 2018-11-05T21:36:00.000000000Z 0.8991008991008991 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:35:30.000000000Z cpu2 2018-11-05T21:36:00.000000000Z 4.7 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:35:30.000000000Z cpu3 2018-11-05T21:36:00.000000000Z 0.7 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- 
------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:35:40.000000000Z cpu0 2018-11-05T21:36:00.000000000Z 8.6 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:35:40.000000000Z cpu1 2018-11-05T21:36:00.000000000Z 0.8008008008008008 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:35:40.000000000Z cpu2 2018-11-05T21:36:00.000000000Z 5.1 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:35:40.000000000Z cpu3 2018-11-05T21:36:00.000000000Z 0.6 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:35:50.000000000Z cpu0 2018-11-05T21:36:00.000000000Z 10.2 usage_system cpu 
2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:35:50.000000000Z cpu1 2018-11-05T21:36:00.000000000Z 0.999000999000999 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:35:50.000000000Z cpu2 2018-11-05T21:36:00.000000000Z 5.9 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:35:50.000000000Z cpu3 2018-11-05T21:36:00.000000000Z 0.8 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:36:00.000000000Z cpu0 2018-11-05T21:36:00.000000000Z 10.6 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ 
---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:36:00.000000000Z cpu1 2018-11-05T21:36:00.000000000Z 1.1022044088176353 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:36:00.000000000Z cpu2 2018-11-05T21:36:00.000000000Z 6.4935064935064934 usage_system cpu 2018-11-05T21:34:00.000000000Z + +Table: keys: [_time, cpu] + _time:time cpu:string _stop:time _value:float _field:string _measurement:string _start:time +------------------------------ ---------------------- ------------------------------ ---------------------------- ---------------------- ---------------------- ------------------------------ +2018-11-05T21:36:00.000000000Z cpu3 2018-11-05T21:36:00.000000000Z 0.9 usage_system cpu 2018-11-05T21:34:00.000000000Z +``` +{{% /truncate %}} + +When visualized, tables appear as individual, unconnected points. + +![Group by CPU and time](/img/flux/grouping-by-cpu-time.png) + +Grouping by `cpu` and `_time` is a good illustration of how grouping works. + +## In conclusion +Grouping is a powerful way to shape your data into your desired output format. +It modifies the group keys of output tables, grouping records into tables that +all share common values within specified columns. diff --git a/content/influxdb/v2.6/query-data/flux/histograms.md b/content/influxdb/v2.6/query-data/flux/histograms.md new file mode 100644 index 000000000..405c211dc --- /dev/null +++ b/content/influxdb/v2.6/query-data/flux/histograms.md @@ -0,0 +1,171 @@ +--- +title: Create histograms with Flux +list_title: Histograms +description: > + Use `histogram()` to create cumulative histograms with Flux. 
+influxdb/v2.6/tags: [histogram] +menu: + influxdb_2_6: + name: Histograms + parent: Query with Flux +weight: 210 +aliases: + - /influxdb/v2.6/query-data/guides/histograms/ +related: + - /{{< latest "flux" >}}/stdlib/universe/histogram + - /{{< latest "flux" >}}/prometheus/metric-types/histogram/, Work with Prometheus histograms in Flux +list_query_example: histogram +--- + +Histograms provide valuable insight into the distribution of your data. +This guide walks through using Flux's `histogram()` function to transform your data into a **cumulative histogram**. + +If you're just getting started with Flux queries, check out the following: + +- [Get started with Flux](/{{< latest "flux" >}}/get-started/) for a conceptual overview of Flux and parts of a Flux query. +- [Execute queries](/influxdb/v2.6/query-data/execute-queries/) to discover a variety of ways to run your queries. + +## histogram() function + +The [`histogram()` function](/{{< latest "flux" >}}/stdlib/universe/histogram) approximates the +cumulative distribution of a dataset by counting data frequencies for a list of "bins." +A **bin** is simply a range in which a data point falls. +All data points that are less than or equal to a bin's upper bound are counted in that bin. +In the histogram output, a column is added (`le`) that represents the upper bound of each bin. +Bin counts are cumulative. + +```js +from(bucket: "example-bucket") + |> range(start: -5m) + |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent") + |> histogram(bins: [0.0, 10.0, 20.0, 30.0]) +``` + +{{% note %}} +Values output by the `histogram` function represent points of data aggregated over time. +Since values do not represent single points in time, there is no `_time` column in the output table. +{{% /note %}} + +## Bin helper functions +Flux provides two helper functions for generating histogram bins. +Each generates an array of floats designed to be used in the `histogram()` function's `bins` parameter.
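As an aside, the cumulative counting that `histogram()` performs can be sketched outside of Flux. The following Python snippet is purely illustrative (the function name and sample values are made up, not part of any Flux or InfluxDB API): for each upper bound `le`, it counts every value less than or equal to that bound.

```python
# Illustration only (Python, not Flux): cumulative bin counts in the style
# of histogram() output -- each upper bound (le) counts all values <= le.
def cumulative_histogram(values, bins):
    """Pair each bin upper bound with the count of values <= that bound."""
    return [(le, sum(1 for v in values if v <= le)) for le in bins]

# Four made-up samples counted against the bins [0.0, 10.0, 20.0, 30.0]
print(cumulative_histogram([3.0, 7.5, 12.0, 28.0], [0.0, 10.0, 20.0, 30.0]))
# [(0.0, 0), (10.0, 2), (20.0, 3), (30.0, 4)] -- counts are cumulative
```

Note how each count includes everything from the smaller bins, which is why the final bin reaches the total number of samples.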
+ +### linearBins() +The [`linearBins()` function](/{{< latest "flux" >}}/stdlib/universe/linearbins) generates a list of linearly separated floats. + +```js +linearBins(start: 0.0, width: 10.0, count: 10) + +// Generated list: [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, +Inf] +``` + +### logarithmicBins() +The [`logarithmicBins()` function](/{{< latest "flux" >}}/stdlib/universe/logarithmicbins) generates a list of exponentially separated floats. + +```js +logarithmicBins(start: 1.0, factor: 2.0, count: 10, infinity: true) + +// Generated list: [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, +Inf] +``` + +## Histogram visualization +The [Histogram visualization type](/influxdb/v2.6/visualize-data/visualization-types/histogram/) +automatically converts query results into a binned and segmented histogram. + +{{< img-hd src="/img/influxdb/2-0-visualizations-histogram-example.png" alt="Histogram visualization" />}} + +Use the [Histogram visualization controls](/influxdb/v2.6/visualize-data/visualization-types/histogram/#histogram-controls) +to specify the number of bins and define groups in bins. + +### Histogram visualization data structure +Because the Histogram visualization uses visualization controls to create bins and groups, +**do not** structure query results as histogram data. + +{{% note %}} +Output of the [`histogram()` function](#histogram-function) is **not** compatible +with the Histogram visualization type. +View the example [below](#visualize-errors-by-severity).
+{{% /note %}} + +## Examples + +### Generate a histogram with linear bins +```js +from(bucket: "example-bucket") + |> range(start: -5m) + |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent") + |> histogram(bins: linearBins(start: 65.5, width: 0.5, count: 20, infinity: false)) +``` + +###### Output table +``` +Table: keys: [_start, _stop, _field, _measurement, host] + _start:time _stop:time _field:string _measurement:string host:string le:float _value:float +------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------ ---------------------------- ---------------------------- +2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 65.5 5 +2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 66 6 +2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 66.5 8 +2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 67 9 +2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 67.5 9 +2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 68 10 +2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 68.5 12 +2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 69 12 +2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 69.5 15 +2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 70 23 +2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 70.5 30 +2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem 
Scotts-MacBook-Pro.local 71 30 +2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 71.5 30 +2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 72 30 +2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 72.5 30 +2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 73 30 +2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 73.5 30 +2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 74 30 +2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 74.5 30 +2018-11-07T22:19:58.423358000Z 2018-11-07T22:24:58.423358000Z used_percent mem Scotts-MacBook-Pro.local 75 30 +``` + +### Generate a histogram with logarithmic bins +```js +from(bucket: "example-bucket") + |> range(start: -5m) + |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent") + |> histogram(bins: logarithmicBins(start: 0.5, factor: 2.0, count: 10, infinity: false)) +``` + +###### Output table +``` +Table: keys: [_start, _stop, _field, _measurement, host] + _start:time _stop:time _field:string _measurement:string host:string le:float _value:float +------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------ ---------------------------- ---------------------------- +2018-11-07T22:23:36.860664000Z 2018-11-07T22:28:36.860664000Z used_percent mem Scotts-MacBook-Pro.local 0.5 0 +2018-11-07T22:23:36.860664000Z 2018-11-07T22:28:36.860664000Z used_percent mem Scotts-MacBook-Pro.local 1 0 +2018-11-07T22:23:36.860664000Z 2018-11-07T22:28:36.860664000Z used_percent mem Scotts-MacBook-Pro.local 2 0 +2018-11-07T22:23:36.860664000Z 2018-11-07T22:28:36.860664000Z used_percent mem 
Scotts-MacBook-Pro.local 4 0 +2018-11-07T22:23:36.860664000Z 2018-11-07T22:28:36.860664000Z used_percent mem Scotts-MacBook-Pro.local 8 0 +2018-11-07T22:23:36.860664000Z 2018-11-07T22:28:36.860664000Z used_percent mem Scotts-MacBook-Pro.local 16 0 +2018-11-07T22:23:36.860664000Z 2018-11-07T22:28:36.860664000Z used_percent mem Scotts-MacBook-Pro.local 32 0 +2018-11-07T22:23:36.860664000Z 2018-11-07T22:28:36.860664000Z used_percent mem Scotts-MacBook-Pro.local 64 2 +2018-11-07T22:23:36.860664000Z 2018-11-07T22:28:36.860664000Z used_percent mem Scotts-MacBook-Pro.local 128 30 +2018-11-07T22:23:36.860664000Z 2018-11-07T22:28:36.860664000Z used_percent mem Scotts-MacBook-Pro.local 256 30 +``` + +### Visualize errors by severity +Use the [Telegraf Syslog plugin](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/syslog) +to collect error information from your system. +Query the `severity_code` field in the `syslog` measurement: + +```js +from(bucket: "example-bucket") + |> range(start: v.timeRangeStart, stop: v.timeRangeStop) + |> filter(fn: (r) => r._measurement == "syslog" and r._field == "severity_code") +``` + +In the Histogram visualization options, select `_time` as the **X Column** +and `severity` as the **Group By** option: + +{{< img-hd src="/img/influxdb/2-0-visualizations-histogram-errors.png" alt="Logs by severity histogram" />}} + +### Use Prometheus histograms in Flux + +_For information about working with Prometheus histograms in Flux, see +[Work with Prometheus histograms](/{{< latest "flux" >}}/prometheus/metric-types/histogram/)._ \ No newline at end of file diff --git a/content/influxdb/v2.6/query-data/flux/increase.md b/content/influxdb/v2.6/query-data/flux/increase.md new file mode 100644 index 000000000..3ada7455e --- /dev/null +++ b/content/influxdb/v2.6/query-data/flux/increase.md @@ -0,0 +1,57 @@ +--- +title: Calculate the increase +seotitle: Calculate the increase in Flux +list_title: Increase +description: > + Use `increase()` to 
track increases across multiple columns in a table. + This function is especially useful when tracking changes in counter values that + wrap over time or periodically reset. +weight: 210 +menu: + influxdb_2_6: + parent: Query with Flux + name: Increase +influxdb/v2.6/tags: [query, increase, counters] +related: + - /{{< latest "flux" >}}/stdlib/universe/increase/ +list_query_example: increase +--- + +Use [`increase()`](/{{< latest "flux" >}}/stdlib/universe/increase/) +to track increases across multiple columns in a table. +This function is especially useful when tracking changes in counter values that +wrap over time or periodically reset. + +```js +data + |> increase() +``` + +`increase()` returns a cumulative sum of **non-negative** differences between rows in a table. +For example: + +{{< flex >}} +{{% flex-content %}} +**Given the following input:** + +| _time | _value | +|:----- | ------:| +| 2020-01-01T00:01:00Z | 1 | +| 2020-01-01T00:02:00Z | 2 | +| 2020-01-01T00:03:00Z | 8 | +| 2020-01-01T00:04:00Z | 10 | +| 2020-01-01T00:05:00Z | 0 | +| 2020-01-01T00:06:00Z | 4 | +{{% /flex-content %}} +{{% flex-content %}} +**`increase()` returns:** + +| _time | _value | +|:----- | ------:| +| 2020-01-01T00:02:00Z | 1 | +| 2020-01-01T00:03:00Z | 7 | +| 2020-01-01T00:04:00Z | 9 | +| 2020-01-01T00:05:00Z | 9 | +| 2020-01-01T00:06:00Z | 13 | +{{% /flex-content %}} +{{< /flex >}} diff --git a/content/influxdb/v2.6/query-data/flux/join.md b/content/influxdb/v2.6/query-data/flux/join.md new file mode 100644 index 000000000..5b5c344ff --- /dev/null +++ b/content/influxdb/v2.6/query-data/flux/join.md @@ -0,0 +1,401 @@ +--- +title: Join data with Flux +seotitle: Join data in InfluxDB with Flux +list_title: Join +description: This guide walks through joining data with Flux and outlines how it shapes your data in the process. 
+influxdb/v2.6/tags: [join, flux] +menu: + influxdb_2_6: + name: Join + parent: Query with Flux +weight: 210 +aliases: + - /influxdb/v2.6/query-data/guides/join/ +related: + - /{{< latest "flux" >}}/join-data/ + - /{{< latest "flux" >}}/join-data/inner/ + - /{{< latest "flux" >}}/join-data/left-outer/ + - /{{< latest "flux" >}}/join-data/right-outer/ + - /{{< latest "flux" >}}/join-data/full-outer/ + - /{{< latest "flux" >}}/join-data/time/ + - /{{< latest "flux" >}}/stdlib/join/ +list_query_example: join-new +--- + +Use the Flux [`join` package](/{{< latest "flux" >}}/stdlib/join/) to join two data sets +based on common values using the following join methods: + +{{< flex >}} +{{< flex-content "quarter" >}} +

Inner join

+ {{< svg svg="static/svgs/join-diagram.svg" class="inner small center" >}} +{{< /flex-content >}} +{{< flex-content "quarter" >}} +

Left outer join

+ {{< svg svg="static/svgs/join-diagram.svg" class="left small center" >}} +{{< /flex-content >}} +{{< flex-content "quarter" >}} +

Right outer join

+ {{< svg svg="static/svgs/join-diagram.svg" class="right small center" >}} +{{< /flex-content >}} +{{< flex-content "quarter" >}} +

Full outer join

+ {{< svg svg="static/svgs/join-diagram.svg" class="full small center" >}} +{{< /flex-content >}} +{{< /flex >}} + +The join package lets you join data from different data sources such as +[InfluxDB](/{{< latest "flux" >}}/query-data/influxdb/), [SQL database](/{{< latest "flux" >}}/query-data/sql/), +[CSV](/{{< latest "flux" >}}/query-data/csv/), and [others](/{{< latest "flux" >}}/query-data/). + +## Use join functions to join your data + +{{< tabs-wrapper >}} +{{% tabs %}} +[Inner join](#) +[Left join](#) +[Right join](#) +[Full outer join](#) +[Join on time](#) +{{% /tabs %}} + + +{{% tab-content %}} + +1. Import the `join` package. +2. Define the **left** and **right** data streams to join: + + - Each stream must have one or more columns with common values. + Column labels do not need to match, but column values do. + - Each stream should have identical [group keys](/{{< latest "flux" >}}/get-started/data-model/#group-key). + + _For more information, see [join data requirements](/{{< latest "flux" >}}/join-data/#data-requirements)._ + +3. Use [`join.inner()`](/{{< latest "flux" >}}/stdlib/join/inner/) to join the two streams together. + Provide the following required parameters: + + - `left`: Stream of data representing the left side of the join. + - `right`: Stream of data representing the right side of the join. + - `on`: [Join predicate](/{{< latest "flux" >}}/join-data/#join-predicate-function-on). + For example: `(l, r) => l.column == r.column`. + - `as`: [Join output function](/{{< latest "flux" >}}/join-data/#join-output-function-as) + that returns a record with values from each input stream. + For example: `(l, r) => ({l with column1: r.column1, column2: r.column2})`. 
+ +```js +import "join" +import "sql" + +left = + from(bucket: "example-bucket-1") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "example-measurement") + |> filter(fn: (r) => r._field == "example-field") + +right = + sql.from( + driverName: "postgres", + dataSourceName: "postgresql://username:password@localhost:5432", + query: "SELECT * FROM example_table", + ) + +join.inner( + left: left, + right: right, + on: (l, r) => l.column == r.column, + as: (l, r) => ({l with name: r.name, location: r.location}), +) +``` + +For more information and detailed examples, see [Perform an inner join](/{{< latest "flux" >}}/join-data/inner/) +in the Flux documentation. + +{{% /tab-content %}} + + + +{{% tab-content %}} + +1. Import the `join` package. +2. Define the **left** and **right** data streams to join: + + - Each stream must have one or more columns with common values. + Column labels do not need to match, but column values do. + - Each stream should have identical [group keys](/{{< latest "flux" >}}/get-started/data-model/#group-key). + + _For more information, see [join data requirements](/{{< latest "flux" >}}/join-data/#data-requirements)._ + +3. Use [`join.left()`](/{{< latest "flux" >}}/stdlib/join/left/) to join the two streams together. + Provide the following required parameters: + + - `left`: Stream of data representing the left side of the join. + - `right`: Stream of data representing the right side of the join. + - `on`: [Join predicate](/{{< latest "flux" >}}/join-data/#join-predicate-function-on). + For example: `(l, r) => l.column == r.column`. + - `as`: [Join output function](/{{< latest "flux" >}}/join-data/#join-output-function-as) + that returns a record with values from each input stream. + For example: `(l, r) => ({l with column1: r.column1, column2: r.column2})`.
+ +```js +import "join" +import "sql" + +left = + from(bucket: "example-bucket-1") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "example-measurement") + |> filter(fn: (r) => r._field == "example-field") + +right = + sql.from( + driverName: "postgres", + dataSourceName: "postgresql://username:password@localhost:5432", + query: "SELECT * FROM example_table", + ) + +join.left( + left: left, + right: right, + on: (l, r) => l.column == r.column, + as: (l, r) => ({l with name: r.name, location: r.location}), +) +``` + +For more information and detailed examples, see [Perform a left outer join](/{{< latest "flux" >}}/join-data/left-outer/) +in the Flux documentation. + +{{% /tab-content %}} + + + +{{% tab-content %}} + +1. Import the `join` package. +2. Define the **left** and **right** data streams to join: + + - Each stream must have one or more columns with common values. + Column labels do not need to match, but column values do. + - Each stream should have identical [group keys](/{{< latest "flux" >}}/get-started/data-model/#group-key). + + _For more information, see [join data requirements](/{{< latest "flux" >}}/join-data/#data-requirements)._ + +3. Use [`join.right()`](/{{< latest "flux" >}}/stdlib/join/right/) to join the two streams together. + Provide the following required parameters: + + - `left`: Stream of data representing the left side of the join. + - `right`: Stream of data representing the right side of the join. + - `on`: [Join predicate](/{{< latest "flux" >}}/join-data/#join-predicate-function-on). + For example: `(l, r) => l.column == r.column`. + - `as`: [Join output function](/{{< latest "flux" >}}/join-data/#join-output-function-as) + that returns a record with values from each input stream. + For example: `(l, r) => ({l with column1: r.column1, column2: r.column2})`.
+ +```js +import "join" +import "sql" + +left = + from(bucket: "example-bucket-1") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "example-measurement") + |> filter(fn: (r) => r._field == "example-field") + +right = + sql.from( + driverName: "postgres", + dataSourceName: "postgresql://username:password@localhost:5432", + query: "SELECT * FROM example_table", + ) + +join.right( + left: left, + right: right, + on: (l, r) => l.column == r.column, + as: (l, r) => ({l with name: r.name, location: r.location}), +) +``` + +For more information and detailed examples, see [Perform a right outer join](/{{< latest "flux" >}}/join-data/right-outer/) +in the Flux documentation. + +{{% /tab-content %}} + + + +{{% tab-content %}} +1. Import the `join` package. +2. Define the **left** and **right** data streams to join: + + - Each stream must have one or more columns with common values. + Column labels do not need to match, but column values do. + - Each stream should have identical [group keys](/{{< latest "flux" >}}/get-started/data-model/#group-key). + + _For more information, see [join data requirements](/{{< latest "flux" >}}/join-data/#data-requirements)._ + +3. Use [`join.full()`](/{{< latest "flux" >}}/stdlib/join/full/) to join the two streams together. + Provide the following required parameters: + + - `left`: Stream of data representing the left side of the join. + - `right`: Stream of data representing the right side of the join. + - `on`: [Join predicate](/{{< latest "flux" >}}/join-data/#join-predicate-function-on). + For example: `(l, r) => l.column == r.column`. + - `as`: [Join output function](/{{< latest "flux" >}}/join-data/#join-output-function-as) + that returns a record with values from each input stream. + For example: `(l, r) => ({l with column1: r.column1, column2: r.column2})`. + +{{% note %}} +Full outer joins must account for non-group-key columns in both `l` and `r` +records being null.
Use conditional logic to check which record contains non-null +values for columns not in the group key. +For more information, see [Account for missing, non-group-key values](/{{< latest "flux" >}}/join-data/full-outer/#account-for-missing-non-group-key-values). +{{% /note %}} + +```js +import "join" +import "sql" + +left = + from(bucket: "example-bucket-1") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "example-measurement") + |> filter(fn: (r) => r._field == "example-field") + +right = + sql.from( + driverName: "postgres", + dataSourceName: "postgresql://username:password@localhost:5432", + query: "SELECT * FROM example_table", + ) + +join.full( + left: left, + right: right, + on: (l, r) => l.id == r.id, + as: (l, r) => { + id = if exists l.id then l.id else r.id + + return {name: l.name, location: r.location, id: id} + }, +) +``` + +For more information and detailed examples, see [Perform a full outer join](/{{< latest "flux" >}}/join-data/full-outer/) +in the Flux documentation. + +{{% /tab-content %}} + + + +{{% tab-content %}} + +1. Import the `join` package. +2. Define the **left** and **right** data streams to join: + + - Each stream must also have a `_time` column. + - Each stream must have one or more columns with common values. + Column labels do not need to match, but column values do. + - Each stream should have identical [group keys](/{{< latest "flux" >}}/get-started/data-model/#group-key). + + _For more information, see [join data requirements](/{{< latest "flux" >}}/join-data/#data-requirements)._ + +3. Use [`join.time()`](/{{< latest "flux" >}}/stdlib/join/time/) to join the two streams + together based on time values. + Provide the following parameters: + + - `left`: ({{< req >}}) Stream of data representing the left side of the join. + - `right`: ({{< req >}}) Stream of data representing the right side of the join.
+ - `as`: ({{< req >}}) [Join output function](/{{< latest "flux" >}}/join-data/#join-output-function-as) + that returns a record with values from each input stream. + For example: `(l, r) => ({r with column1: l.column1, column2: l.column2})`. + - `method`: Join method to use. Default is `inner`. + +```js +import "join" + +left = + from(bucket: "example-bucket-1") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "example-m1") + |> filter(fn: (r) => r._field == "example-f1") + +right = + from(bucket: "example-bucket-2") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "example-m2") + |> filter(fn: (r) => r._field == "example-f2") + +join.time(method: "left", left: left, right: right, as: (l, r) => ({l with f2: r._value})) +``` + +For more information and detailed examples, see [Join on time](/{{< latest "flux" >}}/join-data/time/) +in the Flux documentation. + +{{% /tab-content %}} + +{{< /tabs-wrapper >}} + +--- + +## When to use union and pivot instead of join functions + +We recommend using the `join` package to join streams that have mostly different +schemas or that come from two separate data sources. +If you're joining two datasets queried from InfluxDB, using +[`union()`](/{{< latest "flux" >}}/stdlib/universe/union/) and [`pivot()`](/{{< latest "flux" >}}/stdlib/universe/pivot/) +to combine the data will likely be more performant.
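As a rough mental model of that union-and-pivot approach, the following Python sketch is illustrative only (not Flux, and the row values are made up): "union" concatenates the two field streams, and "pivot" spreads each `_field` value into its own column keyed on `_time`.

```python
# Illustration only (Python, not Flux): a toy model of union() followed by
# pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value").
def union(*streams):
    # Concatenate all rows from every input stream.
    return [row for stream in streams for row in stream]

def pivot(rows, row_key="_time", column_key="_field", value_column="_value"):
    # Group rows by the row key, spreading column-key values into columns.
    out = {}
    for r in rows:
        out.setdefault(r[row_key], {})[r[column_key]] = r[value_column]
    return out

f1 = [{"_time": "00:01", "_field": "f1", "_value": 1},
      {"_time": "00:02", "_field": "f1", "_value": 2}]
f2 = [{"_time": "00:01", "_field": "f2", "_value": 5},
      {"_time": "00:02", "_field": "f2", "_value": 12}]

print(pivot(union(f1, f2)))
# {'00:01': {'f1': 1, 'f2': 5}, '00:02': {'f1': 2, 'f2': 12}}
```

Each output row ends up holding both field values for the same timestamp, which is exactly the shape mathematical operations across fields require.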
+ +For example, if you need to query fields from different InfluxDB buckets and align +field values in each row based on time: + +```js +f1 = + from(bucket: "example-bucket-1") + |> range(start: -1h) + |> filter(fn: (r) => r._field == "f1") + |> drop(columns: ["_measurement"]) + +f2 = + from(bucket: "example-bucket-2") + |> range(start: -1h) + |> filter(fn: (r) => r._field == "f2") + |> drop(columns: ["_measurement"]) + +union(tables: [f1, f2]) + |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value") +``` +{{< expand-wrapper >}} +{{% expand "View example input and output data" %}} + +#### Input +{{< flex >}} +{{% flex-content %}} +##### f1 +| _time | _field | _value | +| :------------------- | :----- | -----: | +| 2020-01-01T00:01:00Z | f1 | 1 | +| 2020-01-01T00:02:00Z | f1 | 2 | +| 2020-01-01T00:03:00Z | f1 | 1 | +| 2020-01-01T00:04:00Z | f1 | 3 | +{{% /flex-content %}} +{{% flex-content %}} +##### f2 +| _time | _field | _value | +| :------------------- | :----- | -----: | +| 2020-01-01T00:01:00Z | f2 | 5 | +| 2020-01-01T00:02:00Z | f2 | 12 | +| 2020-01-01T00:03:00Z | f2 | 8 | +| 2020-01-01T00:04:00Z | f2 | 6 | +{{% /flex-content %}} +{{< /flex >}} + +#### Output +| _time | f1 | f2 | +| :------------------- | --: | --: | +| 2020-01-01T00:01:00Z | 1 | 5 | +| 2020-01-01T00:02:00Z | 2 | 12 | +| 2020-01-01T00:03:00Z | 1 | 8 | +| 2020-01-01T00:04:00Z | 3 | 6 | + +{{% /expand %}} +{{< /expand-wrapper >}} diff --git a/content/influxdb/v2.6/query-data/flux/mathematic-operations.md b/content/influxdb/v2.6/query-data/flux/mathematic-operations.md new file mode 100644 index 000000000..612e9b405 --- /dev/null +++ b/content/influxdb/v2.6/query-data/flux/mathematic-operations.md @@ -0,0 +1,208 @@ +--- +title: Transform data with mathematic operations +seotitle: Transform data with mathematic operations in Flux +list_title: Transform data with math +description: > + Use `map()` to remap column values and apply mathematic operations.
+influxdb/v2.6/tags: [math, flux] +menu: + influxdb_2_6: + name: Transform data with math + parent: Query with Flux +weight: 208 +aliases: + - /influxdb/v2.6/query-data/guides/mathematic-operations/ +related: + - /{{< latest "flux" >}}/stdlib/universe/map + - /{{< latest "flux" >}}/stdlib/universe/aggregates/reduce/ + - /{{< latest "flux" >}}/language/operators/ + - /{{< latest "flux" >}}/function-types/#type-conversions, Flux type-conversion functions + - /influxdb/v2.6/query-data/flux/calculate-percentages/ +list_query_example: map_math +--- + +[Flux](/{{< latest "flux" >}}/), InfluxData's data scripting and query language, +supports mathematic expressions in data transformations. +This article describes how to use [Flux arithmetic operators](/{{< latest "flux" >}}/spec/operators/#arithmetic-operators) +to "map" over data and transform values using mathematic operations. + +If you're just getting started with Flux queries, check out the following: + +- [Get started with Flux](/{{< latest "flux" >}}/get-started/) for a conceptual overview of Flux and parts of a Flux query. +- [Execute queries](/influxdb/v2.6/query-data/execute-queries/) to discover a variety of ways to run your queries. + +##### Basic mathematic operations +```js +// Examples executed using the Flux REPL +> 9 + 9 +18 +> 22 - 14 +8 +> 6 * 5 +30 +> 21 / 7 +3 +``` + +

See Flux Read-Eval-Print Loop (REPL).

+ +{{% note %}} +#### Operands must be the same type +Operands in Flux mathematic operations must be the same data type. +For example, integers cannot be used in operations with floats. +Otherwise, you will get an error similar to: + +``` +Error: type error: float != int +``` + +To convert operands to the same type, use [type-conversion functions](/{{< latest "flux" >}}/stdlib/universe/) +or manually format operands. +The operand data type determines the output data type. +For example: + +```js +100 // Parsed as an integer +100.0 // Parsed as a float + +// Example evaluations +> 20 / 8 +2 + +> 20.0 / 8.0 +2.5 +``` +{{% /note %}} + +## Custom mathematic functions +Flux lets you [create custom functions](/influxdb/v2.6/query-data/flux/custom-functions) that use mathematic operations. +View the examples below. + +###### Custom multiplication function +```js +multiply = (x, y) => x * y + +multiply(x: 10, y: 12) +// Returns 120 +``` + +###### Custom percentage function +```js +percent = (sample, total) => (sample / total) * 100.0 + +percent(sample: 20.0, total: 80.0) +// Returns 25.0 +``` + +### Transform values in a data stream +To transform multiple values in an input stream, your function needs to: + +- [Handle piped-forward data](/influxdb/v2.6/query-data/flux/custom-functions/#use-piped-forward-data-in-a-custom-function). +- Ensure each operand necessary for the calculation exists in each row _(see [Pivot vs join](#pivot-vs-join) below)_. +- Use the [`map()` function](/{{< latest "flux" >}}/stdlib/universe/map) to iterate over each row. + +The example `multiplyByX()` function below includes: + +- A `tables` parameter that represents the input data stream (`<-`). +- An `x` parameter, which is the number by which values in the `_value` column are multiplied. +- A `map()` function that iterates over each row in the input stream. + It uses the `with` operator to preserve existing columns in each row. + It also multiplies the `_value` column by `x`.
+
+```js
+multiplyByX = (x, tables=<-) => tables
+    |> map(fn: (r) => ({r with _value: r._value * x}))
+
+data
+    |> multiplyByX(x: 10)
+```
+
+## Examples
+
+### Convert bytes to gigabytes
+To convert active memory from bytes to gigabytes (GB), divide the `active` field
+in the `mem` measurement by 1,073,741,824.
+
+The `map()` function iterates over each row in the piped-forward data and defines
+a new `_value` by dividing the original `_value` by 1073741824.
+
+```js
+from(bucket: "example-bucket")
+    |> range(start: -10m)
+    |> filter(fn: (r) => r._measurement == "mem" and r._field == "active")
+    |> map(fn: (r) => ({r with _value: r._value / 1073741824}))
+```
+
+You could turn that same calculation into a function:
+
+```js
+bytesToGB = (tables=<-) => tables
+    |> map(fn: (r) => ({r with _value: r._value / 1073741824}))
+
+data
+    |> bytesToGB()
+```
+
+#### Include partial gigabytes
+Because the original metric (bytes) is an integer, the output of the operation is an integer and does not include partial GBs.
+To calculate partial GBs, convert the `_value` column and its values to floats using the
+[`float()` function](/{{< latest "flux" >}}/stdlib/universe/float)
+and format the denominator in the division operation as a float.
+
+```js
+bytesToGB = (tables=<-) => tables
+    |> map(fn: (r) => ({r with _value: float(v: r._value) / 1073741824.0}))
+```
+
+### Calculate a percentage
+To calculate a percentage, use simple division, then multiply the result by 100.
+
+```js
+> 1.0 / 4.0 * 100.0
+25.0
+```
+
+_For an in-depth look at calculating percentages, see [Calculate percentages](/influxdb/v2.6/query-data/flux/calculate-percentages)._
+
+## Pivot vs join
+To query and use values in mathematical operations in Flux, operand values must
+exist in a single row.
+Both `pivot()` and `join()` will do this, but there are important differences between the two:
+
+#### Pivot is more performant
+`pivot()` reads and operates on a single stream of data.
+`join()` requires two streams of data and the overhead of reading and combining +both streams can be significant, especially for larger data sets. + +#### Use join for multiple data sources +Use `join()` when querying data from different buckets or data sources. + +##### Pivot fields into columns for mathematic calculations +```js +data + |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value") + |> map(fn: (r) => ({r with _value: (r.field1 + r.field2) / r.field3 * 100.0})) +``` + +##### Join multiple data sources for mathematic calculations +```js +import "sql" +import "influxdata/influxdb/secrets" + +pgUser = secrets.get(key: "POSTGRES_USER") +pgPass = secrets.get(key: "POSTGRES_PASSWORD") +pgHost = secrets.get(key: "POSTGRES_HOST") + +t1 = sql.from( + driverName: "postgres", + dataSourceName: "postgresql://${pgUser}:${pgPass}@${pgHost}", + query: "SELECT id, name, available FROM example_table", +) + +t2 = from(bucket: "example-bucket") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "example-measurement" and r._field == "example-field") + +join(tables: {t1: t1, t2: t2}, on: ["id"]) + |> map(fn: (r) => ({r with _value: r._value_t2 / r.available_t1 * 100.0})) +``` diff --git a/content/influxdb/v2.6/query-data/flux/median.md b/content/influxdb/v2.6/query-data/flux/median.md new file mode 100644 index 000000000..3ede7c3c5 --- /dev/null +++ b/content/influxdb/v2.6/query-data/flux/median.md @@ -0,0 +1,149 @@ +--- +title: Find median values +seotitle: Find median values in Flux +list_title: Median +description: > + Use `median()` to return a value representing the `0.5` quantile (50th percentile) or median of input data. 
+weight: 210 +menu: + influxdb_2_6: + parent: Query with Flux + name: Median +influxdb/v2.6/tags: [query, median] +related: + - /influxdb/v2.6/query-data/flux/percentile-quantile/ + - /{{< latest "flux" >}}/stdlib/universe/median/ + - /{{< latest "flux" >}}/stdlib/universe/quantile/ +list_query_example: median +--- + +Use the [`median()` function](/{{< latest "flux" >}}/stdlib/universe/median/) +to return a value representing the `0.5` quantile (50th percentile) or median of input data. + +## Select a method for calculating the median +Select one of the following methods to calculate the median: + +- [estimate_tdigest](#estimate_tdigest) +- [exact_mean](#exact_mean) +- [exact_selector](#exact_selector) + +### estimate_tdigest +**(Default)** An aggregate method that uses a [t-digest data structure](https://github.com/tdunning/t-digest) +to compute an accurate `0.5` quantile estimate on large data sources. +Output tables consist of a single row containing the calculated median. + +{{< flex >}} +{{% flex-content %}} +**Given the following input table:** + +| _time | _value | +|:----- | ------:| +| 2020-01-01T00:01:00Z | 1.0 | +| 2020-01-01T00:02:00Z | 1.0 | +| 2020-01-01T00:03:00Z | 2.0 | +| 2020-01-01T00:04:00Z | 3.0 | +{{% /flex-content %}} +{{% flex-content %}} +**`estimate_tdigest` returns:** + +| _value | +|:------:| +| 1.5 | +{{% /flex-content %}} +{{< /flex >}} + +### exact_mean +An aggregate method that takes the average of the two points closest to the `0.5` quantile value. +Output tables consist of a single row containing the calculated median. 
+
+{{< flex >}}
+{{% flex-content %}}
+**Given the following input table:**
+
+| _time | _value |
+|:----- | ------:|
+| 2020-01-01T00:01:00Z | 1.0 |
+| 2020-01-01T00:02:00Z | 1.0 |
+| 2020-01-01T00:03:00Z | 2.0 |
+| 2020-01-01T00:04:00Z | 3.0 |
+{{% /flex-content %}}
+{{% flex-content %}}
+**`exact_mean` returns:**
+
+| _value |
+|:------:|
+| 1.5 |
+{{% /flex-content %}}
+{{< /flex >}}
+
+### exact_selector
+A selector method that returns the data point for which at least 50% of points are less than or equal to it.
+Output tables consist of a single row containing the calculated median.
+
+{{< flex >}}
+{{% flex-content %}}
+**Given the following input table:**
+
+| _time | _value |
+|:----- | ------:|
+| 2020-01-01T00:01:00Z | 1.0 |
+| 2020-01-01T00:02:00Z | 1.0 |
+| 2020-01-01T00:03:00Z | 2.0 |
+| 2020-01-01T00:04:00Z | 3.0 |
+{{% /flex-content %}}
+{{% flex-content %}}
+**`exact_selector` returns:**
+
+| _time | _value |
+|:----- | ------:|
+| 2020-01-01T00:02:00Z | 1.0 |
+{{% /flex-content %}}
+{{< /flex >}}
+
+{{% note %}}
+The examples below use the [example data variable](/influxdb/v2.6/query-data/flux/#example-data-variable).
+{{% /note %}}
+
+## Find the value that represents the median
+Use the default method, `"estimate_tdigest"`, to return a single row per input
+table containing an estimate of the median (`0.5` quantile) of data in the table.
+
+```js
+data
+    |> median()
+```
+
+## Find the average of values closest to the median
+Use the `exact_mean` method to return a single row per input table containing the
+average of the two values closest to the mathematical median of data in the table.
+
+```js
+data
+    |> median(method: "exact_mean")
+```
+
+## Find the point with the median value
+Use the `exact_selector` method to return a single row per input table containing the
+value that at least 50% of values in the table are less than or equal to.
+ +```js +data + |> median(method: "exact_selector") +``` + +## Use median() with aggregateWindow() +[`aggregateWindow()`](/{{< latest "flux" >}}/stdlib/universe/aggregatewindow/) +segments data into windows of time, aggregates data in each window into a single +point, and then removes the time-based segmentation. +It is primarily used to [downsample data](/influxdb/v2.6/process-data/common-tasks/downsample-data/). + +To specify the [median calculation method](#select-a-method-for-calculating-the-median) in `aggregateWindow()`, use the +[full function syntax](/{{< latest "flux" >}}/stdlib/universe/aggregatewindow/#specify-parameters-of-the-aggregate-function): + +```js +data + |> aggregateWindow( + every: 5m, + fn: (tables=<-, column) => tables |> median(method: "exact_selector"), + ) +``` diff --git a/content/influxdb/v2.6/query-data/flux/monitor-states.md b/content/influxdb/v2.6/query-data/flux/monitor-states.md new file mode 100644 index 000000000..44f350676 --- /dev/null +++ b/content/influxdb/v2.6/query-data/flux/monitor-states.md @@ -0,0 +1,189 @@ +--- +title: Monitor states +seotitle: Monitor states and state changes in your events and metrics with Flux. +description: Flux provides several functions to help monitor states and state changes in your data. +influxdb/v2.6/tags: [states, monitor, flux] +menu: + influxdb_2_6: + name: Monitor states + parent: Query with Flux +weight: 220 +aliases: + - /influxdb/v2.6/query-data/guides/monitor-states/ +related: + - /{{< latest "flux" >}}/stdlib/universe/stateduration/ + - /{{< latest "flux" >}}/stdlib/universe/statecount/ +--- + +Flux helps you monitor states in your metrics and events: + +- [Find how long a state persists](#find-how-long-a-state-persists) +- [Count the number of consecutive states](#count-the-number-of-consecutive-states) + + +If you're just getting started with Flux queries, check out the following: + +- [Get started with Flux](/{{< latest "flux" >}}/get-started/) for a conceptual overview of Flux. 
+- [Execute queries](/influxdb/v2.6/query-data/execute-queries/) to discover a variety of ways to run your queries.
+
+## Find how long a state persists
+
+Use [`stateDuration()`](/{{< latest "flux" >}}/stdlib/universe/stateduration/)
+to calculate the duration of consecutive rows with a specified state.
+For each consecutive point that matches the specified state, `stateDuration()`
+increments and stores the duration (in the specified unit) in a user-defined column.
+
+Include the following information:
+
+- **Column to search:** any tag key, tag value, field key, field value, or measurement.
+- **Value:** the value (or state) to search for in the specified column.
+- **State duration column:** a new column to store the state duration, the length of time that the specified value persists.
+- **Unit:** the unit of time (`1s` by default, or `1m`, `1h`, etc.) used to increment the state duration.
+
+```js
+data
+    |> stateDuration(fn: (r) => r.column_to_search == "value_to_search_for", column: "state_duration", unit: 1s)
+```
+
+- For the first point that evaluates `true`, the state duration is set to `0`.
+  For each consecutive point that evaluates `true`, the state duration
+  increases by the time interval between each consecutive point (in specified units).
+- If the state is `false`, the state duration is reset to `-1`.
+
+### Example query with stateDuration()
+
+The following query searches the `doors` bucket over the past 5 minutes to find how many seconds a door has been `closed`.
+
+```js
+from(bucket: "doors")
+    |> range(start: -5m)
+    |> stateDuration(fn: (r) => r._value == "closed", column: "door_closed", unit: 1s)
+```
+
+In this example, `door_closed` is the **State duration** column.
+If you write data to the `doors` bucket every minute, the state duration
+increases by `60s` for each consecutive point where `_value` is `closed`.
+If `_value` is not `closed`, the state duration is reset to `-1`.
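To make the increment-and-reset behavior concrete, here is a minimal Python sketch of the same logic (illustration only, not InfluxDB client code; the timestamps are chosen so the elapsed times are exact):

```python
from datetime import datetime, timedelta

def state_duration(rows, predicate, unit=timedelta(seconds=1)):
    """Mimic the stateDuration() behavior described above.

    rows is a list of (time, value) tuples. Returns the state-duration
    column: 0 at the first matching point, increasing by the elapsed
    time (in units) while the state holds, and -1 when it does not.
    """
    durations = []
    state_start = None
    for t, value in rows:
        if predicate(value):
            if state_start is None:
                state_start = t  # first point of a new run of the state
            durations.append(int((t - state_start) / unit))
        else:
            state_start = None  # state broken: reset
            durations.append(-1)
    return durations

rows = [
    (datetime(2019, 10, 26, 17, 39, 16), "closed"),
    (datetime(2019, 10, 26, 17, 40, 16), "closed"),
    (datetime(2019, 10, 26, 17, 41, 16), "closed"),
    (datetime(2019, 10, 26, 17, 42, 16), "open"),
    (datetime(2019, 10, 26, 17, 43, 16), "closed"),
    (datetime(2019, 10, 26, 17, 44, 16), "closed"),
]

print(state_duration(rows, lambda v: v == "closed"))
# [0, 60, 120, -1, 0, 60]
```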
+
+#### Query results
+
+Results for the example query above may look like this (for simplicity, we've omitted the measurement, tag, and field columns):
+
+| _time | _value | door_closed |
+| :------------------- | :----: | ----------: |
+| 2019-10-26T17:39:16Z | closed | 0 |
+| 2019-10-26T17:40:16Z | closed | 60 |
+| 2019-10-26T17:41:16Z | closed | 120 |
+| 2019-10-26T17:42:16Z | open | -1 |
+| 2019-10-26T17:43:16Z | closed | 0 |
+| 2019-10-26T17:44:27Z | closed | 60 |
+
+## Count the number of consecutive states
+
+Use the [`stateCount()` function](/{{< latest "flux" >}}/stdlib/universe/statecount/)
+and include the following information:
+
+- **Column to search:** any tag key, tag value, field key, field value, or measurement.
+- **Value:** the value to search for in the specified column.
+- **State count column:** a new column to store the state count, the number of
+  consecutive records in which the specified value exists.
+
+```js
+|> stateCount(
+    fn: (r) => r.column_to_search == "value_to_search_for",
+    column: "state_count",
+)
+```
+
+- For the first point that evaluates `true`, the state count is set to `1`. For each consecutive point that evaluates `true`, the state count increases by 1.
+- If the state is `false`, the state count is reset to `-1`.
+
+### Example query with stateCount()
+
+The following query searches the `doors` bucket over the past 5 minutes and calculates how many points have `closed` as their `_value`.
+
+```js
+from(bucket: "doors")
+    |> range(start: -5m)
+    |> stateCount(fn: (r) => r._value == "closed", column: "door_closed")
+```
+
+This example stores the **state count** in the `door_closed` column. If you write data to the `doors` bucket every minute, the state count increases by `1` for each consecutive point where `_value` is `closed`. If `_value` is not `closed`, the state count is reset to `-1`.
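The counting rule above is simple enough to sketch outside of Flux; this Python illustration (not InfluxDB client code) reproduces the count-and-reset behavior:

```python
def state_count(values, predicate):
    """Mimic the stateCount() behavior described above.

    Returns 1 for the first matching record, increments for each
    consecutive match, and resets to -1 when the predicate is false.
    """
    counts = []
    run = 0
    for v in values:
        if predicate(v):
            run += 1
            counts.append(run)
        else:
            run = 0  # state broken: reset the run
            counts.append(-1)
    return counts

doors = ["closed", "closed", "closed", "open", "closed", "closed"]
print(state_count(doors, lambda v: v == "closed"))
# [1, 2, 3, -1, 1, 2]
```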
+ +#### Query results + +Results for the example query above may look like this (for simplicity, we've omitted the measurement, tag, and field columns): + +| _time | _value | door_closed | +| :------------------- | :----: | ----------: | +| 2019-10-26T17:39:16Z | closed | 1 | +| 2019-10-26T17:40:16Z | closed | 2 | +| 2019-10-26T17:41:16Z | closed | 3 | +| 2019-10-26T17:42:16Z | open | -1 | +| 2019-10-26T17:43:16Z | closed | 1 | +| 2019-10-26T17:44:27Z | closed | 2 | + +#### Example query to count machine state + +The following query checks the machine state every minute (idle, assigned, or busy). InfluxDB searches the `servers` bucket over the past hour and counts records with a machine state of `idle`, `assigned` or `busy`. + +```js +from(bucket: "servers") + |> range(start: -1h) + |> filter(fn: (r) => r.machine_state == "idle" or r.machine_state == "assigned" or r.machine_state == "busy") + |> stateCount(fn: (r) => r.machine_state == "busy", column: "_count") + |> stateCount(fn: (r) => r.machine_state == "assigned", column: "_count") + |> stateCount(fn: (r) => r.machine_state == "idle", column: "_count") +``` + + + + diff --git a/content/influxdb/v2.6/query-data/flux/moving-average.md b/content/influxdb/v2.6/query-data/flux/moving-average.md new file mode 100644 index 000000000..87b09ab29 --- /dev/null +++ b/content/influxdb/v2.6/query-data/flux/moving-average.md @@ -0,0 +1,118 @@ +--- +title: Calculate the moving average +seotitle: Calculate the moving average in Flux +list_title: Moving Average +description: > + Use `movingAverage()` or `timedMovingAverage()` to return the moving average of data. 
+weight: 210 +menu: + influxdb_2_6: + parent: Query with Flux + name: Moving Average +influxdb/v2.6/tags: [query, moving average] +related: + - /{{< latest "flux" >}}/stdlib/universe/movingaverage/ + - /{{< latest "flux" >}}/stdlib/universe/timedmovingaverage/ +list_query_example: moving_average +--- + +Use [`movingAverage()`](/{{< latest "flux" >}}/stdlib/universe/movingaverage/) +or [`timedMovingAverage()`](/{{< latest "flux" >}}/stdlib/universe/timedmovingaverage/) +to return the moving average of data. + +```js +data + |> movingAverage(n: 5) + +// OR + +data + |> timedMovingAverage(every: 5m, period: 10m) +``` + +### movingAverage() +For each row in a table, `movingAverage()` returns the average of the current value and +**previous** values where `n` is the total number of values used to calculate the average. + +If `n = 3`: + +| Row # | Calculation | +|:-----:|:----------- | +| 1 | _Insufficient number of rows_ | +| 2 | _Insufficient number of rows_ | +| 3 | (Row1 + Row2 + Row3) / 3 | +| 4 | (Row2 + Row3 + Row4) / 3 | +| 5 | (Row3 + Row4 + Row5) / 3 | + +{{< flex >}} +{{% flex-content %}} +**Given the following input:** + +| _time | _value | +|:----- | ------:| +| 2020-01-01T00:01:00Z | 1.0 | +| 2020-01-01T00:02:00Z | 1.2 | +| 2020-01-01T00:03:00Z | 1.8 | +| 2020-01-01T00:04:00Z | 0.9 | +| 2020-01-01T00:05:00Z | 1.4 | +| 2020-01-01T00:06:00Z | 2.0 | +{{% /flex-content %}} +{{% flex-content %}} +**The following would return:** + +```js +|> movingAverage(n: 3) +``` + +| _time | _value | +|:----- | ------:| +| 2020-01-01T00:03:00Z | 1.33 | +| 2020-01-01T00:04:00Z | 1.30 | +| 2020-01-01T00:05:00Z | 1.36 | +| 2020-01-01T00:06:00Z | 1.43 | +{{% /flex-content %}} +{{< /flex >}} + +### timedMovingAverage() +For each row in a table, `timedMovingAverage()` returns the average of the +current value and all row values in the **previous** `period` (duration). +It returns moving averages at a frequency defined by the `every` parameter. 
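Before looking at the time-based variant in detail, the n-point window arithmetic of `movingAverage()` above can be sketched in Python (illustration only; the input matches the `n = 3` table above, whose `1.36` is a truncation of 1.3666…):

```python
def moving_average(values, n):
    """Return n-point moving averages.

    The first n-1 rows produce no output because there aren't
    enough prior values to fill the window.
    """
    return [sum(values[i - n + 1 : i + 1]) / n for i in range(n - 1, len(values))]

values = [1.0, 1.2, 1.8, 0.9, 1.4, 2.0]
print([round(v, 2) for v in moving_average(values, n=3)])
# [1.33, 1.3, 1.37, 1.43]
```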
+ +Each color in the diagram below represents a period of time used to calculate an +average and the time a point representing the average is returned. +If `every = 30m` and `period = 1h`: + +{{< svg "/static/svgs/timed-moving-avg.svg" >}} + +{{< flex >}} +{{% flex-content %}} +**Given the following input:** + +| _time | _value | +|:----- | ------:| +| 2020-01-01T00:00:00Z | 1.0 | +| 2020-01-01T00:30:00Z | 1.2 | +| 2020-01-01T01:00:00Z | 1.8 | +| 2020-01-01T01:30:00Z | 0.9 | +| 2020-01-01T02:00:00Z | 1.4 | +| 2020-01-01T02:30:00Z | 2.0 | +| 2020-01-01T03:00:00Z | 1.9 | +{{% /flex-content %}} +{{% flex-content %}} +**The following would return:** + +```js +|> timedMovingAverage(every: 30m, period: 1h) +``` + +| _time | _value | +| :------------------- | -----: | +| 2020-01-01T00:30:00Z | 1.0 | +| 2020-01-01T01:00:00Z | 1.1 | +| 2020-01-01T01:30:00Z | 1.5 | +| 2020-01-01T02:00:00Z | 1.35 | +| 2020-01-01T02:30:00Z | 1.15 | +| 2020-01-01T03:00:00Z | 1.7 | +| 2020-01-01T03:00:00Z | 2 | +{{% /flex-content %}} +{{< /flex >}} diff --git a/content/influxdb/v2.6/query-data/flux/operate-on-timestamps.md b/content/influxdb/v2.6/query-data/flux/operate-on-timestamps.md new file mode 100644 index 000000000..43c3d9c81 --- /dev/null +++ b/content/influxdb/v2.6/query-data/flux/operate-on-timestamps.md @@ -0,0 +1,203 @@ +--- +title: Operate on timestamps with Flux +list_title: Operate on timestamps +description: > + Use Flux to process and operate on timestamps. 
+menu: + influxdb_2_6: + name: Operate on timestamps + parent: Query with Flux +weight: 220 +aliases: + - /influxdb/v2.6/query-data/guides/manipulate-timestamps/ + - /influxdb/v2.6/query-data/flux/manipulate-timestamps/ +related: + - /{{< latest "flux" >}}/stdlib/universe/now/ + - /{{< latest "flux" >}}/stdlib/system/time/ + - /{{< latest "flux" >}}/stdlib/universe/time/ + - /{{< latest "flux" >}}/stdlib/universe/uint/ + - /{{< latest "flux" >}}/stdlib/universe/int/ + - /{{< latest "flux" >}}/stdlib/universe/truncatetimecolumn/ + - /{{< latest "flux" >}}/stdlib/date/truncate/ + - /{{< latest "flux" >}}/stdlib/date/add/ + - /{{< latest "flux" >}}/stdlib/date/sub/ +--- + +Every point stored in InfluxDB has an associated timestamp. +Use Flux to process and operate on timestamps to suit your needs. + +- [Convert timestamp format](#convert-timestamp-format) +- [Calculate the duration between two timestamps](#calculate-the-duration-between-two-timestamps) +- [Retrieve the current time](#retrieve-the-current-time) +- [Normalize irregular timestamps](#normalize-irregular-timestamps) +- [Use timestamps and durations together](#use-timestamps-and-durations-together) + +{{% note %}} +If you're just getting started with Flux queries, check out the following: + +- [Get started with Flux](/{{< latest "flux" >}}/get-started/) for a conceptual overview of Flux and parts of a Flux query. +- [Execute queries](/influxdb/v2.6/query-data/execute-queries/) to discover a variety of ways to run your queries. +{{% /note %}} + + +## Convert timestamp format + +- [Unix nanosecond to RFC3339](#unix-nanosecond-to-rfc3339) +- [RFC3339 to Unix nanosecond](#rfc3339-to-unix-nanosecond) + +### Unix nanosecond to RFC3339 +Use the [`time()` function](/{{< latest "flux" >}}/stdlib/universe/time/) +to convert a [Unix **nanosecond** timestamp](/influxdb/v2.6/reference/glossary/#unix-timestamp) +to an [RFC3339 timestamp](/influxdb/v2.6/reference/glossary/#rfc3339-timestamp). 
+ +```js +time(v: 1568808000000000000) +// Returns 2019-09-18T12:00:00.000000000Z +``` + +### RFC3339 to Unix nanosecond +Use the [`uint()` function](/{{< latest "flux" >}}/stdlib/universe/uint/) +to convert an RFC3339 timestamp to a Unix nanosecond timestamp. + +```js +uint(v: 2019-09-18T12:00:00.000000000Z) +// Returns 1568808000000000000 +``` + +## Calculate the duration between two timestamps +Flux doesn't support mathematical operations using [time type](/{{< latest "flux" >}}/spec/types/#time-types) values. +To calculate the duration between two timestamps: + +1. Use the `uint()` function to convert each timestamp to a Unix nanosecond timestamp. +2. Subtract one Unix nanosecond timestamp from the other. +3. Use the `duration()` function to convert the result into a duration. + +```js +time1 = uint(v: 2019-09-17T21:12:05Z) +time2 = uint(v: 2019-09-18T22:16:35Z) + +duration(v: time2 - time1) +// Returns 25h4m30s +``` + +{{% note %}} +Flux doesn't support duration column types. +To store a duration in a column, use the [`string()` function](/{{< latest "flux" >}}/stdlib/universe/string/) +to convert the duration to a string. +{{% /note %}} + +## Retrieve the current time +- [Current UTC time](#current-utc-time) +- [Current system time](#current-system-time) + +### Current UTC time +Use the [`now()` function](/{{< latest "flux" >}}/stdlib/universe/now/) to +return the current UTC time in RFC3339 format. + +```js +now() +``` + +{{% note %}} +`now()` is cached at runtime, so all instances of `now()` in a Flux script +return the same value. +{{% /note %}} + +### Current system time +Import the `system` package and use the [`system.time()` function](/{{< latest "flux" >}}/stdlib/system/time/) +to return the current system time of the host machine in RFC3339 format. + +```js +import "system" + +system.time() +``` + +{{% note %}} +`system.time()` returns the time it is executed, so each instance of `system.time()` +in a Flux script returns a unique value. 
+{{% /note %}} + +## Normalize irregular timestamps +To normalize irregular timestamps, truncate all `_time` values to a specified unit +with the [`truncateTimeColumn()` function](/{{< latest "flux" >}}/stdlib/universe/truncatetimecolumn/). +This is useful in [`join()`](/{{< latest "flux" >}}/stdlib/universe/join/) +and [`pivot()`](/{{< latest "flux" >}}/stdlib/universe/pivot/) +operations where points should align by time, but timestamps vary slightly. + +```js +data + |> truncateTimeColumn(unit: 1m) +``` + +{{< flex >}} +{{% flex-content %}} +**Input:** + +| _time | _value | +|:----- | ------:| +| 2020-01-01T00:00:49Z | 2.0 | +| 2020-01-01T00:01:01Z | 1.9 | +| 2020-01-01T00:03:22Z | 1.8 | +| 2020-01-01T00:04:04Z | 1.9 | +| 2020-01-01T00:05:38Z | 2.1 | +{{% /flex-content %}} +{{% flex-content %}} +**Output:** + +| _time | _value | +|:----- | ------:| +| 2020-01-01T00:00:00Z | 2.0 | +| 2020-01-01T00:01:00Z | 1.9 | +| 2020-01-01T00:03:00Z | 1.8 | +| 2020-01-01T00:04:00Z | 1.9 | +| 2020-01-01T00:05:00Z | 2.1 | +{{% /flex-content %}} +{{< /flex >}} + +## Use timestamps and durations together +- [Add a duration to a timestamp](#add-a-duration-to-a-timestamp) +- [Subtract a duration from a timestamp](#subtract-a-duration-from-a-timestamp) + +### Add a duration to a timestamp +[`date.add()`](/{{< latest "flux" >}}/stdlib/date/add/) +adds a duration to a specified time and returns the resulting time. + +```js +import "date" + +date.add(d: 6h, to: 2019-09-16T12:00:00Z) + +// Returns 2019-09-16T18:00:00.000000000Z +``` + +### Subtract a duration from a timestamp +[`date.sub()`](/{{< latest "flux" >}}/stdlib/date/sub/) +subtracts a duration from a specified time and returns the resulting time. 
+
+```js
+import "date"
+
+date.sub(d: 6h, from: 2019-09-16T12:00:00Z)
+
+// Returns 2019-09-16T06:00:00.000000000Z
+```
+
+### Shift a timestamp forward or backward
+
+The [`timeShift()`](/{{< latest "flux" >}}/stdlib/universe/timeshift/) function adds the specified duration of time to each value in time columns (`_start`, `_stop`, `_time`).
+
+Shift forward in time:
+
+```js
+from(bucket: "example-bucket")
+    |> range(start: -5m)
+    |> timeShift(duration: 12h)
+```
+
+Shift backward in time:
+
+```js
+from(bucket: "example-bucket")
+    |> range(start: -5m)
+    |> timeShift(duration: -12h)
+```
diff --git a/content/influxdb/v2.6/query-data/flux/percentile-quantile.md b/content/influxdb/v2.6/query-data/flux/percentile-quantile.md
new file mode 100644
index 000000000..a32dcc06e
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/flux/percentile-quantile.md
@@ -0,0 +1,164 @@
+---
+title: Find percentile and quantile values
+seotitle: Query percentile and quantile values in Flux
+list_title: Percentile & quantile
+description: >
+  Use the `quantile()` function to return all values within the `q` quantile or
+  percentile of input data.
+weight: 210
+menu:
+  influxdb_2_6:
+    parent: Query with Flux
+    name: Percentile & quantile
+influxdb/v2.6/tags: [query, percentile, quantile]
+related:
+  - /influxdb/v2.6/query-data/flux/median/
+  - /{{< latest "flux" >}}/stdlib/universe/quantile/
+list_query_example: quantile
+---
+
+Use the [`quantile()` function](/{{< latest "flux" >}}/stdlib/universe/quantile/)
+to return a value representing the `q` quantile or percentile of input data.
+
+## Percentile versus quantile
+Percentiles and quantiles are very similar, differing only in the number used to calculate return values.
+A percentile is calculated using numbers between `0` and `100`.
+A quantile is calculated using numbers between `0.0` and `1.0`.
+For example, the **`0.5` quantile** is the same as the **50th percentile**.
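Since a percentile is simply a quantile scaled by 100, converting between the two is a single division; a quick Python sketch (illustration only):

```python
def percentile_to_quantile(p):
    """Convert a percentile (0-100) to a quantile (0.0-1.0)."""
    if not 0 <= p <= 100:
        raise ValueError("percentile must be between 0 and 100")
    return p / 100

print(percentile_to_quantile(50))  # 0.5 (the median)
print(percentile_to_quantile(99))  # 0.99
```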
+
+## Select a method for calculating the quantile
+Select one of the following methods to calculate the quantile:
+
+- [estimate_tdigest](#estimate_tdigest)
+- [exact_mean](#exact_mean)
+- [exact_selector](#exact_selector)
+
+### estimate_tdigest
+**(Default)** An aggregate method that uses a [t-digest data structure](https://github.com/tdunning/t-digest)
+to compute a quantile estimate on large data sources.
+Output tables consist of a single row containing the calculated quantile.
+
+If calculating the `0.5` quantile or 50th percentile:
+
+{{< flex >}}
+{{% flex-content %}}
+**Given the following input table:**
+
+| _time | _value |
+|:----- | ------:|
+| 2020-01-01T00:01:00Z | 1.0 |
+| 2020-01-01T00:02:00Z | 1.0 |
+| 2020-01-01T00:03:00Z | 2.0 |
+| 2020-01-01T00:04:00Z | 3.0 |
+{{% /flex-content %}}
+{{% flex-content %}}
+**`estimate_tdigest` returns:**
+
+| _value |
+|:------:|
+| 1.5 |
+{{% /flex-content %}}
+{{< /flex >}}
+
+### exact_mean
+An aggregate method that takes the average of the two points closest to the quantile value.
+Output tables consist of a single row containing the calculated quantile.
+
+If calculating the `0.5` quantile or 50th percentile:
+
+{{< flex >}}
+{{% flex-content %}}
+**Given the following input table:**
+
+| _time | _value |
+|:----- | ------:|
+| 2020-01-01T00:01:00Z | 1.0 |
+| 2020-01-01T00:02:00Z | 1.0 |
+| 2020-01-01T00:03:00Z | 2.0 |
+| 2020-01-01T00:04:00Z | 3.0 |
+{{% /flex-content %}}
+{{% flex-content %}}
+**`exact_mean` returns:**
+
+| _value |
+|:------:|
+| 1.5 |
+{{% /flex-content %}}
+{{< /flex >}}
+
+### exact_selector
+A selector method that returns the data point for which at least `q * 100`% of points are less than or equal to it.
+Output tables consist of a single row containing the calculated quantile.
+
+If calculating the `0.5` quantile or 50th percentile:
+
+{{< flex >}}
+{{% flex-content %}}
+**Given the following input table:**
+
+| _time | _value |
+|:----- | ------:|
+| 2020-01-01T00:01:00Z | 1.0 |
+| 2020-01-01T00:02:00Z | 1.0 |
+| 2020-01-01T00:03:00Z | 2.0 |
+| 2020-01-01T00:04:00Z | 3.0 |
+{{% /flex-content %}}
+{{% flex-content %}}
+**`exact_selector` returns:**
+
+| _time | _value |
+|:----- | ------:|
+| 2020-01-01T00:02:00Z | 1.0 |
+{{% /flex-content %}}
+{{< /flex >}}
+
+{{% note %}}
+The examples below use the [example data variable](/influxdb/v2.6/query-data/flux/#example-data-variable).
+{{% /note %}}
+
+## Find the value representing the 99th percentile
+Use the default method, `"estimate_tdigest"`, to return a single row per input
+table containing an estimate of the 99th percentile of data in the table.
+
+```js
+data
+    |> quantile(q: 0.99)
+```
+
+## Find the average of values closest to the quantile
+Use the `exact_mean` method to return a single row per input table containing the
+average of the two values closest to the mathematical quantile of data in the table.
+For example, to calculate the `0.99` quantile:
+
+```js
+data
+    |> quantile(q: 0.99, method: "exact_mean")
+```
+
+## Find the point with the quantile value
+Use the `exact_selector` method to return a single row per input table containing the
+value that at least `q * 100`% of values in the table are less than or equal to.
+For example, to calculate the `0.99` quantile:
+
+```js
+data
+    |> quantile(q: 0.99, method: "exact_selector")
+```
+
+## Use quantile() with aggregateWindow()
+[`aggregateWindow()`](/{{< latest "flux" >}}/stdlib/universe/aggregatewindow/)
+segments data into windows of time, aggregates data in each window into a single
+point, and then removes the time-based segmentation.
+It is primarily used to [downsample data](/influxdb/v2.6/process-data/common-tasks/downsample-data/).
+ +To specify the [quantile calculation method](#select-a-method-for-calculating-the-quantile) in +`aggregateWindow()`, use the [full function syntax](/{{< latest "flux" >}}/stdlib/universe/aggregatewindow/#specify-parameters-of-the-aggregate-function): + +```js +data + |> aggregateWindow( + every: 5m, + fn: (tables=<-, column) => tables + |> quantile(q: 0.99, method: "exact_selector"), + ) +``` diff --git a/content/influxdb/v2.6/query-data/flux/query-fields.md b/content/influxdb/v2.6/query-data/flux/query-fields.md new file mode 100644 index 000000000..05c50bef2 --- /dev/null +++ b/content/influxdb/v2.6/query-data/flux/query-fields.md @@ -0,0 +1,76 @@ +--- +title: Query fields and tags +seotitle: Query fields and tags in InfluxDB using Flux +description: > + Use `filter()` to query data based on fields, tags, or any other column value. + `filter()` performs operations similar to the `SELECT` statement and the `WHERE` + clause in InfluxQL and other SQL-like query languages. +weight: 201 +menu: + influxdb_2_6: + parent: Query with Flux +influxdb/v2.6/tags: [query, select, where] +related: + - /{{< latest "flux" >}}/stdlib/universe/filter/ + - /influxdb/v2.6/query-data/flux/conditional-logic/ + - /influxdb/v2.6/query-data/flux/regular-expressions/ +list_code_example: | + ```js + from(bucket: "example-bucket") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "example-measurement" and r.tag == "example-tag") + |> filter(fn: (r) => r._field == "example-field") + ``` +--- + +Use [`filter()`](/{{< latest "flux" >}}/stdlib/universe/filter/) +to query data based on fields, tags, or any other column value. +`filter()` performs operations similar to the `SELECT` statement and the `WHERE` +clause in InfluxQL and other SQL-like query languages. 
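For readers coming from other languages, the per-row predicate that `filter()` applies behaves like filtering a list of row records; here is a Python analogy (illustration only, not InfluxDB client code), where each row is a dict of column values:

```python
# Hypothetical rows, shaped like Flux row records (column: value pairs)
rows = [
    {"_measurement": "example-measurement", "tag": "example-tag", "_value": 1.0},
    {"_measurement": "other-measurement", "tag": "example-tag", "_value": 2.0},
    {"_measurement": "example-measurement", "tag": "other-tag", "_value": 3.0},
]

# Analog of: filter(fn: (r) => r._measurement == "example-measurement" and r.tag == "example-tag")
def predicate(r):
    return r["_measurement"] == "example-measurement" and r["tag"] == "example-tag"

# Rows that evaluate to True are included; False rows are excluded
filtered = [r for r in rows if predicate(r)]
print([r["_value"] for r in filtered])
# [1.0]
```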
+ +## The filter() function +`filter()` has an `fn` parameter that expects a [predicate function](/influxdb/v2.6/reference/glossary/#predicate-function), +an anonymous function comprised of one or more [predicate expressions](/influxdb/v2.6/reference/glossary/#predicate-expression). +The predicate function evaluates each input row. +Rows that evaluate to `true` are **included** in the output data. +Rows that evaluate to `false` are **excluded** from the output data. + +```js +// ... + |> filter(fn: (r) => r._measurement == "example-measurement-name" ) +``` + +The `fn` predicate function requires an `r` argument, which represents each row +as `filter()` iterates over input data. +Key-value pairs in the row record represent columns and their values. +Use [dot notation or bracket notation](/{{< latest "flux" >}}/data-types/composite/record/#reference-values-in-a-record) +to reference specific column values in the predicate function. +Use [logical operators](/{{< latest "flux" >}}/spec/operators/#logical-operators) +to chain multiple predicate expressions together. + +```js +// Row record +r = {foo: "bar", baz: "quz"} + +// Example predicate function +(r) => r.foo == "bar" and r["baz"] == "quz" + +// Evaluation results +(r) => true and true +``` + +## Filter by fields and tags +The combination of [`from()`](/{{< latest "flux" >}}/stdlib/influxdata/influxdb/from), +[`range()`](/{{< latest "flux" >}}/stdlib/universe/range), +and `filter()` represent the most basic Flux query: + +1. Use `from()` to define your [bucket](/influxdb/v2.6/reference/glossary/#bucket). +2. Use `range()` to limit query results by time. +3. Use `filter()` to identify what rows of data to output. 
```js
from(bucket: "example-bucket")
    |> range(start: -1h)
    |> filter(fn: (r) => r._measurement == "example-measurement-name" and r.mytagname == "example-tag-value")
    |> filter(fn: (r) => r._field == "example-field-name")
```
diff --git a/content/influxdb/v2.6/query-data/flux/rate.md b/content/influxdb/v2.6/query-data/flux/rate.md
new file mode 100644
index 000000000..3c935daec
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/flux/rate.md
@@ -0,0 +1,173 @@
---
title: Calculate the rate of change
seotitle: Calculate the rate of change in Flux
list_title: Rate
description: >
  Use `derivative()` to calculate the rate of change between subsequent values or
  `aggregate.rate()` to calculate the average rate of change per window of time.
  If time between points varies, these functions normalize points to a common time interval,
  making values easily comparable.
weight: 210
menu:
  influxdb_2_6:
    parent: Query with Flux
    name: Rate
influxdb/v2.6/tags: [query, rate]
related:
  - /{{< latest "flux" >}}/stdlib/universe/derivative/
  - /{{< latest "flux" >}}/stdlib/experimental/aggregate/rate/
list_query_example: rate_of_change
---

Use [`derivative()`](/{{< latest "flux" >}}/stdlib/universe/derivative/)
to calculate the rate of change between subsequent values or
[`aggregate.rate()`](/{{< latest "flux" >}}/stdlib/experimental/aggregate/rate/)
to calculate the average rate of change per window of time.
If time between points varies, these functions normalize points to a common time interval,
making values easily comparable.

- [Rate of change between subsequent values](#rate-of-change-between-subsequent-values)
- [Average rate of change per window of time](#average-rate-of-change-per-window-of-time)

## Rate of change between subsequent values
Use the [`derivative()` function](/{{< latest "flux" >}}/stdlib/universe/derivative/)
to calculate the rate of change per unit of time between subsequent _non-null_ values.
+ +```js +data + |> derivative(unit: 1s) +``` + +By default, `derivative()` returns only positive derivative values and replaces negative values with _null_. +Calculated values are returned as [floats](/{{< latest "flux" >}}/spec/types/#numeric-types). + +{{< flex >}} +{{% flex-content %}} +**Given the following input:** + +| _time | _value | +|:----- | ------:| +| 2020-01-01T00:00:00Z | 250 | +| 2020-01-01T00:04:00Z | 160 | +| 2020-01-01T00:12:00Z | 150 | +| 2020-01-01T00:19:00Z | 220 | +| 2020-01-01T00:32:00Z | 200 | +| 2020-01-01T00:51:00Z | 290 | +| 2020-01-01T01:00:00Z | 340 | +{{% /flex-content %}} +{{% flex-content %}} +**`derivative(unit: 1m)` returns:** + +| _time | _value | +|:----- | ------:| +| 2020-01-01T00:04:00Z | | +| 2020-01-01T00:12:00Z | | +| 2020-01-01T00:19:00Z | 10.0 | +| 2020-01-01T00:32:00Z | | +| 2020-01-01T00:51:00Z | 4.74 | +| 2020-01-01T01:00:00Z | 5.56 | +{{% /flex-content %}} +{{< /flex >}} + +Results represent the rate of change **per minute** between subsequent values with +negative values set to _null_. 
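The non-null values in the output table above can be verified by hand: with `unit: 1m`, `derivative()` divides the difference between subsequent values by the number of minutes elapsed between their timestamps:

```js
// (later value - earlier value) / minutes elapsed
// (220 - 150) / 7  = 10.0    (00:12:00 → 00:19:00)
// (290 - 200) / 19 ≈ 4.74    (00:32:00 → 00:51:00)
// (340 - 290) / 9  ≈ 5.56    (00:51:00 → 01:00:00)
```

Negative differences (for example, `160 - 250` over the first interval) are replaced with _null_ by default.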
### Return negative derivative values
To return negative derivative values, set the `nonNegative` parameter to `false`:

{{< flex >}}
{{% flex-content %}}
**Given the following input:**

| _time | _value |
|:----- | ------:|
| 2020-01-01T00:00:00Z | 250 |
| 2020-01-01T00:04:00Z | 160 |
| 2020-01-01T00:12:00Z | 150 |
| 2020-01-01T00:19:00Z | 220 |
| 2020-01-01T00:32:00Z | 200 |
| 2020-01-01T00:51:00Z | 290 |
| 2020-01-01T01:00:00Z | 340 |
{{% /flex-content %}}
{{% flex-content %}}
**The following returns:**

```js
|> derivative(unit: 1m, nonNegative: false)
```

| _time | _value |
|:----- | ------:|
| 2020-01-01T00:04:00Z | -22.5 |
| 2020-01-01T00:12:00Z | -1.25 |
| 2020-01-01T00:19:00Z | 10.0 |
| 2020-01-01T00:32:00Z | -1.54 |
| 2020-01-01T00:51:00Z | 4.74 |
| 2020-01-01T01:00:00Z | 5.56 |
{{% /flex-content %}}
{{< /flex >}}

Results represent the rate of change **per minute** between subsequent values and
include negative values.

## Average rate of change per window of time

Use the [`aggregate.rate()` function](/{{< latest "flux" >}}/stdlib/experimental/aggregate/rate/)
to calculate the average rate of change per window of time.

```js
import "experimental/aggregate"

data
    |> aggregate.rate(
        every: 1m,
        unit: 1s,
        groupColumns: ["tag1", "tag2"],
    )
```

`aggregate.rate()` returns the average rate of change (as a [float](/{{< latest "flux" >}}/spec/types/#numeric-types))
per `unit` for time intervals defined by `every`.
Negative values are replaced with _null_.

{{% note %}}
`aggregate.rate()` does not support `nonNegative: false`.
+{{% /note %}} + +{{< flex >}} +{{% flex-content %}} +**Given the following input:** + +| _time | _value | +|:----- | ------:| +| 2020-01-01T00:00:00Z | 250 | +| 2020-01-01T00:04:00Z | 160 | +| 2020-01-01T00:12:00Z | 150 | +| 2020-01-01T00:19:00Z | 220 | +| 2020-01-01T00:32:00Z | 200 | +| 2020-01-01T00:51:00Z | 290 | +| 2020-01-01T01:00:00Z | 340 | +{{% /flex-content %}} +{{% flex-content %}} +**The following returns:** + +```js +|> aggregate.rate( + every: 20m, + unit: 1m, +) +``` + +| _time | _value | +|:----- | ------:| +| 2020-01-01T00:20:00Z | 10.00 | +| 2020-01-01T00:40:00Z | | +| 2020-01-01T01:00:00Z | 4.74 | +| 2020-01-01T01:20:00Z | 5.56 | +{{% /flex-content %}} +{{< /flex >}} + +Results represent the **average change rate per minute** of every **20 minute interval** +with negative values set to _null_. +Timestamps represent the right bound of the time window used to average values. diff --git a/content/influxdb/v2.6/query-data/flux/regular-expressions.md b/content/influxdb/v2.6/query-data/flux/regular-expressions.md new file mode 100644 index 000000000..1c969902b --- /dev/null +++ b/content/influxdb/v2.6/query-data/flux/regular-expressions.md @@ -0,0 +1,90 @@ +--- +title: Use regular expressions in Flux +list_title: Regular expressions +description: This guide walks through using regular expressions in evaluation logic in Flux functions. +influxdb/v2.6/tags: [regex] +menu: + influxdb_2_6: + name: Regular expressions + parent: Query with Flux +weight: 220 +aliases: + - /influxdb/v2.6/query-data/guides/regular-expressions/ +related: + - /influxdb/v2.6/query-data/flux/query-fields/ + - /{{< latest "flux" >}}/stdlib/regexp/ +list_query_example: regular_expressions +--- + +Regular expressions (regexes) are incredibly powerful when matching patterns in large collections of data. +With Flux, regular expressions are primarily used for evaluation logic in predicate functions for things +such as filtering rows, dropping and keeping columns, state detection, etc. 
This guide shows how to use regular expressions in your Flux scripts.

If you're just getting started with Flux queries, check out the following:

- [Get started with Flux](/{{< latest "flux" >}}/get-started/) for a conceptual overview of Flux and parts of a Flux query.
- [Execute queries](/influxdb/v2.6/query-data/execute-queries/) to discover a variety of ways to run your queries.

## Go regular expression syntax
Flux uses Go's [regexp package](https://golang.org/pkg/regexp/) for regular expression search.
The links [below](#helpful-links) provide information about Go's regular expression syntax.

## Regular expression operators
Flux provides two comparison operators for use with regular expressions.

#### `=~`
When the expression on the left **MATCHES** the regular expression on the right, this evaluates to `true`.

#### `!~`
When the expression on the left **DOES NOT MATCH** the regular expression on the right, this evaluates to `true`.

## Regular expressions in Flux
When using regex matching in your Flux scripts, enclose your regular expressions with `/`.
The following is the basic regex comparison syntax:

###### Basic regex comparison syntax
```js
expression =~ /regex/
expression !~ /regex/
```

## Examples

### Use a regex to filter by tag value
The following example filters records by the `cpu` tag.
It only keeps records for which the `cpu` is either `cpu0`, `cpu1`, or `cpu2`.

```js
from(bucket: "example-bucket")
    |> range(start: -15m)
    |> filter(fn: (r) => r._measurement == "cpu" and r.cpu =~ /cpu[0-2]$/)
```

### Use a regex to filter by field key
The following example excludes records that do not have `_percent` in a field key.

```js
from(bucket: "example-bucket")
    |> range(start: -15m)
    |> filter(fn: (r) => r._measurement == "mem" and r._field =~ /_percent/)
```

### Drop columns matching a regex
The following example drops columns whose names do not begin with `_`.
+ +```js +from(bucket: "example-bucket") + |> range(start: -15m) + |> filter(fn: (r) => r._measurement == "mem") + |> drop(fn: (column) => column !~ /_.*/) +``` + +## Helpful links + +##### Syntax documentation +[regexp Syntax GoDoc](https://godoc.org/regexp/syntax) +[RE2 Syntax Overview](https://github.com/google/re2/wiki/Syntax) + +##### Go regex testers +[Regex Tester - Golang](https://regex-golang.appspot.com/assets/html/index.html) +[Regex101](https://regex101.com/) diff --git a/content/influxdb/v2.6/query-data/flux/scalar-values.md b/content/influxdb/v2.6/query-data/flux/scalar-values.md new file mode 100644 index 000000000..f5b135e3b --- /dev/null +++ b/content/influxdb/v2.6/query-data/flux/scalar-values.md @@ -0,0 +1,232 @@ +--- +title: Extract scalar values in Flux +list_title: Extract scalar values +description: > + Use Flux dynamic query functions to extract scalar values from Flux query output. + This lets you, for example, dynamically set variables using query results. +menu: + influxdb_2_6: + name: Extract scalar values + parent: Query with Flux +weight: 220 +influxdb/v2.6/tags: [scalar] +related: + - /{{< latest "flux" >}}/function-types/#dynamic-queries, Flux dynamic query functions +aliases: + - /influxdb/v2.6/query-data/guides/scalar-values/ +list_code_example: | + ```js + scalarValue = (tables=<-) => { + _record = tables + |> findRecord(fn: (key) => true, idx: 0) + + return _record._value + } + ``` +--- + +Use Flux [dynamic query functions](/{{< latest "flux" >}}/function-types/#dynamic-queries) +to extract scalar values from Flux query output. +This lets you, for example, dynamically set variables using query results. + +**To extract scalar values from output:** + +1. [Extract a column from the input stream](#extract-a-column) + _**or**_ [extract a row from the input stream](#extract-a-row). +2. Use the returned array or record to reference scalar values. 
_The samples on this page use the [sample data provided below](#sample-data)._

{{% warn %}}
#### Current limitations
- The InfluxDB user interface (UI) does not currently support raw scalar output.
  Use [`map()`](/{{< latest "flux" >}}/stdlib/universe/map/) to add
  scalar values to output data.
{{% /warn %}}

## Table extraction
Flux formats query results as a stream of tables.
Both [`findColumn()`](/{{< latest "flux" >}}/stdlib/universe/findcolumn/)
and [`findRecord()`](/{{< latest "flux" >}}/stdlib/universe/findrecord/)
extract the first table in a stream of tables whose [group key](/{{< latest "flux" >}}/get-started/data-model/#group-key)
values match the `fn` [predicate function](/{{< latest "flux" >}}/get-started/syntax-basics/#predicate-functions).

{{% note %}}
#### Extract the correct table
Flux functions do not guarantee table order.
`findColumn()` and `findRecord()` extract only the **first** table that matches the `fn` predicate.
To extract the correct table, be very specific in your predicate function or
filter and transform your data to minimize the number of tables piped forward into the functions.
{{% /note %}}

## Extract a column
Use the [`findColumn()` function](/{{< latest "flux" >}}/stdlib/universe/findcolumn/)
to output an array of values from a specific column in the extracted table.

_See [Sample data](#sample-data) below._

```js
sampleData
    |> findColumn(
        fn: (key) => key._field == "temp" and key.location == "sfo",
        column: "_value",
    )

// Returns [65.1, 66.2, 66.3, 66.8]
```

### Use extracted column values
Use a variable to store the array of values.
In the example below, `SFOTemps` represents the array of values.
Reference a specific index (integer starting from `0`) in the array to return the
value at that index.
_See [Sample data](#sample-data) below._

```js
SFOTemps = sampleData
    |> findColumn(
        fn: (key) => key._field == "temp" and key.location == "sfo",
        column: "_value",
    )

SFOTemps
// Returns [65.1, 66.2, 66.3, 66.8]

SFOTemps[0]
// Returns 65.1

SFOTemps[2]
// Returns 66.3
```

## Extract a row
Use the [`findRecord()` function](/{{< latest "flux" >}}/stdlib/universe/findrecord/)
to output data from a single row in the extracted table.
Specify the index of the row to output using the `idx` parameter.
The function outputs a record with key-value pairs for each column.

```js
sampleData
    |> findRecord(
        fn: (key) => key._field == "temp" and key.location == "sfo",
        idx: 0,
    )

// Returns {
//     _time: 2019-11-01T12:00:00Z,
//     _field: "temp",
//     location: "sfo",
//     _value: 65.1
// }
```

### Use an extracted row record
Use a variable to store the extracted row record.
In the example below, `tempInfo` represents the extracted row.
Use [dot or bracket notation](/{{< latest "flux" >}}/data-types/composite/record/#dot-notation)
to reference keys in the record.

```js
tempInfo = sampleData
    |> findRecord(
        fn: (key) => key._field == "temp" and key.location == "sfo",
        idx: 0,
    )

tempInfo
// Returns {
//     _time: 2019-11-01T12:00:00Z,
//     _field: "temp",
//     location: "sfo",
//     _value: 65.1
// }

tempInfo._time
// Returns 2019-11-01T12:00:00Z

tempInfo.location
// Returns sfo
```

## Example helper functions
Create custom helper functions to extract scalar values from query output.
##### Extract a scalar field value
```js
// Define a helper function to extract field values
getFieldValue = (tables=<-, field) => {
    extract = tables
        |> findColumn(fn: (key) => key._field == field, column: "_value")

    return extract[0]
}

// Use the helper function to define a variable
lastJFKTemp = sampleData
    |> filter(fn: (r) => r.location == "kjfk")
    |> last()
    |> getFieldValue(field: "temp")

lastJFKTemp
// Returns 71.2
```

##### Extract scalar row data
```js
// Define a helper function to extract a row as a record
getRow = (tables=<-, idx=0) => {
    extract = tables
        |> findRecord(fn: (key) => true, idx: idx)

    return extract
}

// Use the helper function to define a variable
lastReported = sampleData
    |> last()
    |> getRow()

"The last location to report was ${lastReported.location}.
The temperature was ${string(v: lastReported._value)}°F."

// Returns:
// The last location to report was kord.
// The temperature was 38.9°F.
```

---

## Sample data

The following sample data set represents fictional temperature metrics collected
from three locations.
It's formatted in [annotated CSV](/influxdb/v2.6/reference/syntax/annotated-csv/) and imported
into the Flux query using the [`csv.from()` function](/{{< latest "flux" >}}/stdlib/csv/from/).
+ +Place the following at the beginning of your query to use the sample data: + +{{% truncate %}} +```js +import "csv" + +sampleData = csv.from(csv: " +#datatype,string,long,dateTime:RFC3339,string,string,double +#group,false,true,false,true,true,false +#default,,,,,, +,result,table,_time,location,_field,_value +,,0,2019-11-01T12:00:00Z,sfo,temp,65.1 +,,0,2019-11-01T13:00:00Z,sfo,temp,66.2 +,,0,2019-11-01T14:00:00Z,sfo,temp,66.3 +,,0,2019-11-01T15:00:00Z,sfo,temp,66.8 +,,1,2019-11-01T12:00:00Z,kjfk,temp,69.4 +,,1,2019-11-01T13:00:00Z,kjfk,temp,69.9 +,,1,2019-11-01T14:00:00Z,kjfk,temp,71.0 +,,1,2019-11-01T15:00:00Z,kjfk,temp,71.2 +,,2,2019-11-01T12:00:00Z,kord,temp,46.4 +,,2,2019-11-01T13:00:00Z,kord,temp,46.3 +,,2,2019-11-01T14:00:00Z,kord,temp,42.7 +,,2,2019-11-01T15:00:00Z,kord,temp,38.9 +") +``` +{{% /truncate %}} diff --git a/content/influxdb/v2.6/query-data/flux/sort-limit.md b/content/influxdb/v2.6/query-data/flux/sort-limit.md new file mode 100644 index 000000000..51910d7c3 --- /dev/null +++ b/content/influxdb/v2.6/query-data/flux/sort-limit.md @@ -0,0 +1,68 @@ +--- +title: Sort and limit data with Flux +seotitle: Sort and limit data in InfluxDB with Flux +list_title: Sort and limit +description: > + Use `sort()` to order records within each table by specific columns and + `limit()` to limit the number of records in output tables to a fixed number, `n`. +influxdb/v2.6/tags: [sort, limit] +menu: + influxdb_2_6: + name: Sort and limit + parent: Query with Flux +weight: 203 +aliases: + - /influxdb/v2.6/query-data/guides/sort-limit/ +related: + - /{{< latest "flux" >}}/stdlib/universe/sort + - /{{< latest "flux" >}}/stdlib/universe/limit +list_query_example: sort_limit +--- + +Use [`sort()`](/{{< latest "flux" >}}/stdlib/universe/sort) +to order records within each table by specific columns and +[`limit()`](/{{< latest "flux" >}}/stdlib/universe/limit) +to limit the number of records in output tables to a fixed number, `n`. 
If you're just getting started with Flux queries, check out the following:

- [Get started with Flux](/{{< latest "flux" >}}/get-started/) for a conceptual overview of Flux and parts of a Flux query.
- [Execute queries](/influxdb/v2.6/query-data/execute-queries/) to discover a variety of ways to run your queries.

##### Example: sort system uptime

The following example orders system uptime first by region, then host, then value.

```js
from(bucket: "example-bucket")
    |> range(start: -12h)
    |> filter(fn: (r) => r._measurement == "system" and r._field == "uptime")
    |> sort(columns: ["region", "host", "_value"])
```

The [`limit()` function](/{{< latest "flux" >}}/stdlib/universe/limit)
limits the number of records in output tables to a fixed number, `n`.
The following example shows up to 10 records from the past hour.

```js
from(bucket: "example-bucket")
    |> range(start: -1h)
    |> limit(n: 10)
```

You can use `sort()` and `limit()` together to show the top N records.
The example below returns the 10 top system uptime values sorted first by
region, then host, then value.

```js
from(bucket: "example-bucket")
    |> range(start: -12h)
    |> filter(fn: (r) => r._measurement == "system" and r._field == "uptime")
    |> sort(columns: ["region", "host", "_value"])
    |> limit(n: 10)
```

You have now created a Flux query that sorts and limits data.
Flux also provides the [`top()`](/{{< latest "flux" >}}/stdlib/universe/top)
and [`bottom()`](/{{< latest "flux" >}}/stdlib/universe/bottom)
functions to perform both of these operations in a single call.
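For example, the sort-and-limit query above can be collapsed into a single `top()` call (a sketch; note that `top()` ranks rows by the columns you pass it, defaulting to `_value`, rather than sorting by region and host first):

```js
from(bucket: "example-bucket")
    |> range(start: -12h)
    |> filter(fn: (r) => r._measurement == "system" and r._field == "uptime")
    |> top(n: 10, columns: ["_value"])
```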
diff --git a/content/influxdb/v2.6/query-data/flux/sql.md b/content/influxdb/v2.6/query-data/flux/sql.md new file mode 100644 index 000000000..ca101fb08 --- /dev/null +++ b/content/influxdb/v2.6/query-data/flux/sql.md @@ -0,0 +1,378 @@ +--- +title: Query SQL data sources +seotitle: Query SQL data sources with InfluxDB +list_title: Query SQL data +description: > + The Flux `sql` package provides functions for working with SQL data sources. + Use `sql.from()` to query SQL databases like PostgreSQL, MySQL, Snowflake, + SQLite, Microsoft SQL Server, Amazon Athena, and Google BigQuery. +influxdb/v2.6/tags: [query, flux, sql] +menu: + influxdb_2_6: + parent: Query with Flux + list_title: SQL data +weight: 220 +aliases: + - /influxdb/v2.6/query-data/guides/sql/ +related: + - /{{< latest "flux" >}}/stdlib/sql/ +list_code_example: | + ```js + import "sql" + + sql.from( + driverName: "postgres", + dataSourceName: "postgresql://user:password@localhost", + query: "SELECT * FROM example_table", + ) + ``` +--- + +The [Flux](/influxdb/v2.6/reference/flux) `sql` package provides functions for working with SQL data sources. +[`sql.from()`](/{{< latest "flux" >}}/stdlib/sql/from/) lets you query SQL data sources +like [PostgreSQL](https://www.postgresql.org/), [MySQL](https://www.mysql.com/), +[Snowflake](https://www.snowflake.com/), [SQLite](https://www.sqlite.org/index.html), +[Microsoft SQL Server](https://www.microsoft.com/en-us/sql-server/default.aspx), +[Amazon Athena](https://aws.amazon.com/athena/) and [Google BigQuery](https://cloud.google.com/bigquery) +and use the results with InfluxDB dashboards, tasks, and other operations. 
+ +- [Query a SQL data source](#query-a-sql-data-source) +- [Join SQL data with data in InfluxDB](#join-sql-data-with-data-in-influxdb) +- [Use SQL results to populate dashboard variables](#use-sql-results-to-populate-dashboard-variables) +- [Use secrets to store SQL database credentials](#use-secrets-to-store-sql-database-credentials) +- [Sample sensor data](#sample-sensor-data) + +If you're just getting started with Flux queries, check out the following: + +- [Get started with Flux](/{{< latest "flux" >}}/get-started/) for a conceptual overview of Flux and parts of a Flux query. +- [Execute queries](/influxdb/v2.6/query-data/execute-queries/) to discover a variety of ways to run your queries. + +## Query a SQL data source +To query a SQL data source: + +1. Import the `sql` package in your Flux query +2. Use the `sql.from()` function to specify the driver, data source name (DSN), + and query used to query data from your SQL data source: + +{{< code-tabs-wrapper >}} +{{% code-tabs %}} +[PostgreSQL](#) +[MySQL](#) +[Snowflake](#) +[SQLite](#) +[SQL Server](#) +[Athena](#) +[BigQuery](#) +{{% /code-tabs %}} + +{{% code-tab-content %}} +```js +import "sql" + +sql.from( + driverName: "postgres", + dataSourceName: "postgresql://user:password@localhost", + query: "SELECT * FROM example_table", +) +``` +{{% /code-tab-content %}} + +{{% code-tab-content %}} +```js +import "sql" + +sql.from( + driverName: "mysql", + dataSourceName: "user:password@tcp(localhost:3306)/db", + query: "SELECT * FROM example_table", +) +``` +{{% /code-tab-content %}} + +{{% code-tab-content %}} +```js +import "sql" + +sql.from( + driverName: "snowflake", + dataSourceName: "user:password@account/db/exampleschema?warehouse=wh", + query: "SELECT * FROM example_table", +) +``` +{{% /code-tab-content %}} + +{{% code-tab-content %}} +```js +// NOTE: InfluxDB OSS and InfluxDB Cloud do not have access to +// the local filesystem and cannot query SQLite data sources. 
// Use the Flux REPL to query an SQLite data source.

import "sql"

sql.from(
    driverName: "sqlite3",
    dataSourceName: "file:/path/to/test.db?cache=shared&mode=ro",
    query: "SELECT * FROM example_table",
)
```
{{% /code-tab-content %}}

{{% code-tab-content %}}
```js
import "sql"

sql.from(
    driverName: "sqlserver",
    dataSourceName: "sqlserver://user:password@localhost:1234?database=examplebdb",
    query: "GO SELECT * FROM Example.Table",
)
```

_For information about authenticating with SQL Server using ADO-style parameters,
see [SQL Server ADO authentication](/{{< latest "flux" >}}/stdlib/sql/from/#sql-server-ado-authentication)._
{{% /code-tab-content %}}

{{% code-tab-content %}}
```js
import "sql"

sql.from(
    driverName: "awsathena",
    dataSourceName: "s3://myorgqueryresults/?accessID=12ab34cd56ef&region=region-name&secretAccessKey=y0urSup3rs3crEtT0k3n",
    query: "GO SELECT * FROM Example.Table",
)
```

_For information about parameters to include in the Athena DSN,
see [Athena connection string](/{{< latest "flux" >}}/stdlib/sql/from/#athena-connection-string)._
{{% /code-tab-content %}}
{{% code-tab-content %}}
```js
import "sql"

sql.from(
    driverName: "bigquery",
    dataSourceName: "bigquery://projectid/?apiKey=mySuP3r5ecR3tAP1K3y",
    query: "SELECT * FROM exampleTable",
)
```

_For information about authenticating with BigQuery, see
[BigQuery authentication parameters](/{{< latest "flux" >}}/stdlib/sql/from/#bigquery-authentication-parameters)._
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}

_See the [`sql.from()` documentation](/{{< latest "flux" >}}/stdlib/sql/from/) for
information about required function parameters._

## Join SQL data with data in InfluxDB
One of the primary benefits of querying SQL data sources from InfluxDB
is the ability to enrich query results with data stored outside of InfluxDB.
+ +Using the [air sensor sample data](#sample-sensor-data) below, the following query +joins air sensor metrics stored in InfluxDB with sensor information stored in PostgreSQL. +The joined data lets you query and filter results based on sensor information +that isn't stored in InfluxDB. + +```js +// Import the "sql" package +import "sql" + +// Query data from PostgreSQL +sensorInfo = sql.from( + driverName: "postgres", + dataSourceName: "postgresql://localhost?sslmode=disable", + query: "SELECT * FROM sensors", +) + +// Query data from InfluxDB +sensorMetrics = from(bucket: "example-bucket") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "airSensors") + +// Join InfluxDB query results with PostgreSQL query results +join(tables: {metric: sensorMetrics, info: sensorInfo}, on: ["sensor_id"]) +``` + +## Use SQL results to populate dashboard variables +Use `sql.from()` to [create dashboard variables](/influxdb/v2.6/visualize-data/variables/create-variable/) +from SQL query results. +The following example uses the [air sensor sample data](#sample-sensor-data) below to +create a variable that lets you select the location of a sensor. + +```js +import "sql" + +sql.from( + driverName: "postgres", + dataSourceName: "postgresql://localhost?sslmode=disable", + query: "SELECT * FROM sensors", +) + |> rename(columns: {location: "_value"}) + |> keep(columns: ["_value"]) +``` + +Use the variable to manipulate queries in your dashboards. + +{{< img-hd src="/img/influxdb/2-0-sql-dashboard-variable.png" alt="Dashboard variable from SQL query results" />}} + +--- + +## Use secrets to store SQL database credentials +If your SQL database requires authentication, use [InfluxDB secrets](/influxdb/v2.6/security/secrets/) +to store and populate connection credentials. +By default, InfluxDB base64-encodes and stores secrets in its internal key-value store, BoltDB. +For added security, [store secrets in Vault](/influxdb/v2.6/security/secrets/use-vault/). 
### Store your database credentials as secrets
Use the [InfluxDB API](/influxdb/v2.6/reference/api/) or the [`influx` CLI](/influxdb/v2.6/reference/cli/influx/secret/)
to store your database credentials as secrets.

{{< tabs-wrapper >}}
{{% tabs %}}
[InfluxDB API](#)
[influx CLI](#)
{{% /tabs %}}
{{% tab-content %}}
```sh
curl --request PATCH http://localhost:8086/api/v2/orgs/<org-id>/secrets \
  --header 'Authorization: Token YOURAUTHTOKEN' \
  --header 'Content-type: application/json' \
  --data '{
  "POSTGRES_HOST": "http://example.com",
  "POSTGRES_USER": "example-username",
  "POSTGRES_PASS": "example-password"
}'
```

**To store secrets, you need:**

- [your organization ID](/influxdb/v2.6/organizations/view-orgs/#view-your-organization-id)
- [your API token](/influxdb/v2.6/security/tokens/view-tokens/)
{{% /tab-content %}}
{{% tab-content %}}
```sh
# Syntax
influx secret update -k <secret-key>

# Example
influx secret update -k POSTGRES_PASS
```

**When prompted, enter your secret value.**

{{% warn %}}
You can provide the secret value with the `-v`, `--value` flag, but the **plain text
secret may appear in your shell history**.

```sh
influx secret update -k <secret-key> -v <secret-value>
```
{{% /warn %}}
{{% /tab-content %}}
{{< /tabs-wrapper >}}

### Use secrets in your query
Import the `influxdata/influxdb/secrets` package and use [string interpolation](/{{< latest "flux" >}}/spec/string-interpolation/)
to populate connection credentials with stored secrets in your Flux query.
+ +```js +import "sql" +import "influxdata/influxdb/secrets" + +POSTGRES_HOST = secrets.get(key: "POSTGRES_HOST") +POSTGRES_USER = secrets.get(key: "POSTGRES_USER") +POSTGRES_PASS = secrets.get(key: "POSTGRES_PASS") + +sql.from( + driverName: "postgres", + dataSourceName: "postgresql://${POSTGRES_USER}:${POSTGRES_PASS}@${POSTGRES_HOST}", + query: "SELECT * FROM sensors", +) +``` + +--- + +## Sample sensor data +The [air sensor sample data](#download-sample-air-sensor-data) and +[sample sensor information](#import-the-sample-sensor-information) simulate a +group of sensors that measure temperature, humidity, and carbon monoxide +in rooms throughout a building. +Each collected data point is stored in InfluxDB with a `sensor_id` tag that identifies +the specific sensor it came from. +Sample sensor information is stored in PostgreSQL. + +**Sample data includes:** + +- Simulated data collected from each sensor and stored in the `airSensors` measurement in **InfluxDB**: + - temperature + - humidity + - co + +- Information about each sensor stored in the `sensors` table in **PostgreSQL**: + - sensor_id + - location + - model_number + - last_inspected + +#### Download sample air sensor data + +1. [Create a bucket](/influxdb/v2.6/organizations/buckets/create-bucket/) to store the data. +2. [Create an InfluxDB task](/influxdb/v2.6/process-data/manage-tasks/create-task/) + and use the [`sample.data()` function](/{{< latest "flux" >}}/stdlib/influxdata/influxdb/sample/data/) + to download sample air sensor data every 15 minutes. + Write the downloaded sample data to your new bucket: + + ```js + import "influxdata/influxdb/sample" + + option task = {name: "Collect sample air sensor data", every: 15m} + + sample.data(set: "airSensor") + |> to(org: "example-org", bucket: "example-bucket") + ``` + +3. [Query your target bucket](/influxdb/v2.6/query-data/execute-queries/) after + the first task run to ensure the sample data is writing successfully. 
+ + ```js + from(bucket: "example-bucket") + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "airSensors") + ``` + +#### Import the sample sensor information +1. [Download and install PostgreSQL](https://www.postgresql.org/download/). +2. Download the sample sensor information CSV. + + Download sample sensor information + +3. Use a PostgreSQL client (`psql` or a GUI) to create the `sensors` table: + + ``` + CREATE TABLE sensors ( + sensor_id character varying(50), + location character varying(50), + model_number character varying(50), + last_inspected date + ); + ``` + +4. Import the downloaded CSV sample data. + _Update the `FROM` file path to the path of the downloaded CSV sample data._ + + ``` + COPY sensors(sensor_id,location,model_number,last_inspected) + FROM '/path/to/sample-sensor-info.csv' DELIMITER ',' CSV HEADER; + ``` + +5. Query the table to ensure the data was imported correctly: + + ``` + SELECT * FROM sensors; + ``` + +#### Import the sample data dashboard +Download and import the Air Sensors dashboard to visualize the generated data: + +View Air Sensors dashboard JSON + +_For information about importing a dashboard, see [Create a dashboard](/influxdb/v2.6/visualize-data/dashboards/create-dashboard)._ diff --git a/content/influxdb/v2.6/query-data/flux/window-aggregate.md b/content/influxdb/v2.6/query-data/flux/window-aggregate.md new file mode 100644 index 000000000..3f2e03bdd --- /dev/null +++ b/content/influxdb/v2.6/query-data/flux/window-aggregate.md @@ -0,0 +1,358 @@ +--- +title: Window and aggregate data with Flux +seotitle: Window and aggregate data in InfluxDB with Flux +list_title: Window & aggregate +description: > + This guide walks through windowing and aggregating data with Flux and outlines + how it shapes your data in the process. 
+menu: + influxdb_2_6: + name: Window & aggregate + parent: Query with Flux +weight: 204 +influxdb/v2.6/tags: [flux, aggregates] +aliases: + - /influxdb/v2.6/query-data/guides/window-aggregate/ + - /influxdb/v2.6/query-data/flux/windowing-aggregating/ +related: + - /{{< latest "flux" >}}/stdlib/universe/aggregatewindow + - /{{< latest "flux" >}}/stdlib/universe/window + - /{{< latest "flux" >}}/function-types/#aggregates, Flux aggregate functions + - /{{< latest "flux" >}}/function-types/#selectors, Flux selector functions +list_query_example: aggregate_window +--- + +A common operation performed with time series data is grouping data into windows of time, +or "windowing" data, then aggregating windowed values into a new value. +This guide walks through windowing and aggregating data with Flux and demonstrates +how data is shaped in the process. + +If you're just getting started with Flux queries, check out the following: + +- [Get started with Flux](/{{< latest "flux" >}}/get-started/) for a conceptual overview of Flux and parts of a Flux query. +- [Execute queries](/influxdb/v2.6/query-data/execute-queries/) to discover a variety of ways to run your queries. + +{{% note %}} +The following example is an in-depth walk-through of the steps required to window and aggregate data. +The [`aggregateWindow()` function](#summing-up) performs these operations for you, but understanding +how data is shaped in the process helps to successfully create your desired output. +{{% /note %}} + +## Data set +For the purposes of this guide, define a variable that represents your base data set. +The following example queries the memory usage of the host machine. 
+ +```js +dataSet = from(bucket: "example-bucket") + |> range(start: -5m) + |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent") + |> drop(columns: ["host"]) +``` + +{{% note %}} +This example drops the `host` column from the returned data since the memory data +is only tracked for a single host and it simplifies the output tables. +Dropping the `host` column is optional and not recommended if monitoring memory +on multiple hosts. +{{% /note %}} + +`dataSet` can now be used to represent your base data, which will look similar to the following: + +{{% truncate %}} +``` +Table: keys: [_start, _stop, _field, _measurement] + _start:time _stop:time _field:string _measurement:string _time:time _value:float +------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------------ ---------------------------- +2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:50:00.000000000Z 71.11611366271973 +2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:50:10.000000000Z 67.39630699157715 +2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:50:20.000000000Z 64.16666507720947 +2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:50:30.000000000Z 64.19951915740967 +2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:50:40.000000000Z 64.2122745513916 +2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:50:50.000000000Z 64.22209739685059 +2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:51:00.000000000Z 64.6336555480957 +2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:51:10.000000000Z 64.16516304016113 +2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z 
used_percent mem 2018-11-03T17:51:20.000000000Z 64.18349742889404 +2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:51:30.000000000Z 64.20474052429199 +2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:51:40.000000000Z 68.65062713623047 +2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:51:50.000000000Z 67.20139980316162 +2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:52:00.000000000Z 70.9143877029419 +2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:52:10.000000000Z 64.14549350738525 +2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:52:20.000000000Z 64.15379047393799 +2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:52:30.000000000Z 64.1592264175415 +2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:52:40.000000000Z 64.18190002441406 +2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:52:50.000000000Z 64.28837776184082 +2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:53:00.000000000Z 64.29731845855713 +2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:53:10.000000000Z 64.36963081359863 +2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:53:20.000000000Z 64.37397003173828 +2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:53:30.000000000Z 64.44413661956787 +2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:53:40.000000000Z 64.42906856536865 +2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:53:50.000000000Z 64.44573402404785 
+2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z           used_percent                    mem 2018-11-03T17:54:00.000000000Z            64.48912620544434
+2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z           used_percent                    mem 2018-11-03T17:54:10.000000000Z            64.49522972106934
+2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z           used_percent                    mem 2018-11-03T17:54:20.000000000Z            64.48652744293213
+2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z           used_percent                    mem 2018-11-03T17:54:30.000000000Z            64.49949741363525
+2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z           used_percent                    mem 2018-11-03T17:54:40.000000000Z             64.4949197769165
+2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z           used_percent                    mem 2018-11-03T17:54:50.000000000Z            64.49787616729736
+2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z           used_percent                    mem 2018-11-03T17:55:00.000000000Z            64.49816226959229
+```
+{{% /truncate %}}
+
+## Windowing data
+Use the [`window()` function](/{{< latest "flux" >}}/stdlib/universe/window)
+to group your data based on time bounds.
+The most common parameter passed with `window()` is `every`, which
+defines the duration of time between windows.
+Other parameters are available, but for this example, window the base data
+set into one-minute windows.
+
+```js
+dataSet
+    |> window(every: 1m)
+```
+
+{{% note %}}
+The `every` parameter supports all [valid duration units](/{{< latest "flux" >}}/spec/types/#duration-types),
+including **calendar months (`1mo`)** and **years (`1y`)**.
+{{% /note %}}
+
+Each window of time is output in its own table containing all records that fall within the window.
+ +{{% truncate %}} +###### window() output tables +``` +Table: keys: [_start, _stop, _field, _measurement] + _start:time _stop:time _field:string _measurement:string _time:time _value:float +------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------------ ---------------------------- +2018-11-03T17:50:00.000000000Z 2018-11-03T17:51:00.000000000Z used_percent mem 2018-11-03T17:50:00.000000000Z 71.11611366271973 +2018-11-03T17:50:00.000000000Z 2018-11-03T17:51:00.000000000Z used_percent mem 2018-11-03T17:50:10.000000000Z 67.39630699157715 +2018-11-03T17:50:00.000000000Z 2018-11-03T17:51:00.000000000Z used_percent mem 2018-11-03T17:50:20.000000000Z 64.16666507720947 +2018-11-03T17:50:00.000000000Z 2018-11-03T17:51:00.000000000Z used_percent mem 2018-11-03T17:50:30.000000000Z 64.19951915740967 +2018-11-03T17:50:00.000000000Z 2018-11-03T17:51:00.000000000Z used_percent mem 2018-11-03T17:50:40.000000000Z 64.2122745513916 +2018-11-03T17:50:00.000000000Z 2018-11-03T17:51:00.000000000Z used_percent mem 2018-11-03T17:50:50.000000000Z 64.22209739685059 + + +Table: keys: [_start, _stop, _field, _measurement] + _start:time _stop:time _field:string _measurement:string _time:time _value:float +------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------------ ---------------------------- +2018-11-03T17:51:00.000000000Z 2018-11-03T17:52:00.000000000Z used_percent mem 2018-11-03T17:51:00.000000000Z 64.6336555480957 +2018-11-03T17:51:00.000000000Z 2018-11-03T17:52:00.000000000Z used_percent mem 2018-11-03T17:51:10.000000000Z 64.16516304016113 +2018-11-03T17:51:00.000000000Z 2018-11-03T17:52:00.000000000Z used_percent mem 2018-11-03T17:51:20.000000000Z 64.18349742889404 +2018-11-03T17:51:00.000000000Z 2018-11-03T17:52:00.000000000Z used_percent mem 2018-11-03T17:51:30.000000000Z 64.20474052429199 +2018-11-03T17:51:00.000000000Z 
2018-11-03T17:52:00.000000000Z used_percent mem 2018-11-03T17:51:40.000000000Z 68.65062713623047 +2018-11-03T17:51:00.000000000Z 2018-11-03T17:52:00.000000000Z used_percent mem 2018-11-03T17:51:50.000000000Z 67.20139980316162 + + +Table: keys: [_start, _stop, _field, _measurement] + _start:time _stop:time _field:string _measurement:string _time:time _value:float +------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------------ ---------------------------- +2018-11-03T17:52:00.000000000Z 2018-11-03T17:53:00.000000000Z used_percent mem 2018-11-03T17:52:00.000000000Z 70.9143877029419 +2018-11-03T17:52:00.000000000Z 2018-11-03T17:53:00.000000000Z used_percent mem 2018-11-03T17:52:10.000000000Z 64.14549350738525 +2018-11-03T17:52:00.000000000Z 2018-11-03T17:53:00.000000000Z used_percent mem 2018-11-03T17:52:20.000000000Z 64.15379047393799 +2018-11-03T17:52:00.000000000Z 2018-11-03T17:53:00.000000000Z used_percent mem 2018-11-03T17:52:30.000000000Z 64.1592264175415 +2018-11-03T17:52:00.000000000Z 2018-11-03T17:53:00.000000000Z used_percent mem 2018-11-03T17:52:40.000000000Z 64.18190002441406 +2018-11-03T17:52:00.000000000Z 2018-11-03T17:53:00.000000000Z used_percent mem 2018-11-03T17:52:50.000000000Z 64.28837776184082 + + +Table: keys: [_start, _stop, _field, _measurement] + _start:time _stop:time _field:string _measurement:string _time:time _value:float +------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------------ ---------------------------- +2018-11-03T17:53:00.000000000Z 2018-11-03T17:54:00.000000000Z used_percent mem 2018-11-03T17:53:00.000000000Z 64.29731845855713 +2018-11-03T17:53:00.000000000Z 2018-11-03T17:54:00.000000000Z used_percent mem 2018-11-03T17:53:10.000000000Z 64.36963081359863 +2018-11-03T17:53:00.000000000Z 2018-11-03T17:54:00.000000000Z used_percent mem 2018-11-03T17:53:20.000000000Z 
64.37397003173828 +2018-11-03T17:53:00.000000000Z 2018-11-03T17:54:00.000000000Z used_percent mem 2018-11-03T17:53:30.000000000Z 64.44413661956787 +2018-11-03T17:53:00.000000000Z 2018-11-03T17:54:00.000000000Z used_percent mem 2018-11-03T17:53:40.000000000Z 64.42906856536865 +2018-11-03T17:53:00.000000000Z 2018-11-03T17:54:00.000000000Z used_percent mem 2018-11-03T17:53:50.000000000Z 64.44573402404785 + + +Table: keys: [_start, _stop, _field, _measurement] + _start:time _stop:time _field:string _measurement:string _time:time _value:float +------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------------ ---------------------------- +2018-11-03T17:54:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:54:00.000000000Z 64.48912620544434 +2018-11-03T17:54:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:54:10.000000000Z 64.49522972106934 +2018-11-03T17:54:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:54:20.000000000Z 64.48652744293213 +2018-11-03T17:54:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:54:30.000000000Z 64.49949741363525 +2018-11-03T17:54:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:54:40.000000000Z 64.4949197769165 +2018-11-03T17:54:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:54:50.000000000Z 64.49787616729736 + + +Table: keys: [_start, _stop, _field, _measurement] + _start:time _stop:time _field:string _measurement:string _time:time _value:float +------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------------ ---------------------------- +2018-11-03T17:55:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:55:00.000000000Z 64.49816226959229 +``` +{{% /truncate %}} + +When visualized in the InfluxDB UI, each window 
table is displayed in a different color. + +![Windowed data](/img/flux/simple-windowed-data.png) + +## Aggregate data +[Aggregate functions](/{{< latest "flux" >}}/function-types#aggregates) take the values +of all rows in a table and use them to perform an aggregate operation. +The result is output as a new value in a single-row table. + +Since windowed data is split into separate tables, aggregate operations run against +each table separately and output new tables containing only the aggregated value. + +For this example, use the [`mean()` function](/{{< latest "flux" >}}/stdlib/universe/mean) +to output the average of each window: + +```js +dataSet + |> window(every: 1m) + |> mean() +``` + +{{% truncate %}} +###### mean() output tables +``` +Table: keys: [_start, _stop, _field, _measurement] + _start:time _stop:time _field:string _measurement:string _value:float +------------------------------ ------------------------------ ---------------------- ---------------------- ---------------------------- +2018-11-03T17:50:00.000000000Z 2018-11-03T17:51:00.000000000Z used_percent mem 65.88549613952637 + + +Table: keys: [_start, _stop, _field, _measurement] + _start:time _stop:time _field:string _measurement:string _value:float +------------------------------ ------------------------------ ---------------------- ---------------------- ---------------------------- +2018-11-03T17:51:00.000000000Z 2018-11-03T17:52:00.000000000Z used_percent mem 65.50651391347249 + + +Table: keys: [_start, _stop, _field, _measurement] + _start:time _stop:time _field:string _measurement:string _value:float +------------------------------ ------------------------------ ---------------------- ---------------------- ---------------------------- +2018-11-03T17:52:00.000000000Z 2018-11-03T17:53:00.000000000Z used_percent mem 65.30719598134358 + + +Table: keys: [_start, _stop, _field, _measurement] + _start:time _stop:time _field:string _measurement:string _value:float 
+------------------------------ ------------------------------ ---------------------- ---------------------- ---------------------------- +2018-11-03T17:53:00.000000000Z 2018-11-03T17:54:00.000000000Z used_percent mem 64.39330975214641 + + +Table: keys: [_start, _stop, _field, _measurement] + _start:time _stop:time _field:string _measurement:string _value:float +------------------------------ ------------------------------ ---------------------- ---------------------- ---------------------------- +2018-11-03T17:54:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 64.49386278788249 + + +Table: keys: [_start, _stop, _field, _measurement] + _start:time _stop:time _field:string _measurement:string _value:float +------------------------------ ------------------------------ ---------------------- ---------------------- ---------------------------- +2018-11-03T17:55:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 64.49816226959229 +``` +{{% /truncate %}} + +Because each data point is contained in its own table, when visualized, +they appear as single, unconnected points. + +![Aggregated windowed data](/img/flux/simple-windowed-aggregate-data.png) + +### Recreate the time column +**Notice the `_time` column is not in the [aggregated output tables](#mean-output-tables).** +Because records in each table are aggregated together, their timestamps no longer +apply and the column is removed from the group key and table. + +Also notice the `_start` and `_stop` columns still exist. +These represent the lower and upper bounds of the time window. + +Many Flux functions rely on the `_time` column. +To further process your data after an aggregate function, you need to re-add `_time`. +Use the [`duplicate()` function](/{{< latest "flux" >}}/stdlib/universe/duplicate) to +duplicate either the `_start` or `_stop` column as a new `_time` column. 
+ +```js +dataSet + |> window(every: 1m) + |> mean() + |> duplicate(column: "_stop", as: "_time") +``` + +{{% truncate %}} +###### duplicate() output tables +``` +Table: keys: [_start, _stop, _field, _measurement] + _start:time _stop:time _field:string _measurement:string _time:time _value:float +------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------------ ---------------------------- +2018-11-03T17:50:00.000000000Z 2018-11-03T17:51:00.000000000Z used_percent mem 2018-11-03T17:51:00.000000000Z 65.88549613952637 + + +Table: keys: [_start, _stop, _field, _measurement] + _start:time _stop:time _field:string _measurement:string _time:time _value:float +------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------------ ---------------------------- +2018-11-03T17:51:00.000000000Z 2018-11-03T17:52:00.000000000Z used_percent mem 2018-11-03T17:52:00.000000000Z 65.50651391347249 + + +Table: keys: [_start, _stop, _field, _measurement] + _start:time _stop:time _field:string _measurement:string _time:time _value:float +------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------------ ---------------------------- +2018-11-03T17:52:00.000000000Z 2018-11-03T17:53:00.000000000Z used_percent mem 2018-11-03T17:53:00.000000000Z 65.30719598134358 + + +Table: keys: [_start, _stop, _field, _measurement] + _start:time _stop:time _field:string _measurement:string _time:time _value:float +------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------------ ---------------------------- +2018-11-03T17:53:00.000000000Z 2018-11-03T17:54:00.000000000Z used_percent mem 2018-11-03T17:54:00.000000000Z 64.39330975214641 + + +Table: keys: [_start, _stop, _field, _measurement] + _start:time _stop:time 
_field:string    _measurement:string                      _time:time                    _value:float
+------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------------ ----------------------------
+2018-11-03T17:54:00.000000000Z 2018-11-03T17:55:00.000000000Z           used_percent                    mem 2018-11-03T17:55:00.000000000Z            64.49386278788249
+
+
+Table: keys: [_start, _stop, _field, _measurement]
+                   _start:time                      _stop:time          _field:string    _measurement:string                      _time:time                    _value:float
+------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------------ ----------------------------
+2018-11-03T17:55:00.000000000Z 2018-11-03T17:55:00.000000000Z           used_percent                    mem 2018-11-03T17:55:00.000000000Z            64.49816226959229
+```
+{{% /truncate %}}
+
+## "Unwindow" aggregate tables
+Separate tables of aggregate values generally aren't the format in which you want your data.
+Use the `window()` function to "unwindow" your data into a single infinite (`inf`) window.
+
+```js
+dataSet
+    |> window(every: 1m)
+    |> mean()
+    |> duplicate(column: "_stop", as: "_time")
+    |> window(every: inf)
+```
+
+{{% note %}}
+Windowing requires a `_time` column, which is why it's necessary to
+[recreate the `_time` column](#recreate-the-time-column) after an aggregation.
+{{% /note %}} + +###### Unwindowed output table +``` +Table: keys: [_start, _stop, _field, _measurement] + _start:time _stop:time _field:string _measurement:string _time:time _value:float +------------------------------ ------------------------------ ---------------------- ---------------------- ------------------------------ ---------------------------- +2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:51:00.000000000Z 65.88549613952637 +2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:52:00.000000000Z 65.50651391347249 +2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:53:00.000000000Z 65.30719598134358 +2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:54:00.000000000Z 64.39330975214641 +2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:55:00.000000000Z 64.49386278788249 +2018-11-03T17:50:00.000000000Z 2018-11-03T17:55:00.000000000Z used_percent mem 2018-11-03T17:55:00.000000000Z 64.49816226959229 +``` + +With the aggregate values in a single table, data points in the visualization are connected. + +![Unwindowed aggregate data](/img/flux/simple-unwindowed-data.png) + +## Summing up +You have now created a Flux query that windows and aggregates data. +The data transformation process outlined in this guide should be used for all aggregation operations. + +Flux also provides the [`aggregateWindow()` function](/{{< latest "flux" >}}/stdlib/universe/aggregatewindow) +which performs all these separate functions for you. 
+
+The following Flux query will return the same results:
+
+###### aggregateWindow function
+```js
+dataSet
+    |> aggregateWindow(every: 1m, fn: mean)
+```
diff --git a/content/influxdb/v2.6/query-data/get-started/_index.md b/content/influxdb/v2.6/query-data/get-started/_index.md
new file mode 100644
index 000000000..cd5dc195f
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/get-started/_index.md
@@ -0,0 +1,35 @@
+---
+title: Get started with Flux and InfluxDB
+description: >
+  Get started with Flux, the functional data scripting language, and learn the
+  basics of writing a Flux query that queries InfluxDB.
+aliases:
+  - /influxdb/v2.6/query-data/get-started/getting-started
+weight: 101
+influxdb/v2.6/tags: [query, flux, get-started]
+menu:
+  influxdb_2_6:
+    name: Get started with Flux
+    parent: Query data
+related:
+  - /{{< latest "flux" >}}/get-started/
+  - /{{< latest "flux" >}}/
+  - /{{< latest "flux" >}}/stdlib/
+---
+
+Flux is InfluxData's functional data scripting language designed for querying,
+analyzing, and acting on data.
+
+These guides walk through important concepts related to Flux and querying time
+series data from InfluxDB using Flux.
+
+## Tools for working with Flux
+The [Execute queries](/influxdb/v2.6/query-data/execute-queries) guide walks through
+the different tools available for querying InfluxDB with Flux.
+
+## Before you start
+To get a basic understanding of the Flux data model and syntax, see
+[Get started with Flux](/{{< latest "flux" >}}/get-started/) in the
+[Flux documentation](/{{< latest "flux" >}}/).
+ +{{< page-nav next="/influxdb/v2.6/query-data/get-started/query-influxdb/" >}} diff --git a/content/influxdb/v2.6/query-data/get-started/query-influxdb.md b/content/influxdb/v2.6/query-data/get-started/query-influxdb.md new file mode 100644 index 000000000..c624bf561 --- /dev/null +++ b/content/influxdb/v2.6/query-data/get-started/query-influxdb.md @@ -0,0 +1,133 @@ +--- +title: Query InfluxDB with Flux +description: Learn the basics of using Flux to query data from InfluxDB. +influxdb/v2.6/tags: [query, flux] +menu: + influxdb_2_6: + name: Query InfluxDB + parent: Get started with Flux +weight: 201 +related: + - /{{< latest "flux" >}}/get-started/query-basics/ + - /influxdb/v2.6/query-data/flux/ + - /{{< latest "flux" >}}/stdlib/influxdata/influxdb/from + - /{{< latest "flux" >}}/stdlib/universe/range + - /{{< latest "flux" >}}/stdlib/universe/filter +--- + +This guide walks through the basics of using Flux to query data from InfluxDB. +Every Flux query needs the following: + +1. [A data source](#1-define-your-data-source) +2. [A time range](#2-specify-a-time-range) +3. [Data filters](#3-filter-your-data) + + +## 1. Define your data source +Flux's [`from()`](/{{< latest "flux" >}}/stdlib/influxdata/influxdb/from/) function defines an InfluxDB data source. +It requires a [`bucket`](/{{< latest "flux" >}}/stdlib/influxdata/influxdb/from/#bucket) parameter. +The following examples use `example-bucket` as the bucket name. + +```js +from(bucket:"example-bucket") +``` + +## 2. Specify a time range +Flux requires a time range when querying time series data. +"Unbounded" queries are very resource-intensive and as a protective measure, +Flux will not query the database without a specified range. + +Use the [pipe-forward operator](/{{< latest "flux" >}}/get-started/syntax-basics/#pipe-forward-operator) +(`|>`) to pipe data from your data source into +[`range()`](/{{< latest "flux" >}}/stdlib/universe/range), which specifies a time range for your query. 
+It accepts two parameters: `start` and `stop`.
+Start and stop values can be **relative** using negative [durations](/{{< latest "flux" >}}/data-types/basic/duration/)
+or **absolute** using [timestamps](/{{< latest "flux" >}}/data-types/basic/time/).
+
+###### Example relative time ranges
+```js
+// Relative time range with start only. Stop defaults to now.
+from(bucket:"example-bucket")
+    |> range(start: -1h)
+
+// Relative time range with start and stop
+from(bucket:"example-bucket")
+    |> range(start: -1h, stop: -10m)
+```
+
+{{% note %}}
+Relative ranges are relative to "now."
+{{% /note %}}
+
+###### Example absolute time range
+```js
+from(bucket:"example-bucket")
+    |> range(start: 2021-01-01T00:00:00Z, stop: 2021-01-01T12:00:00Z)
+```
+
+#### Use the following:
+For this guide, use the relative time range, `-15m`, to limit query results to data from the last 15 minutes:
+
+```js
+from(bucket:"example-bucket")
+    |> range(start: -15m)
+```
+
+## 3. Filter your data
+Pass your ranged data into `filter()` to narrow results based on data attributes or columns.
+`filter()` has one parameter, `fn`, which expects a
+[predicate function](/{{< latest "flux" >}}/get-started/syntax-basics/#predicate-functions)
+that evaluates rows by column values.
+
+`filter()` iterates over every input row and structures row data as a Flux
+[record](/{{< latest "flux" >}}/data-types/composite/record/).
+The record is passed into the predicate function as `r`, where it is evaluated using
+[predicate expressions](/{{< latest "flux" >}}/get-started/syntax-basics/#predicate-expressions).
+
+Rows that evaluate to `false` are dropped from the output data.
+Rows that evaluate to `true` persist in the output data.
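This keep-or-drop behavior can be sketched outside of Flux. The following standalone Python example uses hypothetical row dictionaries as stand-ins for the Flux records passed to the predicate as `r`; it is a conceptual sketch, not InfluxDB output:

```python
# Hypothetical rows standing in for Flux records passed to filter() as `r`.
rows = [
    {"_measurement": "cpu", "_field": "usage_system", "_value": 4.2},
    {"_measurement": "mem", "_field": "used_percent", "_value": 71.1},
]

# The predicate plays the role of filter()'s `fn` parameter:
# rows evaluating to True persist; rows evaluating to False are dropped.
def predicate(r):
    return r["_measurement"] == "cpu" and r["_field"] == "usage_system"

kept = [r for r in rows if predicate(r)]
print(kept)
```

Flux applies the same logic per row, except that the predicate is expressed in Flux syntax, as shown in the patterns that follow.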
+
+```js
+// Pattern
+(r) => (r.recordProperty comparisonOperator comparisonExpression)
+
+// Example with single filter
+(r) => (r._measurement == "cpu")
+
+// Example with multiple filters
+(r) => (r._measurement == "cpu" and r._field != "usage_system")
+```
+
+#### Use the following:
+For this example, filter by the `cpu` measurement, `usage_system` field, and
+`cpu-total` tag value:
+
+```js
+from(bucket: "example-bucket")
+    |> range(start: -15m)
+    |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system" and r.cpu == "cpu-total")
+```
+
+## 4. Yield your queried data
+[`yield()`](/{{< latest "flux" >}}/stdlib/universe/yield/) outputs the result of the query.
+
+```js
+from(bucket: "example-bucket")
+    |> range(start: -15m)
+    |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system" and r.cpu == "cpu-total")
+    |> yield()
+```
+
+Flux automatically assumes a `yield()` function at
+the end of each script to output and visualize the data.
+Explicitly calling `yield()` is only necessary when including multiple queries
+in the same Flux query.
+Each set of returned data needs to be named using the `yield()` function.
+
+## Congratulations!
+You have now queried data from InfluxDB using Flux.
+
+The query shown here is a basic example.
+Flux queries can be extended in many ways to form powerful scripts.
+
+{{< page-nav prev="/influxdb/v2.6/query-data/get-started/" next="/influxdb/v2.6/query-data/get-started/transform-data/" >}}
diff --git a/content/influxdb/v2.6/query-data/get-started/transform-data.md b/content/influxdb/v2.6/query-data/get-started/transform-data.md
new file mode 100644
index 000000000..84a4174c0
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/get-started/transform-data.md
@@ -0,0 +1,162 @@
+---
+title: Transform data with Flux
+description: Learn the basics of using Flux to transform data queried from InfluxDB.
+influxdb/v2.6/tags: [flux, transform, query] +menu: + influxdb_2_6: + name: Transform data + parent: Get started with Flux +weight: 202 +related: + - /{{< latest "flux" >}}/stdlib/universe/aggregatewindow + - /{{< latest "flux" >}}/stdlib/universe/window +--- + +When [querying data from InfluxDB](/influxdb/v2.6/query-data/get-started/query-influxdb), +you often need to transform that data in some way. +Common examples are aggregating data, downsampling data, etc. + +This guide demonstrates using [Flux functions](/{{< latest "flux" >}}/stdlib/) to transform your data. +It walks through creating a Flux script that partitions data into windows of time, +averages the `_value`s in each window, and outputs the averages as a new table. + +{{% note %}} +If you're not familiar with how Flux structures and operates on data, see +[Flux data model](/{{< latest "flux" >}}/get-started/data-model/). +{{% /note %}} + +## Query data +Use the query built in the previous [Query data from InfluxDB](/influxdb/v2.6/query-data/get-started/query-influxdb) +guide, but update the range to pull data from the last hour: + +```js +from(bucket: "example-bucket") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system" and r.cpu == "cpu-total") +``` + +## Flux functions +Flux provides a number of functions that perform specific operations, transformations, and tasks. +You can also [create custom functions](/influxdb/v2.6/query-data/flux/custom-functions) in your Flux queries. +_Functions are covered in detail in the [Flux standard library](/{{< latest "flux" >}}/stdlib/) documentation._ + +A common type of function used when transforming data queried from InfluxDB is an aggregate function. +Aggregate functions take a set of `_value`s in a table, aggregate them, and transform +them into a new value. + +This example uses the [`mean()` function](/{{< latest "flux" >}}/stdlib/universe/mean) +to average values within each time window. 
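Conceptually, windowing then averaging reduces to bucketing timestamps by a fixed interval and averaging each bucket. The following standalone Python sketch illustrates the idea with hypothetical sample points (not actual InfluxDB output); Flux performs the equivalent work per table:

```python
from collections import defaultdict

def window_mean(points, every):
    """Bucket (seconds, value) pairs into fixed windows and average each bucket.

    Conceptually mirrors Flux's window(every: ...) followed by mean().
    """
    windows = defaultdict(list)
    for t, v in points:
        windows[t - t % every].append(v)  # window start each point falls into
    return {start: sum(vs) / len(vs) for start, vs in sorted(windows.items())}

# Hypothetical 10-second samples windowed into 5-minute (300 s) buckets
points = [(0, 71.1), (10, 67.4), (20, 64.2), (300, 64.6), (310, 64.2)]
print(window_mean(points, every=300))
```

The sketch only models the arithmetic; in Flux, each window is a separate table and the aggregate runs against each table independently.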
+
+{{% note %}}
+The following example walks through the steps required to window and aggregate data,
+but there is an [`aggregateWindow()` helper function](#helper-functions) that does it for you.
+It's just good to understand the steps in the process.
+{{% /note %}}
+
+## Window your data
+Flux's [`window()` function](/{{< latest "flux" >}}/stdlib/universe/window) partitions records based on a time value.
+Use the `every` parameter to define the duration of each window.
+
+{{% note %}}
+#### Calendar months and years
+`every` supports all [valid duration units](/{{< latest "flux" >}}/spec/types/#duration-types),
+including **calendar months (`1mo`)** and **years (`1y`)**.
+{{% /note %}}
+
+For this example, window data in five-minute intervals (`5m`).
+
+```js
+from(bucket: "example-bucket")
+    |> range(start: -1h)
+    |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system" and r.cpu == "cpu-total")
+    |> window(every: 5m)
+```
+
+As data is gathered into windows of time, each window is output as its own table.
+When visualized, each table is assigned a unique color.
+
+![Windowed data tables](/img/flux/windowed-data.png)
+
+## Aggregate windowed data
+Flux aggregate functions take the `_value`s in each table and aggregate them in some way.
+Use the [`mean()` function](/{{< latest "flux" >}}/stdlib/universe/mean) to average the `_value`s of each table.
+
+```js
+from(bucket: "example-bucket")
+    |> range(start: -1h)
+    |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system" and r.cpu == "cpu-total")
+    |> window(every: 5m)
+    |> mean()
+```
+
+As rows in each window are aggregated, their output table contains only a single row with the aggregate value.
+Windowed tables are all still separate and, when visualized, will appear as single, unconnected points.
+
+![Windowed aggregate data](/img/flux/windowed-aggregates.png)
+
+## Add times to your aggregates
+As values are aggregated, the resulting tables do not have a `_time` column because
+the records used for the aggregation all have different timestamps.
+Aggregate functions don't infer what time should be used for the aggregate value.
+Therefore, the `_time` column is dropped.
+
+A `_time` column is required in the [next operation](#unwindow-aggregate-tables).
+To add one, use the [`duplicate()` function](/{{< latest "flux" >}}/stdlib/universe/duplicate)
+to duplicate the `_stop` column as the `_time` column for each windowed table.
+
+```js
+from(bucket: "example-bucket")
+    |> range(start: -1h)
+    |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system" and r.cpu == "cpu-total")
+    |> window(every: 5m)
+    |> mean()
+    |> duplicate(column: "_stop", as: "_time")
+```
+
+## Unwindow aggregate tables
+
+Use the `window()` function with the `every: inf` parameter to gather all points
+into a single, infinite window.
+
+```js
+from(bucket: "example-bucket")
+    |> range(start: -1h)
+    |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system" and r.cpu == "cpu-total")
+    |> window(every: 5m)
+    |> mean()
+    |> duplicate(column: "_stop", as: "_time")
+    |> window(every: inf)
+```
+
+Once ungrouped and combined into a single table, the aggregate data points will appear connected in your visualization.
+
+![Unwindowed aggregate data](/img/flux/windowed-aggregates-ungrouped.png)
+
+## Helper functions
+This may seem like a lot of coding just to build a query that aggregates data; however, going through the
+process helps you understand how data changes "shape" as it is passed through each function.
+
+Flux provides (and allows you to create) "helper" functions that abstract many of these steps.
+The same operation performed in this guide can be accomplished using
+[`aggregateWindow()`](/{{< latest "flux" >}}/stdlib/universe/aggregatewindow).
+
+```js
+from(bucket: "example-bucket")
+    |> range(start: -1h)
+    |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system" and r.cpu == "cpu-total")
+    |> aggregateWindow(every: 5m, fn: mean)
+```
+
+## Congratulations!
+You have now constructed a Flux query that uses Flux functions to transform your data.
+There are many more ways to manipulate your data using both Flux's primitive functions
+and your own custom functions, but this is a good introduction to the basic syntax and query structure.
+
+---
+
+_For a deeper dive into windowing and aggregating data with example data output for each transformation,
+view the [Window and aggregate data](/influxdb/v2.6/query-data/flux/window-aggregate) guide._
+
+---
+
+{{< page-nav prev="/influxdb/v2.6/query-data/get-started/query-influxdb/" >}}
diff --git a/content/influxdb/v2.6/query-data/influxql/_index.md b/content/influxdb/v2.6/query-data/influxql/_index.md
new file mode 100644
index 000000000..b4a49f6bf
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/influxql/_index.md
@@ -0,0 +1,160 @@
+---
+title: Query data with InfluxQL
+description: >
+  Use the [InfluxDB 1.x `/query` compatibility endpoint](/influxdb/v2.6/reference/api/influxdb-1x/query)
+  to query data in InfluxDB Cloud and InfluxDB OSS 2.6 with **InfluxQL**.
+weight: 102
+influxdb/v2.6/tags: [influxql, query]
+menu:
+  influxdb_2_6:
+    name: Query with InfluxQL
+    parent: Query data
+related:
+  - /influxdb/v2.6/reference/api/influxdb-1x/
+  - /influxdb/v2.6/reference/api/influxdb-1x/query
+  - /influxdb/v2.6/reference/api/influxdb-1x/dbrp
+  - /influxdb/v2.6/tools/influxql-shell/
+---
+
+Use InfluxQL (an SQL-like query language) to interact with InfluxDB, and query and analyze your time series data.
+
+In InfluxDB 1.x, data is stored in [databases](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#database)
+and [retention policies](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#retention-policy-rp).
+In InfluxDB OSS {{< current-version >}}, data is stored in [buckets](/influxdb/v2.6/reference/glossary/#bucket). +Because InfluxQL uses the 1.x data model, a bucket must be mapped to a database and retention policy (DBRP) before it can be queried using InfluxQL. + +**To query data with InfluxQL, complete the following steps:** + +1. [Verify buckets have a mapping](#verify-buckets-have-a-mapping). +2. [Create DBRP mappings for unmapped buckets](#create-dbrp-mappings-for-unmapped-buckets). +3. [Query a mapped bucket with InfluxQL](#query-a-mapped-bucket-with-influxql). + +{{% note %}} + +#### InfluxQL reference documentation + +For complete InfluxQL reference documentation, see the +[InfluxQL specification for InfluxDB 2.x](/influxdb/v2.6/reference/syntax/influxql/spec/). +{{% /note %}} + +## Verify buckets have a mapping + +1. To verify the buckets you want to query are mapped to a database and retention policy, use the [`influx` CLI](/influxdb/v2.6/reference/cli/influx/) or the [InfluxDB API](/influxdb/v2.6/reference/api/). +_For examples, see [List DBRP mappings](/influxdb/v2.6/query-data/influxql/dbrp/#list-dbrp-mappings)._ + +2. If you **do not find a DBRP mapping for a bucket**, [create a new DBRP mapping](/influxdb/v2.6/query-data/influxql/dbrp/#create-dbrp-mappings) to +map the unmapped bucket. + +## Create DBRP mappings for unmapped buckets + +- Use the [`influx` CLI](/influxdb/v2.6/reference/cli/influx/) or the [InfluxDB API](/influxdb/v2.6/reference/api/) +to manually create DBRP mappings for unmapped buckets. 
+_For examples, see [Create DBRP mappings](/influxdb/v2.6/query-data/influxql/dbrp/#create-dbrp-mappings)._
+
+## Query a mapped bucket with InfluxQL
+
+{{< tabs-wrapper >}}
+{{% tabs %}}
+[InfluxQL shell](#)
+[InfluxDB API](#)
+{{% /tabs %}}
+{{% tab-content %}}
+
+
+The [`influx` CLI](/influxdb/v2.6/reference/cli/influx/) provides an [InfluxQL shell](/influxdb/v2.6/tools/influxql-shell/) where you can execute InfluxQL queries in an interactive Read-Eval-Print Loop (REPL).
+
+1. If you haven't already, do the following:
+
+    - [Download and install the `influx` CLI](/influxdb/v2.6/tools/influx-cli/#install-the-influx-cli)
+    - [Configure your authentication credentials](/influxdb/v2.6/tools/influx-cli/#provide-required-authentication-credentials)
+
+2. Use the following command to start an InfluxQL shell:
+
+    ```sh
+    influx v1 shell
+    ```
+
+3. Execute an InfluxQL query inside the InfluxQL shell.
+    Identifiers that contain hyphens (like `example-db`) must be double-quoted,
+    and string values must be single-quoted:
+
+    ```sql
+    SELECT "used_percent" FROM "example-db"."example-rp"."example-measurement" WHERE "host"='host1'
+    ```
+
+    For more information, see how to [use the InfluxQL shell](/influxdb/v2.6/tools/influxql-shell/). For more information about DBRP mappings, see [Manage DBRP mappings](/influxdb/v2.6/query-data/influxql/dbrp/).
+
+
+{{% /tab-content %}}
+{{% tab-content %}}
+
+
+The [InfluxDB 1.x compatibility API](/influxdb/v2.6/reference/api/influxdb-1x/) supports
+all InfluxDB 1.x client libraries and integrations in InfluxDB {{< current-version >}}.
+
+1. To query a mapped bucket with InfluxQL, use the [`/query` 1.x compatibility endpoint](/influxdb/v2.6/reference/api/influxdb-1x/query/), and include the following in your request:
+
+    - **Request method:** `GET`
+    - **Headers:**
+      - **Authorization:** _See [compatibility API authentication](/influxdb/v2.6/reference/api/influxdb-1x/#authentication)_
+    - **Query parameters:**
+      - **db**: 1.x database to query
+      - **rp**: 1.x retention policy to query _(if no retention policy is specified, InfluxDB uses the default retention policy for the specified database)_
+      - **q**: URL-encoded InfluxQL query
+
+    {{% api/url-encode-note %}}
+
+    ```sh
+    curl --get "http://localhost:8086/query?db=example-db" \
+      --header "Authorization: Token YourAuthToken" \
+      --data-urlencode "q=SELECT \"used_percent\" FROM \"example-db\".\"example-rp\".\"example-measurement\" WHERE \"host\"='host1'"
+    ```
+
+    By default, the `/query` compatibility endpoint returns results in **JSON**.
+
+2. (Optional) To return results as **CSV**, include the `Accept: application/csv` header.
+
+For more information about DBRP mappings, see [Manage DBRP mappings](/influxdb/v2.6/query-data/influxql/dbrp/).
+
+{{% /tab-content %}}
+{{< /tabs-wrapper >}}
+
+## InfluxQL support
+
+InfluxDB OSS 2.x supports the following InfluxQL statements and clauses. See supported and unsupported queries below.
+
+{{< flex >}}
+{{< flex-content >}}
+{{% note %}}
+##### Supported InfluxQL queries
+
+- `DELETE`*
+- `DROP MEASUREMENT`*
+- `EXPLAIN ANALYZE`
+- `SELECT` _(read-only)_
+- `SHOW DATABASES`
+- `SHOW SERIES`
+- `SHOW MEASUREMENTS`
+- `SHOW TAG KEYS`
+- `SHOW FIELD KEYS`
+- `SHOW SERIES EXACT CARDINALITY`
+- `SHOW TAG KEY CARDINALITY`
+- `SHOW FIELD KEY CARDINALITY`
+
+\* These commands delete data.
+{{% /note %}} +{{< /flex-content >}} +{{< flex-content >}} +{{% warn %}} + +##### Unsupported InfluxQL queries + +- `SELECT INTO` +- `ALTER` +- `CREATE` +- `DROP` _(limited support)_ +- `GRANT` +- `KILL` +- `REVOKE` +- `SHOW SERIES CARDINALITY` +{{% /warn %}} +{{< /flex-content >}} +{{< /flex >}} diff --git a/content/influxdb/v2.6/query-data/influxql/dbrp.md b/content/influxdb/v2.6/query-data/influxql/dbrp.md new file mode 100644 index 000000000..fbcbbd1ab --- /dev/null +++ b/content/influxdb/v2.6/query-data/influxql/dbrp.md @@ -0,0 +1,351 @@ +--- +title: Manage DBRP mappings +seotitle: Manage database and retention policy mappings +description: > + Create and manage database and retention policy (DBRP) mappings to use + InfluxQL to query InfluxDB buckets. +menu: + influxdb_2_6: + parent: Query with InfluxQL +weight: 201 +influxdb/v2.6/tags: [influxql, dbrp] +--- + +InfluxQL requires a database and retention policy (DBRP) combination in order to query data. +In InfluxDB {{< current-version >}}, databases and retention policies have been +combined and replaced by InfluxDB [buckets](/influxdb/v2.6/reference/glossary/#bucket). +To query InfluxDB {{< current-version >}} with InfluxQL, the specified DBRP +combination must be mapped to a bucket. 
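Conceptually, a DBRP mapping is a lookup from a (database, retention policy) pair to a single bucket. A minimal in-memory Python model of that lookup (function names and the bucket ID below are illustrative only, not part of any InfluxDB API):

```python
# Hypothetical model: each unique (database, retention policy) pair
# resolves to exactly one bucket ID.
mappings = {}

def map_dbrp(db, rp, bucket_id):
    # Mapping the same DBRP pair again re-points it at the new bucket
    mappings[(db, rp)] = bucket_id

def resolve(db, rp):
    # An InfluxQL query needs this lookup to succeed before it can run
    if (db, rp) not in mappings:
        raise LookupError(f"no bucket mapped for {db}.{rp}")
    return mappings[(db, rp)]

map_dbrp("example-db", "example-rp", "00oxo0oXx000x0Xo")
print(resolve("example-db", "example-rp"))  # 00oxo0oXx000x0Xo
```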
+ +- [Automatic DBRP mapping](#automatic-dbrp-mapping) +- {{% oss-only %}}[Virtual DBRP mappings](#virtual-dbrp-mappings){{% /oss-only %}} +- [Create DBRP mappings](#create-dbrp-mappings) +- [List DBRP mappings](#list-dbrp-mappings) +- [Update a DBRP mapping](#update-a-dbrp-mapping) +- [Delete a DBRP mapping](#delete-a-dbrp-mapping) + +## Automatic DBRP mapping + +InfluxDB {{< current-version >}} will automatically create DBRP mappings for you +during the following operations: + +- Writing to the [`/write` v1 compatibility endpoint](/influxdb/v2.6/reference/api/influxdb-1x/write/) +- {{% cloud-only %}}[Upgrading from InfluxDB 1.x to InfluxDB Cloud](/influxdb/v2.6/upgrade/v1-to-cloud/){{% /cloud-only %}} +- {{% oss-only %}}[Upgrading from InfluxDB 1.x to {{< current-version >}}](/influxdb/v2.6/upgrade/v1-to-v2/){{% /oss-only %}} +- {{% oss-only %}}Creating a bucket ([virtual DBRPs](#virtual-dbrp-mappings)){{% /oss-only %}} + +For more information, see [Database and retention policy mapping](/influxdb/v2.6/reference/api/influxdb-1x/dbrp/). + +{{% oss-only %}} + +## Virtual DBRP mappings + +InfluxDB {{< current-version >}} provides "virtual" DBRP mappings for any bucket +that does not have an explicit DBRP mapping associated with it. +Virtual DBRP mappings use the bucket name to provide a DBRP mapping that can be +used without having to explicitly define a mapping. + +Virtual DBRP mappings are read-only. +To override a virtual DBRP mapping, [create an explicit mapping](#create-dbrp-mappings). + +For information about how virtual DBRP mappings are created, see +[Database and retention policy mapping – When creating a bucket](/influxdb/v2.6/reference/api/influxdb-1x/dbrp/#when-creating-a-bucket). + +{{% /oss-only %}} + +## Create DBRP mappings + +Use the [`influx` CLI](/influxdb/v2.6/reference/cli/influx/) or the +[InfluxDB API](/influxdb/v2.6/reference/api/) to create DBRP mappings. 
+ +{{% note %}} +#### A DBRP combination can only be mapped to a single bucket +Each unique DBRP combination can only be mapped to a single bucket. +If you map a DBRP combination that is already mapped to another bucket, +it will overwrite the existing DBRP mapping. +{{% /note %}} + +{{< tabs-wrapper >}} +{{% tabs %}} +[influx CLI](#) +[InfluxDB API](#) +{{% /tabs %}} +{{% tab-content %}} + +Use the [`influx v1 dbrp create` command](/influxdb/v2.6/reference/cli/influx/v1/dbrp/create/) +to map an unmapped bucket to a database and retention policy. +Include the following: + +{{< req type="key" >}} + +- {{< req "\*" >}} **org** and **token** to authenticate. We recommend setting your organization and token to your active InfluxDB connection configuration in the influx CLI, so you don't have to add these parameters to each command. To set up your active InfluxDB configuration, see [`influx config set`](/influxdb/v2.6/reference/cli/influx/config/set/). +- {{< req "\*" >}} **database name** to map +- {{< req "\*" >}} **retention policy** name to map +- {{< req "\*" >}} [Bucket ID](/influxdb/v2.6/organizations/buckets/view-buckets/#view-buckets-in-the-influxdb-ui) to map to +- **Default flag** to set the provided retention policy as the default retention policy for the database + +```sh +influx v1 dbrp create \ + --db example-db \ + --rp example-rp \ + --bucket-id 00oxo0oXx000x0Xo \ + --default +``` + +{{% /tab-content %}} +{{% tab-content %}} + +Use the [`/api/v2/dbrps` API endpoint](/influxdb/v2.6/api/#operation/PostDBRP) to create a new DBRP mapping. 
+ + +{{< api-endpoint endpoint="http://localhost:8086/api/v2/dbrps" method="POST" >}} + + +Include the following: + +- **Request method:** `POST` +- **Headers:** + - **Authorization:** `Token` schema with your InfluxDB [API token](/influxdb/v2.6/security/tokens/) + - **Content-type:** `application/json` +- **Request body:** JSON object with the following fields: + {{< req type="key" >}} + - {{< req "\*" >}} **bucketID:** [bucket ID](/influxdb/v2.6/organizations/buckets/view-buckets/) + - {{< req "\*" >}} **database:** database name + - **default:** set the provided retention policy as the default retention policy for the database + - {{< req "\*" >}} **org** or **orgID:** organization name or [organization ID](/influxdb/v2.6/organizations/view-orgs/#view-your-organization-id) + - {{< req "\*" >}} **retention_policy:** retention policy name + + +```sh +curl --request POST http://localhost:8086/api/v2/dbrps \ + --header "Authorization: Token YourAuthToken" \ + --header 'Content-type: application/json' \ + --data '{ + "bucketID": "00oxo0oXx000x0Xo", + "database": "example-db", + "default": true, + "orgID": "00oxo0oXx000x0Xo", + "retention_policy": "example-rp" + }' +``` + +{{% /tab-content %}} +{{< /tabs-wrapper >}} + +## List DBRP mappings + +Use the [`influx` CLI](/influxdb/v2.6/reference/cli/influx/) or the [InfluxDB API](/influxdb/v2.6/reference/api/) +to list all DBRP mappings and verify the buckets you want to query are mapped +to a database and retention policy. + +{{< tabs-wrapper >}} +{{% tabs %}} +[influx CLI](#) +[InfluxDB API](#) +{{% /tabs %}} +{{% tab-content %}} + +Use the [`influx v1 dbrp list` command](/influxdb/v2.6/reference/cli/influx/v1/dbrp/list/) to list DBRP mappings. + +{{% note %}} +The examples below assume that your organization and API token are +provided by the active [InfluxDB connection configuration](/influxdb/v2.6/reference/cli/influx/config/) in the `influx` CLI. 
+If not, include your organization (`--org`) and API token (`--token`) with each command.
+{{% /note %}}
+
+##### View all DBRP mappings
+```sh
+influx v1 dbrp list
+```
+
+##### Filter DBRP mappings by database
+```sh
+influx v1 dbrp list --db example-db
+```
+
+##### Filter DBRP mappings by bucket ID
+```sh
+influx v1 dbrp list --bucket-id 00oxo0oXx000x0Xo
+```
+{{% /tab-content %}}
+{{% tab-content %}}
+Use the [`/api/v2/dbrps` API endpoint](/influxdb/v2.6/api/#operation/GetDBRPs) to list DBRP mappings.
+
+
+{{< api-endpoint endpoint="http://localhost:8086/api/v2/dbrps" method="GET" >}}
+
+
+Include the following:
+
+- **Request method:** `GET`
+- **Headers:**
+  - **Authorization:** `Token` schema with your InfluxDB [API token](/influxdb/v2.6/security/tokens/)
+- **Query parameters:**
+  {{< req type="key" >}}
+  - {{< req "\*" >}} **orgID:** [organization ID](/influxdb/v2.6/organizations/view-orgs/#view-your-organization-id)
+  - **bucketID:** [bucket ID](/influxdb/v2.6/organizations/buckets/view-buckets/) _(to list DBRP mappings for a specific bucket)_
+  - **db:** database name _(to list DBRP mappings with a specific database name)_
+  - **rp:** retention policy name _(to list DBRP mappings with a specific retention policy name)_
+  - **id:** DBRP mapping ID _(to list a specific DBRP mapping)_
+
+##### View all DBRP mappings
+```sh
+curl --request GET \
+  "http://localhost:8086/api/v2/dbrps?orgID=00oxo0oXx000x0Xo" \
+  --header "Authorization: Token YourAuthToken"
+```
+
+##### Filter DBRP mappings by database
+```sh
+curl --request GET \
+  "http://localhost:8086/api/v2/dbrps?orgID=00oxo0oXx000x0Xo&db=example-db" \
+  --header "Authorization: Token YourAuthToken"
+```
+
+##### Filter DBRP mappings by bucket ID
+```sh
+curl --request GET \
+  "http://localhost:8086/api/v2/dbrps?orgID=00oxo0oXx000x0Xo&bucketID=00oxo0oXx000x0Xo" \
+  --header "Authorization: Token YourAuthToken"
+```
+{{% /tab-content %}}
+{{< /tabs-wrapper >}}
+
+## Update a DBRP mapping
+
+Use the [`influx` CLI](/influxdb/v2.6/reference/cli/influx/) or the
+[InfluxDB API](/influxdb/v2.6/reference/api/) to update a DBRP mapping.
+
+{{% oss-only %}}
+
+{{% note %}}
+Virtual DBRP mappings cannot be updated.
+To override a virtual DBRP mapping, [create an explicit mapping](#create-dbrp-mappings).
+{{% /note %}}
+
+{{% /oss-only %}}
+
+{{< tabs-wrapper >}}
+{{% tabs %}}
+[influx CLI](#)
+[InfluxDB API](#)
+{{% /tabs %}}
+{{% tab-content %}}
+
+Use the [`influx v1 dbrp update` command](/influxdb/v2.6/reference/cli/influx/v1/dbrp/update/)
+to update a DBRP mapping.
+Include the following:
+
+{{< req type="key" >}}
+
+- {{< req "\*" >}} **org** and **token** to authenticate. We recommend setting your organization and token to your active InfluxDB connection configuration in the influx CLI, so you don't have to add these parameters to each command. To set up your active InfluxDB configuration, see [`influx config set`](/influxdb/v2.6/reference/cli/influx/config/set/).
+- {{< req "\*" >}} **DBRP mapping ID** to update
+- **Retention policy** name to update to
+- **Default flag** to set the retention policy as the default retention policy for the database
+
+##### Update the default retention policy
+```sh
+influx v1 dbrp update \
+  --id 00oxo0X0xx0XXoX0 \
+  --rp example-rp \
+  --default
+```
+
+{{% /tab-content %}}
+{{% tab-content %}}
+
+Use the [`/api/v2/dbrps/{dbrpID}` API endpoint](/influxdb/v2.6/api/#operation/PatchDBRPID) to update DBRP mappings.
+
+
+{{< api-endpoint endpoint="http://localhost:8086/api/v2/dbrps/{dbrpID}" method="PATCH" >}}
+
+
+Include the following:
+
+{{< req type="key" >}}
+
+- **Request method:** `PATCH`
+- **Headers:**
+  - {{< req "\*" >}} **Authorization:** `Token` schema with your InfluxDB [API token](/influxdb/v2.6/security/tokens/)
+- **Path parameters:**
+  - {{< req "\*" >}} **id:** DBRP mapping ID to update
+- **Query parameters:**
+  - {{< req "\*" >}} **orgID:** [organization ID](/influxdb/v2.6/organizations/view-orgs/#view-your-organization-id)
+- **Request body (JSON):**
+  - **rp:** retention policy name to update to
+  - **default:** set the retention policy as the default retention policy for the database
+
+##### Update the default retention policy
+```sh
+curl --request PATCH \
+  "http://localhost:8086/api/v2/dbrps/00oxo0X0xx0XXoX0?orgID=00oxo0oXx000x0Xo" \
+  --header "Authorization: Token YourAuthToken" \
+  --data '{
+    "rp": "example-rp",
+    "default": true
+  }'
+```
+{{% /tab-content %}}
+{{< /tabs-wrapper >}}
+
+## Delete a DBRP mapping
+
+Use the [`influx` CLI](/influxdb/v2.6/reference/cli/influx/) or the
+[InfluxDB API](/influxdb/v2.6/reference/api/) to delete a DBRP mapping.
+
+{{% oss-only %}}
+
+{{% note %}}
+Virtual DBRP mappings cannot be deleted.
+{{% /note %}}
+
+{{% /oss-only %}}
+
+{{< tabs-wrapper >}}
+{{% tabs %}}
+[influx CLI](#)
+[InfluxDB API](#)
+{{% /tabs %}}
+{{% tab-content %}}
+
+Use the [`influx v1 dbrp delete` command](/influxdb/v2.6/reference/cli/influx/v1/dbrp/delete/)
+to delete a DBRP mapping.
+Include the following:
+
+{{< req type="key" >}}
+
+- {{< req "\*" >}} **org** and **token** to authenticate. We recommend setting your organization and token to your active InfluxDB connection configuration in the influx CLI, so you don't have to add these parameters to each command. To set up your active InfluxDB configuration, see [`influx config set`](/influxdb/v2.6/reference/cli/influx/config/set/).
+- {{< req "\*" >}} **DBRP mapping ID** to delete
+
+```sh
+influx v1 dbrp delete --id 00oxo0X0xx0XXoX0
+```
+
+{{% /tab-content %}}
+{{% tab-content %}}
+
+Use the [`/api/v2/dbrps/{dbrpID}` API endpoint](/influxdb/v2.6/api/#operation/DeleteDBRPID) to delete a DBRP mapping.
+
+
+{{< api-endpoint endpoint="http://localhost:8086/api/v2/dbrps/{dbrpID}" method="DELETE" >}}
+
+
+Include the following:
+
+{{< req type="key" >}}
+
+- **Request method:** `DELETE`
+- **Headers:**
+  - {{< req "\*" >}} **Authorization:** `Token` schema with your InfluxDB [API token](/influxdb/v2.6/security/tokens/)
+- **Path parameters:**
+  - {{< req "\*" >}} **id:** DBRP mapping ID to delete
+- **Query parameters:**
+  - {{< req "\*" >}} **orgID:** [organization ID](/influxdb/v2.6/organizations/view-orgs/#view-your-organization-id)
+
+```sh
+curl --request DELETE \
+  "http://localhost:8086/api/v2/dbrps/00oxo0X0xx0XXoX0?orgID=00oxo0oXx000x0Xo" \
+  --header "Authorization: Token YourAuthToken"
+```
+{{% /tab-content %}}
+{{< /tabs-wrapper >}}
diff --git a/content/influxdb/v2.6/query-data/influxql/explore-data/_index.md b/content/influxdb/v2.6/query-data/influxql/explore-data/_index.md
new file mode 100644
index 000000000..60ff15455
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/influxql/explore-data/_index.md
@@ -0,0 +1,73 @@
+---
+title: Explore data using InfluxQL
+description: >
+  Explore time series data using InfluxQL, InfluxData's SQL-like query language.
+  Use the `SELECT` statement to query data from measurements, tags, and fields.
+menu:
+  influxdb_2_6:
+    name: Explore data
+    parent: Query with InfluxQL
+weight: 202
+---
+
+To start exploring data with InfluxQL, do the following:
+
+1. Verify your bucket has a database and retention policy (DBRP) mapping by [listing DBRP mappings for your bucket](/influxdb/v2.6/query-data/influxql/dbrp/#list-dbrp-mappings). If not, [create a new DBRP mapping](/influxdb/v2.6/query-data/influxql/dbrp/#create-dbrp-mappings).
+
+2.
[Configure timestamps in the InfluxQL shell](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/). + +3. _(Optional)_ If you would like to use the data used in the examples below, [download the NOAA sample data](#download-sample-data). + +4. Use the InfluxQL `SELECT` statement with other key clauses to explore your data. + +{{< children type="anchored-list" >}} + +{{< children readmore=true hr=true >}} + +## Download sample data + +The example InfluxQL queries in this documentation use publicly available [National Oceanic and Atmospheric Administration (NOAA)](https://www.noaa.gov/) data. + +To download a subset of NOAA data used in examples, run the script under [NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data) (for example, copy and paste the script into your Data Explorer - Script Editor), and replace "example-org" in the script with the name of your InfluxDB organization. + +Let's get acquainted with this subsample of the data in the `h2o_feet` measurement: + +Output: +{{% influxql/table-meta %}} +Name: h2o_feet +{{% /influxql/table-meta %}} + +|time | level description | location | water_level | +| :------------------- | :------------------ | :----------------------- |----------------------:| +| 2019-08-18T00:00:00Z | between 6 and 9 feet |coyote_creek | 8.1200000000 | +| 2019-08-18T00:00:00Z | below 3 feet | santa_monica | 2.0640000000 | +| 2019-08-18T00:06:00Z | between 6 and 9 feet | coyote_creek | 8.0050000000 | +| 2019-08-18T00:06:00Z | below 3 feet| santa_monica | 2.1160000000 | +| 2019-08-18T00:12:00Z | between 6 and 9 feet| coyote_creek | 7.8870000000 | +| 2019-08-18T00:12:00Z | below 3 feet | santa_monica | 2.0280000000 | + +The data in the `h2o_feet` [measurement](/influxdb/v2.6/reference/glossary/#measurement) +occurs at six-minute time intervals. 
+This measurement has one [tag key](/influxdb/v2.6/reference/glossary/#tag-key)
+(`location`) which has two [tag values](/influxdb/v2.6/reference/glossary/#tag-value):
+`coyote_creek` and `santa_monica`.
+The measurement also has two [fields](/influxdb/v2.6/reference/glossary/#field):
+`level description` stores string [field values](/influxdb/v2.6/reference/glossary/#field-value)
+and `water_level` stores float field values.
+
+
+### Configure timestamps in the InfluxQL shell
+
+By default, the [InfluxQL shell](/influxdb/v2.6/tools/influxql-shell/) returns timestamps in
+nanosecond UNIX epoch format.
+To return human-readable RFC3339 timestamps instead of Unix nanosecond timestamps,
+use the [precision helper command](/influxdb/v2.6/tools/influxql-shell/#precision) to configure
+the timestamp format:
+
+```sql
+precision rfc3339
+```
+
+The [InfluxDB API](/influxdb/v2.6/reference/api/influxdb-1x/) returns timestamps
+in [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) format by default.
+Specify alternative formats with the [`epoch` query string parameter](/influxdb/v2.6/reference/api/influxdb-1x/).
diff --git a/content/influxdb/v2.6/query-data/influxql/explore-data/group-by.md b/content/influxdb/v2.6/query-data/influxql/explore-data/group-by.md
new file mode 100644
index 000000000..59ac0b577
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/influxql/explore-data/group-by.md
@@ -0,0 +1,1193 @@
+---
+title: GROUP BY clause
+description: >
+  Use the `GROUP BY` clause to group query results by one or more specified [tags](/influxdb/v2.6/reference/glossary/#tag) and/or a specified time interval.
+menu:
+  influxdb_2_6:
+    name: GROUP BY clause
+    parent: Explore data
+weight: 303
+list_code_example: |
+  ```sql
+  SELECT_clause FROM_clause [WHERE_clause] GROUP BY [* | <tag_key>[,<tag_key>]]
+  ```
+---
+
+## GROUP BY tags
+
+`GROUP BY <tag_key>` groups query results by one or more specified [tags](/influxdb/v2.6/reference/glossary/#tag).
+
+### Syntax
+
+```sql
+SELECT_clause FROM_clause [WHERE_clause] GROUP BY [* | <tag_key>[,<tag_key>]]
+```
+
+- `GROUP BY *` - Groups results by all [tags](/influxdb/v2.6/reference/glossary/#tag)
+- `GROUP BY <tag_key>` - Groups results by a specific tag
+- `GROUP BY <tag_key>,<tag_key>` - Groups results by more than one tag. The order of the [tag keys](/influxdb/v2.6/reference/glossary/#tag-key) is irrelevant.
+- `GROUP BY /<regular_expression>/` - Groups results by tags that match the regular expression.
+
+If the query includes a `WHERE` clause, the `GROUP BY` clause must appear after the `WHERE` clause.
+
+### Examples
+
+{{< expand-wrapper >}}
+
+{{% expand "Group query results by a single tag" %}}
+
+```sql
+SELECT MEAN("water_level") FROM "h2o_feet" GROUP BY "location"
+```
+
+Output:
+{{% influxql/table-meta %}}
+name: h2o_feet
+tags: location=coyote_creek
+{{% /influxql/table-meta %}}
+
+| time                 |         mean |
+| :------------------- | -----------: |
+| 1970-01-01T00:00:00Z | 5.3591424203 |
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+tags: location=santa_monica
+{{% /influxql/table-meta %}}
+
+| time                 |         mean |
+| :------------------- | -----------: |
+| 1970-01-01T00:00:00Z | 3.5307120942 |
+
+The query uses the InfluxQL [MEAN() function](/influxdb/v2.6/query-data/influxql/functions/)
+to calculate the average `water_level` for each
+[tag value](/influxdb/v2.6/reference/glossary/#tag-value) of `location` in
+the `h2o_feet` [measurement](/influxdb/v2.6/reference/glossary/#measurement).
+InfluxDB returns results in two [series](/influxdb/v2.6/reference/glossary/#series): one for each tag value of `location`.
+
+{{% note %}}
+**Note:** In InfluxDB, [epoch 0](https://en.wikipedia.org/wiki/Unix_time) (`1970-01-01T00:00:00Z`) is often used as a null timestamp equivalent.
+If you request a query that has no timestamp to return, such as an [aggregation function](/influxdb/v2.6/query-data/influxql/functions/aggregates/) with an unbounded time range, InfluxDB returns epoch 0 as the timestamp.
+{{% /note %}} + +{{% /expand %}} + +{{% expand "Group query results by more than one tag" %}} + +```sql +SELECT MEAN("index") FROM "h2o_quality" GROUP BY "location","randtag" +``` +Output: +{{% influxql/table-meta %}} +name: h2o_quality +tags: location=coyote_creek, randtag=1 +{{% /influxql/table-meta %}} + +| time | mean | +| :------------------- | ------------: | +| 1970-01-01T00:00:00Z | 50.6903376019 | + +{{% influxql/table-meta %}} +name: h2o_quality +tags: location=coyote_creek, randtag=2 +{{% /influxql/table-meta %}} + +| time | mean | +| :------------------- | ------------: | +| 1970-01-01T00:00:00Z | 49.6618675442 | + +{{% influxql/table-meta %}} +name: h2o_quality +tags: location=coyote_creek, randtag=3 +{{% /influxql/table-meta %}} + +| time | mean | +| :------------------- | ------------: | +| 1970-01-01T00:00:00Z | 49.3609399076 | + +{{% influxql/table-meta %}} +name: h2o_quality +tags: location=santa_monica, randtag=1 +{{% /influxql/table-meta %}} + +| time | mean | +| :------------------- | ------------: | +| 1970-01-01T00:00:00Z | 49.1327124563 | + +{{% influxql/table-meta %}} +name: h2o_quality +tags: location=santa_monica, randtag=2 +{{% /influxql/table-meta %}} + +| time | mean | +| :------------------- | ------------: | +| 1970-01-01T00:00:00Z | 50.2937984496 | + +{{% influxql/table-meta %}} +name: h2o_quality +tags: location=santa_monica, randtag=3 +{{% /influxql/table-meta %}} + +| time | mean | +| :------------------- | ------------: | +| 1970-01-01T00:00:00Z | 49.9991990388 | + +The query uses the InfluxQL [MEAN() function](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean) to calculate the average `index` for +each combination of the `location` tag and the `randtag` tag in the +`h2o_quality` measurement. +Separate multiple tags with a comma in the `GROUP BY` clause. 
+ +{{% /expand %}} + +{{% expand "Group query results by all tags" %}} + +```sql +SELECT MEAN("index") FROM "h2o_quality" GROUP BY * +``` +Output: +{{% influxql/table-meta %}} +name: h2o_quality +tags: location=coyote_creek, randtag=1 +{{% /influxql/table-meta %}} + +| time | mean | +| :------------------- | ------------: | +| 1970-01-01T00:00:00Z | 50.6903376019 | + +{{% influxql/table-meta %}} +name: h2o_quality +tags: location=coyote_creek, randtag=2 +{{% /influxql/table-meta %}} + +| time | mean | +| :------------------- | ------------: | +| 1970-01-01T00:00:00Z | 49.6618675442 | + +{{% influxql/table-meta %}} +name: h2o_quality +tags: location=coyote_creek, randtag=3 +{{% /influxql/table-meta %}} + +| time | mean | +| :------------------- | ------------: | +| 1970-01-01T00:00:00Z | 49.3609399076 | + +{{% influxql/table-meta %}} +name: h2o_quality +tags: location=santa_monica, randtag=1 +{{% /influxql/table-meta %}} + +| time | mean | +| :------------------- | ------------: | +| 1970-01-01T00:00:00Z | 49.1327124563 | + +{{% influxql/table-meta %}} +name: h2o_quality +tags: location=santa_monica, randtag=2 +{{% /influxql/table-meta %}} + +| time | mean | +| :------------------- | ------------: | +| 1970-01-01T00:00:00Z | 50.2937984496 | + +{{% influxql/table-meta %}} +name: h2o_quality +tags: location=santa_monica, randtag=3 +{{% /influxql/table-meta %}} + +| time | mean | +| :------------------- | ------------: | +| 1970-01-01T00:00:00Z | 49.9991990388 | + +The query uses the InfluxQL [MEAN() function](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean) to calculate the average `index` for every possible +[tag](/influxdb/v2.6/reference/glossary/#tag) combination in the `h2o_quality` +measurement. + +{{% /expand %}} + +{{% expand "Group query results by tags that start with `l`" %}} + +```sql +SELECT "water_level",location FROM "h2o_feet" GROUP BY /l/ +``` + +This query uses a regular expression to group by tags that start with `l`. 
With the sample NOAA water dataset, results are grouped by the `location` tag.
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+## GROUP BY time intervals
+
+`GROUP BY time()` groups query results by a user-specified time interval.
+When using aggregate or selector functions in the `SELECT` clause, the operation is applied to each interval.
+
+### Basic GROUP BY time() syntax
+
+#### Syntax
+
+```sql
+SELECT <function>(<field_key>) FROM_clause WHERE <time_range> GROUP BY time(<time_interval>),[tag_key] [fill(<fill_option>)]
+```
+
+Basic `GROUP BY time()` queries require an InfluxQL [function](/influxdb/v2.6/query-data/influxql/functions)
+in the `SELECT` clause and a time range in the `WHERE` clause.
+Note that the `GROUP BY` clause must come **after** the `WHERE` clause.
+
+##### `time(time_interval)`
+
+The `time_interval` in the `GROUP BY time()` clause is a
+[duration literal](/influxdb/v2.6/reference/glossary/#duration).
+It determines how InfluxDB groups query results over time.
+For example, a `time_interval` of `5m` groups query results into five-minute
+time groups across the time range specified in the `WHERE` clause.
+
+##### `fill(<fill_option>)`
+
+`fill(<fill_option>)` is optional.
+It changes the value reported for time intervals with no data.
+See [GROUP BY time intervals and `fill()`](#group-by-time-intervals-and-fill)
+for more information.
+
+**Coverage:**
+
+Basic `GROUP BY time()` queries rely on the `time_interval` and InfluxDB's
+preset time boundaries to determine the raw data included in each time interval
+and the timestamps returned by the query.
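The preset boundaries are epoch-aligned: InfluxDB effectively floors each timestamp to a round-number multiple of the interval, regardless of the start time in the `WHERE` clause. A rough Python illustration of that flooring (a sketch of the concept, not InfluxDB's implementation):

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def preset_boundary(ts, interval):
    """Floor a timestamp to the epoch-aligned start of its interval."""
    return EPOCH + ((ts - EPOCH) // interval) * interval

# 00:06 falls in the 12-minute interval that starts at the round boundary 00:00
ts = datetime(2019, 8, 18, 0, 6, tzinfo=timezone.utc)
print(preset_boundary(ts, timedelta(minutes=12)).isoformat())
# 2019-08-18T00:00:00+00:00
```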
+ +### Examples of basic syntax + +The examples below use the following subsample of the sample data: + +```sql +SELECT "water_level","location" FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' +``` +Output: +{{% influxql/table-meta %}} +Name: h2o_feet +{{% /influxql/table-meta %}} + +| time | water_level | location | +| :------------------- | -----------: | :----------- | +| 2019-08-18T00:00:00Z | 8.5040000000 | coyote_creek | +| 2019-08-18T00:00:00Z | 2.3520000000 | santa_monica | +| 2019-08-18T00:06:00Z | 8.4190000000 | coyote_creek | +| 2019-08-18T00:06:00Z | 2.3790000000 | santa_monica | +| 2019-08-18T00:12:00Z | 8.3200000000 | coyote_creek | +| 2019-08-18T00:12:00Z | 2.3430000000 | santa_monica | +| 2019-08-18T00:18:00Z | 8.2250000000 | coyote_creek | +| 2019-08-18T00:18:00Z | 2.3290000000 | santa_monica | +| 2019-08-18T00:24:00Z | 8.1300000000 | coyote_creek | +| 2019-08-18T00:24:00Z | 2.2640000000 | santa_monica | +| 2019-08-18T00:30:00Z | 8.0120000000 | coyote_creek | +| 2019-08-18T00:30:00Z | 2.2670000000 | santa_monica | + +{{< expand-wrapper >}} + +{{% expand "Group query results into 12 minute intervals" %}} + +```sql +SELECT COUNT("water_level") FROM "h2o_feet" WHERE "location"='coyote_creek' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' GROUP BY time(12m) +``` +Output: +{{% influxql/table-meta %}} +Name: h2o_feet +{{% /influxql/table-meta %}} + +| time | count | +| :------------------- | :----------: | +| 2019-08-18T00:00:00Z | 2.0000000000 | +| 2019-08-18T00:12:00Z | 2.0000000000 | +| 2019-08-18T00:24:00Z | 2.0000000000 | + +The query uses the InfluxQL [COUNT() function](/influxdb/v2.6/query-data/influxql/functions/aggregates/#count) to count the number of `water_level` points per location, per 12-minute interval. + +Each output row represents a single 12 minute interval. 
+The count for the first timestamp covers the raw data between `2019-08-18T00:00:00Z`
+and up to, but not including, `2019-08-18T00:12:00Z`.
+The count for the second timestamp covers the raw data between `2019-08-18T00:12:00Z`
+and up to, but not including, `2019-08-18T00:24:00Z`.
+
+{{% /expand %}}
+
+{{% expand "Group query results into 12 minute intervals and by a tag key" %}}
+
+```sql
+SELECT COUNT("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' GROUP BY time(12m),"location"
+```
+Output:
+{{% influxql/table-meta %}}
+name: h2o_feet
+tags: location=coyote_creek
+{{% /influxql/table-meta %}}
+
+| time | count |
+| :------------------- | -----------: |
+| 2019-08-18T00:00:00Z | 2.0000000000 |
+| 2019-08-18T00:12:00Z | 2.0000000000 |
+| 2019-08-18T00:24:00Z | 2.0000000000 |
+
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+tags: location=santa_monica
+{{% /influxql/table-meta %}}
+
+| time | count |
+| :------------------- | -----------: |
+| 2019-08-18T00:00:00Z | 2.0000000000 |
+| 2019-08-18T00:12:00Z | 2.0000000000 |
+| 2019-08-18T00:24:00Z | 2.0000000000 |
+
+The query uses the InfluxQL [COUNT() function](/influxdb/v2.6/query-data/influxql/functions/aggregates/#count)
+to count the number of `water_level` points per location, per 12 minute interval.
+Note that the time interval and the tag key are separated by a comma in the
+`GROUP BY` clause.
+
+The query returns two [series](/influxdb/v2.6/reference/glossary/#series) of results: one for each
+[tag value](/influxdb/v2.6/reference/glossary/#tag-value) of the `location` tag.
+Each output row represents a single 12 minute interval.
+The count for the first timestamp covers the raw data between `2019-08-18T00:00:00Z`
+and up to, but not including, `2019-08-18T00:12:00Z`.
+The count for the second timestamp covers the raw data between `2019-08-18T00:12:00Z`
+and up to, but not including, `2019-08-18T00:24:00Z`.
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+### Common issues with basic syntax
+
+##### Unexpected timestamps and values in query results
+
+With the basic syntax, InfluxDB relies on the `GROUP BY time()` interval
+and on the system's preset time boundaries to determine the raw data included
+in each time interval and the timestamps returned by the query.
+In some cases, this can lead to unexpected results.
+
+**Example**
+
+Raw data:
+
+```sql
+SELECT "water_level" FROM "h2o_feet" WHERE "location"='coyote_creek' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:18:00Z'
+```
+
+Output:
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | water_level |
+| :------------------- | -----------: |
+| 2019-08-18T00:00:00Z | 8.5040000000 |
+| 2019-08-18T00:06:00Z | 8.4190000000 |
+| 2019-08-18T00:12:00Z | 8.3200000000 |
+| 2019-08-18T00:18:00Z | 8.2250000000 |
+
+Query and results:
+
+The following example queries a 12-minute time range and groups results into 12-minute time intervals, but it returns **two** results:
+
+```sql
+SELECT COUNT("water_level") FROM "h2o_feet" WHERE "location"='coyote_creek' AND time >= '2019-08-18T00:06:00Z' AND time < '2019-08-18T00:18:00Z' GROUP BY time(12m)
+```
+
+Output:
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | count |
+| :------------------- | -----------: |
+| 2019-08-18T00:00:00Z | 1.0000000000 |
+| 2019-08-18T00:12:00Z | 1.0000000000 |
+
+{{% note %}}
+**Note:** The timestamp in the first row of data occurs before the start of the queried time range.
+{{% /note %}}
+
+Explanation:
+
+InfluxDB uses preset round-number time boundaries for `GROUP BY` intervals that are
+independent of any time conditions in the `WHERE` clause.
+When it calculates the results, all returned data must occur within the query's
+explicit time range but the `GROUP BY` intervals will be based on the preset
+time boundaries.
+
+The table below shows the preset time boundary, the relevant `GROUP BY time()` interval, the
+points included, and the returned timestamp for each `GROUP BY time()`
+interval in the results.
+
+| Time Interval Number | Preset Time Boundary | `GROUP BY time()` Interval | Points Included | Returned Timestamp |
+| :------------------- | :------------------------------------------------------------- | :------------------------------------------------------------- | :-------------- | :--------------------- |
+| 1 | `time >= 2019-08-18T00:00:00Z AND time < 2019-08-18T00:12:00Z` | `time >= 2019-08-18T00:06:00Z AND time < 2019-08-18T00:12:00Z` | `8.419` | `2019-08-18T00:00:00Z` |
+| 2 | `time >= 2019-08-18T00:12:00Z AND time < 2019-08-18T00:24:00Z` | `time >= 2019-08-18T00:12:00Z AND time < 2019-08-18T00:18:00Z` | `8.32` | `2019-08-18T00:12:00Z` |
+
+The first preset 12-minute time boundary begins at `00:00` and ends just before
+`00:12`.
+Only one raw point (`8.419`) falls both within the query's first `GROUP BY time()` interval and in that
+first time boundary.
+Note that while the returned timestamp occurs before the start of the queried time range,
+the query result excludes data that occur before the queried time range.
+
+The second preset 12-minute time boundary begins at `00:12` and ends just before
+`00:24`.
+Only one raw point (`8.32`) falls both within the query's second `GROUP BY time()` interval and in that
+second time boundary.
+
+The [advanced `GROUP BY time()` syntax](#advanced-group-by-time-syntax) allows users to shift
+the start time of the InfluxDB database's preset time boundaries.
Shifting the preset time boundaries forward by six minutes aligns them with the queried time range, such that
+InfluxDB returns:
+
+Output:
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | count |
+| :------------------- | ----: |
+| 2019-08-18T00:06:00Z | 2 |
+
+### Advanced GROUP BY time() syntax
+
+#### Syntax
+
+```sql
+SELECT <function>(<field_key>) FROM_clause WHERE <time_range> GROUP BY time(<time_interval>,<offset_interval>),[tag_key] [fill(<fill_option>)]
+```
+
+Advanced `GROUP BY time()` queries require an InfluxQL [function](/influxdb/v2.6/query-data/influxql/functions/)
+in the `SELECT` clause and a time range in the
+`WHERE` clause. Note that the `GROUP BY` clause must come after the `WHERE` clause.
+
+##### `time(time_interval,offset_interval)`
+
+See the [Basic GROUP BY time() Syntax](#basic-group-by-time-syntax)
+for details on the `time_interval`.
+
+The `offset_interval` is a [duration literal](/influxdb/v2.6/reference/glossary/#duration).
+It shifts the InfluxDB database's preset time boundaries forward or back.
+The `offset_interval` can be positive or negative.
+
+##### `fill(<fill_option>)`
+
+`fill(<fill_option>)` is optional.
+It changes the value reported for time intervals with no data.
+See [GROUP BY time intervals and `fill()`](#group-by-time-intervals-and-fill)
+for more information.
+
+**Coverage:**
+
+Advanced `GROUP BY time()` queries rely on the `time_interval`, the `offset_interval`,
+and the InfluxDB database's preset time boundaries to determine the raw data included in each time interval
+and the timestamps returned by the query.
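Building on the preset boundaries, an `offset_interval` simply shifts where those round-number boundaries fall. The following Python snippet is a minimal sketch of the shifted flooring (an illustration only, not InfluxDB's implementation):

```python
from datetime import datetime, timezone

def offset_boundary(ts: datetime, interval_s: int, offset_s: int) -> datetime:
    """Floor a timestamp to a preset boundary shifted by offset_s seconds.

    Illustrative sketch of how time(<interval>,<offset>) moves the
    boundaries; a negative offset shifts them back instead of forward.
    """
    epoch_s = int(ts.timestamp())
    floored = ((epoch_s - offset_s) // interval_s) * interval_s + offset_s
    return datetime.fromtimestamp(floored, tz=timezone.utc)

# With time(18m,6m), a point at 00:07 lands in the boundary starting at 00:06.
point = datetime(2019, 8, 18, 0, 7, tzinfo=timezone.utc)
print(offset_boundary(point, 18 * 60, 6 * 60))    # 2019-08-18 00:06:00+00:00
# For an 18m interval, an offset of -12m yields the same boundaries as +6m.
print(offset_boundary(point, 18 * 60, -12 * 60))  # 2019-08-18 00:06:00+00:00
```

The last two calls illustrate why, in the examples below, positive and negative offsets that differ by a whole interval produce identical results.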
+ +### Examples of advanced syntax + +The examples below use the following subsample of the sample data: + +```sql +SELECT "water_level" FROM "h2o_feet" WHERE "location"='coyote_creek' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:54:00Z' +``` +Output: +{{% influxql/table-meta %}} +Name: h2o_feet +{{% /influxql/table-meta %}} + +| time | water_level | +| :------------------- | -----------: | +| 2019-08-18T00:00:00Z | 8.5040000000 | +| 2019-08-18T00:06:00Z | 8.4190000000 | +| 2019-08-18T00:12:00Z | 8.3200000000 | +| 2019-08-18T00:18:00Z | 8.2250000000 | +| 2019-08-18T00:24:00Z | 8.1300000000 | +| 2019-08-18T00:30:00Z | 8.0120000000 | +| 2019-08-18T00:36:00Z | 7.8940000000 | +| 2019-08-18T00:42:00Z | 7.7720000000 | +| 2019-08-18T00:48:00Z | 7.6380000000 | +| 2019-08-18T00:54:00Z | 7.5100000000 | + +{{< expand-wrapper >}} + +{{% expand "Group query results into 18 minute intervals and shift the preset time boundaries forward" %}} + +```sql +SELECT MEAN("water_level") FROM "h2o_feet" WHERE "location"='coyote_creek' AND time >= '2019-08-18T00:06:00Z' AND time <= '2019-08-18T00:54:00Z' GROUP BY time(18m,6m) +``` +Output: +{{% influxql/table-meta %}} +Name: h2o_feet +{{% /influxql/table-meta %}} + +| time | mean | +| :------------------- | -----------: | +| 2019-08-18T00:06:00Z | 8.3213333333 | +| 2019-08-18T00:24:00Z | 8.0120000000 | +| 2019-08-18T00:42:00Z | 7.6400000000 | + +The query uses the InfluxQL [MEAN() function](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean) +to calculate the average `water_level`, grouping results into 18 minute time intervals, and offsetting the preset time boundaries by 6 minutes. + +The time boundaries and returned timestamps for the query **without** the `offset_interval` adhere to the InfluxDB database's preset time boundaries. 
Let's first examine the results without the offset:
+
+```sql
+SELECT MEAN("water_level") FROM "h2o_feet" WHERE "location"='coyote_creek' AND time >= '2019-08-18T00:06:00Z' AND time <= '2019-08-18T00:54:00Z' GROUP BY time(18m)
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | mean |
+| :------------------- | -----------: |
+| 2019-08-18T00:00:00Z | 8.3695000000 |
+| 2019-08-18T00:18:00Z | 8.1223333333 |
+| 2019-08-18T00:36:00Z | 7.7680000000 |
+| 2019-08-18T00:54:00Z | 7.5100000000 |
+
+The time boundaries and returned timestamps for the query **without** the
+`offset_interval` adhere to the InfluxDB database's preset time boundaries:
+
+| Time Interval Number | Preset Time Boundary | `GROUP BY time()` Interval | Points Included | Returned Timestamp |
+| :------------------- | :------------------------------------------------------------- | :------------------------------------------------------------- | :--------------------- | :--------------------- |
+| 1 | `time >= 2019-08-18T00:00:00Z AND time < 2019-08-18T00:18:00Z` | `time >= 2019-08-18T00:06:00Z AND time < 2019-08-18T00:18:00Z` | `8.419`,`8.32` | `2019-08-18T00:00:00Z` |
+| 2 | `time >= 2019-08-18T00:18:00Z AND time < 2019-08-18T00:36:00Z` | <--- same | `8.225`,`8.13`,`8.012` | `2019-08-18T00:18:00Z` |
+| 3 | `time >= 2019-08-18T00:36:00Z AND time < 2019-08-18T00:54:00Z` | <--- same | `7.894`,`7.772`,`7.638` | `2019-08-18T00:36:00Z` |
+| 4 | `time >= 2019-08-18T00:54:00Z AND time < 2019-08-18T01:12:00Z` | `time = 2019-08-18T00:54:00Z` | `7.51` | `2019-08-18T00:54:00Z` |
+
+The first preset 18-minute time boundary begins at `00:00` and ends just before
+`00:18`. Two raw points (`8.419` and `8.32`) fall both within the first `GROUP BY time()` interval and in that
+first time boundary. While the returned timestamp occurs before the start of the queried time range,
+the query result excludes data that occur before the queried time range.
+
+The second preset 18-minute time boundary begins at `00:18` and ends just before
+`00:36`. Three raw points (`8.225`, `8.13`, and `8.012`) fall both within the second `GROUP BY time()` interval and in that
+second time boundary. In this case, the boundary time range and the interval's time range are the same.
+
+The fourth preset 18-minute time boundary begins at `00:54` and ends just before
+`01:12`. One raw point (`7.51`) falls both within the fourth `GROUP BY time()` interval and in that
+fourth time boundary.
+
+The time boundaries and returned timestamps for the query **with** the `offset_interval` adhere to the offset time boundaries:
+
+| Time Interval Number | Offset Time Boundary | `GROUP BY time()` Interval | Points Included | Returned Timestamp |
+| :------------------- | :------------------------------------------------------------- | :------------------------- | :---------------------- | ---------------------- |
+| 1 | `time >= 2019-08-18T00:06:00Z AND time < 2019-08-18T00:24:00Z` | <--- same | `8.419`,`8.32`,`8.225` | `2019-08-18T00:06:00Z` |
+| 2 | `time >= 2019-08-18T00:24:00Z AND time < 2019-08-18T00:42:00Z` | <--- same | `8.13`,`8.012`,`7.894` | `2019-08-18T00:24:00Z` |
+| 3 | `time >= 2019-08-18T00:42:00Z AND time < 2019-08-18T01:00:00Z` | <--- same | `7.772`,`7.638`,`7.51` | `2019-08-18T00:42:00Z` |
+| 4 | `time >= 2019-08-18T01:00:00Z AND time < 2019-08-18T01:18:00Z` | NA | NA | NA |
+
+The six-minute offset interval shifts forward the preset boundary's time range
+such that the boundary time ranges and the relevant `GROUP BY time()` interval time ranges are
+always the same.
+With the offset, each interval performs the calculation on three points, and
+the timestamp returned matches both the start of the boundary time range and the
+start of the `GROUP BY time()` interval time range.
+
+Note that `offset_interval` forces the fourth time boundary to be outside
+the queried time range so the query returns no results for that last interval.
+
+{{% /expand %}}
+
+{{% expand "Group query results into 18 minute intervals and shift the preset time boundaries back" %}}
+
+```sql
+SELECT MEAN("water_level") FROM "h2o_feet" WHERE "location"='coyote_creek' AND time >= '2019-08-18T00:06:00Z' AND time <= '2019-08-18T00:54:00Z' GROUP BY time(18m,-12m)
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | mean |
+| :------------------- | -----------: |
+| 2019-08-18T00:06:00Z | 8.3213333333 |
+| 2019-08-18T00:24:00Z | 8.0120000000 |
+| 2019-08-18T00:42:00Z | 7.6400000000 |
+
+
+The query uses the InfluxQL [MEAN() function](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean) to calculate the average `water_level`, grouping results into 18 minute
+time intervals, and offsetting the preset time boundaries by -12 minutes.
+
+{{% note %}}
+**Note:** This query returns the same results as the query in the previous example, but it
+uses a negative `offset_interval` instead of a positive `offset_interval`.
+There are no performance differences between the two queries; feel free to choose the most
+intuitive option when deciding between a positive and negative `offset_interval`.
+{{% /note %}}
+
+The time boundaries and returned timestamps for the query **without** the `offset_interval` adhere to the InfluxDB database's preset time boundaries.
Let's first examine the results without the offset:
+
+```sql
+SELECT MEAN("water_level") FROM "h2o_feet" WHERE "location"='coyote_creek' AND time >= '2019-08-18T00:06:00Z' AND time <= '2019-08-18T00:54:00Z' GROUP BY time(18m)
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | mean |
+| :------------------- | -----------: |
+| 2019-08-18T00:00:00Z | 8.3695000000 |
+| 2019-08-18T00:18:00Z | 8.1223333333 |
+| 2019-08-18T00:36:00Z | 7.7680000000 |
+| 2019-08-18T00:54:00Z | 7.5100000000 |
+
+The time boundaries and returned timestamps for the query **without** the
+`offset_interval` adhere to the InfluxDB database's preset time boundaries:
+
+| Time Interval Number | Preset Time Boundary | `GROUP BY time()` Interval | Points Included | Returned Timestamp |
+| :------------------- | :------------------------------------------------------------- | :------------------------------------------------------------- | :--------------------- | :--------------------- |
+| 1 | `time >= 2019-08-18T00:00:00Z AND time < 2019-08-18T00:18:00Z` | `time >= 2019-08-18T00:06:00Z AND time < 2019-08-18T00:18:00Z` | `8.419`,`8.32` | `2019-08-18T00:00:00Z` |
+| 2 | `time >= 2019-08-18T00:18:00Z AND time < 2019-08-18T00:36:00Z` | <--- same | `8.225`,`8.13`,`8.012` | `2019-08-18T00:18:00Z` |
+| 3 | `time >= 2019-08-18T00:36:00Z AND time < 2019-08-18T00:54:00Z` | <--- same | `7.894`,`7.772`,`7.638` | `2019-08-18T00:36:00Z` |
+| 4 | `time >= 2019-08-18T00:54:00Z AND time < 2019-08-18T01:12:00Z` | `time = 2019-08-18T00:54:00Z` | `7.51` | `2019-08-18T00:54:00Z` |
+
+The first preset 18-minute time boundary begins at `00:00` and ends just before
+`00:18`.
+Two raw points (`8.419` and `8.32`) fall both within the first `GROUP BY time()` interval and in that
+first time boundary.
+Note that while the returned timestamp occurs before the start of the queried time range,
+the query result excludes data that occur before the queried time range.
+
+The second preset 18-minute time boundary begins at `00:18` and ends just before
+`00:36`.
+Three raw points (`8.225`, `8.13`, and `8.012`) fall both within the second `GROUP BY time()` interval and in that
+second time boundary. In this case, the boundary time range and the interval's time range are the same.
+
+The fourth preset 18-minute time boundary begins at `00:54` and ends just before
+`01:12`.
+One raw point (`7.51`) falls both within the fourth `GROUP BY time()` interval and in that
+fourth time boundary.
+
+The time boundaries and returned timestamps for the query **with** the
+`offset_interval` adhere to the offset time boundaries:
+
+| Time Interval Number | Offset Time Boundary | `GROUP BY time()` Interval | Points Included | Returned Timestamp |
+| :------------------- | :------------------------------------------------------------- | :------------------------- | :---------------------- | ---------------------- |
+| 1 | `time >= 2019-08-17T23:48:00Z AND time < 2019-08-18T00:06:00Z` | NA | NA | NA |
+| 2 | `time >= 2019-08-18T00:06:00Z AND time < 2019-08-18T00:24:00Z` | <--- same | `8.419`,`8.32`,`8.225` | `2019-08-18T00:06:00Z` |
+| 3 | `time >= 2019-08-18T00:24:00Z AND time < 2019-08-18T00:42:00Z` | <--- same | `8.13`,`8.012`,`7.894` | `2019-08-18T00:24:00Z` |
+| 4 | `time >= 2019-08-18T00:42:00Z AND time < 2019-08-18T01:00:00Z` | <--- same | `7.772`,`7.638`,`7.51` | `2019-08-18T00:42:00Z` |
+
+The negative 12-minute offset interval shifts back the preset boundary's time range
+such that the boundary time ranges and the relevant `GROUP BY time()` interval time ranges are always the
+same.
+With the offset, each interval performs the calculation on three points, and
+the timestamp returned matches both the start of the boundary time range and the
+start of the `GROUP BY time()` interval time range.
+
+Note that `offset_interval` forces the first time boundary to be outside
+the queried time range so the query returns no results for that first interval.
+
+{{% /expand %}}
+
+{{% expand "Group query results into 12 minute intervals and shift the preset time boundaries forward" %}}
+
+This example is a continuation of the scenario outlined in [Common Issues with Basic Syntax](#common-issues-with-basic-syntax).
+
+```sql
+SELECT COUNT("water_level") FROM "h2o_feet" WHERE "location"='coyote_creek' AND time >= '2019-08-18T00:06:00Z' AND time < '2019-08-18T00:18:00Z' GROUP BY time(12m,6m)
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | count |
+| :------------------- | -----------: |
+| 2019-08-18T00:06:00Z | 2.0000000000 |
+
+The query uses the InfluxQL [COUNT() function](/influxdb/v2.6/query-data/influxql/functions/aggregates/#count) to count the number of `water_level` points per location, per 12-minute interval, and offset the preset time boundaries by six minutes.
+
+The time boundaries and returned timestamps for the query **without** the `offset_interval` adhere to the InfluxDB database's preset time boundaries.
Let's first examine the results without the offset:
+
+```sql
+SELECT COUNT("water_level") FROM "h2o_feet" WHERE "location"='coyote_creek' AND time >= '2019-08-18T00:06:00Z' AND time < '2019-08-18T00:18:00Z' GROUP BY time(12m)
+```
+
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | count |
+| :------------------- | -----------: |
+| 2019-08-18T00:00:00Z | 1.0000000000 |
+| 2019-08-18T00:12:00Z | 1.0000000000 |
+
+The time boundaries and returned timestamps for the query **without** the `offset_interval` adhere to the InfluxDB database's preset time boundaries:
+
+| Time Interval Number | Preset Time Boundary | `GROUP BY time()` Interval | Points Included | Returned Timestamp |
+| :------------------- | :------------------------------------------------------------- | :------------------------------------------------------------- | :-------------- | :--------------------- |
+| 1 | `time >= 2019-08-18T00:00:00Z AND time < 2019-08-18T00:12:00Z` | `time >= 2019-08-18T00:06:00Z AND time < 2019-08-18T00:12:00Z` | `8.419` | `2019-08-18T00:00:00Z` |
+| 2 | `time >= 2019-08-18T00:12:00Z AND time < 2019-08-18T00:24:00Z` | `time >= 2019-08-18T00:12:00Z AND time < 2019-08-18T00:18:00Z` | `8.32` | `2019-08-18T00:12:00Z` |
+
+The first preset 12-minute time boundary begins at `00:00` and ends just before
+`00:12`.
+Only one raw point (`8.419`) falls both within the query's first `GROUP BY time()` interval and in that
+first time boundary.
+Note that while the returned timestamp occurs before the start of the queried time range,
+the query result excludes data that occur before the queried time range.
+
+The second preset 12-minute time boundary begins at `00:12` and ends just before
+`00:24`.
+Only one raw point (`8.32`) falls both within the query's second `GROUP BY time()` interval and in that
+second time boundary.
+
+The time boundaries and returned timestamps for the query **with** the
+`offset_interval` adhere to the offset time boundaries:
+
+| Time Interval Number | Offset Time Boundary | `GROUP BY time()` Interval | Points Included | Returned Timestamp |
+| :------------------- | :------------------------------------------------------------- | :------------------------- | :-------------- | :--------------------- |
+| 1 | `time >= 2019-08-18T00:06:00Z AND time < 2019-08-18T00:18:00Z` | <--- same | `8.419`,`8.32` | `2019-08-18T00:06:00Z` |
+| 2 | `time >= 2019-08-18T00:18:00Z AND time < 2019-08-18T00:30:00Z` | NA | NA | NA |
+
+The six-minute offset interval shifts forward the preset boundary's time range
+such that the preset boundary time range and the relevant `GROUP BY time()` interval time range are the
+same. With the offset, the query returns a single result, and the timestamp returned
+matches both the start of the boundary time range and the start of the `GROUP BY time()` interval
+time range.
+
+Note that `offset_interval` forces the second time boundary to be outside
+the queried time range so the query returns no results for that second interval.
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+## `GROUP BY` time intervals and `fill()`
+
+`fill()` changes the value reported for time intervals with no data.
+
+#### Syntax
+
+```sql
+SELECT <function>(<field_key>) FROM_clause WHERE <time_range> GROUP BY time(<time_interval>,[<offset_interval>])[,tag_key] [fill(<fill_option>)]
+```
+
+By default, a `GROUP BY time()` interval with no data reports `null` as its
+value in the output column.
+`fill()` changes the value reported for time intervals with no data.
+Note that `fill()` must go at the end of the `GROUP BY` clause if you're
+`GROUP(ing) BY` several things (for example, both [tags](/influxdb/v2.6/reference/glossary/#tag) and a time interval).
+
+##### fill_option
+
+ - Any numerical value - Reports the given numerical value for time intervals with no data.
+ - `linear` - Reports the results of [linear interpolation](https://en.wikipedia.org/wiki/Linear_interpolation) for time intervals with no data. + - `none` - Reports no timestamp and no value for time intervals with no data. + - `null` - Reports null for time intervals with no data but returns a timestamp. This is the same as the default behavior. + - `previous` - Reports the value from the previous time interval for time intervals with no data. + +### Examples + +{{< tabs-wrapper >}} +{{% tabs "even-wrap" %}} +[fill(100)](#) +[fill(linear)](#) +[fill(none)](#) +[fill(null)](#) +[fill(previous)](#) +{{% /tabs %}} +{{% tab-content %}} + +Without `fill(100)`: + +```sql +SELECT MEAN("index") FROM "h2o_quality" WHERE "location"='santa_monica' AND time >= '2019-08-19T08:42:00Z' AND time <= '2019-08-19T09:30:00Z' GROUP BY time(5m) +``` +Output: +{{% influxql/table-meta %}} +Name: h2o_quality +{{% /influxql/table-meta %}} + +| time | mean | +| :------------------- | ------------: | +| 2019-08-19T08:40:00Z | 68.0000000000 | +| 2019-08-19T08:45:00Z | 29.0000000000 | +| 2019-08-19T08:50:00Z | 47.0000000000 | +| 2019-08-19T08:55:00Z | | +| 2019-08-19T09:00:00Z | 84.0000000000 | +| 2019-08-19T09:05:00Z | 0.0000000000 | +| 2019-08-19T09:10:00Z | 41.0000000000 | +| 2019-08-19T09:15:00Z | 13.0000000000 | +| 2019-08-19T09:20:00Z | 9.0000000000 | +| 2019-08-19T09:25:00Z | | +| 2019-08-19T09:30:00Z | 6.0000000000 | + +With `fill(100)`: +```sql +SELECT MEAN("index") FROM "h2o_quality" WHERE "location"='santa_monica' AND time >= '2019-08-19T08:42:00Z' AND time <= '2019-08-19T09:30:00Z' GROUP BY time(5m) fill(100) +``` +Output: +{{% influxql/table-meta %}} +Name: h2o_quality +{{% /influxql/table-meta %}} + +| time | mean | +| :------------------- | -------------: | +| 2019-08-19T08:40:00Z | 68.0000000000 | +| 2019-08-19T08:45:00Z | 29.0000000000 | +| 2019-08-19T08:50:00Z | 47.0000000000 | +| 2019-08-19T08:55:00Z | 100.0000000000 | +| 2019-08-19T09:00:00Z | 84.0000000000 | +| 
2019-08-19T09:05:00Z | 0.0000000000 | +| 2019-08-19T09:10:00Z | 41.0000000000 | +| 2019-08-19T09:15:00Z | 13.0000000000 | +| 2019-08-19T09:20:00Z | 9.0000000000 | +| 2019-08-19T09:25:00Z | 100.0000000000 | +| 2019-08-19T09:30:00Z | 6.0000000000 | + +`fill(100)` changes the value reported for the time interval with no data to `100`. + +{{% /tab-content %}} + +{{% tab-content %}} + +Without `fill(linear)`: + +```sql +SELECT MEAN("tadpoles") FROM "pond" WHERE time >= '2019-11-11T21:00:00Z' AND time <= '2019-11-11T22:06:00Z' GROUP BY time(12m) +``` +Output: +{{% influxql/table-meta %}} +Name: pond +{{% /influxql/table-meta %}} + +| time | mean | +| :------------------- | ---: | +| 2019-11-11T21:00:00Z | 1 | +| 2019-11-11T21:12:00Z | | +| 2019-11-11T21:24:00Z | 3 | +| 2019-11-11T21:36:00Z | | +| 2019-11-11T21:48:00Z | | +| 2019-11-11T22:00:00Z | 6 | + +With `fill(linear)`: + +```sql +SELECT MEAN("tadpoles") FROM "pond" WHERE time >= '2019-11-11T21:00:00Z' AND time <= '2019-11-11T22:06:00Z' GROUP BY time(12m) fill(linear) +``` + +Output: +{{% influxql/table-meta %}} +Name: pond +{{% /influxql/table-meta %}} + +| time | mean | +| :------------------- | ---: | +| 2019-11-11T21:00:00Z | 1 | +| 2019-11-11T21:12:00Z | 2 | +| 2019-11-11T21:24:00Z | 3 | +| 2019-11-11T21:36:00Z | 4 | +| 2019-11-11T21:48:00Z | 5 | +| 2019-11-11T22:00:00Z | 6 | + +`fill(linear)` changes the value reported for the time interval with no data +to the results of [linear interpolation](https://en.wikipedia.org/wiki/Linear_interpolation). + +{{% note %}} +**Note:** The data in this example is not in the `noaa` database. 
+
+{{% /note %}}
+
+{{% /tab-content %}}
+
+{{% tab-content %}}
+
+Without `fill(none)`:
+
+```sql
+SELECT MEAN("index") FROM "h2o_quality" WHERE "location"='santa_monica' AND time >= '2019-08-19T08:42:00Z' AND time <= '2019-08-19T09:30:00Z' GROUP BY time(5m)
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_quality
+{{% /influxql/table-meta %}}
+
+| time | mean |
+| :------------------- | ------------: |
+| 2019-08-19T08:40:00Z | 68.0000000000 |
+| 2019-08-19T08:45:00Z | 29.0000000000 |
+| 2019-08-19T08:50:00Z | 47.0000000000 |
+| 2019-08-19T08:55:00Z | |
+| 2019-08-19T09:00:00Z | 84.0000000000 |
+| 2019-08-19T09:05:00Z | 0.0000000000 |
+| 2019-08-19T09:10:00Z | 41.0000000000 |
+| 2019-08-19T09:15:00Z | 13.0000000000 |
+| 2019-08-19T09:20:00Z | 9.0000000000 |
+| 2019-08-19T09:25:00Z | |
+| 2019-08-19T09:30:00Z | 6.0000000000 |
+
+With `fill(none)`:
+
+```sql
+SELECT MEAN("index") FROM "h2o_quality" WHERE "location"='santa_monica' AND time >= '2019-08-19T08:42:00Z' AND time <= '2019-08-19T09:30:00Z' GROUP BY time(5m) fill(none)
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_quality
+{{% /influxql/table-meta %}}
+
+| time | mean |
+| :------------------- | ------------: |
+| 2019-08-19T08:40:00Z | 68.0000000000 |
+| 2019-08-19T08:45:00Z | 29.0000000000 |
+| 2019-08-19T08:50:00Z | 47.0000000000 |
+| 2019-08-19T09:00:00Z | 84.0000000000 |
+| 2019-08-19T09:05:00Z | 0.0000000000 |
+| 2019-08-19T09:10:00Z | 41.0000000000 |
+| 2019-08-19T09:15:00Z | 13.0000000000 |
+| 2019-08-19T09:20:00Z | 9.0000000000 |
+| 2019-08-19T09:30:00Z | 6.0000000000 |
+
+`fill(none)` reports no value and no timestamp for the time interval with no data.
+
+{{% /tab-content %}}
+
+{{% tab-content %}}
+
+Without `fill(null)`:
+
+```sql
+SELECT MEAN("index") FROM "h2o_quality" WHERE "location"='santa_monica' AND time >= '2019-08-19T08:42:00Z' AND time <= '2019-08-19T09:30:00Z' GROUP BY time(5m)
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_quality
+{{% /influxql/table-meta %}}
+
+| time | mean |
+| :------------------- | ------------: |
+| 2019-08-19T08:40:00Z | 68.0000000000 |
+| 2019-08-19T08:45:00Z | 29.0000000000 |
+| 2019-08-19T08:50:00Z | 47.0000000000 |
+| 2019-08-19T08:55:00Z | |
+| 2019-08-19T09:00:00Z | 84.0000000000 |
+| 2019-08-19T09:05:00Z | 0.0000000000 |
+| 2019-08-19T09:10:00Z | 41.0000000000 |
+| 2019-08-19T09:15:00Z | 13.0000000000 |
+| 2019-08-19T09:20:00Z | 9.0000000000 |
+| 2019-08-19T09:25:00Z | |
+| 2019-08-19T09:30:00Z | 6.0000000000 |
+
+With `fill(null)`:
+
+```sql
+SELECT MEAN("index") FROM "h2o_quality" WHERE "location"='santa_monica' AND time >= '2019-08-19T08:42:00Z' AND time <= '2019-08-19T09:30:00Z' GROUP BY time(5m) fill(null)
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_quality
+{{% /influxql/table-meta %}}
+
+| time | mean |
+| :------------------- | ------------: |
+| 2019-08-19T08:40:00Z | 68.0000000000 |
+| 2019-08-19T08:45:00Z | 29.0000000000 |
+| 2019-08-19T08:50:00Z | 47.0000000000 |
+| 2019-08-19T08:55:00Z | null |
+| 2019-08-19T09:00:00Z | 84.0000000000 |
+| 2019-08-19T09:05:00Z | 0.0000000000 |
+| 2019-08-19T09:10:00Z | 41.0000000000 |
+| 2019-08-19T09:15:00Z | 13.0000000000 |
+| 2019-08-19T09:20:00Z | 9.0000000000 |
+| 2019-08-19T09:25:00Z | null |
+| 2019-08-19T09:30:00Z | 6.0000000000 |
+
+`fill(null)` reports `null` as the value for the time interval with no data.
+That result matches the result of the query without `fill(null)`.
+
+{{% /tab-content %}}
+
+{{% tab-content %}}
+
+Without `fill(previous)`:
+
+```sql
+SELECT MEAN("index") FROM "h2o_quality" WHERE "location"='santa_monica' AND time >= '2019-08-19T08:42:00Z' AND time <= '2019-08-19T09:30:00Z' GROUP BY time(5m)
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_quality
+{{% /influxql/table-meta %}}
+
+| time | mean |
+| :------------------- | ------------: |
+| 2019-08-19T08:40:00Z | 68.0000000000 |
+| 2019-08-19T08:45:00Z | 29.0000000000 |
+| 2019-08-19T08:50:00Z | 47.0000000000 |
+| 2019-08-19T08:55:00Z | |
+| 2019-08-19T09:00:00Z | 84.0000000000 |
+| 2019-08-19T09:05:00Z | 0.0000000000 |
+| 2019-08-19T09:10:00Z | 41.0000000000 |
+| 2019-08-19T09:15:00Z | 13.0000000000 |
+| 2019-08-19T09:20:00Z | 9.0000000000 |
+| 2019-08-19T09:25:00Z | |
+| 2019-08-19T09:30:00Z | 6.0000000000 |
+
+With `fill(previous)`:
+
+```sql
+SELECT MEAN("index") FROM "h2o_quality" WHERE "location"='santa_monica' AND time >= '2019-08-19T08:42:00Z' AND time <= '2019-08-19T09:30:00Z' GROUP BY time(5m) fill(previous)
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_quality
+{{% /influxql/table-meta %}}
+
+| time | mean |
+| :------------------- | ------------: |
+| 2019-08-19T08:40:00Z | 68.0000000000 |
+| 2019-08-19T08:45:00Z | 29.0000000000 |
+| 2019-08-19T08:50:00Z | 47.0000000000 |
+| 2019-08-19T08:55:00Z | 47.0000000000 |
+| 2019-08-19T09:00:00Z | 84.0000000000 |
+| 2019-08-19T09:05:00Z | 0.0000000000 |
+| 2019-08-19T09:10:00Z | 41.0000000000 |
+| 2019-08-19T09:15:00Z | 13.0000000000 |
+| 2019-08-19T09:20:00Z | 9.0000000000 |
+| 2019-08-19T09:25:00Z | 9.0000000000 |
+| 2019-08-19T09:30:00Z | 6.0000000000 |
+
+`fill(previous)` changes the values reported for the time intervals with no data to
+`47.0000000000` and `9.0000000000`, the values from the previous time intervals.
+
+{{% /tab-content %}}
+{{< /tabs-wrapper >}}
+
+### Common issues with `fill()`
+
+##### Queries with no data in the queried time range
+
+Currently, queries ignore `fill()` if no data exists in the queried time range.
+This is the expected behavior.
An open
+[feature request](https://github.com/influxdata/influxdb/issues/6967) on GitHub
+proposes that `fill()` should force a return of values even if the queried time
+range covers no data.
+
+**Example**
+
+The following query returns no data because `water_level` has no points within
+the queried time range.
+Note that `fill(800)` has no effect on the query results.
+
+```sql
+SELECT MEAN("water_level") FROM "h2o_feet" WHERE "location" = 'coyote_creek' AND time >= '2019-09-18T22:00:00Z' AND time <= '2019-09-18T22:18:00Z' GROUP BY time(12m) fill(800)
+```
+
+The query returns no results.
+
+##### Queries with `fill(previous)` when the previous result is outside the queried time range
+
+`fill(previous)` doesn't fill the result for a time interval if the previous
+value is outside the query's time range.
+
+**Example**
+
+The following example queries the time range between `2019-09-18T16:24:00Z` and `2019-09-18T16:54:00Z`.
+Note that `fill(previous)` fills the result for `2019-09-18T16:36:00Z` with the
+result from `2019-09-18T16:24:00Z`.
+
+```sql
+SELECT MAX("water_level") FROM "h2o_feet" WHERE location = 'coyote_creek' AND time >= '2019-09-18T16:24:00Z' AND time <= '2019-09-18T16:54:00Z' GROUP BY time(12m) fill(previous)
+```
+
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time                 |   max |
+| :------------------- | ----: |
+| 2019-09-18T16:24:00Z | 3.235 |
+| 2019-09-18T16:36:00Z | 3.235 |
+| 2019-09-18T16:48:00Z |     4 |
+
+The next example queries the time range between `2019-09-18T16:36:00Z` and `2019-09-18T16:54:00Z`.
+Note that `fill(previous)` doesn't fill the result for `2019-09-18T16:36:00Z` with the
+result from `2019-09-18T16:24:00Z`; the result for `2019-09-18T16:24:00Z` is outside the query's
+shorter time range.
+
+```sql
+SELECT MAX("water_level") FROM "h2o_feet" WHERE location = 'coyote_creek' AND time >= '2019-09-18T16:36:00Z' AND time <= '2019-09-18T16:54:00Z' GROUP BY time(12m) fill(previous)
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time                 | max |
+| :------------------- | --: |
+| 2019-09-18T16:36:00Z |     |
+| 2019-09-18T16:48:00Z |   4 |
+
+##### `fill(linear)` when the previous or following result is outside the queried time range
+
+`fill(linear)` doesn't fill the result for a time interval with no data if the
+previous result or the following result is outside the queried time range.
+
+**Example**
+
+The following example queries the time range between `2019-11-11T21:24:00Z` and
+`2019-11-11T22:06:00Z`. Note that `fill(linear)` fills the results for the
+`2019-11-11T21:36:00Z` time interval and the `2019-11-11T21:48:00Z` time interval
+using the values from the `2019-11-11T21:24:00Z` time interval and the
+`2019-11-11T22:00:00Z` time interval.
+
+```sql
+SELECT MEAN("tadpoles") FROM "pond" WHERE time > '2019-11-11T21:24:00Z' AND time <= '2019-11-11T22:06:00Z' GROUP BY time(12m) fill(linear)
+```
+Output:
+{{% influxql/table-meta %}}
+Name: pond
+{{% /influxql/table-meta %}}
+
+| time                 | mean |
+| :------------------- | ---: |
+| 2019-11-11T21:24:00Z |    3 |
+| 2019-11-11T21:36:00Z |    4 |
+| 2019-11-11T21:48:00Z |    5 |
+| 2019-11-11T22:00:00Z |    6 |
+
+The next query shortens the time range in the previous query.
+It now covers the time between `2019-11-11T21:36:00Z` and `2019-11-11T22:06:00Z`.
+Note that `fill(linear)` doesn't fill the results for the `2019-11-11T21:36:00Z`
+time interval and the `2019-11-11T21:48:00Z` time interval; the result for
+`2019-11-11T21:24:00Z` is outside the query's shorter time range and InfluxDB
+cannot perform the linear interpolation.
+
+```sql
+SELECT MEAN("tadpoles") FROM "pond" WHERE time >= '2019-11-11T21:36:00Z' AND time <= '2019-11-11T22:06:00Z' GROUP BY time(12m) fill(linear)
+```
+Output:
+{{% influxql/table-meta %}}
+Name: pond
+{{% /influxql/table-meta %}}
+
+| time                 | mean |
+| :------------------- | ---: |
+| 2019-11-11T21:36:00Z |      |
+| 2019-11-11T21:48:00Z |      |
+| 2019-11-11T22:00:00Z |    6 |
+
+{{% note %}}
+**Note:** The data in the `fill(linear)` examples are not in the NOAA sample data. We created a dataset with less regular data to demonstrate `fill(linear)`.
+{{% /note %}}
diff --git a/content/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit.md b/content/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit.md
new file mode 100644
index 000000000..d30bef1c8
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit.md
@@ -0,0 +1,241 @@
+---
+title: LIMIT and SLIMIT clauses
+description: >
+  Use the `LIMIT` and `SLIMIT` clauses to limit the number of [points](/influxdb/v2.6/reference/glossary/#point) and [series](/influxdb/v2.6/reference/glossary/#series) returned in queries.
+menu:
+  influxdb_2_6:
+    name: LIMIT and SLIMIT clauses
+    parent: Explore data
+weight: 305
+list_code_example: |
+  ```sql
+  SELECT_clause FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] LIMIT <N>
+  ```
+---
+
+Use `LIMIT` and `SLIMIT` to limit the number of [points](/influxdb/v2.6/reference/glossary/#point) and [series](/influxdb/v2.6/reference/glossary/#series) returned per query.
+
+- [LIMIT clause](#limit-clause)
+  - [Syntax](#syntax)
+  - [Examples](#examples)
+- [SLIMIT clause](#slimit-clause)
+  - [Syntax](#syntax-1)
+  - [Examples](#examples-1)
+- [Use LIMIT and SLIMIT together](#use-limit-and-slimit-together)
+
+## LIMIT clause
+
+`LIMIT <N>` returns the first `N` points from the specified [measurement](/influxdb/v2.6/reference/glossary/#measurement).
+
+### Syntax
+
+```sql
+SELECT_clause FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] LIMIT <N>
+```
+
+`N` specifies the number of points to return from the specified measurement. If `N` is greater than the number of points in a measurement, InfluxDB returns all points from the measurement.
+
+{{% note %}}
+**IMPORTANT:** The `LIMIT` clause must appear in the order outlined in the syntax above.
+{{% /note %}}
+
+### Examples
+
+{{< expand-wrapper >}}

+{{% expand "Limit the number of points returned" %}}
+
+```sql
+SELECT "water_level","location" FROM "h2o_feet" LIMIT 3
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time                 |  water_level | location     |
+| :------------------- | -----------: | :----------- |
+| 2019-08-17T00:00:00Z | 8.1200000000 | coyote_creek |
+| 2019-08-17T00:00:00Z | 2.0640000000 | santa_monica |
+| 2019-08-17T00:06:00Z | 8.0050000000 | coyote_creek |
+
+The query returns the three oldest points, determined by timestamp, from the `h2o_feet` [measurement](/influxdb/v2.6/reference/glossary/#measurement).
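As an analogy for how `LIMIT <N>` truncates results, the short Python sketch below (illustrative only; `limit` is a hypothetical helper, not InfluxDB code) sorts points by timestamp and keeps the first `N`, returning everything when `N` exceeds the point count:

```python
def limit(points, n):
    """Keep the first n points in ascending time order, like LIMIT <n>."""
    return sorted(points, key=lambda p: p[0])[:n]

rows = [
    ("2019-08-17T00:06:00Z", 8.005),
    ("2019-08-17T00:00:00Z", 8.120),
    ("2019-08-17T00:12:00Z", 7.887),
]
oldest_two = limit(rows, 2)   # the two oldest points
everything = limit(rows, 10)  # n exceeds the count: all points come back
```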
+
+{{% /expand %}}
+
+{{% expand "Limit the number of points returned and include a `GROUP BY` clause" %}}
+
+```sql
+SELECT MEAN("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:42:00Z' GROUP BY *,time(12m) LIMIT 2
+```
+Output:
+{{% influxql/table-meta %}}
+name: h2o_feet
+tags: location=coyote_creek
+{{% /influxql/table-meta %}}
+
+| time                 | mean         |
+| :------------------- | -----------: |
+| 2019-08-18T00:00:00Z | 8.4615000000 |
+| 2019-08-18T00:12:00Z | 8.2725000000 |
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+tags: location=santa_monica
+{{% /influxql/table-meta %}}
+
+| time                 | mean         |
+| :------------------- | -----------: |
+| 2019-08-18T00:00:00Z | 2.3655000000 |
+| 2019-08-18T00:12:00Z | 2.3360000000 |
+
+This query uses the InfluxQL [MEAN() function](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean) and a `GROUP BY` clause to calculate the average `water_level` for each [tag](/influxdb/v2.6/reference/glossary/#tag) and for each 12-minute interval in the queried time range. `LIMIT 2` requests the two oldest 12-minute averages (determined by timestamp).
+
+Note that without `LIMIT 2`, the query would return four points per series; one for each 12-minute interval in the queried time range.
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+## SLIMIT clause
+
+`SLIMIT <N>` returns every [point](/influxdb/v2.6/reference/glossary/#point) from `N` [series](/influxdb/v2.6/reference/glossary/#series) in the specified [measurement](/influxdb/v2.6/reference/glossary/#measurement).
+
+### Syntax
+
+```sql
+SELECT_clause FROM_clause [WHERE_clause] GROUP BY *[,time(<time_interval>)] [ORDER_BY_clause] SLIMIT <N>
+```
+
+`N` specifies the number of series to return from the specified measurement. If `N` is greater than the number of series in a measurement, InfluxDB returns all series from that measurement.
+
+`SLIMIT` queries must include `GROUP BY *`.
Note that the `SLIMIT` clause must appear in the order outlined in the syntax above. + +### Examples + +{{< expand-wrapper >}} + +{{% expand "Limit the number of series returned" %}} + +```sql +SELECT "water_level" FROM "h2o_feet" GROUP BY * SLIMIT 1 +``` +Output: +{{% influxql/table-meta %}} +name: h2o_feet +tags: location=coyote_creek +{{% /influxql/table-meta %}} + +| time | water_level | +| :------------------ | ---------------------:| +| 2019-08-17T00:00:00Z | 8.1200000000| +| 2019-08-17T00:06:00Z | 8.0050000000| +| 2019-08-17T00:12:00Z | 7.8870000000| +| 2019-08-17T00:18:00Z | 7.7620000000| +| 2019-08-17T00:24:00Z | 7.6350000000| +| 2019-08-17T00:30:00Z | 7.5000000000| +| 2019-08-17T00:36:00Z | 7.3720000000| + +The results above include only the first few rows, as the data set is quite large. The query returns all `water_level` [points](/influxdb/v2.6/reference/glossary/#point) from one of the [series](/influxdb/v2.6/reference/glossary/#series) associated with the `h2o_feet` [measurement](/influxdb/v2.6/reference/glossary/#measurement). + +{{% /expand %}} + +{{% expand "Limit the number of series returned and include a `GROUP BY time()` clause" %}} + +```sql +SELECT MEAN("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:42:00Z' GROUP BY *,time(12m) SLIMIT 1 +``` + +Output: +{{% influxql/table-meta %}} +name: h2o_feet +tags: location=coyote_creek +{{% /influxql/table-meta %}} + +| time | mean | +| :------------------ | ---------------------:| +| 2019-08-18T00:00:00Z | 8.4615000000| +| 2019-08-18T00:12:00Z | 8.2725000000| +| 2019-08-18T00:24:00Z | 8.0710000000| +| 2019-08-18T00:36:00Z | 7.8330000000| + +The query uses the InfluxQL [MEAN() function](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean) +and a time interval in the [GROUP BY clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/) +to calculate the average `water_level` for each 12-minute +interval in the queried time range. 
+
+`SLIMIT 1` requests a single series associated with the `h2o_feet` measurement.
+
+Note that without `SLIMIT 1`, the query would return results for the two series
+associated with the `h2o_feet` measurement: `location=coyote_creek` and
+`location=santa_monica`.
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+## Use LIMIT and SLIMIT together
+
+`LIMIT <N1>` followed by `SLIMIT <N2>` returns the first `N1` [points](/influxdb/v2.6/reference/glossary/#point) from `N2` series in the specified measurement.
+
+### Syntax
+
+```sql
+SELECT_clause FROM_clause [WHERE_clause] GROUP BY *[,time(<time_interval>)] [ORDER_BY_clause] LIMIT <N1> SLIMIT <N2>
+```
+
+`N1` specifies the number of points to return per measurement. If `N1` is greater than the number of points in a measurement, InfluxDB returns all points from that measurement.
+
+`N2` specifies the number of series to return from the specified measurement. If `N2` is greater than the number of series in a measurement, InfluxDB returns all series from that measurement.
+
+`SLIMIT` queries must include `GROUP BY *`. Note that the `SLIMIT` clause must appear in the order outlined in the syntax above.
+
+### Examples
+
+{{< expand-wrapper >}}
+
+{{% expand "Limit the number of points and series returned" %}}
+
+```sql
+SELECT "water_level" FROM "h2o_feet" GROUP BY * LIMIT 3 SLIMIT 1
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+Tags: location=coyote_creek
+{{% /influxql/table-meta %}}
+
+| time                 |  water_level |
+| :------------------- | -----------: |
+| 2019-08-17T00:00:00Z | 8.1200000000 |
+| 2019-08-17T00:06:00Z | 8.0050000000 |
+| 2019-08-17T00:12:00Z | 7.8870000000 |
+
+The query returns the three oldest points, determined by timestamp, from one of the series associated with the measurement `h2o_feet`.
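The combined behavior can be pictured with a small Python sketch (an analogy only; `limit_slimit` and the sample data are hypothetical), where each series holds its points in ascending time order:

```python
def limit_slimit(series, n_points, n_series):
    """LIMIT <N1> SLIMIT <N2> analogy: first N1 points from each of the first N2 series."""
    kept = list(series.items())[:n_series]             # SLIMIT: cap the number of series
    return {key: pts[:n_points] for key, pts in kept}  # LIMIT: cap points per series

series = {
    "location=coyote_creek": [8.12, 8.005, 7.887, 7.762],
    "location=santa_monica": [2.064, 2.116, 2.028, 2.126],
}
result = limit_slimit(series, 3, 1)  # LIMIT 3 SLIMIT 1
```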
+ +{{% /expand %}} + +{{% expand "Limit the number of points and series returned and include a `GROUP BY time()` clause" %}} + +```sql +SELECT MEAN("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:42:00Z' GROUP BY *,time(12m) LIMIT 2 SLIMIT 1 +``` +Output: +{{% influxql/table-meta %}} +Name: h2o_feet +Tags: location=coyote_creek +{{% /influxql/table-meta %}} + +| time | mean | +| :------------------ | ---------------------:| +| 2019-08-18T00:00:00Z | 8.4615000000| +| 2019-08-18T00:12:00Z | 8.2725000000| + +The query uses the InfluxQL function MEAN() and a time interval in the GROUP BY clause to calculate the average `water_level` for each 12-minute interval in the queried time range. `LIMIT 2` requests the two oldest 12-minute averages (determined by +timestamp) and `SLIMIT 1` requests a single series associated with the `h2o_feet` measurement. + +Note that without `LIMIT 2 SLIMIT 1`, the query would return four points for each of the two series associated with the `h2o_feet` measurement. + +{{% /expand %}} + +{{< /expand-wrapper >}} diff --git a/content/influxdb/v2.6/query-data/influxql/explore-data/offset-and-soffset.md b/content/influxdb/v2.6/query-data/influxql/explore-data/offset-and-soffset.md new file mode 100644 index 000000000..5beb6d10b --- /dev/null +++ b/content/influxdb/v2.6/query-data/influxql/explore-data/offset-and-soffset.md @@ -0,0 +1,206 @@ +--- +title: OFFSET and SOFFSET clauses +description: > + Use the `OFFSET` and `SOFFSET` clauses to paginate [points](/influxdb/v2.6/reference/glossary/#point) and [series](/influxdb/v2.6/reference/glossary/#series). 
+menu:
+  influxdb_2_6:
+    name: OFFSET and SOFFSET clauses
+    parent: Explore data
+weight: 306
+list_code_example: |
+  ```sql
+  SELECT_clause FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] LIMIT_clause OFFSET <N> [SLIMIT_clause]
+  ```
+---
+
+Use `OFFSET` and `SOFFSET` to paginate [points](/influxdb/v2.6/reference/glossary/#point) and [series](/influxdb/v2.6/reference/glossary/#series) returned.
+
+- [OFFSET clause](#offset-clause)
+  - [Syntax](#syntax)
+  - [Examples](#examples)
+- [SOFFSET clause](#soffset-clause)
+  - [Syntax](#syntax-1)
+  - [Examples](#examples-1)
+
+## `OFFSET` clause
+
+`OFFSET <N>` paginates `N` [points](/influxdb/v2.6/reference/glossary/#point) in the query results.
+
+### Syntax
+
+```sql
+SELECT_clause FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] LIMIT_clause OFFSET <N> [SLIMIT_clause]
+```
+
+`N` specifies the number of points to paginate. The `OFFSET` clause requires a [`LIMIT` clause](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/#limit-clause).
+
+{{% note %}}
+**Note:** InfluxDB returns no results if the `WHERE` clause includes a time range and the `OFFSET` clause would cause InfluxDB to return points with timestamps outside of that time range.
+{{% /note %}}
+
+### Examples
+
+{{< expand-wrapper >}}
+
+{{% expand "Paginate points" %}}
+
+```sql
+SELECT "water_level","location" FROM "h2o_feet" LIMIT 3 OFFSET 3
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time                 |  water_level | location     |
+| :------------------- | -----------: | :----------- |
+| 2019-08-17T00:06:00Z | 2.1160000000 | santa_monica |
+| 2019-08-17T00:12:00Z | 7.8870000000 | coyote_creek |
+| 2019-08-17T00:12:00Z | 2.0280000000 | santa_monica |
+
+The query returns the fourth, fifth, and sixth points from the `h2o_feet` [measurement](/influxdb/v2.6/reference/glossary/#measurement).
If the query did not include `OFFSET 3`, it would return the first, second,
+and third points from that measurement.
+
+{{% /expand %}}
+
+{{% expand "Paginate points and include several clauses" %}}
+
+```sql
+SELECT MEAN("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:42:00Z' GROUP BY *,time(12m) ORDER BY time DESC LIMIT 2 OFFSET 2 SLIMIT 1
+```
+Output:
+{{% influxql/table-meta %}}
+name: h2o_feet
+tags: location=coyote_creek
+{{% /influxql/table-meta %}}
+
+| time                 | mean         |
+| :------------------- | -----------: |
+| 2019-08-18T00:12:00Z | 8.2725000000 |
+| 2019-08-18T00:00:00Z | 8.4615000000 |
+
+In this example:
+
+ - The [`SELECT` clause](/influxdb/v2.6/query-data/influxql/explore-data/select/) specifies the InfluxQL [MEAN() function](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean).
+ - The [`FROM` clause](/influxdb/v2.6/query-data/influxql/explore-data/select/#from-clause) specifies a single measurement.
+ - The [`WHERE` clause](/influxdb/v2.6/query-data/influxql/explore-data/where/) specifies the time range for the query.
+ - The [`GROUP BY` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/) groups results by all tags (`*`) and into 12-minute intervals.
+ - The [`ORDER BY time DESC` clause](/influxdb/v2.6/query-data/influxql/explore-data/order-by/#order-by-time-desc) returns results in descending timestamp order.
+ - The [`LIMIT 2` clause](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/) limits the number of points returned to two.
+ - The `OFFSET 2` clause excludes the first two averages from the query results.
+ - The [`SLIMIT 1` clause](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/) limits the number of series returned to one.
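Pagination with `LIMIT` and `OFFSET` behaves like list slicing; a minimal Python analogy (with a hypothetical `paginate` helper, not InfluxDB code):

```python
def paginate(points, limit, offset):
    """OFFSET skips the first `offset` points, then LIMIT keeps up to `limit` of the rest."""
    return points[offset:offset + limit]

points = [1, 2, 3, 4, 5, 6]
page_one = paginate(points, 3, 0)  # LIMIT 3
page_two = paginate(points, 3, 3)  # LIMIT 3 OFFSET 3
```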
+
+Without `OFFSET 2`, the query would return the first two averages of the query results:
+
+Output:
+{{% influxql/table-meta %}}
+name: h2o_feet
+tags: location=coyote_creek
+{{% /influxql/table-meta %}}
+
+| time                 | mean         |
+| :------------------- | -----------: |
+| 2019-08-18T00:36:00Z | 7.8330000000 |
+| 2019-08-18T00:24:00Z | 8.0710000000 |
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+## `SOFFSET` clause
+
+`SOFFSET <N>` paginates `N` [series](/influxdb/v2.6/reference/glossary/#series) in the query results.
+
+### Syntax
+
+```sql
+SELECT_clause FROM_clause [WHERE_clause] GROUP BY *[,time(<time_interval>)] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] SLIMIT_clause SOFFSET <N>
+```
+
+`N` specifies the number of [series](/influxdb/v2.6/reference/glossary/#series) to paginate.
+The `SOFFSET` clause requires an [`SLIMIT` clause](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/#slimit-clause).
+Using the `SOFFSET` clause without an `SLIMIT` clause can cause [inconsistent
+query results](https://github.com/influxdata/influxdb/issues/7578).
+`SLIMIT` queries must include `GROUP BY *`.
+
+{{% note %}}
+**Note:** InfluxDB returns no results if the `SOFFSET` clause paginates through more than the total number of series.
+{{% /note %}}
+
+### Examples
+
+{{< expand-wrapper >}}
+
+{{% expand "Paginate series" %}}
+
+```sql
+SELECT "water_level" FROM "h2o_feet" GROUP BY * SLIMIT 1 SOFFSET 1
+```
+Output:
+{{% influxql/table-meta %}}
+name: h2o_feet
+tags: location=santa_monica
+{{% /influxql/table-meta %}}
+
+| time                 |  water_level |
+| :------------------- | -----------: |
+| 2019-08-17T00:00:00Z | 2.0640000000 |
+| 2019-08-17T00:06:00Z | 2.1160000000 |
+| 2019-08-17T00:12:00Z | 2.0280000000 |
+| 2019-08-17T00:18:00Z | 2.1260000000 |
+| 2019-08-17T00:24:00Z | 2.0410000000 |
+| 2019-08-17T00:30:00Z | 2.0510000000 |
+| 2019-08-17T00:36:00Z | 2.0670000000 |
+| 2019-08-17T00:42:00Z | 2.0570000000 |
+
+The results above are partial, as the data set is quite large. The query returns data for the series associated with the `h2o_feet`
+measurement and the `location = santa_monica` tag. Without `SOFFSET 1`, the query returns data for the series associated with the `h2o_feet` measurement and the `location = coyote_creek` tag.
+
+{{% /expand %}}
+
+{{% expand "Paginate series and include several clauses" %}}
+
+```sql
+SELECT MEAN("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:42:00Z' GROUP BY *,time(12m) ORDER BY time DESC LIMIT 2 OFFSET 2 SLIMIT 1 SOFFSET 1
+```
+Output:
+{{% influxql/table-meta %}}
+name: h2o_feet
+tags: location=santa_monica
+{{% /influxql/table-meta %}}
+
+| time                 | mean         |
+| :------------------- | -----------: |
+| 2019-08-18T00:12:00Z | 2.3360000000 |
+| 2019-08-18T00:00:00Z | 2.3655000000 |
+
+In this example:
+
+ - The [`SELECT` clause](/influxdb/v2.6/query-data/influxql/explore-data/select/) specifies an InfluxQL [function](/influxdb/v2.6/query-data/influxql/functions/).
+ - The [`FROM` clause](/influxdb/v2.6/query-data/influxql/explore-data/select/#from-clause) specifies a single measurement.
+ - The [`WHERE` clause](/influxdb/v2.6/query-data/influxql/explore-data/where/) specifies the time range for the query.
+ - The [`GROUP BY` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/) groups results by all tags (`*`) and into 12-minute intervals.
+ - The [`ORDER BY time DESC` clause](/influxdb/v2.6/query-data/influxql/explore-data/order-by/#order-by-time-desc) returns results in descending timestamp order.
+ - The [`LIMIT 2` clause](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/) limits the number of points returned to two.
+ - The [`OFFSET 2` clause](/influxdb/v2.6/query-data/influxql/explore-data/offset-and-soffset/) excludes the first two averages from the query results.
+ - The [`SLIMIT 1` clause](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/) limits the number of series returned to one.
+ - The [`SOFFSET 1`](/influxdb/v2.6/query-data/influxql/explore-data/offset-and-soffset/) clause paginates the series returned.
+
+Without `SOFFSET 1`, the query would return the results for a different series:
+
+Output:
+{{% influxql/table-meta %}}
+name: h2o_feet
+tags: location=coyote_creek
+{{% /influxql/table-meta %}}
+
+| time                 | mean         |
+| :------------------- | -----------: |
+| 2019-08-18T00:12:00Z | 8.2725000000 |
+| 2019-08-18T00:00:00Z | 8.4615000000 |
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
diff --git a/content/influxdb/v2.6/query-data/influxql/explore-data/order-by.md b/content/influxdb/v2.6/query-data/influxql/explore-data/order-by.md
new file mode 100644
index 000000000..227849b82
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/influxql/explore-data/order-by.md
@@ -0,0 +1,119 @@
+---
+title: ORDER BY clause
+list_title: ORDER BY clause
+description: >
+  Use the `ORDER BY` clause to sort data in ascending or descending order.
+menu:
+  influxdb_2_6:
+    name: ORDER BY clause
+    parent: Explore data
+weight: 304
+list_code_example: |
+  ```sql
+  SELECT_clause FROM_clause [WHERE_clause] [GROUP_BY_clause] ORDER BY time DESC
+  ```
+---
+
+Use the `ORDER BY` clause to sort data.
+
+- [Syntax](#syntax)
+- [Examples](#examples)
+
+## ORDER BY time DESC
+
+By default, InfluxDB returns results in ascending time order; the first [point](/influxdb/v2.6/reference/glossary/#point)
+returned has the oldest [timestamp](/influxdb/v2.6/reference/glossary/#timestamp) and
+the last point returned has the most recent timestamp.
+`ORDER BY time DESC` reverses that order such that InfluxDB returns the points
+with the most recent timestamps first.
+
+### Syntax
+
+```sql
+SELECT_clause FROM_clause [WHERE_clause] [GROUP_BY_clause] ORDER BY time DESC
+```
+
+If the query includes a `GROUP BY` clause, `ORDER BY time DESC` must appear **after** the `GROUP BY` clause.
+If the query includes a `WHERE` clause and no `GROUP BY` clause, `ORDER BY time DESC` must appear **after** the `WHERE` clause.
+
+### Examples
+
+{{< expand-wrapper >}}
+
+{{% expand "Return the newest points first" %}}
+
+```sql
+SELECT "water_level" FROM "h2o_feet" WHERE "location" = 'santa_monica' ORDER BY time DESC
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time                 |  water_level |
+| :------------------- | -----------: |
+| 2019-09-17T21:42:00Z | 4.9380000000 |
+| 2019-09-17T21:36:00Z | 5.0660000000 |
+| 2019-09-17T21:30:00Z | 5.0100000000 |
+| 2019-09-17T21:24:00Z | 5.0130000000 |
+| 2019-09-17T21:18:00Z | 5.0720000000 |
+
+The query returns the points with the most recent timestamps from the
+`h2o_feet` [measurement](/influxdb/v2.6/reference/glossary/#measurement) first.
+
+Without `ORDER BY time DESC`, the query would return the following output:
+
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time                 |  water_level |
+| :------------------- | -----------: |
+| 2019-08-17T00:00:00Z | 2.0640000000 |
+| 2019-08-17T00:06:00Z | 2.1160000000 |
+| 2019-08-17T00:12:00Z | 2.0280000000 |
+| 2019-08-17T00:18:00Z | 2.1260000000 |
+
+{{% /expand %}}
+
+{{% expand "Return the newest points first and include a `GROUP BY time()` clause" %}}
+
+```sql
+SELECT MEAN("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:42:00Z' GROUP BY time(12m) ORDER BY time DESC
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time                 | mean         |
+| :------------------- | -----------: |
+| 2019-08-18T00:36:00Z | 4.9712860355 |
+| 2019-08-18T00:24:00Z | 5.1682500000 |
+| 2019-08-18T00:12:00Z | 5.3042500000 |
+| 2019-08-18T00:00:00Z | 5.4135000000 |
+
+The query uses the InfluxQL [MEAN() function](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean)
+and a time interval in the [GROUP BY clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/)
+to calculate the average `water_level` for each 12-minute
+interval in the queried time range.
+[`ORDER BY time DESC`](/influxdb/v2.6/query-data/influxql/explore-data/order-by/#order-by-time-desc) returns the most recent 12-minute time intervals
+first.
+
+Without `ORDER BY time DESC`, the query would return the following output:
+
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time                 | mean         |
+| :------------------- | -----------: |
+| 2019-08-18T00:00:00Z | 5.4135000000 |
+| 2019-08-18T00:12:00Z | 5.3042500000 |
+| 2019-08-18T00:24:00Z | 5.1682500000 |
+| 2019-08-18T00:36:00Z | 4.9712860355 |
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
diff --git a/content/influxdb/v2.6/query-data/influxql/explore-data/regular-expressions.md b/content/influxdb/v2.6/query-data/influxql/explore-data/regular-expressions.md
new file mode 100644
index 000000000..b8972161c
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/influxql/explore-data/regular-expressions.md
@@ -0,0 +1,216 @@
+---
+title: Regular expressions
+list_title: Regular expressions
+description: >
+  Use regular expressions to match patterns in your data.
+menu:
+  influxdb_2_6:
+    name: Regular expressions
+    identifier: influxql-regular-expressions
+    parent: Explore data
+weight: 313
+list_code_example: |
+  ```sql
+  SELECT /<regular_expression_field_key>/ FROM /<regular_expression_measurement>/ WHERE [<tag_key> <operator> /<regular_expression_tag_value>/ | <field_key> <operator> /<regular_expression_field_value>/] GROUP BY /<regular_expression_tag_key>/
+  ```
+---
+
+InfluxQL supports using regular expressions when specifying:
+
+- [field keys](/influxdb/v2.6/reference/glossary/#field-key) and [tag keys](/influxdb/v2.6/reference/glossary/#tag-key) in the [`SELECT` clause](/influxdb/v2.6/query-data/influxql/explore-data/select/).
+- [measurements](/influxdb/v2.6/reference/glossary/#measurement) in the [`FROM` clause](/influxdb/v2.6/query-data/influxql/explore-data/select/#from-clause).
+- [tag values](/influxdb/v2.6/reference/glossary/#tag-value) and string [field values](/influxdb/v2.6/reference/glossary/#field-value) in the [`WHERE` clause](/influxdb/v2.6/query-data/influxql/explore-data/where/).
+- [tag keys](/influxdb/v2.6/reference/glossary/#tag-key) in the [`GROUP BY` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/).
+
+Regular expressions in InfluxQL only support string comparisons and can only evaluate [fields](/influxdb/v2.6/reference/glossary/#field) with string values.
+
+{{% note %}}
+**Note:** Regular expression comparisons are more computationally intensive than exact
+string comparisons. Queries with regular expressions are not as performant
+as those without.
+{{% /note %}}
+
+- [Syntax](#syntax)
+- [Supported operators](#supported-operators)
+- [Examples](#examples)
+
+## Syntax
+
+```sql
+SELECT /<regular_expression_field_key>/ FROM /<regular_expression_measurement>/ WHERE [<tag_key> <operator> /<regular_expression_tag_value>/ | <field_key> <operator> /<regular_expression_field_value>/] GROUP BY /<regular_expression_tag_key>/
+```
+
+Regular expressions are surrounded by `/` characters and use
+[Golang's regular expression syntax](http://golang.org/pkg/regexp/syntax/).
+
+## Supported operators
+
+`=~`: matches against
+`!~`: doesn't match against
+
+## Examples
+
+{{< expand-wrapper >}}
+
+{{% expand "Use a regular expression to specify field keys and tag keys in the SELECT clause" %}}
+
+```sql
+SELECT /l/ FROM "h2o_feet" LIMIT 1
+```
+
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time                 | level description | location     |  water_level |
+| :------------------- | :---------------- | :----------- | -----------: |
+| 2019-08-17T00:00:00Z | below 3 feet      | santa_monica | 2.0640000000 |
+
+The query selects all field keys and tag keys that include an `l`.
+Note that the regular expression in the `SELECT` clause must match at least one
+field key in order to return results for a tag key that matches the regular
+expression.
+
+Currently, there is no syntax to distinguish between regular expressions for
+field keys and regular expressions for tag keys in the `SELECT` clause.
+The syntax `/<regular_expression>/::[field | tag]` is not supported.
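The `=~` and `!~` operators can be approximated with any regex engine; the Python sketch below (an approximation only — InfluxQL actually uses Go's regular expression syntax, and `matches` is a hypothetical helper) shows the key-matching idea from this example:

```python
import re

def matches(value, pattern):
    """`=~ /pattern/` analogy: true when the regex matches anywhere in the string."""
    return re.search(pattern, value) is not None

keys = ["water_level", "level description", "location", "index"]
l_keys = [k for k in keys if matches(k, "l")]  # keys that include an `l`
```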
+
+{{% /expand %}}
+
+{{% expand "Use a regular expression to specify measurements in the FROM clause" %}}
+
+```sql
+SELECT MEAN("degrees") FROM /temperature/
+```
+
+Output:
+
+{{% influxql/table-meta %}}
+Name: average_temperature
+{{% /influxql/table-meta %}}
+
+| time                 | mean          |
+| :------------------- | ------------: |
+| 1970-01-01T00:00:00Z | 79.9847293223 |
+
+{{% influxql/table-meta %}}
+Name: h2o_temperature
+{{% /influxql/table-meta %}}
+
+| time                 | mean          |
+| :------------------- | ------------: |
+| 1970-01-01T00:00:00Z | 64.9980273540 |
+
+This query uses the InfluxQL [MEAN() function](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean) to calculate the average `degrees` for every [measurement](/influxdb/v2.6/reference/glossary/#measurement) in the [NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data) that contains the word `temperature`.
+
+{{% /expand %}}
+
+{{% expand "Use a regular expression to specify tag values in the WHERE clause" %}}
+
+```sql
+SELECT MEAN("water_level") FROM "h2o_feet" WHERE "location" =~ /[m]/ AND "water_level" > 3
+```
+
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time                 | mean         |
+| :------------------- | -----------: |
+| 1970-01-01T00:00:00Z | 4.4710766395 |
+
+This query uses the InfluxQL [MEAN() function](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean) to calculate the average `water_level` where the [tag value](/influxdb/v2.6/reference/glossary/#tag-value) of `location` includes an `m` and `water_level` is greater than three.
+
+{{% /expand %}}
+
+{{% expand "Use a regular expression to specify a tag with no value in the WHERE clause" %}}
+
+```sql
+SELECT * FROM "h2o_feet" WHERE "location" !~ /./
+```
+
+The query selects all data from the `h2o_feet` measurement where the `location`
+[tag](/influxdb/v2.6/reference/glossary/#tag) has no value.
+Every data [point](/influxdb/v2.6/reference/glossary/#point) in the [NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data) has a tag value for `location`.
+It's possible to perform this same query without a regular expression.
+See the [Frequently Asked Questions](/influxdb/v2.6/reference/faq/#how-do-i-query-data-by-a-tag-with-a-null-value)
+document for more information.
+
+{{% /expand %}}
+
+{{% expand "Use a regular expression to specify a tag with a value in the WHERE clause" %}}
+
+```sql
+SELECT MEAN("water_level") FROM "h2o_feet" WHERE "location" =~ /./
+```
+
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time                 | mean         |
+| :------------------- | -----------: |
+| 1970-01-01T00:00:00Z | 4.4418434585 |
+
+This query uses the InfluxQL [MEAN() function](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean) to calculate the average `water_level` across all data with a tag value for `location`.
+
+{{% /expand %}}
+
+{{% expand "Use a regular expression to specify a field value in the WHERE clause" %}}
+
+```sql
+SELECT MEAN("water_level") FROM "h2o_feet" WHERE "location" = 'santa_monica' AND "level description" =~ /between/
+```
+
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time                 | mean         |
+| :------------------- | -----------: |
+| 1970-01-01T00:00:00Z | 4.4713666916 |
+
+This query uses the InfluxQL [MEAN() function](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean)
+to calculate the average `water_level` for all data where the field value of `level description` includes the word `between`.
+
+{{% /expand %}}
+
+{{% expand "Use a regular expression to specify tag keys in the GROUP BY clause" %}}
+
+```sql
+SELECT FIRST("index") FROM "h2o_quality" GROUP BY /l/
+```
+
+Output:
+{{% influxql/table-meta %}}
+name: h2o_quality
+tags: location=coyote_creek
+{{% /influxql/table-meta %}}
+
+| time | first |
+| :------------------ |-------------------:|
+| 2019-08-17T00:00:00Z | 41.0000000000 |
+
+{{% influxql/table-meta %}}
+name: h2o_quality
+tags: location=santa_monica
+{{% /influxql/table-meta %}}
+
+| time | first |
+| :------------------ |-------------------:|
+| 2019-08-17T00:00:00Z | 99.0000000000 |
+
+This query uses the InfluxQL [FIRST() function](/influxdb/v2.6/query-data/influxql/functions/selectors/#first)
+to select the first value of `index` for every tag that includes the letter `l`
+in its tag key.
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
diff --git a/content/influxdb/v2.6/query-data/influxql/explore-data/select.md b/content/influxdb/v2.6/query-data/influxql/explore-data/select.md
new file mode 100644
index 000000000..7f8664c6b
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/influxql/explore-data/select.md
@@ -0,0 +1,681 @@
+---
+title: SELECT statement
+list_title: SELECT statement
+description: >
+  Use the `SELECT` statement to query data from a particular [measurement](/influxdb/v2.6/reference/glossary/#measurement) or measurements.
+menu:
+  influxdb_2_6:
+    name: SELECT statement
+    parent: Explore data
+weight: 301
+list_code_example: |
+  ```sql
+  SELECT <field_key>[,<field_key>,<tag_key>] FROM <measurement_name>[,<measurement_name>]
+  ```
+---
+
+Use the `SELECT` statement to query data from a particular [measurement](/influxdb/v2.6/reference/glossary/#measurement) or measurements.
+
+- [Syntax](#syntax)
+- [Examples](#examples)
+- [Common issues](#common-issues-with-the-select-statement)
+- [Regular expressions](#regular-expressions)
+- [Data types and cast operations](#data-types-and-cast-operations)
+- [Merge behavior](#merge-behavior)
+- [Multiple statements](#multiple-statements)
+
+## Syntax
+
+```sql
+SELECT <field_key>[,<field_key>,<tag_key>] FROM <measurement_name>[,<measurement_name>]
+```
+{{% note %}}
+**Note:** The `SELECT` statement **requires** a `SELECT` clause and a `FROM` clause.
+{{% /note %}}
+
+### `SELECT` clause
+
+The `SELECT` clause supports several formats for specifying data:
+
+- `SELECT *` - Returns all [fields](/influxdb/v2.6/reference/glossary/#field) and [tags](/influxdb/v2.6/reference/glossary/#tag).
+- `SELECT "<field_key>"` - Returns a specific field.
+- `SELECT "<field_key>","<field_key>"` - Returns more than one field.
+- `SELECT "<field_key>","<tag_key>"` - Returns a specific field and tag. The `SELECT` clause must specify at least one field when it includes a tag.
+- `SELECT "<field_key>"::field,"<tag_key>"::tag` - Returns a specific field and tag.
The `::[field | tag]` syntax specifies the [identifier's](/influxdb/v2.6/reference/syntax/influxql/spec/#identifiers) type.
Use this syntax to differentiate between field keys and tag keys with the same name.
+
+Other supported features include:
+
+- [Functions](/influxdb/v2.6/query-data/influxql/functions/)
+- [Basic cast operations](#data-types-and-cast-operations)
+- [Regular expressions](/influxdb/v2.6/query-data/influxql/explore-data/regular-expressions/)
+
+{{% note %}}
+**Note:** The SELECT statement cannot include an aggregate function **and** a non-aggregate function, field key, or tag key. For more information, see [error about mixing aggregate and non-aggregate queries](/enterprise_influxdb/v1.9/troubleshooting/errors/#error-parsing-query-mixing-aggregate-and-non-aggregate-queries-is-not-supported).
+{{% /note %}}
+
+### `FROM` clause
+
+The `FROM` clause specifies the measurement to query.
+This clause supports several formats for specifying one or more [measurements](/influxdb/v2.6/reference/glossary/#measurement):
+
+- `FROM <measurement_name>` - Returns data from a measurement.
+- `FROM <measurement_name>,<measurement_name>` - Returns data from more than one measurement.
+- `FROM <database_name>.<retention_policy_name>.<measurement_name>` - Returns data from a fully qualified measurement.
+- `FROM <database_name>..<measurement_name>` - Returns data from a measurement in the specified database and the `DEFAULT` retention policy.
+
+#### Quoting
+
+[Identifiers](/influxdb/v2.6/reference/syntax/influxql/spec/#identifiers) **must** be double quoted if they contain characters other than `[A-z,0-9,_]`,
+begin with a digit, or are an [InfluxQL keyword](https://github.com/influxdata/influxql/blob/master/README.md#keywords).
+While not always necessary, we recommend that you double quote identifiers.
+
+{{% note %}}
+**Note:** InfluxQL quoting guidelines differ from [line protocol quoting guidelines](/influxdb/v2.6/reference/syntax/line-protocol/#quotes).
+Please review the [rules for single and double-quoting](/influxdb/v2.6/reference/syntax/line-protocol/#quotes) in queries.
+{{% /note %}}
+
+### Examples
+
+{{< expand-wrapper >}}
+{{% expand "Select all fields and tags from a measurement" %}}
+
+```sql
+SELECT * FROM "h2o_feet"
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | level description | location | water_level |
+| :-------------- |:----------------------| :-------------------| ------------------:|
+| 2019-08-17T00:00:00Z | below 3 feet | santa_monica | 2.0640000000 |
+| 2019-08-17T00:00:00Z | between 6 and 9 feet | coyote_creek | 8.1200000000 |
+| 2019-08-17T00:06:00Z | below 3 feet | santa_monica | 2.1160000000 |
+| 2019-08-17T00:06:00Z | between 6 and 9 feet | coyote_creek | 8.0050000000 |
+| 2019-08-17T00:12:00Z | below 3 feet | santa_monica | 2.0280000000 |
+| 2019-08-17T00:12:00Z | between 6 and 9 feet | coyote_creek | 7.8870000000 |
+| 2019-08-17T00:18:00Z | below 3 feet | santa_monica | 2.1260000000 |
+
+The data above is a partial listing of the query output, as the result set is quite large.
The query selects all [fields](/influxdb/v2.6/reference/glossary/#field) and +[tags](/influxdb/v2.6/reference/glossary/#tag) from the `h2o_feet` +[measurement](/influxdb/v2.6/reference/glossary/#measurement). + +{{% /expand %}} + +{{% expand "Select specific tags and fields from a measurement" %}} + +```sql +SELECT "level description","location","water_level" FROM "h2o_feet" +``` +Output: +{{% influxql/table-meta %}} +Name: h2o_feet +{{% /influxql/table-meta %}} + +| time | level description | location | water_level | +| :-------------- |:----------------------| :-------------------| ------------------:| +| 2019-08-17T00:00:00Z | below 3 feet |santa_monica | 2.0640000000| +| 2019-08-17T00:00:00Z | between 6 and 9 feet | coyote_creek | 8.1200000000| + +The query selects the `level description` field, the `location` tag, and the +`water_level` field. + +{{% note %}} +**Note:** The `SELECT` clause must specify at least one field when it includes +a tag. +{{% /note %}} + +{{% /expand %}} + +{{% expand "Select specific tags and fields from a measurement and provide their identifier type" %}} + +```sql +SELECT "level description"::field,"location"::tag,"water_level"::field FROM "h2o_feet" +``` +Output: +{{% influxql/table-meta %}} +Name: h2o_feet +{{% /influxql/table-meta %}} + +| time | level description | location | water_level | +| :-------------- |:----------------------| :-------------------| ------------------:| +| 2019-08-17T00:24:00Z | between 6 and 9 feet | coyote_creek | 7.6350000000| +| 2019-08-17T00:30:00Z | below 3 feet | santa_monica | 2.0510000000| +| 2019-08-17T00:30:00Z | between 6 and 9 feet | coyote_creek | 7.5000000000| +| 2019-08-17T00:36:00Z | below 3 feet | santa_monica | 2.0670000000 | +| 2019-08-17T00:36:00Z | between 6 and 9 feet | coyote_creek | 7.3720000000 | +| 2019-08-17T00:42:00Z | below 3 feet | santa_monica | 2.0570000000 | + +The query selects the `level description` field, the `location` tag, and the +`water_level` field from the 
`h2o_feet` measurement.
+The `::[field | tag]` syntax specifies if the
+[identifier](/influxdb/v2.6/reference/syntax/influxql/spec/#identifiers) is a field or tag.
+Use `::[field | tag]` to differentiate between [an identical field key and tag key](/influxdb/v2.6/reference/faq/#how-do-i-query-data-with-an-identical-tag-key-and-field-key).
+That syntax is not required for most use cases.
+
+{{% /expand %}}
+
+{{% expand "Select all fields from a measurement" %}}
+
+```sql
+SELECT *::field FROM "h2o_feet"
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | level description | water_level |
+| :-------------- | :-------------------| ------------------:|
+| 2019-08-17T00:00:00Z | below 3 feet | 2.0640000000 |
+| 2019-08-17T00:00:00Z | between 6 and 9 feet | 8.1200000000 |
+| 2019-08-17T00:06:00Z | below 3 feet | 2.1160000000 |
+| 2019-08-17T00:06:00Z | between 6 and 9 feet | 8.0050000000 |
+| 2019-08-17T00:12:00Z | below 3 feet | 2.0280000000 |
+| 2019-08-17T00:12:00Z | between 6 and 9 feet | 7.8870000000 |
+
+The query selects all fields from the `h2o_feet` measurement.
+The `SELECT` clause supports combining the `*` syntax with the `::` syntax.
+
+{{% /expand %}}
+
+{{% expand "Select a specific field from a measurement and perform basic arithmetic" %}}
+
+```sql
+SELECT ("water_level" * 2) + 4 FROM "h2o_feet"
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | water_level |
+| :-------------- | ------------------:|
+| 2019-08-17T00:00:00Z | 20.2400000000 |
+| 2019-08-17T00:00:00Z | 8.1280000000 |
+| 2019-08-17T00:06:00Z | 20.0100000000 |
+| 2019-08-17T00:06:00Z | 8.2320000000 |
+| 2019-08-17T00:12:00Z | 19.7740000000 |
+| 2019-08-17T00:12:00Z | 8.0560000000 |
+
+The query multiplies `water_level`'s field values by two and adds four to those
+values.
+
+{{% note %}}
+**Note:** InfluxDB follows the standard order of operations.
+See [InfluxQL mathematical operators](/influxdb/v2.6/query-data/influxql/math-operators/) +for more on supported operators. +{{% /note %}} + +{{% /expand %}} + +{{% expand "Select all data from more than one measurement" %}} + +```sql +SELECT * FROM "h2o_feet","h2o_pH" +``` +Output: +{{% influxql/table-meta %}} +Name: h2o_feet +{{% /influxql/table-meta %}} + +| time | level description | location | pH | water_level | +| :-------------- |:-------------| :----------------| :-------------| --------------:| +| 2019-08-17T00:00:00Z | below 3 feet | santa_monica | | 2.0640000000| +| 2019-08-17T00:00:00Z | between 6 and 9 feet | coyote_creek | | 8.1200000000| +| 2019-08-17T00:06:00Z | below 3 feet | santa_monica | | 2.1160000000| +| 2019-08-17T00:06:00Z | between 6 and 9 feet | coyote_creek | | 8.0050000000| +| 2019-08-17T00:12:00Z | below 3 feet | santa_monica | | 2.0280000000 | +| 2019-08-17T00:12:00Z | between 6 and 9 feet | coyote_creek | | 7.8870000000| +| 2019-08-17T00:18:00Z | below 3 feet | santa_monica | | 2.1260000000| +| 2019-08-17T00:18:00Z | between 6 and 9 feet | coyote_creek | | 7.7620000000| + +{{% influxql/table-meta %}} +Name: h2o_pH +{{% /influxql/table-meta %}} + +| time | level description | location | pH | water_level | +| :-------------- |:-------------| :----------------| :-------------| --------------:| +| 2019-08-17T00:00:00Z | | coyote_creek | 7.00| | +| 2019-08-17T00:06:00Z | |coyote_creek | 8.00 | | +| 2019-08-17T00:06:00Z | |santa_monica | 6.00 | | +| 2019-08-17T00:12:00Z | |coyote_creek |8.00 | | + + +The query selects all fields and tags from two measurements: `h2o_feet` and +`h2o_pH`. +Separate multiple measurements with a comma (`,`). 
+
+{{% /expand %}}
+
+{{% expand "Select all data from a measurement in a particular database" %}}
+
+```sql
+SELECT * FROM noaa.."h2o_feet"
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | level description | location | water_level |
+| :-------------- |:----------------------| :-------------------| ------------------:|
+| 2019-08-17T00:00:00Z | below 3 feet | santa_monica | 2.0640000000 |
+| 2019-08-17T00:00:00Z | between 6 and 9 feet | coyote_creek | 8.1200000000 |
+| 2019-08-17T00:06:00Z | below 3 feet | santa_monica | 2.1160000000 |
+| 2019-08-17T00:06:00Z | between 6 and 9 feet | coyote_creek | 8.0050000000 |
+| 2019-08-17T00:12:00Z | below 3 feet | santa_monica | 2.0280000000 |
+| 2019-08-17T00:12:00Z | between 6 and 9 feet | coyote_creek | 7.8870000000 |
+
+The query selects data from the `h2o_feet` measurement in the `noaa` database.
+The `..` indicates the `DEFAULT` retention policy for the specified database.
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+## Common issues with the SELECT statement
+
+### Selecting tag keys in the SELECT statement
+
+A query requires at least one [field key](/influxdb/v2.6/reference/glossary/#field-key)
+in the `SELECT` clause to return data.
+If the `SELECT` clause only includes a single [tag key](/influxdb/v2.6/reference/glossary/#tag-key) or several tag keys, the
+query returns an empty response.
+
+#### Example
+
+The following query returns no data because it specifies a single tag key (`location`) in
+the `SELECT` clause:
+
+```sql
+SELECT "location" FROM "h2o_feet"
+> No results
+```
+To return any data associated with the `location` tag key, the query's `SELECT`
+clause must include at least one field key (`water_level`):
+
+```sql
+SELECT "water_level","location" FROM "h2o_feet"
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | water_level | location |
+| :-------------- | :-------------------| ------------------:|
+| 2019-08-17T00:00:00Z | 8.1200000000 | coyote_creek |
+| 2019-08-17T00:00:00Z | 2.0640000000 | santa_monica |
+| 2019-08-17T00:06:00Z | 8.0050000000 | coyote_creek |
+| 2019-08-17T00:06:00Z | 2.1160000000 | santa_monica |
+| 2019-08-17T00:12:00Z | 7.8870000000 | coyote_creek |
+| 2019-08-17T00:12:00Z | 2.0280000000 | santa_monica |
+| 2019-08-17T00:18:00Z | 7.7620000000 | coyote_creek |
+| 2019-08-17T00:18:00Z | 2.1260000000 | santa_monica |
+
+## Regular expressions
+
+InfluxQL supports using regular expressions when specifying:
+- [field keys](/influxdb/v2.6/reference/glossary/#field-key) and [tag keys](/influxdb/v2.6/reference/glossary/#tag-key) in the [`SELECT` clause](/influxdb/v2.6/query-data/influxql/explore-data/select/)
+- [measurements](/influxdb/v2.6/reference/glossary/#measurement) in the [`FROM` clause](/influxdb/v2.6/query-data/influxql/explore-data/select/#from-clause)
+- [tag values](/influxdb/v2.6/reference/glossary/#tag-value) and string [field values](/influxdb/v2.6/reference/glossary/#field-value) in the [`WHERE` clause](/influxdb/v2.6/query-data/influxql/explore-data/where/)
+- [tag keys](/influxdb/v2.6/reference/glossary/#tag-key) in the [`GROUP BY` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/)
+
+Currently, InfluxQL does not support using regular expressions to match
+non-string field values in the
+`WHERE` clause,
+[databases](/influxdb/v2.6/reference/glossary/#database), and
+[retention policies](/influxdb/v2.6/reference/glossary/#retention-policy-rp).
+
+{{% note %}}
+**Note:** Regular expression comparisons are more computationally intensive than exact
+string comparisons. Queries with regular expressions are not as performant
+as those without.
+{{% /note %}}
+
+## Syntax
+
+```sql
+SELECT /<regular_expression_field_key>/ FROM /<regular_expression_measurement>/ WHERE [<tag_key> <operator> /<regular_expression_tag_value>/ | <field_key> <operator> /<regular_expression_field_value>/] GROUP BY /<regular_expression_tag_key>/
+```
+
+Regular expressions are surrounded by `/` characters and use
+[Golang's regular expression syntax](http://golang.org/pkg/regexp/syntax/).
+
+## Supported operators
+
+`=~`: matches against
+`!~`: doesn't match against
+
+## Examples
+
+{{< expand-wrapper >}}
+{{% expand "Use a regular expression to specify field keys and tag keys in the SELECT statement" %}}
+
+```sql
+SELECT /l/ FROM "h2o_feet" LIMIT 1
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | level description | location | water_level |
+| :-------------- |:----------------------| :-------------------| ------------------:|
+| 2019-08-17T00:00:00Z | below 3 feet | santa_monica | 2.0640000000 |
+
+The query selects all [field keys](/influxdb/v2.6/reference/glossary/#field-key)
+and [tag keys](/influxdb/v2.6/reference/glossary/#tag-key) that include an `l`.
+Note that the regular expression in the `SELECT` clause must match at least one
+field key in order to return results for a tag key that matches the regular
+expression.
+
+Currently, there is no syntax to distinguish between regular expressions for
+field keys and regular expressions for tag keys in the `SELECT` clause.
+The syntax `/<regular_expression>/::[field | tag]` is not supported.
+
+{{% /expand %}}
+
+{{% expand "Use a regular expression to specify measurements in the FROM clause" %}}
+
+```sql
+SELECT MEAN("degrees") FROM /temperature/
+```
+Output:
+{{% influxql/table-meta %}}
+Name: average_temperature
+{{% /influxql/table-meta %}}
+
+| time | mean |
+| :-------------- |----------------------:|
+| 1970-01-01T00:00:00Z | 79.9847293223 |
+
+{{% influxql/table-meta %}}
+Name: h2o_temperature
+{{% /influxql/table-meta %}}
+
+| time | mean |
+| :-------------- |----------------------:|
+| 1970-01-01T00:00:00Z | 64.9980273540 |
+
+This query uses the InfluxQL [MEAN() function](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean) to calculate the average `degrees` for every [measurement](/influxdb/v2.6/reference/glossary/#measurement) in the `noaa` database that contains the word `temperature`.
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+## Data types and cast operations
+
+The [`SELECT` clause](#select-clause) supports specifying a [field's](/influxdb/v2.6/reference/glossary/#field) type and basic cast operations with the `::` syntax.
+
+- [Data types](#data-types)
+- [Cast operations](#cast-operations)
+
+### Data types
+
+[Field values](/influxdb/v2.6/reference/glossary/#field-value) can be floats, integers, strings, or booleans.
+The `::` syntax allows users to specify the field's type in a query.
+
+{{% note %}}
+**Note:** Generally, it is not necessary to specify the field value type in the [`SELECT` clause](/influxdb/v2.6/query-data/influxql/explore-data/select/). In most cases, InfluxDB rejects any writes that attempt to write a [field value](/influxdb/v2.6/reference/glossary/#field-value) to a field that previously accepted field values of a different type.
+{{% /note %}}
+
+It is possible for field value types to differ across [shard groups](/influxdb/v2.6/reference/glossary/#shard-group).
+In these cases, it may be necessary to specify the field value type in the
+`SELECT` clause.
+Please see the
+[Frequently Asked Questions](/influxdb/v2.6/reference/faq/#how-does-influxdb-handle-field-type-discrepancies-across-shards)
+document for more information on how InfluxDB handles field value type discrepancies.
+
+### Syntax
+
+```sql
+SELECT_clause <field_key>::<type> FROM_clause
+```
+
+`type` can be `float`, `integer`, `string`, or `boolean`.
+In most cases, InfluxDB returns no data if the `field_key` does not store data of the specified
+`type`. See [Cast Operations](#cast-operations) for more information.
+
+### Example
+
+```sql
+SELECT "water_level"::float FROM "h2o_feet" LIMIT 4
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | water_level |
+| :------------------ |-------------------:|
+| 2019-08-17T00:00:00Z | 8.1200000000 |
+| 2019-08-17T00:00:00Z | 2.0640000000 |
+| 2019-08-17T00:06:00Z | 8.0050000000 |
+| 2019-08-17T00:06:00Z | 2.1160000000 |
+
+The query returns values of the `water_level` field key that are floats.
+
+## Cast operations
+
+The `::` syntax allows users to perform basic cast operations in queries.
+Currently, InfluxDB supports casting [field values](/influxdb/v2.6/reference/glossary/#field-value) from integers to
+floats or from floats to integers.
+
+### Syntax
+
+```sql
+SELECT_clause <field_key>::<type> FROM_clause
+```
+
+`type` can be `float` or `integer`.
+
+InfluxDB returns no data if the query attempts to cast an integer or float to a string or boolean.
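Because the `::<type>` syntax is part of the query string itself, a typed query travels through the API unchanged. As a small sketch (in Python; the host is a placeholder and the `Authorization` header is omitted), a cast query can be URL-encoded for the `q` parameter of the InfluxDB 1.x-compatibility `/query` endpoint like this:

```python
from urllib.parse import urlencode

# Placeholder host; `db` and `q` are the query parameters accepted by the
# 1.x-compatibility /query endpoint.
params = {
    "db": "noaa",
    "q": 'SELECT "water_level"::float FROM "h2o_feet" LIMIT 4',
}
url = "http://localhost:8086/query?" + urlencode(params)
print(url)
```

The `::float` cast survives encoding as `%3A%3Afloat`; InfluxDB parses it back out of the query string on the server side.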
+
+### Examples
+
+{{< expand-wrapper >}}
+
+{{% expand "Cast float field values to integers" %}}
+
+```sql
+SELECT "water_level"::integer FROM "h2o_feet" LIMIT 4
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | water_level |
+| :------------------ |-------------------:|
+| 2019-08-17T00:00:00Z | 8.0000000000 |
+| 2019-08-17T00:00:00Z | 2.0000000000 |
+| 2019-08-17T00:06:00Z | 8.0000000000 |
+| 2019-08-17T00:06:00Z | 2.0000000000 |
+
+The query returns the integer form of `water_level`'s float [field values](/influxdb/v2.6/reference/glossary/#field-value).
+
+{{% /expand %}}
+
+{{% expand "Cast float field values to strings (this functionality is not supported)" %}}
+
+```sql
+SELECT "water_level"::string FROM "h2o_feet" LIMIT 4
+> No results
+```
+
+The query returns no data as casting a float field value to a string is not yet supported.
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+## Merge behavior
+
+InfluxQL merges [series](/influxdb/v2.6/reference/glossary/#series) automatically.
+
+### Example
+
+{{< expand-wrapper >}}
+
+{{% expand "Merge behavior" %}}
+
+The `h2o_feet` [measurement](/influxdb/v2.6/reference/glossary/#measurement) in the `noaa` database is part of two [series](/influxdb/v2.6/reference/glossary/#series).
+The first series is made up of the `h2o_feet` measurement and the `location = coyote_creek` [tag](/influxdb/v2.6/reference/glossary/#tag). The second series is made up of the `h2o_feet` measurement and the `location = santa_monica` tag.
+ +The following query automatically merges those two series when it calculates the average `water_level` using the [MEAN() function](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean): + +```sql +SELECT MEAN("water_level") FROM "h2o_feet" +``` +Output: +{{% influxql/table-meta %}} +Name: h2o_feet +{{% /influxql/table-meta %}} + +|time | mean | +| :------------------ |-------------------:| +| 1970-01-01T00:00:00Z | 4.4419314021 | + +If you want the average `water_level` for the first series only, specify the relevant tag in the [`WHERE` clause](/influxdb/v2.6/query-data/influxql/explore-data/where/): + +```sql +SELECT MEAN("water_level") FROM "h2o_feet" WHERE "location" = 'coyote_creek' +``` +Output: +{{% influxql/table-meta %}} +Name: h2o_feet +{{% /influxql/table-meta %}} + +|time | mean | +| :------------------ |-------------------:| +| 1970-01-01T00:00:00Z | 5.3591424203 | + +If you want the average `water_level` for each individual series, include a [`GROUP BY` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/): + +```sql +SELECT MEAN("water_level") FROM "h2o_feet" GROUP BY "location" +``` +Output: +{{% influxql/table-meta %}} +name: h2o_feet +tags: location=coyote_creek +{{% /influxql/table-meta %}} + +| time | mean | +| :------------------ |-------------------:| + | 1970-01-01T00:00:00Z | 5.3591424203 | + +{{% influxql/table-meta %}} +name: h2o_feet +tags: location=santa_monica +{{% /influxql/table-meta %}} + +| time | mean | +| :------------------ |-------------------:| +| 1970-01-01T00:00:00Z | 3.5307120942 | + +{{% /expand %}} + +{{< /expand-wrapper >}} + +## Multiple statements + +Separate multiple `SELECT` statements in a query with a semicolon (`;`). 
+ +### Examples + +{{< tabs-wrapper >}} +{{% tabs %}} +[InfluxQL shell](#) +[InfluxDB API](#) +{{% /tabs %}} + +{{% tab-content %}} + +In the [InfluxQL shell](/influxdb/v2.6/tools/influxql-shell/): + +```sql +SELECT MEAN("water_level") FROM "h2o_feet"; SELECT "water_level" FROM "h2o_feet" LIMIT 2 +``` +Output: +{{% influxql/table-meta %}} +Name: h2o_feet +{{% /influxql/table-meta %}} + +|time | mean | +| :------------------ |-------------------:| +| 1970-01-01T00:00:00Z | 4.4419314021 | + + +{{% influxql/table-meta %}} +Name: h2o_feet +{{% /influxql/table-meta %}} + +|time | water_level | +| :------------------ |-------------------:| +| 2019-08-17T00:00:00Z | 8.12 | +| 2015-08-18T00:00:00Z | 2.064 | + +{{% /tab-content %}} + +{{% tab-content %}} + +With the [InfluxDB API](/influxdb/v2.6/reference/api/influxdb-1x/): + +```json +{ + "results": [ + { + "statement_id": 0, + "series": [ + { + "name": "h2o_feet", + "columns": [ + "time", + "mean" + ], + "values": [ + [ + "1970-01-01T00:00:00Z", + 4.442107025822522 + ] + ] + } + ] + }, + { + "statement_id": 1, + "series": [ + { + "name": "h2o_feet", + "columns": [ + "time", + "water_level" + ], + "values": [ + [ + "2015-08-18T00:00:00Z", + 8.12 + ], + [ + "2015-08-18T00:00:00Z", + 2.064 + ] + ] + } + ] + } + ] +} +``` + +{{% /tab-content %}} +{{< /tabs-wrapper >}} diff --git a/content/influxdb/v2.6/query-data/influxql/explore-data/subqueries.md b/content/influxdb/v2.6/query-data/influxql/explore-data/subqueries.md new file mode 100644 index 000000000..50690e203 --- /dev/null +++ b/content/influxdb/v2.6/query-data/influxql/explore-data/subqueries.md @@ -0,0 +1,294 @@ +--- +title: Subqueries +description: > + Use a `subquery` to apply a query as a condition in the enclosing query. +menu: + influxdb_2_6: + name: Subqueries + parent: Explore data +weight: 310 +list_code_example: | + ```sql + SELECT_clause FROM ( SELECT_statement ) [...] + ``` +--- + +A subquery is a query that is nested in the `FROM` clause of another query. 
Use a subquery to apply a query as a condition in the enclosing query. Subqueries offer functionality similar to nested functions and the SQL [`HAVING` clause](https://en.wikipedia.org/wiki/Having_%28SQL%29).
+
+{{% note %}}
+**Note:** InfluxQL does not support a `HAVING` clause.
+{{% /note %}}
+
+- [Syntax](#syntax)
+- [Examples](#examples)
+- [Common issues](#common-issues-with-subqueries)
+
+### Syntax
+
+```sql
+SELECT_clause FROM ( SELECT_statement ) [...]
+```
+
+InfluxDB **performs the subquery first** and the main query second.
+
+The main query surrounds the subquery and requires at least the [`SELECT` clause](/influxdb/v2.6/query-data/influxql/explore-data/select/) and the [`FROM` clause](/influxdb/v2.6/query-data/influxql/explore-data/select/#from-clause).
+The main query supports all clauses listed in InfluxQL 2.x documentation.
+
+The subquery appears in the main query's `FROM` clause, and it requires surrounding parentheses.
+The subquery also supports all clauses listed in InfluxQL 2.x documentation.
+
+InfluxQL supports multiple nested subqueries per main query.
+Sample syntax for multiple subqueries:
+
+```sql
+SELECT_clause FROM ( SELECT_clause FROM ( SELECT_statement ) [...] ) [...]
+```
+
+{{% note %}}
+#### Improve performance of time-bound subqueries
+To improve the performance of InfluxQL queries with time-bound subqueries,
+apply the `WHERE time` clause to the outer query instead of the inner query.
+For example, the following queries return the same results, but **the query with +time bounds on the outer query is more performant than the query with time +bounds on the inner query**: + +##### Time bounds on the outer query (recommended) +```sql +SELECT inner_value AS value FROM (SELECT raw_value as inner_value) +WHERE time >= '2022-07-19T21:00:00Z' +AND time <= '2022-07-20T22:00:00Z' +``` + +##### Time bounds on the inner query +```sql +SELECT inner_value AS value FROM ( + SELECT raw_value as inner_value + WHERE time >= '2022-07-19T21:00:00Z' + AND time <= '2022-07-20T22:00:00Z' +) +``` +{{% /note %}} + +### Examples + +{{< expand-wrapper >}} + +{{% expand "Calculate the SUM() of several MAX() values" %}} + +```sql +SELECT SUM("max") FROM (SELECT MAX("water_level") FROM "h2o_feet" GROUP BY "location") +``` + +Output: +{{% influxql/table-meta %}} +Name: h2o_feet +{{% /influxql/table-meta %}} + +| time | sum | +| :--------------| ------------------:| +|1970-01-01T00:00:00Z | 17.169 | + + +The query returns the sum of the maximum `water_level` values across every tag value of `location`. + +InfluxDB first performs the subquery; it calculates the maximum value of `water_level` for each tag value of `location`: + +```sql +SELECT MAX("water_level") FROM "h2o_feet" GROUP BY "location" +``` + +Output: +{{% influxql/table-meta %}} +name: h2o_feet +tags: location=coyote_creek +{{% /influxql/table-meta %}} + +| time | max | +| :--------------------------- | ------------------: | +| 2015-08-29T07:24:00Z | 9.9640000000 | + +{{% influxql/table-meta %}} +name: h2o_feet +tags: location=santa_monica +{{% /influxql/table-meta %}} + +| time | max | +| :--------------------------- | ------------------: | +| 2015-08-29T03:54:00Z | 7.2050000000 | + +Next, InfluxDB performs the main query and calculates the sum of those maximum values: `9.9640000000` + `7.2050000000` = `17.169`. +Notice that the main query specifies `max`, not `water_level`, as the field key in the `SUM()` function. 
+
+{{% /expand %}}
+
+{{% expand "Calculate the MEAN() difference between two fields" %}}
+
+```sql
+SELECT MEAN("difference") FROM (SELECT "cats" - "dogs" AS "difference" FROM "pet_daycare")
+```
+
+Output:
+{{% influxql/table-meta %}}
+Name: pet_daycare
+{{% /influxql/table-meta %}}
+
+| time | mean |
+| :--------------------------- | ------------------: |
+| 1970-01-01T00:00:00Z | 1.75 |
+
+The query returns the average of the differences between the number of `cats` and `dogs` in the `pet_daycare` measurement.
+
+InfluxDB first performs the subquery.
+The subquery calculates the difference between the values in the `cats` field and the values in the `dogs` field,
+and it names the output column `difference`:
+
+```sql
+SELECT "cats" - "dogs" AS "difference" FROM "pet_daycare"
+```
+Output:
+{{% influxql/table-meta %}}
+Name: pet_daycare
+{{% /influxql/table-meta %}}
+
+| time | difference |
+| :--------------------------- | ------------------: |
+| 2017-01-20T00:55:56Z | -1 |
+| 2017-01-21T00:55:56Z | -49 |
+| 2017-01-22T00:55:56Z | 66 |
+| 2017-01-23T00:55:56Z | -9 |
+
+Next, InfluxDB performs the main query and calculates the average of those differences.
+Notice that the main query specifies `difference` as the field key in the [`MEAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean) function.
+ +{{% /expand %}} + +{{% expand "Calculate several MEAN() values and place a condition on those mean values" %}} + +```sql +SELECT "all_the_means" FROM (SELECT MEAN("water_level") AS "all_the_means" FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' GROUP BY time(12m) ) WHERE "all_the_means" > 5 +``` + +Output: +{{% influxql/table-meta %}} +Name: h2o_feet +{{% /influxql/table-meta %}} + +| time | all_the_means | +| :--------------------------- | ------------------: | +| 2019-08-18T00:00:00Z | 5.4135000000 | +| 2019-08-18T00:12:00Z | 5.3042500000 | +| 2019-08-18T00:24:00Z | 5.1682500000 | + + +The query returns all mean values of the `water_level` field that are greater than five. + +InfluxDB first performs the subquery. +The subquery calculates `MEAN()` values of `water_level` from `2019-08-18T00:00:00Z` through `2019-08-18T00:30:00Z` and groups the results into 12-minute intervals. It also names the output column `all_the_means`: + +```sql +SELECT MEAN("water_level") AS "all_the_means" FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' GROUP BY time(12m) +``` + +Output: +{{% influxql/table-meta %}} +Name: h2o_feet +{{% /influxql/table-meta %}} + +| time | all_the_means | +| :--------------------------- | ------------------: | +| 2019-08-18T00:00:00Z | 5.4135000000 | +| 2019-08-18T00:12:00Z | 5.3042500000 | +| 2019-08-18T00:24:00Z | 5.1682500000 | + +Next, InfluxDB performs the main query and returns only those mean values that are greater than five. +Notice that the main query specifies `all_the_means` as the field key in the `SELECT` clause. 
+ +{{% /expand %}} + +{{% expand "Calculate the SUM() of several DERIVATIVE() values" %}} + +```sql +SELECT SUM("water_level_derivative") AS "sum_derivative" FROM (SELECT DERIVATIVE(MEAN("water_level")) AS "water_level_derivative" FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' GROUP BY time(12m),"location") GROUP BY "location" +``` + +Output: +{{% influxql/table-meta %}} +name: h2o_feet +tags: location=coyote_creek +{{% /influxql/table-meta %}} + +| time | sum_derivative | +| :--------------------------- | ------------------: | +| 1970-01-01T00:00:00Z | -0.5315000000 | + +{{% influxql/table-meta %}} +name: h2o_feet +tags: location=santa_monica +{{% /influxql/table-meta %}} + +| time | sum_derivative | +| :--------------------------- | ------------------: | +| 1970-01-01T00:00:00Z | -0.2375000000 | + +The query returns the sum of the derivative of average `water_level` values for each tag value of `location`. + +InfluxDB first performs the subquery. +The subquery calculates the derivative of average `water_level` values taken at 12-minute intervals. 
+It performs that calculation for each tag value of `location` and names the output column `water_level_derivative`: + +```sql +SELECT DERIVATIVE(MEAN("water_level")) AS "water_level_derivative" FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' GROUP BY time(12m),"location" +``` +Output: +{{% influxql/table-meta %}} +name: h2o_feet +tags: location=coyote_creek +{{% /influxql/table-meta %}} + +| time | water_level_derivative | +| :--------------------------- | ------------------: | +| 2019-08-18T00:00:00Z | -0.1410000000 | +| 2019-08-18T00:12:00Z | -0.1890000000 | +| 2019-08-18T00:24:00Z | -0.2015000000 | + + +{{% influxql/table-meta %}} +name: h2o_feet +tags: location=santa_monica +{{% /influxql/table-meta %}} + +| time | water_level_derivative | +| :--------------------------- | ------------------: | +| 2019-08-18T00:00:00Z | -0.1375000000 | +| 2019-08-18T00:12:00Z | -0.0295000000 | +| 2019-08-18T00:24:00Z | -0.0705000000 | + + +Next, InfluxDB performs the main query and calculates the sum of the `water_level_derivative` values for each tag value of `location`. +Notice that the main query specifies `water_level_derivative`, not `water_level` or `derivative`, as the field key in the [`SUM()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#sum) function. + +{{% /expand %}} + +{{< /expand-wrapper >}} + +### Common issues with subqueries + +#### Multiple statements in a subquery + +InfluxQL supports multiple nested subqueries per main query: + +```sql +SELECT_clause FROM ( SELECT_clause FROM ( SELECT_statement ) [...] ) [...] + ------------------ ---------------- + Subquery 1 Subquery 2 +``` + +InfluxQL does not support multiple [`SELECT` statements](/influxdb/v2.6/query-data/influxql/explore-data/select/) per subquery: + +```sql +SELECT_clause FROM (SELECT_statement; SELECT_statement) [...] +``` + +The system returns a parsing error if a subquery includes multiple `SELECT` statements. 
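
Conceptually, InfluxDB evaluates the innermost query first and feeds its rows to the enclosing query. The two-pass shape of the `MEAN()` subquery example above can be sketched in plain Python (hypothetical sample points, not the NOAA dataset or InfluxDB internals):

```python
from datetime import datetime, timedelta

# Hypothetical sample points: (timestamp, water_level)
points = [
    (datetime(2019, 8, 18, 0, 0), 5.2), (datetime(2019, 8, 18, 0, 6), 5.6),
    (datetime(2019, 8, 18, 0, 12), 5.3), (datetime(2019, 8, 18, 0, 18), 5.31),
    (datetime(2019, 8, 18, 0, 24), 4.9), (datetime(2019, 8, 18, 0, 30), 4.8),
]

def mean_by_window(points, minutes):
    """Inner query: GROUP BY time(<minutes>m) with MEAN(), keyed by window start."""
    buckets = {}
    for ts, value in points:
        window = ts.replace(second=0, microsecond=0) - timedelta(minutes=ts.minute % minutes)
        buckets.setdefault(window, []).append(value)
    return {w: sum(vs) / len(vs) for w, vs in sorted(buckets.items())}

# Pass 1: the subquery produces the means; pass 2: the outer WHERE filters them.
all_the_means = mean_by_window(points, 12)
filtered = {w: m for w, m in all_the_means.items() if m > 5}
```

The 00:24 window (mean 4.85) is dropped by the outer filter, mirroring how `WHERE "all_the_means" > 5` prunes the subquery's output.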
diff --git a/content/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone.md b/content/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone.md
new file mode 100644
index 000000000..cac9bdd8e
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone.md
@@ -0,0 +1,441 @@
+---
+title: Time and timezone queries
+list_title: Time and timezone queries
+description: >
+  Explore InfluxQL features used specifically for working with time. Use the `tz` (timezone) clause to return the UTC offset for the specified timezone.
+menu:
+  influxdb_2_6:
+    name: Time and timezone
+    parent: Explore data
+weight: 308
+list_code_example: |
+  ```sql
+  SELECT_clause FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause] tz('<time_zone>')
+  ```
+---
+
+InfluxQL is designed for working with time series data and includes features specifically for working with time.
+InfluxQL provides the following ways to work with time and timestamps in your queries:
+
+- [Configuring returned timestamps](#configuring-returned-timestamps)
+- [Time syntax](#time-syntax)
+- [Absolute time](#absolute-time)
+- [Relative time](#relative-time)
+- [The Time Zone clause](#the-time-zone-clause)
+- [Common issues with time syntax](#common-issues-with-time-syntax)
+
+## Configuring returned timestamps
+
+The [InfluxQL shell](/influxdb/v2.6/tools/influxql-shell/) returns timestamps in
+nanosecond UNIX epoch format by default.
+Specify alternative formats with the
+[`precision` command](/influxdb/v2.6/tools/influxql-shell/#precision).
+For example, in the InfluxQL shell, use `precision rfc3339` to view results in a human-readable format.
+
+The [InfluxDB API](/influxdb/v2.6/reference/api/influxdb-1x/) returns timestamps
+in [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) format by default.
+Specify alternative formats with the
+[`epoch` query string parameter](/influxdb/v2.6/reference/api/influxdb-1x/).
+
+## Time syntax
+
+For most `SELECT` statements, the default time range is between [`1677-09-21 00:12:43.145224194` and `2262-04-11T23:47:16.854775806Z` UTC](/influxdb/v2.6/reference/faq/#what-are-the-minimum-and-maximum-timestamps-that-influxdb-can-store).
+For `SELECT` statements with a [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/),
+the default time range is between `1677-09-21 00:12:43.145224194` UTC and [`now()`](/influxdb/v2.6/reference/glossary/#now).
+The following sections detail how to specify alternative time ranges in the `SELECT`
+statement's [`WHERE` clause](/influxdb/v2.6/query-data/influxql/explore-data/where/):
+
+- [Absolute time](#absolute-time)
+- [Relative time](#relative-time)
+
+## Absolute time
+
+Specify absolute time with date-time strings and epoch time.
+
+### Syntax
+
+```sql
+SELECT_clause FROM_clause WHERE time <operator> ['<rfc3339_date_time_string>' | '<rfc3339_like_date_time_string>' | <epoch_time>] [AND ['<rfc3339_date_time_string>' | '<rfc3339_like_date_time_string>' | <epoch_time>] [...]]
+```
+
+#### Supported operators
+
+| Operator | Meaning |
+|:--------:|:------- |
+| `=` | equal to |
+| `<>` | not equal to |
+| `!=` | not equal to |
+| `>` | greater than |
+| `>=` | greater than or equal to |
+| `<` | less than |
+| `<=` | less than or equal to |
+
+Currently, InfluxDB does not support using `OR` with absolute time in the `WHERE`
+clause. See the [Frequently Asked Questions](/influxdb/v2.6/reference/faq/#why-is-my-query-with-a-where-or-time-clause-returning-empty-results)
+document and the [GitHub Issue](https://github.com/influxdata/influxdb/issues/7530)
+for more information.
+
+#### `rfc3339_date_time_string`
+
+```sql
+'YYYY-MM-DDTHH:MM:SS.nnnnnnnnnZ'
+```
+
+`.nnnnnnnnn` is optional and is set to `.000000000` if not included.
+The [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) date-time string requires single quotes.
+
+#### `rfc3339_like_date_time_string`
+
+```sql
+'YYYY-MM-DD HH:MM:SS.nnnnnnnnn'
+```
+
+`HH:MM:SS.nnnnnnnnn` is optional and is set to `00:00:00.000000000` if not included.
+The RFC3339-like date-time string requires single quotes.
+
+#### `epoch_time`
+
+Epoch time is the amount of time that has elapsed since 00:00:00
+Coordinated Universal Time (UTC), Thursday, 1 January 1970.
+
+By default, InfluxDB assumes that all epoch timestamps are in **nanoseconds**. Include a [duration literal](/influxdb/v2.6/reference/glossary/#duration) at the end of the epoch timestamp to indicate a precision other than nanoseconds.
+
+#### Basic arithmetic
+
+All timestamp formats support basic arithmetic.
+Add (`+`) or subtract (`-`) a time from a timestamp with a [duration literal](/influxdb/v2.6/reference/glossary/#duration).
+Note that InfluxQL requires whitespace between the `+` or `-` and the
+duration literal.
+
+### Examples
+
+{{< expand-wrapper >}}
+
+{{% expand "Specify a time range with RFC3339 date-time strings" %}}
+
+```sql
+SELECT "water_level" FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:00:00.000000000Z' AND time <= '2019-08-18T00:12:00Z'
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | water_level |
+| :------------------ | ------------------:|
+| 2019-08-18T00:00:00Z | 2.3520000000|
+| 2019-08-18T00:06:00Z | 2.3790000000|
+| 2019-08-18T00:12:00Z | 2.3430000000|
+
+The query returns data with timestamps between August 18, 2019 at 00:00:00.000000000 and
+August 18, 2019 at 00:12:00.
+
+Note that the single quotes around the RFC3339 date-time strings are required.
+
+{{% /expand %}}
+
+{{% expand "Specify a time range with RFC3339-like date-time strings" %}}
+
+```sql
+SELECT "water_level" FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18' AND time <= '2019-08-18 00:12:00'
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | water_level |
+| :------------------ | ------------------:|
+| 2019-08-18T00:00:00Z | 2.3520000000|
+| 2019-08-18T00:06:00Z | 2.3790000000|
+| 2019-08-18T00:12:00Z | 2.3430000000|
+
+The query returns data with timestamps between August 18, 2019 at 00:00:00 and August 18, 2019
+at 00:12:00.
+The first date-time string does not include a time; InfluxDB assumes the time
+is 00:00:00.
+
+Note that the single quotes around the RFC3339-like date-time strings are
+required.
+
+{{% /expand %}}
+
+{{% expand "Specify a time range with epoch timestamps" %}}
+
+```sql
+SELECT "water_level" FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= 1564635600000000000 AND time <= 1566190800000000000
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | water_level |
+| :------------------ | ------------------:|
+| 2019-08-17T00:00:00Z | 2.0640000000|
+| 2019-08-17T00:06:00Z | 2.1160000000|
+| 2019-08-17T00:12:00Z | 2.0280000000|
+| 2019-08-17T00:18:00Z | 2.1260000000|
+| 2019-08-17T00:24:00Z | 2.0410000000|
+| 2019-08-17T00:30:00Z | 2.0510000000|
+| 2019-08-17T00:36:00Z | 2.0670000000|
+| 2019-08-17T00:42:00Z | 2.0570000000|
+| 2019-08-17T00:48:00Z | 1.9910000000|
+| 2019-08-17T00:54:00Z | 2.0540000000|
+| 2019-08-17T01:00:00Z | 2.0180000000|
+| 2019-08-17T01:06:00Z | 2.0960000000|
+| 2019-08-17T01:12:00Z | 2.1000000000|
+| 2019-08-17T01:18:00Z | 2.1060000000|
+| 2019-08-17T01:24:00Z | 2.1261441460|
+
+The query returns data with timestamps that occur between August 1, 2019
+at 05:00:00 and August 19, 2019 at 05:00:00 UTC. The results shown above are partial. By default, InfluxDB assumes epoch timestamps are in nanoseconds.
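
Outside of InfluxDB, the nanosecond default is easy to sanity-check: dividing a nanosecond epoch value by 10⁹ addresses the same instant as its second-precision form. A plain-Python illustration (not InfluxQL):

```python
from datetime import datetime, timezone

ns_timestamp = 1566190800000000000  # nanosecond precision (InfluxDB's default)
s_timestamp = 1566190800            # the same instant at second precision

# Integer-divide nanoseconds down to seconds before converting
t = datetime.fromtimestamp(ns_timestamp // 10**9, tz=timezone.utc)
assert t == datetime.fromtimestamp(s_timestamp, tz=timezone.utc)
print(t.isoformat())  # 2019-08-19T05:00:00+00:00
```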
+
+{{% /expand %}}
+
+{{% expand "Specify a time range with second-precision epoch timestamps" %}}
+
+```sql
+SELECT "water_level" FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= 1566190800s AND time <= 1566191520s
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | water_level |
+| :------------------ | ------------------:|
+| 2019-08-19T05:00:00Z | 3.2320000000|
+| 2019-08-19T05:06:00Z | 3.2320000000|
+| 2019-08-19T05:12:00Z | 3.2910000000|
+
+The query returns data with timestamps that occur between August 19, 2019
+at 05:00:00 and August 19, 2019 at 05:12:00 UTC.
+The `s` duration literal at the end of the epoch timestamps indicates that the epoch timestamps are in seconds.
+
+{{% /expand %}}
+
+{{% expand "Perform basic arithmetic on an RFC3339-like date-time string" %}}
+
+```sql
+SELECT "water_level" FROM "h2o_feet" WHERE time > '2019-09-17T21:24:00Z' + 6m
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | water_level |
+| :------------------ | ------------------:|
+| 2019-09-17T21:36:00Z | 5.0660000000|
+| 2019-09-17T21:42:00Z | 4.9380000000|
+
+The query returns data with timestamps that occur at least six minutes after
+September 17, 2019 at 21:24:00.
+Note that the whitespace between the `+` and `6m` is required.
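
The same `+ 6m` shift can be reproduced client-side. A hedged Python equivalent of adding a duration literal to an RFC3339 timestamp (illustration only, not InfluxDB internals):

```python
from datetime import datetime, timedelta, timezone

# '2019-09-17T21:24:00Z' + 6m, computed client-side
start = datetime(2019, 9, 17, 21, 24, tzinfo=timezone.utc)
shifted = start + timedelta(minutes=6)
print(shifted.isoformat())  # 2019-09-17T21:30:00+00:00
```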
+
+{{% /expand %}}
+
+{{% expand "Perform basic arithmetic on an epoch timestamp" %}}
+
+```sql
+SELECT "water_level" FROM "h2o_feet" WHERE time > 24043524m - 6m
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | water_level |
+| :------------------ | ------------------:|
+| 2019-08-17T00:00:00Z | 8.1200000000|
+| 2019-08-17T00:00:00Z | 2.0640000000|
+| 2019-08-17T00:06:00Z | 8.0050000000|
+| 2019-08-17T00:06:00Z | 2.1160000000|
+| 2019-08-17T00:12:00Z | 7.8870000000|
+| 2019-08-17T00:12:00Z | 2.0280000000|
+| 2019-08-17T00:18:00Z | 7.7620000000|
+| 2019-08-17T00:18:00Z | 2.1260000000|
+
+The query returns data with timestamps that occur at least six minutes before
+September 18, 2015 at 21:24:00 (the epoch minute timestamp `24043524m`). Note that the whitespace between the `-` and `6m` is required. The results above are partial because the data set is large.
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+## Relative time
+
+Use [`now()`](/influxdb/v2.6/reference/glossary/#now) to query data with [timestamps](/influxdb/v2.6/reference/glossary/#timestamp) relative to the server's current timestamp.
+
+### Syntax
+
+```sql
+SELECT_clause FROM_clause WHERE time <operator> now() [[ - | + ] <duration_literal>] [(AND|OR) time <operator> now() [...]]
+```
+
+`now()` is the Unix time of the server at the time the query is executed on that server.
+The whitespace between `-` or `+` and the [duration literal](/influxdb/v2.6/reference/glossary/#duration) is required.
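
Because `now()` resolves to the server's current epoch time before the comparison runs, a condition like `time > now() - 1h` is effectively a nanosecond lower bound. A client-side sketch of that resolution (illustrative only, not InfluxDB internals):

```python
import time

ONE_HOUR_NS = 3_600 * 10**9  # 1h expressed in nanoseconds

# Resolve now() once, then derive the lower bound used by the comparison
now_ns = time.time_ns()
lower_bound_ns = now_ns - ONE_HOUR_NS  # time > now() - 1h
```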
+
+#### Supported operators
+
+| Operator | Meaning |
+|:--------:|:------- |
+| `=` | equal to |
+| `<>` | not equal to |
+| `!=` | not equal to |
+| `>` | greater than |
+| `>=` | greater than or equal to |
+| `<` | less than |
+| `<=` | less than or equal to |
+
+#### `duration_literal`
+
+- microseconds: `u` or `µ`
+- milliseconds: `ms`
+- seconds: `s`
+- minutes: `m`
+- hours: `h`
+- days: `d`
+- weeks: `w`
+
+### Examples
+
+{{< expand-wrapper >}}
+
+{{% expand "Specify a time range with relative time" %}}
+
+```sql
+SELECT "water_level" FROM "h2o_feet" WHERE time > now() - 1h
+```
+
+The query returns data with timestamps that occur within the past hour.
+The whitespace between `-` and `1h` is required.
+
+{{% /expand %}}
+
+{{% expand "Specify a time range with absolute time and relative time" %}}
+
+```sql
+SELECT "level description" FROM "h2o_feet" WHERE time > '2019-09-17T21:18:00Z' AND time < now() + 1000d
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | level description |
+| :------------------ |--------------------:|
+|2019-09-17T21:24:00Z | between 3 and 6 feet |
+|2019-09-17T21:30:00Z | between 3 and 6 feet |
+|2019-09-17T21:36:00Z | between 3 and 6 feet |
+|2019-09-17T21:42:00Z | between 3 and 6 feet |
+
+The query returns data with timestamps that occur between September 17, 2019 at 21:18:00 and 1000 days from `now()`. The whitespace between `+` and `1000d` is required.
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+## The Time Zone clause
+
+Use the `tz()` clause to return the UTC offset for the specified timezone.
+
+### Syntax
+
+```sql
+SELECT_clause FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause] tz('<time_zone>')
+```
+
+By default, InfluxDB stores and returns timestamps in UTC.
+The `tz()` clause applies the UTC offset or, if applicable, the UTC Daylight Saving Time (DST) offset to the query's returned timestamps. The returned timestamps must be in `RFC3339` format for the UTC offset or UTC DST offset to appear.
+The `time_zone` parameter follows the TZ syntax in the [Internet Assigned Numbers Authority time zone database](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List) and it requires single quotes.
+
+### Examples
+
+{{< expand-wrapper >}}
+
+{{% expand "Return the UTC offset for Chicago's time zone" %}}
+
+```sql
+SELECT "water_level" FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:18:00Z' tz('America/Chicago')
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | water_level |
+| :-------------- | -------------------:|
+| 2019-08-17T19:00:00-05:00 | 2.3520000000|
+| 2019-08-17T19:06:00-05:00 | 2.3790000000|
+| 2019-08-17T19:12:00-05:00 | 2.3430000000|
+| 2019-08-17T19:18:00-05:00 | 2.3290000000|
+
+The query results include the UTC offset (`-05:00`) for the `America/Chicago` time zone in the timestamps.
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+## Common issues with time syntax
+
+### Using `OR` to select multiple time intervals
+
+InfluxDB does not support using the `OR` operator in the `WHERE` clause to specify multiple time intervals.
+
+For more information, see [Frequently asked questions](/influxdb/v2.6/reference/faq/#why-is-my-query-with-a-where-or-time-clause-returning-empty-results).
+
+### Querying data that occur after `now()` with a `GROUP BY time()` clause
+
+Most `SELECT` statements have a default time range between [`1677-09-21 00:12:43.145224194` and `2262-04-11T23:47:16.854775806Z` UTC](/influxdb/v2.6/reference/faq/#what-are-the-minimum-and-maximum-timestamps-that-influxdb-can-store).
+For `SELECT` statements with a [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals),
+the default time range is between `1677-09-21 00:12:43.145224194` UTC and [`now()`](/influxdb/v2.6/reference/glossary/#now).
+
+To query data with timestamps that occur after `now()`, `SELECT` statements with
+a `GROUP BY time()` clause must provide an alternative upper bound in the
+`WHERE` clause.
+
diff --git a/content/influxdb/v2.6/query-data/influxql/explore-data/where.md b/content/influxdb/v2.6/query-data/influxql/explore-data/where.md
new file mode 100644
index 000000000..4f18cd7a0
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/influxql/explore-data/where.md
@@ -0,0 +1,314 @@
+---
+title: The WHERE clause
+list_title: WHERE clause
+description: >
+  Use the `WHERE` clause to filter data based on [fields](/influxdb/v2.6/reference/glossary/#field), [tags](/influxdb/v2.6/reference/glossary/#tag), and/or [timestamps](/influxdb/v2.6/reference/glossary/#timestamp).
+menu:
+  influxdb_2_6:
+    name: WHERE clause
+    parent: Explore data
+weight: 302
+list_code_example: |
+  ```sql
+  SELECT_clause FROM_clause WHERE <conditional_expression> [(AND|OR) <conditional_expression> [...]]
+  ```
+---
+
+Use the `WHERE` clause to filter data based on
+[fields](/influxdb/v2.6/reference/glossary/#field),
+[tags](/influxdb/v2.6/reference/glossary/#tag), and/or
+[timestamps](/influxdb/v2.6/reference/glossary/#timestamp).
+
+- [Syntax](#syntax)
+- [Examples](#examples)
+- [Common issues](#common-issues-with-the-where-clause)
+
+### Syntax
+
+```sql
+SELECT_clause FROM_clause WHERE <conditional_expression> [(AND|OR) <conditional_expression> [...]]
+```
+
+The `WHERE` clause supports `conditional_expressions` on fields, tags, and timestamps.
+
+{{% note %}}
+**Note:** InfluxDB does not support using `OR` in the `WHERE` clause to specify multiple time ranges.
+For example, InfluxDB returns an empty response for the following query:
+
+```sql
+SELECT * FROM "mydb" WHERE time = '2020-07-31T20:07:00Z' OR time = '2020-07-31T23:07:17Z'
+```
+{{% /note %}}
+
+#### Fields
+
+```
+field_key <operator> ['string' | boolean | float | integer]
+```
+
+The `WHERE` clause supports comparisons against string, boolean, float, and integer [field values](/influxdb/v2.6/reference/glossary/#field-value).
+
+Single quote string field values in the `WHERE` clause.
+Queries with unquoted string field values or double quoted string field values will not return any data and, in most cases,
+[will not return an error](#common-issues-with-the-where-clause).
+
+#### Supported operators
+
+| Operator | Meaning |
+|:--------:|:-------- |
+| `=` | equal to |
+| `<>` | not equal to |
+| `!=` | not equal to |
+| `>` | greater than |
+| `>=` | greater than or equal to |
+| `<` | less than |
+| `<=` | less than or equal to |
+
+InfluxQL also supports [Regular Expressions](/influxdb/v2.6/query-data/influxql/explore-data/regular-expressions/).
+
+#### Tags
+
+```sql
+tag_key <operator> ['tag_value']
+```
+
+Single quote [tag values](/influxdb/v2.6/reference/glossary/#tag-value) in
+the `WHERE` clause.
+Queries with unquoted tag values or double quoted tag values will not return
+any data and, in most cases,
+[will not return an error](#common-issues-with-the-where-clause).
+
+#### Supported operators
+
+| Operator | Meaning |
+|:--------:|:------- |
+| `=` | equal to |
+| `<>` | not equal to |
+| `!=` | not equal to |
+
+#### Timestamps
+
+For most `SELECT` statements, the default time range is between [`1677-09-21 00:12:43.145224194` and `2262-04-11T23:47:16.854775806Z` UTC](/influxdb/v2.6/reference/faq/#what-are-the-minimum-and-maximum-timestamps-that-influxdb-can-store).
+For `SELECT` statements with a [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/), the default time +range is between `1677-09-21 00:12:43.145224194` UTC and [`now()`](/influxdb/v2.6/reference/glossary/#now). + +See [Time Syntax](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax) for information on how to specify alternative time ranges in the `WHERE` clause. + +### Examples + +{{< expand-wrapper >}} +{{% expand "Select data with specific field key-values" %}} + + +```sql +SELECT * FROM "h2o_feet" WHERE "water_level" > 9 +``` +Output: +{{% influxql/table-meta %}} +Name: h2o_feet +{{% /influxql/table-meta %}} + +| time | level description | location | water_level | +| :-------------- | :-------------------| :------------------| -------: | +| 2019-08-25T04:00:00Z | at or greater than 9 feet | coyote_creek | 9.0320000000| +| 2019-08-25T04:06:00Z | at or greater than 9 feet | coyote_creek | 9.0780000000| +| 2019-08-25T04:12:00Z | at or greater than 9 feet | coyote_creek | 9.1110000000| +| 2019-08-25T04:18:00Z | at or greater than 9 feet | coyote_creek | 9.1500000000| +| 2019-08-25T04:24:00Z | at or greater than 9 feet | coyote_creek | 9.1800000000| + +The query returns data from the `h2o_feet` measurement with field values of `water_level` that are greater than nine. +This is a partial data set. 
+ +{{% /expand %}} + +{{% expand "Select data with a specific string field key-value" %}} + +```sql +SELECT * FROM "h2o_feet" WHERE "level description" = 'below 3 feet' +``` +Output: +{{% influxql/table-meta %}} +Name: h2o_feet +{{% /influxql/table-meta %}} + +| time | level description | location | water_level | +| :-------------- | :-------------------| :------------------| :------------------ | +| 2019-08-17T00:00:00Z | below 3 feet | santa_monica | 2.0640000000| +| 2019-08-17T00:06:00Z | below 3 feet | santa_monica | 2.1160000000| +| 2019-08-17T00:12:00Z | below 3 feet | santa_monica | 2.0280000000| +| 2019-08-17T00:18:00Z | below 3 feet | santa_monica | 2.1260000000| +| 2019-08-17T00:24:00Z | below 3 feet | santa_monica | 2.0410000000| +| 2019-08-17T00:30:00Z | below 3 feet | santa_monica | 2.0510000000| + +The query returns data from the `h2o_feet` measurement with field values of `level description` that equal the `below 3 feet` string. InfluxQL requires single quotes around string field values in the `WHERE` clause. 
+ +{{% /expand %}} + +{{% expand "Select data with a specific field key-value and perform basic arithmetic" %}} + +```sql +SELECT * FROM "h2o_feet" WHERE "water_level" + 2 > 11.9 +``` +Output: +{{% influxql/table-meta %}} +Name: h2o_feet +{{% /influxql/table-meta %}} + +| time | level description | location | water_level | +| :-------------- | :-------------------| :------------------|---------------: | +| 2019-08-28T07:06:00Z | at or greater than 9 feet | coyote_creek | 9.9020000000| +| 2019-08-28T07:12:00Z | at or greater than 9 feet | coyote_creek | 9.9380000000| +| 2019-08-28T07:18:00Z | at or greater than 9 feet | coyote_creek | 9.9570000000| +| 2019-08-28T07:24:00Z | at or greater than 9 feet | coyote_creek | 9.9640000000| +| 2019-08-28T07:30:00Z | at or greater than 9 feet | coyote_creek | 9.9540000000| +| 2019-08-28T07:36:00Z | at or greater than 9 feet | coyote_creek | 9.9410000000| +| 2019-08-28T07:42:00Z | at or greater than 9 feet | coyote_creek | 9.9250000000| +| 2019-08-28T07:48:00Z | at or greater than 9 feet | coyote_creek | 9.9020000000| +| 2019-09-01T23:30:00Z | at or greater than 9 feet | coyote_creek | 9.9020000000| + +The query returns data from the `h2o_feet` measurement with field values of +`water_level` plus two that are greater than 11.9. Note that InfluxDB follows the standard order of operations. + +See [Mathematical operators](/influxdb/v2.6/query-data/influxql/math-operators/) +for more on supported operators. 
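
The arithmetic in the condition runs before the comparison, per the standard order of operations. A small Python sketch of the same filter over hypothetical `water_level` values (illustration only):

```python
# Hypothetical water_level values; the WHERE condition is value + 2 > 11.9
water_levels = [9.902, 9.938, 9.957, 8.5, 9.1]
matches = [w for w in water_levels if w + 2 > 11.9]
print(matches)  # [9.902, 9.938, 9.957]
```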
+
+{{% /expand %}}
+
+{{% expand "Select data with a specific tag key-value" %}}
+
+```sql
+SELECT "water_level" FROM "h2o_feet" WHERE "location" = 'santa_monica'
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | water_level |
+| :-------------- | -------------------:|
+| 2019-08-17T00:00:00Z | 2.0640000000|
+| 2019-08-17T00:06:00Z | 2.1160000000|
+| 2019-08-17T00:12:00Z | 2.0280000000|
+| 2019-08-17T00:18:00Z | 2.1260000000|
+| 2019-08-17T00:24:00Z | 2.0410000000|
+
+The query returns data from the `h2o_feet` measurement where the
+[tag key](/influxdb/v2.6/reference/glossary/#tag-key) `location` is set to `santa_monica`.
+InfluxQL requires single quotes around tag values in the `WHERE` clause.
+
+{{% /expand %}}
+
+{{% expand "Select data with specific field key-values and tag key-values" %}}
+
+```sql
+SELECT "water_level" FROM "h2o_feet" WHERE "location" <> 'santa_monica' AND (water_level < -0.59 OR water_level > 9.95)
+```
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | water_level |
+| :------------------- | -----------: |
+| 2019-08-28T07:18:00Z | 9.9570000000|
+| 2019-08-28T07:24:00Z | 9.9640000000|
+| 2019-08-28T07:30:00Z | 9.9540000000|
+| 2019-08-28T14:30:00Z | -0.6100000000|
+| 2019-08-28T14:36:00Z | -0.5910000000|
+| 2019-08-29T15:18:00Z | -0.5940000000|
+
+The query returns data from the `h2o_feet` measurement where the tag key
+`location` is not set to `santa_monica` and where the field values of
+`water_level` are either less than -0.59 or greater than 9.95.
+The `WHERE` clause supports the operators `AND` and `OR`, and supports
+separating logic with parentheses.
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+```sql
+SELECT * FROM "h2o_feet" WHERE time > now() - 7d
+```
+
+The query returns data from the `h2o_feet` measurement with [timestamps](/influxdb/v2.6/reference/glossary/#timestamp)
+within the past seven days.
See [Time Syntax](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax) for more in-depth information on supported time syntax in the `WHERE` clause. + +### Common issues with the `WHERE` clause + +#### A `WHERE` clause query unexpectedly returns no data + +In most cases, this issue is the result of missing single quotes around +tag values or string field values. +Queries with unquoted or double quoted tag values or string field values will +not return any data and, in most cases, will not return an error. + +The first two queries in the code block below attempt to specify the tag value +`santa_monica` without any quotes and with double quotes. +Those queries return no results. +The third query single quotes `santa_monica` (this is the supported syntax) +and returns the expected results. + +```sql +SELECT "water_level" FROM "h2o_feet" WHERE "location" = santa_monica +No results + +SELECT "water_level" FROM "h2o_feet" WHERE "location" = "santa_monica" +No results + +SELECT "water_level" FROM "h2o_feet" WHERE "location" = 'santa_monica' +``` +Output: +{{% influxql/table-meta %}} +Name: h2o_feet +{{% /influxql/table-meta %}} + +| time | water_level | +| :------------------- | -----------: | +| 2019-08-17T00:00:00Z | 2.0640000000 | +| 2019-08-17T00:06:00Z | 2.1160000000 | +| 2019-08-17T00:12:00Z | 2.0280000000 | +| 2019-08-17T00:18:00Z | 2.1260000000 | +| 2019-08-17T00:24:00Z | 2.0410000000 | +| 2019-08-17T00:30:00Z | 2.0510000000 | + +The first two queries in the code block below attempt to specify the string +field value `at or greater than 9 feet` without any quotes and with double +quotes. +The first query returns an error because the string field value includes +white spaces. +The second query returns no results. +The third query single quotes `at or greater than 9 feet` (this is the +supported syntax) and returns the expected results. 
+
+```sql
+SELECT "level description" FROM "h2o_feet" WHERE "level description" = at or greater than 9 feet
+ERR: 400 Bad Request: failed to parse query: found than, expected ; at line 1, char 86
+
+SELECT "level description" FROM "h2o_feet" WHERE "level description" = "at or greater than 9 feet"
+No results
+
+SELECT "level description" FROM "h2o_feet" WHERE "level description" = 'at or greater than 9 feet'
+```
+
+Output:
+{{% influxql/table-meta %}}
+Name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | level description |
+| :--------------------------- | :------------------------ |
+| 2019-08-25T04:00:00Z | at or greater than 9 feet |
+| 2019-08-25T04:06:00Z | at or greater than 9 feet |
+| 2019-08-25T04:12:00Z | at or greater than 9 feet |
+| 2019-08-25T04:18:00Z | at or greater than 9 feet |
+| 2019-08-25T04:24:00Z | at or greater than 9 feet |
\ No newline at end of file
diff --git a/content/influxdb/v2.6/query-data/influxql/explore-schema.md b/content/influxdb/v2.6/query-data/influxql/explore-schema.md
new file mode 100644
index 000000000..606d0924a
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/influxql/explore-schema.md
@@ -0,0 +1,592 @@
+---
+title: Explore your schema using InfluxQL
+description: >
+  Learn to use InfluxQL to explore the schema of your time series data.
+menu:
+  influxdb_2_6:
+    name: Explore your schema
+    parent: Query with InfluxQL
+    identifier: explore-schema-influxql
+weight: 202
+---
+
+Use the following InfluxQL commands to explore the schema of your time series data:
+
+- [SHOW SERIES](#show-series)
+- [SHOW MEASUREMENTS](#show-measurements)
+- [SHOW TAG KEYS](#show-tag-keys)
+- [SHOW TAG VALUES](#show-tag-values)
+- [SHOW FIELD KEYS](#show-field-keys)
+- [SHOW FIELD KEY CARDINALITY](#show-field-key-cardinality)
+- [SHOW TAG KEY CARDINALITY](#show-tag-key-cardinality)
+
+{{% note %}}
+Command examples use the [NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data).
+{{% /note %}}
+
+## SHOW SERIES
+
+Return a list of [series](/influxdb/v2.6/reference/glossary/#series) for
+the specified [database](/influxdb/v2.6/reference/glossary/#database).
+
+### Syntax
+
+```sql
+SHOW SERIES [ON <database_name>] [FROM_clause] [WHERE <tag_key> <operator> ['<tag_value>' | <regular_expression>]] [LIMIT_clause] [OFFSET_clause]
+```
+
+- `ON <database_name>` is optional.
+  If the query does not include `ON <database_name>`, you must specify the
+  database with the `db` query string parameter in the
+  [InfluxDB API](/influxdb/v2.6/reference/api/influxdb-1x/) request.
+- `FROM`, `WHERE`, `LIMIT`, and `OFFSET` clauses are optional.
+- The `WHERE` clause in `SHOW SERIES` supports tag comparisons but not field comparisons.
+
+  **Supported operators in the `WHERE` clause**:
+
+  - `=`: equal to
+  - `<>`: not equal to
+  - `!=`: not equal to
+  - `=~`: matches against
+  - `!~`: doesn't match against
+
+See [Explore data using InfluxQL](/influxdb/v2.6/query-data/influxql/explore-data/) for documentation on the
+[`FROM` clause](/influxdb/v2.6/query-data/influxql/explore-data/select/#from-clause),
+[`LIMIT` clause](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/),
+[`OFFSET` clause](/influxdb/v2.6/query-data/influxql/explore-data/offset-and-soffset/),
+and [Regular Expressions](/influxdb/v2.6/query-data/influxql/explore-data/regular-expressions/).
+
+### Examples
+
+#### Run SHOW SERIES with the ON clause
+
+```sql
+SHOW SERIES ON noaa
+```
+
+**Output:**
+
+The query returns all series in the `noaa` database.
+The query's output is similar to the [line protocol](/influxdb/v2.6/reference/syntax/line-protocol/) format.
+Everything before the first comma is the [measurement](/influxdb/v2.6/reference/glossary/#measurement) name.
+Everything after the first comma is either a [tag key](/influxdb/v2.6/reference/glossary/#tag-key) or a [tag value](/influxdb/v2.6/reference/glossary/#tag-value).
+The `noaa` database has 5 different measurements and 13 different series.
+
+| key |
+| :------------------------------------------ |
+| average_temperature,location=coyote_creek |
+| average_temperature,location=santa_monica |
+| h2o_feet,location=coyote_creek |
+| h2o_feet,location=santa_monica |
+| h2o_pH,location=coyote_creek |
+| h2o_pH,location=santa_monica |
+| h2o_quality,location=coyote_creek,randtag=1 |
+| h2o_quality,location=coyote_creek,randtag=2 |
+| h2o_quality,location=coyote_creek,randtag=3 |
+| h2o_quality,location=santa_monica,randtag=1 |
+| h2o_quality,location=santa_monica,randtag=2 |
+| h2o_quality,location=santa_monica,randtag=3 |
+| h2o_temperature,location=coyote_creek |
+
+#### Run SHOW SERIES with several clauses
+
+```sql
+SHOW SERIES ON noaa FROM "h2o_quality" WHERE "location" = 'coyote_creek' LIMIT 2
+```
+
+**Output:**
+
+The query returns all series in the `noaa` database that are
+associated with the `h2o_quality` measurement and the tag `location = coyote_creek`.
+The `LIMIT` clause limits the number of series returned to two.
+
+| key |
+| :------------------------------------------ |
+| h2o_quality,location=coyote_creek,randtag=1 |
+| h2o_quality,location=coyote_creek,randtag=2 |
+
+## SHOW MEASUREMENTS
+
+Returns a list of [measurements](/influxdb/v2.6/reference/glossary/#measurement)
+for the specified [database](/influxdb/v2.6/reference/glossary/#database).
+
+### Syntax
+
+```sql
+SHOW MEASUREMENTS [ON <database_name>] [WITH MEASUREMENT <operator> ['<measurement_name>' | <regular_expression>]] [WHERE <tag_key> <operator> ['<tag_value>' | <regular_expression>]] [LIMIT_clause] [OFFSET_clause]
+```
+
+- `ON <database_name>` is optional.
+  If the query does not include `ON <database_name>`, you must specify the
+  database with the `db` query string parameter in the
+  [InfluxDB API](/influxdb/v2.6/reference/api/influxdb-1x/) request.
+
+- The `WITH`, `WHERE`, `LIMIT` and `OFFSET` clauses are optional.
+- The `WHERE` clause in `SHOW MEASUREMENTS` supports tag comparisons, but not field comparisons.

  **Supported operators in the `WHERE` clause:**

  - `=`: equal to
  - `<>`: not equal to
  - `!=`: not equal to
  - `=~`: matches against
  - `!~`: doesn't match against

See [Explore data using InfluxQL](/influxdb/v2.6/query-data/influxql/explore-data/) for documentation on the
[`FROM` clause](/influxdb/v2.6/query-data/influxql/explore-data/select/#from-clause),
[`LIMIT` clause](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/),
[`OFFSET` clause](/influxdb/v2.6/query-data/influxql/explore-data/offset-and-soffset/),
and [Regular Expressions](/influxdb/v2.6/query-data/influxql/explore-data/regular-expressions/).

### Examples

#### Run SHOW MEASUREMENTS with the ON clause

```sql
SHOW MEASUREMENTS ON noaa
```

**Output:**

The query returns the list of measurements in the `noaa` database.
The database has five measurements: `average_temperature`, `h2o_feet`, `h2o_pH`,
`h2o_quality`, and `h2o_temperature`.

| name                |
| :------------------ |
| average_temperature |
| h2o_feet            |
| h2o_pH              |
| h2o_quality         |
| h2o_temperature     |

#### Run SHOW MEASUREMENTS with several clauses (i)

```sql
SHOW MEASUREMENTS ON noaa WITH MEASUREMENT =~ /h2o.*/ LIMIT 2 OFFSET 1
```

**Output:**

The query returns the measurements in the `noaa` database that start with `h2o`.
The `LIMIT` and `OFFSET` clauses limit the number of measurement names returned to
two and offset the results by one, skipping the `h2o_feet` measurement.

| name        |
| :---------- |
| h2o_pH      |
| h2o_quality |

#### Run SHOW MEASUREMENTS with several clauses (ii)

```sql
SHOW MEASUREMENTS ON noaa WITH MEASUREMENT =~ /h2o.*/ WHERE "randtag" =~ /\d/
```

**Output:**

The query returns all measurements in the `noaa` database that start with `h2o` and have
values for the tag key `randtag` that include an integer.

| name        |
| :---------- |
| h2o_quality |

## SHOW TAG KEYS

Returns a list of [tag keys](/influxdb/v2.6/reference/glossary/#tag-key)
associated with the specified [database](/influxdb/v2.6/reference/glossary/#database).

### Syntax

```sql
SHOW TAG KEYS [ON <database_name>] [FROM_clause] WITH KEY [ [<operator> "<tag_key>"] | [IN ("<tag_key1>","<tag_key2>")]] [WHERE <tag_key> <operator> ['<tag_value>' | <regular_expression>]] [LIMIT_clause] [OFFSET_clause]
```

- `ON <database_name>` is optional.
  If the query does not include `ON <database_name>`, you must specify the
  database with the `db` query string parameter in the [InfluxDB API](/influxdb/v2.6/reference/api/influxdb-1x/) request.
- The `FROM` clause and the `WHERE` clause are optional.
- The `WHERE` clause in `SHOW TAG KEYS` supports tag comparisons, but not field comparisons.

  **Supported operators in the `WHERE` clause:**

  - `=`: equal to
  - `<>`: not equal to
  - `!=`: not equal to
  - `=~`: matches against
  - `!~`: doesn't match against

See [Explore data using InfluxQL](/influxdb/v2.6/query-data/influxql/explore-data/) for documentation on the
[`FROM` clause](/influxdb/v2.6/query-data/influxql/explore-data/select/#from-clause),
[`LIMIT` clause](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/),
[`OFFSET` clause](/influxdb/v2.6/query-data/influxql/explore-data/offset-and-soffset/),
and [Regular Expressions](/influxdb/v2.6/query-data/influxql/explore-data/regular-expressions/).

### Examples

#### Run SHOW TAG KEYS with the ON clause

```sql
SHOW TAG KEYS ON noaa
```

**Output:**

The query returns the list of tag keys in the `noaa` database.
The output groups tag keys by measurement name;
it shows that every measurement has the `location` tag key and that the
`h2o_quality` measurement has an additional `randtag` tag key.

| name                | tagKey   |
| :------------------ | :------- |
| average_temperature | location |
| h2o_feet            | location |
| h2o_pH              | location |
| h2o_quality         | location |
| h2o_quality         | randtag  |
| h2o_temperature     | location |

#### Run SHOW TAG KEYS with several clauses

```sql
SHOW TAG KEYS ON noaa FROM "h2o_quality" LIMIT 1 OFFSET 1
```

**Output:**

The query returns tag keys from the `h2o_quality` measurement in the `noaa` database.
The `LIMIT` and `OFFSET` clauses limit the number of tag keys returned to one
and offset the results by one.

| name        | tagKey  |
| :---------- | :------ |
| h2o_quality | randtag |

#### Run SHOW TAG KEYS with a WITH KEY IN clause

```sql
SHOW TAG KEYS ON noaa WITH KEY IN ("location")
```

**Output:**

| measurement         | tagKey   |
| :------------------ | :------- |
| average_temperature | location |
| h2o_feet            | location |
| h2o_pH              | location |
| h2o_quality         | location |
| h2o_quality         | randtag  |
| h2o_temperature     | location |

## SHOW TAG VALUES

Returns the list of [tag values](/influxdb/v2.6/reference/glossary/#tag-value)
for the specified [tag key(s)](/influxdb/v2.6/reference/glossary/#tag-key) in the database.

### Syntax

```sql
SHOW TAG VALUES [ON <database_name>] [FROM_clause] WITH KEY [ [<operator> "<tag_key>"] | [IN ("<tag_key1>","<tag_key2>")]] [WHERE <tag_key> <operator> ['<tag_value>' | <regular_expression>]] [LIMIT_clause] [OFFSET_clause]
```

- `ON <database_name>` is optional.
  If the query does not include `ON <database_name>`, you must specify the
  database with the `db` query string parameter in the [InfluxDB API](/influxdb/v2.6/reference/api/influxdb-1x/) request.
- The `WITH` clause is required.
  It supports specifying a single tag key, a regular expression, and multiple tag keys.
- The `FROM`, `WHERE`, `LIMIT`, and `OFFSET` clauses are optional.
- The `WHERE` clause in `SHOW TAG VALUES` supports tag comparisons, but not field comparisons.

  **Supported operators in the `WITH` and `WHERE` clauses:**

  - `=`: equal to
  - `<>`: not equal to
  - `!=`: not equal to
  - `=~`: matches against
  - `!~`: doesn't match against

See [Explore data using InfluxQL](/influxdb/v2.6/query-data/influxql/explore-data/) for documentation on the
[`FROM` clause](/influxdb/v2.6/query-data/influxql/explore-data/select/#from-clause),
[`LIMIT` clause](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/),
[`OFFSET` clause](/influxdb/v2.6/query-data/influxql/explore-data/offset-and-soffset/),
and [Regular Expressions](/influxdb/v2.6/query-data/influxql/explore-data/regular-expressions/).

### Examples

#### Run SHOW TAG VALUES with the ON clause

```sql
SHOW TAG VALUES ON noaa WITH KEY = "randtag"
```

**Output:**

The query returns all tag values of the `randtag` tag key in the `noaa` database.
`SHOW TAG VALUES` groups query results by measurement name.

{{% influxql/table-meta %}}
name: h2o_quality
{{% /influxql/table-meta %}}

| key     | value |
| :------ | ----: |
| randtag | 1     |
| randtag | 2     |
| randtag | 3     |

#### Run a `SHOW TAG VALUES` query with several clauses

```sql
SHOW TAG VALUES ON noaa WITH KEY IN ("location","randtag") WHERE "randtag" =~ /./ LIMIT 3
```

**Output:**

The query returns the tag values of the tag keys `location` and `randtag` for
all measurements in the `noaa` database where `randtag` has tag values.
The `LIMIT` clause limits the number of tag values returned to three.

{{% influxql/table-meta %}}
name: h2o_quality
{{% /influxql/table-meta %}}

| key      | value        |
| :------- | -----------: |
| location | coyote_creek |
| location | santa_monica |
| randtag  | 1            |

## SHOW FIELD KEYS

Returns the [field keys](/influxdb/v2.6/reference/glossary/#field-key) and the
[data type](/influxdb/v2.6/reference/glossary/#data-type) of their
[field values](/influxdb/v2.6/reference/glossary/#field-value).

### Syntax

```sql
SHOW FIELD KEYS [ON <database_name>] [FROM <measurement_name>]
```

- `ON <database_name>` is optional.
  If the query does not include `ON <database_name>`, you must specify the
  database with `USE <database_name>` when using the [InfluxQL shell](/influxdb/v2.6/tools/influxql-shell/)
  or with the `db` query string parameter in the
  [InfluxDB 1.x compatibility API](/influxdb/v2.6/reference/api/influxdb-1x/) request.
- The `FROM` clause is optional.
  See the Data Exploration page for documentation on the
  [`FROM` clause](/influxdb/v2.6/query-data/influxql/explore-data/select/#from-clause).

{{% note %}}
**Note:** A field's data type [can differ](/influxdb/v2.6/reference/faq/#how-does-influxdb-handle-field-type-discrepancies-across-shards) across
[shards](/influxdb/v2.6/reference/glossary/#shard).
If your field has more than one type, `SHOW FIELD KEYS` returns the type that
occurs first in the following list: float, integer, string, boolean.
{{% /note %}}

### Examples

#### Run SHOW FIELD KEYS with the ON clause

```sql
SHOW FIELD KEYS ON noaa
```

**Output:**

The query returns the field keys and field value data types for each
measurement in the `noaa` database.

| name                | fieldKey          | fieldType |
| :------------------ | :---------------- | :-------- |
| average_temperature | degrees           | float     |
| h2o_feet            | level description | string    |
| h2o_feet            | water_level       | float     |
| h2o_pH              | pH                | float     |
| h2o_quality         | index             | float     |
| h2o_temperature     | degrees           | float     |

#### Run SHOW FIELD KEYS with the FROM clause

```sql
SHOW FIELD KEYS ON noaa FROM h2o_feet
```

**Output:**

The query returns the field keys and field value data types for the `h2o_feet`
measurement in the `noaa` database.

| name     | fieldKey          | fieldType |
| :------- | :---------------- | :-------- |
| h2o_feet | level description | string    |
| h2o_feet | water_level       | float     |

### Common Issues with SHOW FIELD KEYS

#### SHOW FIELD KEYS and field type discrepancies

Field value [data types](/influxdb/v2.6/reference/glossary/#data-type)
cannot differ within a [shard](/influxdb/v2.6/reference/glossary/#shard) but they
can differ across shards.
`SHOW FIELD KEYS` returns every data type, across every shard, associated with
the field key.

##### Example

The `all_the_types` field stores four different data types:

```sql
SHOW FIELD KEYS
```

{{% influxql/table-meta %}}
name: mymeas
{{% /influxql/table-meta %}}

| fieldKey      | fieldType |
| :------------ | :-------- |
| all_the_types | integer   |
| all_the_types | float     |
| all_the_types | string    |
| all_the_types | boolean   |

Note that `SHOW FIELD KEYS` handles field type discrepancies differently from
`SELECT` statements.
For more information, see
[How does InfluxDB handle field type discrepancies across shards?](/enterprise_influxdb/v1.9/troubleshooting/frequently-asked-questions/#how-does-influxdb-handle-field-type-discrepancies-across-shards).

## SHOW FIELD KEY CARDINALITY

Cardinality is the product of all unique databases, retention policies, measurements, field keys, and tag values in your InfluxDB instance. Managing cardinality is important, as high cardinality leads to greater resource usage.

```sql
-- show estimated cardinality of the field key set of current database
SHOW FIELD KEY CARDINALITY
-- show exact cardinality on field key set of specified database
SHOW FIELD KEY EXACT CARDINALITY ON noaa
```

## SHOW TAG KEY CARDINALITY

```sql
-- show estimated tag key cardinality
SHOW TAG KEY CARDINALITY
-- show exact tag key cardinality
SHOW TAG KEY EXACT CARDINALITY
```

diff --git a/content/influxdb/v2.6/query-data/influxql/functions/_index.md b/content/influxdb/v2.6/query-data/influxql/functions/_index.md
new file mode 100644
index 000000000..37a240d42
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/influxql/functions/_index.md
---
title: View InfluxQL functions
description: >
  Aggregate, select, transform, and predict data with InfluxQL functions.
menu:
  influxdb_2_6:
    name: InfluxQL functions
    parent: Query with InfluxQL
weight: 203
---

Use InfluxQL functions to aggregate, select, transform, analyze, and predict data.

{{% note %}}
To query with InfluxQL, the bucket you query must be mapped to a database and retention policy (DBRP). For more information, see how to [Query data with InfluxQL](/influxdb/v2.6/query-data/influxql/).
{{% /note %}}

## InfluxQL functions (by type)

- [Aggregates](/influxdb/v2.6/query-data/influxql/functions/aggregates/)
  - [COUNT()](/influxdb/v2.6/query-data/influxql/functions/aggregates/#count)
  - [DISTINCT()](/influxdb/v2.6/query-data/influxql/functions/aggregates/#distinct)
  - [INTEGRAL()](/influxdb/v2.6/query-data/influxql/functions/aggregates/#integral)
  - [MEAN()](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean)
  - [MEDIAN()](/influxdb/v2.6/query-data/influxql/functions/aggregates/#median)
  - [MODE()](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mode)
  - [SPREAD()](/influxdb/v2.6/query-data/influxql/functions/aggregates/#spread)
  - [STDDEV()](/influxdb/v2.6/query-data/influxql/functions/aggregates/#stddev)
  - [SUM()](/influxdb/v2.6/query-data/influxql/functions/aggregates/#sum)
- [Selectors](/influxdb/v2.6/query-data/influxql/functions/selectors/)
  - [BOTTOM()](/influxdb/v2.6/query-data/influxql/functions/selectors/#bottom)
  - [FIRST()](/influxdb/v2.6/query-data/influxql/functions/selectors/#first)
  - [LAST()](/influxdb/v2.6/query-data/influxql/functions/selectors/#last)
  - [MAX()](/influxdb/v2.6/query-data/influxql/functions/selectors/#max)
  - [MIN()](/influxdb/v2.6/query-data/influxql/functions/selectors/#min)
  - [PERCENTILE()](/influxdb/v2.6/query-data/influxql/functions/selectors/#percentile)
  - [SAMPLE()](/influxdb/v2.6/query-data/influxql/functions/selectors/#sample)
  - [TOP()](/influxdb/v2.6/query-data/influxql/functions/selectors/#top)
- [Transformations](/influxdb/v2.6/query-data/influxql/functions/transformations/)
  - [ABS()](/influxdb/v2.6/query-data/influxql/functions/transformations/#abs)
  - [ACOS()](/influxdb/v2.6/query-data/influxql/functions/transformations/#acos)
  - [ASIN()](/influxdb/v2.6/query-data/influxql/functions/transformations/#asin)
  - [ATAN()](/influxdb/v2.6/query-data/influxql/functions/transformations/#atan)
  - [ATAN2()](/influxdb/v2.6/query-data/influxql/functions/transformations/#atan2)
  - [CEIL()](/influxdb/v2.6/query-data/influxql/functions/transformations/#ceil)
  - [COS()](/influxdb/v2.6/query-data/influxql/functions/transformations/#cos)
  - [CUMULATIVE_SUM()](/influxdb/v2.6/query-data/influxql/functions/transformations/#cumulative_sum)
  - [DERIVATIVE()](/influxdb/v2.6/query-data/influxql/functions/transformations/#derivative)
  - [DIFFERENCE()](/influxdb/v2.6/query-data/influxql/functions/transformations/#difference)
  - [ELAPSED()](/influxdb/v2.6/query-data/influxql/functions/transformations/#elapsed)
  - [EXP()](/influxdb/v2.6/query-data/influxql/functions/transformations/#exp)
  - [FLOOR()](/influxdb/v2.6/query-data/influxql/functions/transformations/#floor)
  - [HISTOGRAM()](/influxdb/v2.6/query-data/influxql/functions/transformations/#histogram)
  - [LN()](/influxdb/v2.6/query-data/influxql/functions/transformations/#ln)
  - [LOG()](/influxdb/v2.6/query-data/influxql/functions/transformations/#log)
  - [LOG2()](/influxdb/v2.6/query-data/influxql/functions/transformations/#log2)
  - [LOG10()](/influxdb/v2.6/query-data/influxql/functions/transformations/#log10)
  - [MOVING_AVERAGE()](/influxdb/v2.6/query-data/influxql/functions/transformations/#moving_average)
  - [NON_NEGATIVE_DERIVATIVE()](/influxdb/v2.6/query-data/influxql/functions/transformations/#non_negative_derivative)
  - [NON_NEGATIVE_DIFFERENCE()](/influxdb/v2.6/query-data/influxql/functions/transformations/#non_negative_difference)
  - [POW()](/influxdb/v2.6/query-data/influxql/functions/transformations/#pow)
  - [ROUND()](/influxdb/v2.6/query-data/influxql/functions/transformations/#round)
  - [SIN()](/influxdb/v2.6/query-data/influxql/functions/transformations/#sin)
  - [SQRT()](/influxdb/v2.6/query-data/influxql/functions/transformations/#sqrt)
  - [TAN()](/influxdb/v2.6/query-data/influxql/functions/transformations/#tan)
- [Technical analysis](/influxdb/v2.6/query-data/influxql/functions/technical-analysis/)
  - (Predictive analysis) [HOLT_WINTERS()](/influxdb/v2.6/query-data/influxql/functions/technical-analysis/#holt_winters)
  - [CHANDE_MOMENTUM_OSCILLATOR()](/influxdb/v2.6/query-data/influxql/functions/technical-analysis/#chande_momentum_oscillator)
  - [EXPONENTIAL_MOVING_AVERAGE()](/influxdb/v2.6/query-data/influxql/functions/technical-analysis/#exponential_moving_average)
  - [DOUBLE_EXPONENTIAL_MOVING_AVERAGE()](/influxdb/v2.6/query-data/influxql/functions/technical-analysis/#double_exponential_moving_average)
  - [KAUFMANS_EFFICIENCY_RATIO()](/influxdb/v2.6/query-data/influxql/functions/technical-analysis/#kaufmans_efficiency_ratio)
  - [KAUFMANS_ADAPTIVE_MOVING_AVERAGE()](/influxdb/v2.6/query-data/influxql/functions/technical-analysis/#kaufmans_adaptive_moving_average)
  - [TRIPLE_EXPONENTIAL_MOVING_AVERAGE()](/influxdb/v2.6/query-data/influxql/functions/technical-analysis/#triple_exponential_moving_average)
  - [TRIPLE_EXPONENTIAL_DERIVATIVE()](/influxdb/v2.6/query-data/influxql/functions/technical-analysis/#triple_exponential_derivative)
  - [RELATIVE_STRENGTH_INDEX()](/influxdb/v2.6/query-data/influxql/functions/technical-analysis/#relative_strength_index)

diff --git a/content/influxdb/v2.6/query-data/influxql/functions/aggregates.md b/content/influxdb/v2.6/query-data/influxql/functions/aggregates.md
new file mode 100644
index 000000000..e66457f77
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/influxql/functions/aggregates.md
---
title: InfluxQL aggregate functions
list_title: Aggregate functions
description: >
  Aggregate data with InfluxQL aggregate functions.
menu:
  influxdb_2_6:
    name: Aggregates
    parent: InfluxQL functions
weight: 205
---

Use aggregate functions to assess, aggregate, and return values in your data.
Aggregate functions return one row containing the aggregate values from each InfluxQL group.
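The one-row-per-group behavior can be sketched outside InfluxDB. The following is an illustrative Python sketch (with made-up values, not the NOAA sample data) of how an aggregate such as `MEAN("water_level") ... GROUP BY "location"` collapses each tag-value group into a single aggregated row:

```python
# Illustrative sketch: group rows by a tag value, then emit one
# aggregated value per group, mirroring MEAN(...) GROUP BY "location".
from collections import defaultdict

rows = [  # (location tag, water_level field) -- made-up values
    ("coyote_creek", 8.12),
    ("coyote_creek", 8.005),
    ("santa_monica", 2.064),
    ("santa_monica", 2.116),
]

groups = defaultdict(list)
for location, water_level in rows:
    groups[location].append(water_level)

# One aggregated row per group
means = {location: sum(values) / len(values) for location, values in groups.items()}
print(means)
```

However many input rows a group contains, the result holds exactly one value per group.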

Each aggregate function below covers **syntax**, including parameters to pass to the function, and **examples** of how to use the function. Examples use [NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data).

- [COUNT()](#count)
- [DISTINCT()](#distinct)
- [INTEGRAL()](#integral)
- [MEAN()](#mean)
- [MEDIAN()](#median)
- [MODE()](#mode)
- [SPREAD()](#spread)
- [STDDEV()](#stddev)
- [SUM()](#sum)

## COUNT()

Returns the number of non-null [field values](/influxdb/v2.6/reference/glossary/#field-value). Supports all field value [data types](/influxdb/v2.6/reference/glossary/#data-type).

### Syntax

```sql
SELECT COUNT( [ * | <field_key> | /<regular_expression>/ ] ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
```

`COUNT(*)`

Returns the number of field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement).

`COUNT(field_key)`

Returns the number of field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key).

`COUNT(/regular_expression/)`

Returns the number of field values associated with each field key that matches the [regular expression](/influxdb/v2.6/query-data/influxql/explore-data/#regular-expressions).

#### Examples

{{< expand-wrapper >}}
{{% expand "Count values for a field" %}}

Return the number of non-null field values in the `water_level` field key in the `h2o_feet` measurement.

```sql
SELECT COUNT("water_level") FROM "h2o_feet"
```

{{% influxql/table-meta %}}
name: h2o_feet
{{% /influxql/table-meta %}}

| time                 | count            |
| :------------------- | ---------------: |
| 1970-01-01T00:00:00Z | 61026.0000000000 |

{{% /expand %}}

{{% expand "Count values for each field in a measurement" %}}

Return the number of non-null field values for each field key associated with the `h2o_feet` measurement.
The `h2o_feet` measurement has two field keys: `level description` and `water_level`.

```sql
SELECT COUNT(*) FROM "h2o_feet"
```

{{% influxql/table-meta %}}
name: h2o_feet
{{% /influxql/table-meta %}}

| time                 | count_level description | count_water_level |
| :------------------- | ----------------------: | ----------------: |
| 1970-01-01T00:00:00Z | 61026.0000000000        | 61026.0000000000  |

{{% /expand %}}

{{% expand "Count the values that match a regular expression" %}}

Return the number of non-null field values for every field key that contains the
word `water` in the `h2o_feet` measurement.

```sql
SELECT COUNT(/water/) FROM "h2o_feet"
```

{{% influxql/table-meta %}}
name: h2o_feet
{{% /influxql/table-meta %}}

| time                 | count_water_level |
| :------------------- | ----------------: |
| 1970-01-01T00:00:00Z | 61026.0000000000  |

{{% /expand %}}

{{% expand "Count distinct values for a field" %}}

Return the number of unique field values for the `level description` field key
in the `h2o_feet` measurement.
InfluxQL supports nesting [DISTINCT()](#distinct) in `COUNT()`.

```sql
SELECT COUNT(DISTINCT("level description")) FROM "h2o_feet"
```

{{% influxql/table-meta %}}
name: h2o_feet
{{% /influxql/table-meta %}}

| time                 | count        |
| :------------------- | -----------: |
| 1970-01-01T00:00:00Z | 4.0000000000 |

{{% /expand %}}

{{< /expand-wrapper >}}

## DISTINCT()

Returns the list of unique [field values](/influxdb/v2.6/reference/glossary/#field-value).
Supports all field value [data types](/influxdb/v2.6/reference/glossary/#data-type).

InfluxQL supports nesting `DISTINCT()` with [`COUNT()`](#count).

### Syntax

```sql
SELECT DISTINCT( [ <field_key> | /<regular_expression>/ ] ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
```

`DISTINCT(field_key)`

Returns the unique field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key).

#### Examples

{{< expand-wrapper >}}
{{% expand "List the distinct field values associated with a field key" %}}

Return a tabular list of the unique field values in the `level description`
field key in the `h2o_feet` measurement.

```sql
SELECT DISTINCT("level description") FROM "h2o_feet"
```

{{% influxql/table-meta %}}
name: h2o_feet
{{% /influxql/table-meta %}}

| time                 | distinct                  |
| :------------------- | :------------------------ |
| 1970-01-01T00:00:00Z | between 6 and 9 feet      |
| 1970-01-01T00:00:00Z | below 3 feet              |
| 1970-01-01T00:00:00Z | between 3 and 6 feet      |
| 1970-01-01T00:00:00Z | at or greater than 9 feet |

{{% /expand %}}

{{% expand "List the distinct field values associated with each field key in a measurement" %}}

Return a tabular list of the unique field values for each field key in the `h2o_feet` measurement.
The `h2o_feet` measurement has two field keys: `level description` and `water_level`.

```sql
SELECT DISTINCT(*) FROM "h2o_feet"
```

{{% influxql/table-meta %}}
name: h2o_feet
{{% /influxql/table-meta %}}

| time                 | distinct_level description | distinct_water_level |
| :------------------- | :------------------------- | -------------------: |
| 1970-01-01T00:00:00Z | between 6 and 9 feet       | 8.12                 |
| 1970-01-01T00:00:00Z | between 3 and 6 feet       | 8.005                |
| 1970-01-01T00:00:00Z | at or greater than 9 feet  | 7.887                |
| 1970-01-01T00:00:00Z | below 3 feet               | 7.762                |

{{% /expand %}}

{{< /expand-wrapper >}}

## INTEGRAL()

Returns the area under the curve for subsequent [field values](/influxdb/v2.6/reference/glossary/#field-value).

{{% note %}}
`INTEGRAL()` does not support [`fill()`](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals-and-fill). `INTEGRAL()` supports int64 and float64 field value [data types](/influxdb/v2.6/reference/glossary/#data-type).
{{% /note %}}

### Syntax

```sql
SELECT INTEGRAL( [ * | <field_key> | /<regular_expression>/ ] [ , <unit> ] ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
```

InfluxDB calculates the area under the curve for subsequent field values and converts those results into the summed area per `unit`.
The `unit` argument is an integer followed by an optional [duration literal](/influxdb/v2.6/reference/syntax/spec/#literals).
If the query does not specify the `unit`, the unit defaults to one second (`1s`).

`INTEGRAL(field_key)`

Returns the area under the curve for subsequent field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key).

`INTEGRAL(/regular_expression/)`

Returns the area under the curve for subsequent field values associated with each field key that matches the [regular expression](/influxdb/v2.6/query-data/influxql/explore-data/#regular-expressions).

`INTEGRAL(*)`

Returns the area under the curve for subsequent field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement).
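The calculation can be sketched numerically. Assuming trapezoidal integration between consecutive points (an inference from the documented results, not a statement about the engine's internals), the six `water_level` sample points used in the examples reproduce the documented outputs:

```python
# Sketch of INTEGRAL() as trapezoidal area between consecutive points,
# normalized per unit (1 second by default, 60 seconds for 1m).
points = [  # (seconds since first point, water_level) from the NOAA subset
    (0, 2.352), (360, 2.379), (720, 2.343),
    (1080, 2.329), (1440, 2.264), (1800, 2.267),
]

def integral(points, unit_seconds=1):
    area = sum((v1 + v2) / 2 * (t2 - t1)
               for (t1, v1), (t2, v2) in zip(points, points[1:]))
    return area / unit_seconds

print(round(integral(points), 2))      # 4184.82, as in INTEGRAL("water_level")
print(round(integral(points, 60), 3))  # 69.747, as in INTEGRAL("water_level", 1m)
```

Dividing the per-second area by 60 yields the per-minute result, which is why the `1s` and `1m` examples differ by a factor of 60.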

#### Examples

The following examples use a subset of the [NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data):

```sql
SELECT "water_level" FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z'
```

{{% influxql/table-meta %}}
name: h2o_feet
{{% /influxql/table-meta %}}

| time                 | water_level  |
| :------------------- | -----------: |
| 2019-08-18T00:00:00Z | 2.3520000000 |
| 2019-08-18T00:06:00Z | 2.3790000000 |
| 2019-08-18T00:12:00Z | 2.3430000000 |
| 2019-08-18T00:18:00Z | 2.3290000000 |
| 2019-08-18T00:24:00Z | 2.2640000000 |
| 2019-08-18T00:30:00Z | 2.2670000000 |

{{< expand-wrapper >}}
{{% expand "Calculate the integral for the field values associated with a field key" %}}

Return the area under the curve (in seconds) for the field values associated
with the `water_level` field key in the `h2o_feet` measurement.

```sql
SELECT INTEGRAL("water_level") FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z'
```

{{% influxql/table-meta %}}
name: h2o_feet
{{% /influxql/table-meta %}}

| time                 | integral        |
| :------------------- | --------------: |
| 1970-01-01T00:00:00Z | 4184.8200000000 |

{{% /expand %}}

{{% expand "Calculate the integral for the field values associated with a field key and specify the unit option" %}}

Return the area under the curve (in minutes) for the field values associated
with the `water_level` field key in the `h2o_feet` measurement.

```sql
SELECT INTEGRAL("water_level",1m) FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z'
```

{{% influxql/table-meta %}}
name: h2o_feet
{{% /influxql/table-meta %}}

| time                 | integral      |
| :------------------- | ------------: |
| 1970-01-01T00:00:00Z | 69.7470000000 |

{{% /expand %}}

{{% expand "Calculate the integral for the field values associated with each field key in a measurement and specify the unit option" %}}

Return the area under the curve (in minutes) for the field values associated
with each field key that stores numeric values in the `h2o_feet` measurement.
The `h2o_feet` measurement has one numeric field: `water_level`.

```sql
SELECT INTEGRAL(*,1m) FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z'
```

{{% influxql/table-meta %}}
name: h2o_feet
{{% /influxql/table-meta %}}

| time                 | integral_water_level |
| :------------------- | -------------------: |
| 1970-01-01T00:00:00Z | 69.7470000000        |

{{% /expand %}}

{{% expand "Calculate the integral for the field values associated with each field key that matches a regular expression and specify the unit option" %}}

Return the area under the curve (in minutes) for the field values associated
with each field key that stores numeric values and includes the word `water` in
the `h2o_feet` measurement.

```sql
SELECT INTEGRAL(/water/,1m) FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z'
```

{{% influxql/table-meta %}}
name: h2o_feet
{{% /influxql/table-meta %}}

| time                 | integral_water_level |
| :------------------- | -------------------: |
| 1970-01-01T00:00:00Z | 69.7470000000        |

{{% /expand %}}

{{% expand "Calculate the integral for the field values associated with a field key and include several clauses" %}}

Return the area under the curve (in minutes) for the field values associated
with the `water_level` field key in the `h2o_feet` measurement in the
[time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax) between
`2019-08-18T00:00:00Z` and `2019-08-18T00:30:00Z`, [group](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) results into 12-minute intervals, and
[limit](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/)
the number of results returned to one.

```sql
SELECT INTEGRAL("water_level",1m) FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' GROUP BY time(12m) LIMIT 1
```

{{% influxql/table-meta %}}
name: h2o_feet
{{% /influxql/table-meta %}}

| time                 | integral      |
| :------------------- | ------------: |
| 2019-08-18T00:00:00Z | 28.3590000000 |

{{% /expand %}}
{{< /expand-wrapper >}}

## MEAN()

Returns the arithmetic mean (average) of [field values](/influxdb/v2.6/reference/glossary/#field-value). `MEAN()` supports int64 and float64 field value [data types](/influxdb/v2.6/reference/glossary/#data-type).

### Syntax

```sql
SELECT MEAN( [ * | <field_key> | /<regular_expression>/ ] ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
```

`MEAN(field_key)`

Returns the average field value associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key).

`MEAN(/regular_expression/)`

Returns the average field value associated with each field key that matches the [regular expression](/influxdb/v2.6/query-data/influxql/explore-data/#regular-expressions).

`MEAN(*)`

Returns the average field value associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement).

#### Examples

{{< expand-wrapper >}}
{{% expand "Calculate the mean field value associated with a field key" %}}

Return the average field value in the `water_level` field key in the `h2o_feet` measurement.

```sql
SELECT MEAN("water_level") FROM "h2o_feet"
```

{{% influxql/table-meta %}}
name: h2o_feet
{{% /influxql/table-meta %}}

| time                 | mean         |
| :------------------- | -----------: |
| 1970-01-01T00:00:00Z | 4.4418674882 |

{{% /expand %}}

{{% expand "Calculate the mean field value associated with each field key in a measurement" %}}

Return the average field value for every field key that stores numeric values
in the `h2o_feet` measurement.
The `h2o_feet` measurement has one numeric field: `water_level`.

```sql
SELECT MEAN(*) FROM "h2o_feet"
```

{{% influxql/table-meta %}}
name: h2o_feet
{{% /influxql/table-meta %}}

| time                 | mean_water_level |
| :------------------- | ---------------: |
| 1970-01-01T00:00:00Z | 4.4418674882     |

{{% /expand %}}

{{% expand "Calculate the mean field value associated with each field key that matches a regular expression" %}}

Return the average field value for each field key that stores numeric values and
includes the word `water` in the `h2o_feet` measurement.

```sql
SELECT MEAN(/water/) FROM "h2o_feet"
```

{{% influxql/table-meta %}}
name: h2o_feet
{{% /influxql/table-meta %}}

| time                 | mean_water_level |
| :------------------- | ---------------: |
| 1970-01-01T00:00:00Z | 4.4418674882     |

{{% /expand %}}

{{% expand "Calculate the mean field value associated with a field key and include several clauses" %}}

Return the average of the values in the `water_level` field key in the
[time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax)
between `2019-08-18T00:00:00Z` and `2019-08-18T00:30:00Z` and
[group](/influxdb/v2.6/query-data/influxql/explore-data/group-by/)
results into 12-minute time intervals and per tag.
Then [fill](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals-and-fill)
empty time intervals with `9.01` and
[limit](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/)
the number of points and series returned to seven and one.

```sql
SELECT MEAN("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' GROUP BY time(12m),* fill(9.01) LIMIT 7 SLIMIT 1
```

{{% influxql/table-meta %}}
name: h2o_feet
tags: location=coyote_creek
{{% /influxql/table-meta %}}

| time                 | mean         |
| :------------------- | -----------: |
| 2019-08-18T00:00:00Z | 8.4615000000 |
| 2019-08-18T00:12:00Z | 8.2725000000 |
| 2019-08-18T00:24:00Z | 8.0710000000 |

{{% /expand %}}
{{< /expand-wrapper >}}

## MEDIAN()

Returns the middle value from a sorted list of [field values](/influxdb/v2.6/reference/glossary/#field-value). `MEDIAN()` supports int64 and float64 field value [data types](/influxdb/v2.6/reference/glossary/#data-type).
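The "middle value" rule can be sketched in a few lines of Python (illustrative values, not from the sample data): sort the values, take the middle one, and with an even count average the two middle values.

```python
# Sketch of MEDIAN() semantics on a plain list of numbers.
def influxql_median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    if n % 2 == 1:
        return s[mid]          # odd count: the single middle value
    return (s[mid - 1] + s[mid]) / 2  # even count: average of the two middle values

print(influxql_median([2.0, 4.0, 1.0]))       # 2.0
print(influxql_median([2.0, 4.0, 1.0, 3.0]))  # 2.5
```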
+
+{{% note %}}
+**Note:** `MEDIAN()` is nearly equivalent to [`PERCENTILE(field_key, 50)`](/influxdb/v2.6/query-data/influxql/functions/selectors/#percentile), except `MEDIAN()` returns the average of the two middle field values if the field contains an even number of values.
+{{% /note %}}
+
+### Syntax
+
+```sql
+SELECT MEDIAN( [ * | <field_key> | /<regular_expression>/ ] ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+`MEDIAN(field_key)`
+
+Returns the middle field value associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key).
+
+`MEDIAN(/regular_expression/)`
+
+Returns the middle field value associated with each field key that matches the [regular expression](/influxdb/v2.6/query-data/influxql/explore-data/#regular-expressions).
+
+`MEDIAN(*)`
+
+Returns the middle field value associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement).
+
+#### Examples
+
+{{< expand-wrapper >}}
+{{% expand "Calculate the median field value associated with a field key" %}}
+
+Return the middle field value in the `water_level` field key and in the `h2o_feet` measurement.
+
+```sql
+SELECT MEDIAN("water_level") FROM "h2o_feet"
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | median |
+| :------------------- | -----------: |
+| 1970-01-01T00:00:00Z | 4.1240000000 |
+
+{{% /expand %}}
+
+{{% expand "Calculate the median field value associated with each field key in a measurement" %}}
+
+Return the middle field value for every field key that stores numeric values in the `h2o_feet` measurement.
+The `h2o_feet` measurement has one numeric field: `water_level`.
+
+```sql
+SELECT MEDIAN(*) FROM "h2o_feet"
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | median_water_level |
+| :------------------- | -----------------: |
+| 1970-01-01T00:00:00Z | 4.1240000000 |
+
+{{% /expand %}}
+
+{{% expand "Calculate the median field value associated with each field key that matches a regular expression" %}}
+
+Return the middle field value for every field key that stores numeric values and
+includes the word `water` in the `h2o_feet` measurement.
+
+```sql
+SELECT MEDIAN(/water/) FROM "h2o_feet"
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | median_water_level |
+| :------------------- | -----------------: |
+| 1970-01-01T00:00:00Z | 4.1240000000 |
+
+{{% /expand %}}
+
+{{% expand "Calculate the median field value associated with a field key and include several clauses" %}}
+
+Return the middle field value in the `water_level` field key in the
+[time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax)
+between `2019-08-18T00:00:00Z` and `2019-08-18T00:30:00Z` and
+[group](/influxdb/v2.6/query-data/influxql/explore-data/group-by/)
+results into 12-minute time intervals and per tag.
+Then [fill](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals-and-fill)
+empty time intervals with `700`, [limit](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/)
+the number of points and series returned to seven and one, and [offset](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/) the series returned by one.
+
+```sql
+SELECT MEDIAN("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' GROUP BY time(12m),* fill(700) LIMIT 7 SLIMIT 1 SOFFSET 1
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+tags: location=santa_monica
+{{% /influxql/table-meta %}}
+
+| time | median |
+| :------------------- | -----------: |
+| 2019-08-18T00:00:00Z | 2.3655000000 |
+| 2019-08-18T00:12:00Z | 2.3360000000 |
+| 2019-08-18T00:24:00Z | 2.2655000000 |
+
+{{% /expand %}}
+{{< /expand-wrapper >}}
+
+## MODE()
+
+Returns the most frequent value in a list of [field values](/influxdb/v2.6/reference/glossary/#field-value). `MODE()` supports all field value [data types](/influxdb/v2.6/reference/glossary/#data-type).
+
+{{% note %}}
+**Note:** `MODE()` returns the field value with the earliest [timestamp](/influxdb/v2.6/reference/glossary/#timestamp) if there's a tie between two or more values for the maximum number of occurrences.
+{{% /note %}}
+
+### Syntax
+
+```sql
+SELECT MODE( [ * | <field_key> | /<regular_expression>/ ] ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+`MODE(field_key)`
+
+Returns the most frequent field value associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key).
+
+`MODE(/regular_expression/)`
+
+Returns the most frequent field value associated with each field key that matches the [regular expression](/influxdb/v2.6/query-data/influxql/explore-data/#regular-expressions).
+
+`MODE(*)`
+
+Returns the most frequent field value associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement).
+
+#### Examples
+
+{{< expand-wrapper >}}
+
+{{% expand "Calculate the mode field value associated with a field key" %}}
+
+Return the most frequent field value in the `level description` field key and in
+the `h2o_feet` measurement.
+
+```sql
+SELECT MODE("level description") FROM "h2o_feet"
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | mode |
+| :------------------- | :------------------- |
+| 1970-01-01T00:00:00Z | between 3 and 6 feet |
+
+{{% /expand %}}
+
+{{% expand "Calculate the mode field value associated with each field key in a measurement" %}}
+
+Return the most frequent field value for every field key in the `h2o_feet` measurement.
+The `h2o_feet` measurement has two field keys: `level description` and `water_level`.
+
+```sql
+SELECT MODE(*) FROM "h2o_feet"
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | mode_level description | mode_water_level |
+| :------------------- | :--------------------- | ---------------: |
+| 1970-01-01T00:00:00Z | between 3 and 6 feet | 2.6900000000 |
+
+{{% /expand %}}
+
+{{% expand "Calculate the mode field value associated with each field key that matches a regular expression" %}}
+
+Return the most frequent field value for every field key that includes the word
+`water` in the `h2o_feet` measurement.
+
+```sql
+SELECT MODE(/water/) FROM "h2o_feet"
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | mode_water_level |
+| :------------------- | ---------------: |
+| 1970-01-01T00:00:00Z | 2.6900000000 |
+
+{{% /expand %}}
+
+{{% expand "Calculate the mode field value associated with a field key and include several clauses" %}}
+
+Return the mode of the values associated with the `level description` field key in the
+[time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax)
+between `2019-08-18T00:00:00Z` and `2019-08-18T00:30:00Z` and
+[group](/influxdb/v2.6/query-data/influxql/explore-data/group-by/)
+results into 12-minute time intervals and per tag.
+Then [limit](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/)
+the number of points and series returned to three and one, and
+[offset](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/#the-offset-and-soffset-clauses)
+the series returned by one.
+
+```sql
+SELECT MODE("level description") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' GROUP BY time(12m),* LIMIT 3 SLIMIT 1 SOFFSET 1
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+tags: location=santa_monica
+{{% /influxql/table-meta %}}
+
+| time | mode |
+| :------------------- | :----------- |
+| 2019-08-18T00:00:00Z | below 3 feet |
+| 2019-08-18T00:12:00Z | below 3 feet |
+| 2019-08-18T00:24:00Z | below 3 feet |
+
+{{% /expand %}}
+{{< /expand-wrapper >}}
+
+## SPREAD()
+
+Returns the difference between the minimum and maximum [field values](/influxdb/v2.6/reference/glossary/#field-value). `SPREAD()` supports int64 and float64 field value [data types](/influxdb/v2.6/reference/glossary/#data-type).
+
+### Syntax
+
+```sql
+SELECT SPREAD( [ * | <field_key> | /<regular_expression>/ ] ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+`SPREAD(field_key)`
+
+Returns the difference between the minimum and maximum field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key).
+
+`SPREAD(/regular_expression/)`
+
+Returns the difference between the minimum and maximum field values associated with each field key that matches the [regular expression](/influxdb/v2.6/query-data/influxql/explore-data/#regular-expressions).
+
+`SPREAD(*)`
+
+Returns the difference between the minimum and maximum field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement).
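Whichever variant you use, the calculation behind `SPREAD()` is simply the maximum field value minus the minimum. A minimal sketch in plain Python (not InfluxDB code), using hypothetical `water_level` samples rather than actual NOAA data:

```python
def spread(values):
    # SPREAD()-style result: largest value minus smallest value
    return max(values) - min(values)

# Hypothetical water_level samples (not actual NOAA data)
water_level = [2.064, 2.116, 8.005, -0.610, 9.964]
print(round(spread(water_level), 3))  # 9.964 - (-0.610) = 10.574
```

Note that a negative minimum increases the spread, which is why the full-dataset examples below report a spread larger than the maximum `water_level` value alone.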
+ +#### Examples + +{{< expand-wrapper >}} + +{{% expand "Calculate the spread for the field values associated with a field key" %}} + +Return the difference between the minimum and maximum field values in the +`water_level` field key and in the `h2o_feet` measurement. + +```sql +SELECT SPREAD("water_level") FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | spread | +| :------------------- | ------------: | +| 1970-01-01T00:00:00Z | 10.5740000000 | + +{{% /expand %}} + +{{% expand "Calculate the spread for the field values associated with each field key in a measurement" %}} + +Return the difference between the minimum and maximum field values for every +field key that stores numeric values in the `h2o_feet` measurement. +The `h2o_feet` measurement has one numeric field: `water_level`. + +```sql +SELECT SPREAD(*) FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | spread_water_level | +| :------------------- | -----------------: | +| 1970-01-01T00:00:00Z | 10.5740000000 | + +{{% /expand %}} + +{{% expand "Calculate the spread for the field values associated with each field key that matches a regular expression" %}} + +Return the difference between the minimum and maximum field values for every +field key that stores numeric values and includes the word `water` in the `h2o_feet` measurement. 
+
+```sql
+SELECT SPREAD(/water/) FROM "h2o_feet"
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | spread_water_level |
+| :------------------- | -----------------: |
+| 1970-01-01T00:00:00Z | 10.5740000000 |
+
+{{% /expand %}}
+
+{{% expand "Calculate the spread for the field values associated with a field key and include several clauses" %}}
+
+Return the difference between the minimum and maximum field values in the `water_level` field key in the
+[time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax)
+between `2019-08-18T00:00:00Z` and `2019-08-18T00:30:00Z` and
+[group](/influxdb/v2.6/query-data/influxql/explore-data/group-by/)
+results into 12-minute time intervals and per tag.
+Then [fill](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals-and-fill)
+empty time intervals with `18`,
+[limit](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/)
+the number of points and series returned to three and one, and
+[offset](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/) the series returned by one.
+
+```sql
+SELECT SPREAD("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' GROUP BY time(12m),* fill(18) LIMIT 3 SLIMIT 1 SOFFSET 1
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+tags: location=santa_monica
+{{% /influxql/table-meta %}}
+
+| time | spread |
+| :------------------- | -----------: |
+| 2019-08-18T00:00:00Z | 0.0270000000 |
+| 2019-08-18T00:12:00Z | 0.0140000000 |
+| 2019-08-18T00:24:00Z | 0.0030000000 |
+
+{{% /expand %}}
+{{< /expand-wrapper >}}
+
+## STDDEV()
+
+Returns the standard deviation of [field values](/influxdb/v2.6/reference/glossary/#field-value). `STDDEV()` supports int64 and float64 field value [data types](/influxdb/v2.6/reference/glossary/#data-type).
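Standard deviation comes in two common forms, and the distinction matters when comparing results against other tools: the sample form divides by `n - 1`, the population form by `n`. InfluxQL's `STDDEV()` output corresponds to the sample form. A plain-Python sketch of the difference, using hypothetical values rather than actual NOAA data:

```python
import statistics

# Hypothetical samples (not actual NOAA data)
values = [1.0, 2.0, 3.0, 4.0]

sample_sd = statistics.stdev(values)       # divides by n - 1
population_sd = statistics.pstdev(values)  # divides by n

# The sample form is always >= the population form for the same data
print(sample_sd, population_sd)
```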
+
+### Syntax
+
+```sql
+SELECT STDDEV( [ * | <field_key> | /<regular_expression>/ ] ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+`STDDEV(field_key)`
+
+Returns the standard deviation of field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key).
+
+`STDDEV(/regular_expression/)`
+
+Returns the standard deviation of field values associated with each field key that matches the [regular expression](/influxdb/v2.6/query-data/influxql/explore-data/#regular-expressions).
+
+`STDDEV(*)`
+
+Returns the standard deviation of field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement).
+
+#### Examples
+
+{{< expand-wrapper >}}
+{{% expand "Calculate the standard deviation for the field values associated with a field key" %}}
+
+Return the standard deviation of the field values in the `water_level` field key
+and in the `h2o_feet` measurement.
+
+```sql
+SELECT STDDEV("water_level") FROM "h2o_feet"
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | stddev |
+| :------------------- | -----------: |
+| 1970-01-01T00:00:00Z | 2.2789744110 |
+
+{{% /expand %}}
+
+{{% expand "Calculate the standard deviation for the field values associated with each field key in a measurement" %}}
+
+Return the standard deviation of numeric fields in the `h2o_feet` measurement.
+The `h2o_feet` measurement has one numeric field: `water_level`.
+
+```sql
+SELECT STDDEV(*) FROM "h2o_feet"
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | stddev_water_level |
+| :------------------- | -----------------: |
+| 1970-01-01T00:00:00Z | 2.2789744110 |
+
+{{% /expand %}}
+
+{{% expand "Calculate the standard deviation for the field values associated with each field key that matches a regular expression" %}}
+
+Return the standard deviation of numeric fields with `water` in the field key in the `h2o_feet` measurement.
+
+```sql
+SELECT STDDEV(/water/) FROM "h2o_feet"
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | stddev_water_level |
+| :------------------- | -----------------: |
+| 1970-01-01T00:00:00Z | 2.2789744110 |
+
+{{% /expand %}}
+
+{{% expand "Calculate the standard deviation for the field values associated with a field key and include several clauses" %}}
+
+Return the standard deviation of the field values in the `water_level` field key in the
+[time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax)
+between `2019-08-18T00:00:00Z` and `2019-08-18T00:30:00Z` and
+[group](/influxdb/v2.6/query-data/influxql/explore-data/group-by/)
+results into 12-minute time intervals and per tag.
+Then [fill](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals-and-fill)
+empty time intervals with `18000`, [limit](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/)
+the number of points and series returned to two and one, and [offset](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/) the series returned by one.
+
+```sql
+SELECT STDDEV("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' GROUP BY time(12m),* fill(18000) LIMIT 2 SLIMIT 1 SOFFSET 1
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+tags: location=santa_monica
+{{% /influxql/table-meta %}}
+
+| time | stddev |
+| :------------------- | -----------: |
+| 2019-08-18T00:00:00Z | 0.0190918831 |
+| 2019-08-18T00:12:00Z | 0.0098994949 |
+
+{{% /expand %}}
+{{< /expand-wrapper >}}
+
+## SUM()
+
+Returns the sum of [field values](/influxdb/v2.6/reference/glossary/#field-value). `SUM()` supports int64 and float64 field value [data types](/influxdb/v2.6/reference/glossary/#data-type).
+
+### Syntax
+
+```sql
+SELECT SUM( [ * | <field_key> | /<regular_expression>/ ] ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+`SUM(field_key)`
+
+Returns the sum of field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key).
+
+`SUM(/regular_expression/)`
+
+Returns the sum of field values associated with each field key that matches the [regular expression](/influxdb/v2.6/query-data/influxql/explore-data/#regular-expressions).
+
+`SUM(*)`
+
+Returns the sums of field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement).
+
+#### Examples
+
+{{< expand-wrapper >}}
+{{% expand "Calculate the sum of the field values associated with a field key" %}}
+
+Return the summed total of the field values in the `water_level` field key and
+in the `h2o_feet` measurement.
+ +```sql +SELECT SUM("water_level") FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | sum | +| :------------------- | ----------------: | +| 1970-01-01T00:00:00Z | 271069.4053333958 | + +{{% /expand %}} + +{{% expand "Calculate the sum of the field values associated with each field key in a measurement" %}} + +Return the summed total of numeric fields in the `h2o_feet` measurement. +The `h2o_feet` measurement has one numeric field: `water_level`. + +```sql +SELECT SUM(*) FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | sum_water_level | +| :------------------- | ----------------: | +| 1970-01-01T00:00:00Z | 271069.4053333958 | + +{{% /expand %}} + +{{% expand "Calculate the sum of the field values associated with each field key that matches a regular expression" %}} + +Return the summed total of numeric fields with `water` in the field key in the `h2o_feet` measurement. + +```sql +SELECT SUM(/water/) FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | sum_water_level | +| :------------------- | ----------------: | +| 1970-01-01T00:00:00Z | 271069.4053333958 | + +{{% /expand %}} + +{{% expand "Calculate the sum of the field values associated with a field key and include several clauses" %}} + +Return the summed total of the field values in the `water_level` field key in the +[time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax) +between `2019-08-18T00:00:00Z` and `2019-08-18T00:30:00Z` and +[group](/influxdb/v2.6/query-data/influxql/explore-data/group-by/) +results into 12-minute time intervals and per tag. 
+Then [fill](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals-and-fill)
+empty time intervals with `18000`, and [limit](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/)
+the number of points and series returned to four and one.
+
+```sql
+SELECT SUM("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' GROUP BY time(12m),* fill(18000) LIMIT 4 SLIMIT 1
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+tags: location=coyote_creek
+{{% /influxql/table-meta %}}
+
+| time | sum |
+| :------------------- | ------------: |
+| 2019-08-18T00:00:00Z | 16.9230000000 |
+| 2019-08-18T00:12:00Z | 16.5450000000 |
+| 2019-08-18T00:24:00Z | 16.1420000000 |
+
+{{% /expand %}}
+{{< /expand-wrapper >}}
diff --git a/content/influxdb/v2.6/query-data/influxql/functions/selectors.md b/content/influxdb/v2.6/query-data/influxql/functions/selectors.md
new file mode 100644
index 000000000..4b868d9fa
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/influxql/functions/selectors.md
@@ -0,0 +1,1301 @@
+---
+title: InfluxQL selector functions
+list_title: Selector functions
+description: >
+  Select data with InfluxQL selector functions.
+menu:
+  influxdb_2_6:
+    name: Selectors
+    parent: InfluxQL functions
+weight: 205
+---
+
+Use selector functions to assess, select, and return values in your data.
+Selector functions return one or more rows with the selected values from each InfluxQL group.
+
+Each selector function below covers **syntax**, including parameters to pass to the function, and **examples** of how to use the function. Examples use [NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data).
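Unlike aggregate functions, which compute a new value per group, selectors broadly return actual points from the underlying data, original timestamps included. A minimal plain-Python sketch of `MIN()`-style selection over hypothetical `(timestamp, value)` pairs (not actual NOAA data):

```python
def select_min(points):
    # Selector-style behavior: return the whole point (timestamp included)
    # holding the smallest value, rather than computing a new value
    return min(points, key=lambda point: point[1])

# Hypothetical points (not actual NOAA data)
points = [
    ("2019-08-18T00:00:00Z", 2.064),
    ("2019-08-18T00:06:00Z", 2.116),
    ("2019-08-18T00:12:00Z", 2.028),
]
print(select_min(points))  # ('2019-08-18T00:12:00Z', 2.028)
```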
+
+- [BOTTOM()](#bottom)
+- [FIRST()](#first)
+- [LAST()](#last)
+- [MAX()](#max)
+- [MIN()](#min)
+- [PERCENTILE()](#percentile)
+- [SAMPLE()](#sample)
+- [TOP()](#top)
+
+## BOTTOM()
+
+Returns the smallest `N` [field values](/influxdb/v2.6/reference/glossary/#field-value). `BOTTOM()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types).
+
+{{% note %}}
+**Note:** `BOTTOM()` returns the field value with the earliest timestamp if there's a tie between two or more values for the smallest value.
+{{% /note %}}
+
+### Syntax
+
+```sql
+SELECT BOTTOM(<field_key>[,<tag_key(s)>],<N>)[,<tag_key(s)>|<field_key(s)>] FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+`BOTTOM(field_key,N)`
+Returns the smallest N field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key).
+
+`BOTTOM(field_key,tag_key(s),N)`
+Returns the smallest field value for N tag values of the [tag key](/influxdb/v2.6/reference/glossary/#tag-key). Add a comma between multiple tag keys: `tag_key,tag_key`.
+
+`BOTTOM(field_key,N),tag_key(s),field_key(s)`
+Returns the smallest N field values associated with the field key in the parentheses and the relevant [tag](/influxdb/v2.6/reference/glossary/#tag) and/or [field](/influxdb/v2.6/reference/glossary/#field). Add a comma between multiple tag or field keys: `tag_key,tag_key,field_key,field_key`.
+
+#### Examples
+
+{{< expand-wrapper >}}
+{{% expand "Select the bottom three field values associated with a field key" %}}
+
+Return the smallest three field values in the `water_level` field key and in the
+`h2o_feet` [measurement](/influxdb/v2.6/reference/glossary/#measurement).
+ +```sql +SELECT BOTTOM("water_level",3) FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | bottom | +| :------------------- | -----: | +| 2019-08-29T14:30:00Z | -0.610 | +| 2019-08-29T14:36:00Z | -0.591 | +| 2019-08-30T15:18:00Z | -0.594 | + +{{% /expand %}} + +{{% expand "Select the bottom field value associated with a field key for two tags" %}} + +Return the smallest field values in the `water_level` field key for two tag +values associated with the `location` tag key. + +```sql +SELECT BOTTOM("water_level","location",2) FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | bottom | location | +| :------------------- | -----: | :----------- | +| 2019-08-29T10:36:00Z | -0.243 | santa_monica | +| 2019-08-29T14:30:00Z | -0.610 | coyote_creek | + +{{% /expand %}} + +{{% expand "Select the bottom four field values associated with a field key and the relevant tags and fields" %}} + +Return the smallest four field values in the `water_level` field key and the +relevant values of the `location` tag key and the `level description` field key. 
+ +```sql +SELECT BOTTOM("water_level",4),"location","level description" FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | bottom | location | level description | +| :------------------- | -----: | :----------- | :---------------- | +| 2019-08-29T14:24:00Z | -0.587 | coyote_creek | below 3 feet | +| 2019-08-29T14:30:00Z | -0.610 | coyote_creek | below 3 feet | +| 2019-08-29T14:36:00Z | -0.591 | coyote_creek | below 3 feet | +| 2019-08-30T15:18:00Z | -0.594 | coyote_creek | below 3 feet | + +{{% /expand %}} + +{{% expand "Select the bottom three field values associated with a field key and include several clauses" %}} + +Return the smallest three values in the `water_level` field key for each 24-minute +[interval](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#basic-group-by-time-syntax) +between `2019-08-18T00:00:00Z` and `2019-08-18T00:54:00Z` with results in +[descending timestamp](/influxdb/v2.6/query-data/influxql/explore-data/order-by/) order. 
+ +```sql +SELECT BOTTOM("water_level",3),"location" FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:54:00Z' GROUP BY time(24m) ORDER BY time DESC +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | bottom | location | +| :------------------- | -----------: | :----------- | +| 2019-08-18T00:54:00Z | 2.172 | santa_monica | +| 2019-08-18T00:54:00Z | 7.510 | coyote_creek | +| 2019-08-18T00:48:00Z | 2.087 | santa_monica | +| 2019-08-18T00:42:00Z | 2.093 | santa_monica | +| 2019-08-18T00:36:00Z | 2.1261441420 | santa_monica | +| 2019-08-18T00:24:00Z | 2.264 | santa_monica | +| 2019-08-18T00:18:00Z | 2.329 | santa_monica | +| 2019-08-18T00:12:00Z | 2.343 | santa_monica | +| 2019-08-18T00:00:00Z | 2.352 | santa_monica | + +Notice that the [GROUP BY time() clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) +does not override the points’ original timestamps. +See [Issue 1](#bottom-with-a-group-by-time-clause) in the section below for a +more detailed explanation of that behavior. + +{{% /expand %}} + +{{< /expand-wrapper >}} + +### Common issues with BOTTOM() + +#### BOTTOM() with a GROUP BY time() clause + +Queries with `BOTTOM()` and a `GROUP BY time()` clause return the specified +number of points per `GROUP BY time()` interval. +For [most `GROUP BY time()` queries](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals), +the returned timestamps mark the start of the `GROUP BY time()` interval. +`GROUP BY time()` queries with the `BOTTOM()` function behave differently; +they maintain the timestamp of the original data point. + +##### Example + +The query below returns two points per 18-minute +`GROUP BY time()` interval. +Notice that the returned timestamps are the points' original timestamps; they +are not forced to match the start of the `GROUP BY time()` intervals. 
+
+```sql
+SELECT BOTTOM("water_level",2) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' GROUP BY time(18m)
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | bottom |
+| :------------------- | -----: |
+| 2019-08-18T00:00:00Z | 2.064 |
+| 2019-08-18T00:12:00Z | 2.028 |
+| 2019-08-18T00:24:00Z | 2.041 |
+| 2019-08-18T00:30:00Z | 2.051 |
+
+_Notice that the first two rows contain the smallest values from the first time interval
+and the last two rows contain the smallest values for the second time interval._
+
+#### BOTTOM() and a tag key with fewer than N tag values
+
+Queries with the syntax `SELECT BOTTOM(<field_key>,<tag_key>,<N>)` can return fewer points than expected.
+If the tag key has `X` tag values, the query specifies `N` values, and `X` is smaller than `N`, then the query returns `X` points.
+
+##### Example
+
+The query below asks for the smallest field values of `water_level` for three tag values of the `location` tag key.
+Because the `location` tag key has two tag values (`santa_monica` and `coyote_creek`), the query returns two points instead of three.
+
+```sql
+SELECT BOTTOM("water_level","location",3) FROM "h2o_feet"
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | bottom | location |
+| :------------------- | -----: | :----------- |
+| 2019-08-29T10:36:00Z | -0.243 | santa_monica |
+| 2019-08-29T14:30:00Z | -0.610 | coyote_creek |
+
+## FIRST()
+
+Returns the [field value](/influxdb/v2.6/reference/glossary/#field-value) with the oldest timestamp.
+
+### Syntax
+
+```sql
+SELECT FIRST(<field_key>)[,<tag_key(s)>|<field_key(s)>] FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+`FIRST(field_key)`
+Returns the oldest field value (determined by timestamp) associated with the field key.
+ +`FIRST(/regular_expression/)` +Returns the oldest field value (determined by timestamp) associated with each field key that matches the [regular expression](/influxdb/v2.6/query-data/influxql/explore-data/regular-expressions/). + +`FIRST(*)` +Returns the oldest field value (determined by timestamp) associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement). + +`FIRST(field_key),tag_key(s),field_key(s)` +Returns the oldest field value (determined by timestamp) associated with the field key in the parentheses and the relevant [tag](/influxdb/v2.6/reference/glossary/#tag) and/or [field](/influxdb/v2.6/reference/glossary/#field). + +`FIRST()` supports all field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types). + +#### Examples + +{{< expand-wrapper >}} + +{{% expand "Select the first field value associated with a field key" %}} + +Return the oldest field value (determined by timestamp) associated with the +`level description` field key and in the `h2o_feet` measurement. + +```sql +SELECT FIRST("level description") FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | first | +| :------------------- | :------------------- | +| 2019-08-17T00:00:00Z | between 6 and 9 feet | + +{{% /expand %}} + +{{% expand "Select the first field value associated with each field key in a measurement" %}} + +Return the oldest field value (determined by timestamp) for each field key in the `h2o_feet` measurement. +The `h2o_feet` measurement has two field keys: `level description` and `water_level`. 
+ +```sql +SELECT FIRST(*) FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | first_level description | first_water_level | +| :------------------- | :---------------------- | ----------------: | +| 1970-01-01T00:00:00Z | between 6 and 9 feet | 8.120 | + +{{% /expand %}} + +{{% expand "Select the first field value associated with each field key that matches a regular expression" %}} + +Return the oldest field value for each field key that includes the word `level` in the `h2o_feet` measurement. + +```sql +SELECT FIRST(/level/) FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | first_level description | first_water_level | +| :------------------- | :---------------------- | ----------------: | +| 1970-01-01T00:00:00Z | between 6 and 9 feet | 8.120 | + +{{% /expand %}} + +{{% expand "Select the first value associated with a field key and the relevant tags and fields" %}} + +Return the oldest field value (determined by timestamp) in the `level description` +field key and the relevant values of the `location` tag key and the `water_level` field key. 
+
+```sql
+SELECT FIRST("level description"),"location","water_level" FROM "h2o_feet"
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | first | location | water_level |
+| :------------------- | :------------------- | :----------- | ----------: |
+| 2019-08-17T00:00:00Z | between 6 and 9 feet | coyote_creek | 8.120 |
+
+{{% /expand %}}
+
+{{% expand "Select the first field value associated with a field key and include several clauses" %}}
+
+Return the oldest field value (determined by timestamp) in the `water_level`
+field key in the [time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax)
+between `2019-08-17T23:48:00Z` and `2019-08-18T00:54:00Z` and
+[group](/influxdb/v2.6/query-data/influxql/explore-data/group-by/) results into
+12-minute time intervals and per tag.
+Then [fill](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals-and-fill)
+empty time intervals with `9.01`, and [limit](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/)
+the number of points and series returned to four and one.
+
+```sql
+SELECT FIRST("water_level") FROM "h2o_feet" WHERE time >= '2019-08-17T23:48:00Z' AND time <= '2019-08-18T00:54:00Z' GROUP BY time(12m),* fill(9.01) LIMIT 4 SLIMIT 1
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+tags: location=coyote_creek
+{{% /influxql/table-meta %}}
+
+| time | first |
+| :------------------- | ----: |
+| 2019-08-17T23:48:00Z | 8.635 |
+| 2019-08-18T00:00:00Z | 8.504 |
+| 2019-08-18T00:12:00Z | 8.320 |
+| 2019-08-18T00:24:00Z | 8.130 |
+
+Notice that the [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) overrides the points' original timestamps.
+The timestamps in the results indicate the start of each 12-minute time interval; +the first point in the results covers the time interval between `2019-08-17T23:48:00Z` and just before `2019-08-18T00:00:00Z` and the last point in the results covers the time interval between `2019-08-18T00:24:00Z` and just before `2019-08-18T00:36:00Z`. + +{{% /expand %}} + +{{< /expand-wrapper >}} + +## LAST() + +Returns the [field value](/influxdb/v2.6/reference/glossary/#field-value) with the most recent timestamp. + +### Syntax + +```sql +SELECT LAST(<field_key>)[,<tag_key(s)>|<field_key(s)>] FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause] +``` + +`LAST(field_key)` +Returns the newest field value (determined by timestamp) associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key). + +`LAST(/regular_expression/)` +Returns the newest field value (determined by timestamp) associated with each field key that matches the [regular expression](/influxdb/v2.6/query-data/influxql/explore-data/regular-expressions/). + +`LAST(*)` +Returns the newest field value (determined by timestamp) associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement). + +`LAST(field_key),tag_key(s),field_key(s)` +Returns the newest field value (determined by timestamp) associated with the field key in the parentheses and the relevant [tag](/influxdb/v2.6/reference/glossary/#tag) and/or [field](/influxdb/v2.6/reference/glossary/#field). + +`LAST()` supports all field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types). + +#### Examples + +{{< expand-wrapper >}} + +{{% expand "Select the last field value associated with a field key" %}} + +Return the newest field value (determined by timestamp) associated with the +`level description` field key and in the `h2o_feet` measurement.
+ +```sql +SELECT LAST("level description") FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | last | +| :------------------- | :------------------- | +| 2019-09-17T21:42:00Z | between 3 and 6 feet | + +{{% /expand %}} + +{{% expand "Select the last field values associated with each field key in a measurement" %}} + +Return the newest field value (determined by timestamp) for each field key in the `h2o_feet` measurement. +The `h2o_feet` measurement has two field keys: `level description` and `water_level`. + +```sql +SELECT LAST(*) FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | last_level description | last_water_level | +| :------------------- | :--------------------- | ---------------: | +| 1970-01-01T00:00:00Z | between 3 and 6 feet | 4.938 | + +{{% /expand %}} + +{{% expand "Select the last field value associated with each field key that matches a regular expression" %}} + +Return the newest field value for each field key that includes the word `level` +in the `h2o_feet` measurement. + +```sql +SELECT LAST(/level/) FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | last_level description | last_water_level | +| :------------------- | :--------------------- | ---------------: | +| 1970-01-01T00:00:00Z | between 3 and 6 feet | 4.938 | + +{{% /expand %}} + +{{% expand "Select the last field value associated with a field key and the relevant tags and fields" %}} + +Return the newest field value (determined by timestamp) in the `level description` +field key and the relevant values of the `location` tag key and the `water_level` field key. 
+ +```sql +SELECT LAST("level description"),"location","water_level" FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | last | location | water_level | +| :------------------- | :------------------- | :----------- | ----------: | +| 2019-09-17T21:42:00Z | between 3 and 6 feet | santa_monica | 4.938 | + +{{% /expand %}} + +{{% expand "Select the last field value associated with a field key and include several clauses" %}} + +Return the newest field value (determined by timestamp) in the `water_level` +field key in the [time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax) +between `2019-08-17T23:48:00Z` and `2019-08-18T00:54:00Z`, and +[group](/influxdb/v2.6/query-data/influxql/explore-data/group-by/) results into +12-minute time intervals and per tag. +Then [fill](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals-and-fill) +empty time intervals with `9.01`, and [limit](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/) +the number of points and series returned to four and one, respectively. + +```sql +SELECT LAST("water_level") FROM "h2o_feet" WHERE time >= '2019-08-17T23:48:00Z' AND time <= '2019-08-18T00:54:00Z' GROUP BY time(12m),* fill(9.01) LIMIT 4 SLIMIT 1 +``` + +{{% influxql/table-meta %}} +name: h2o_feet +tags: location=coyote_creek +{{% /influxql/table-meta %}} + +| time | last | +| :------------------- | ----: | +| 2019-08-17T23:48:00Z | 8.570 | +| 2019-08-18T00:00:00Z | 8.419 | +| 2019-08-18T00:12:00Z | 8.225 | +| 2019-08-18T00:24:00Z | 8.012 | + +Notice that the [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) overrides the points' original timestamps.
+The timestamps in the results indicate the start of each 12-minute time interval; +the first point in the results covers the time interval between `2019-08-17T23:48:00Z` and just before `2019-08-18T00:00:00Z` and the last point in the results covers the time interval between `2019-08-18T00:24:00Z` and just before `2019-08-18T00:36:00Z`. + +{{% /expand %}} + +{{< /expand-wrapper >}} + +## MAX() + +Returns the greatest [field value](/influxdb/v2.6/reference/glossary/#field-value). + +### Syntax + +```sql +SELECT MAX(<field_key>)[,<tag_key(s)>|<field_key(s)>] FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause] +``` + +`MAX(field_key)` +Returns the greatest field value associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key). + +`MAX(/regular_expression/)` +Returns the greatest field value associated with each field key that matches the [regular expression](/influxdb/v2.6/query-data/influxql/explore-data/regular-expressions/). + +`MAX(*)` +Returns the greatest field value associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement). + +`MAX(field_key),tag_key(s),field_key(s)` +Returns the greatest field value associated with the field key in the parentheses and the relevant [tag](/influxdb/v2.6/reference/glossary/#tag) and/or [field](/influxdb/v2.6/reference/glossary/#field). + +`MAX()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types). + +#### Examples + +{{< expand-wrapper >}} + +{{% expand "Select the maximum field value associated with a field key" %}} + +Return the greatest field value in the `water_level` field key and in the `h2o_feet` measurement.
+ +```sql +SELECT MAX("water_level") FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | max | +| :------------------- | ----: | +| 2019-08-28T07:24:00Z | 9.964 | + +{{% /expand %}} + +{{% expand "Select the maximum field value associated with each field key in a measurement" %}} + +Return the greatest field value for each field key that stores numeric values +in the `h2o_feet` measurement. +The `h2o_feet` measurement has one numeric field: `water_level`. + +```sql +SELECT MAX(*) FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | max_water_level | +| :------------------- | --------------: | +| 2019-08-28T07:24:00Z | 9.964 | + +{{% /expand %}} + +{{% expand "Select the maximum field value associated with each field key that matches a regular expression" %}} + +Return the greatest field value for each field key that stores numeric values +and includes the word `level` in the `h2o_feet` measurement. + +```sql +SELECT MAX(/level/) FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | max_water_level | +| :------------------- | --------------: | +| 2019-08-28T07:24:00Z | 9.964 | + +{{% /expand %}} + +{{% expand "Select the maximum field value associated with a field key and the relevant tags and fields" %}} + +Return the greatest field value in the `water_level` field key and the relevant +values of the `location` tag key and the `level description` field key.
+ +```sql +SELECT MAX("water_level"),"location","level description" FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | max | location | level description | +| :------------------- | ----: | :----------- | :------------------------ | +| 2019-08-28T07:24:00Z | 9.964 | coyote_creek | at or greater than 9 feet | + +{{% /expand %}} + +{{% expand "Select the maximum field value associated with a field key and include several clauses" %}} + +Return the greatest field value in the `water_level` field key in the +[time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax) +between `2019-08-17T23:48:00Z` and `2019-08-18T00:54:00Z`, and +[group](/influxdb/v2.6/query-data/influxql/explore-data/group-by/) results into +12-minute time intervals and per tag. +Then [fill](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals-and-fill) +empty time intervals with `9.01`, and [limit](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/) +the number of points and series returned to four and one, respectively. + +```sql +SELECT MAX("water_level") FROM "h2o_feet" WHERE time >= '2019-08-17T23:48:00Z' AND time <= '2019-08-18T00:54:00Z' GROUP BY time(12m),* fill(9.01) LIMIT 4 SLIMIT 1 +``` + +{{% influxql/table-meta %}} +name: h2o_feet +tags: location=coyote_creek +{{% /influxql/table-meta %}} + +| time | max | +| :------------------- | ----: | +| 2019-08-17T23:48:00Z | 8.635 | +| 2019-08-18T00:00:00Z | 8.504 | +| 2019-08-18T00:12:00Z | 8.320 | +| 2019-08-18T00:24:00Z | 8.130 | + +Notice that the [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) overrides the points’ original timestamps.
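The `fill(9.01)` part of the query deserves a note: `GROUP BY time()` intervals that contain no data are not dropped but reported with the fill value. A minimal Python sketch of that behavior (an illustration with hypothetical per-bucket values, not InfluxDB's implementation):

```python
from datetime import datetime, timedelta, timezone

UTC = timezone.utc

def fill_gaps(results, start, stop, interval, fill_value):
    """Emit one row per interval; intervals absent from `results` get fill_value."""
    rows = []
    t = start
    while t < stop:
        rows.append((t, results.get(t, fill_value)))
        t += interval
    return rows

# Hypothetical per-bucket maxima, with the 00:12 bucket missing entirely.
results = {
    datetime(2019, 8, 18, 0, 0, tzinfo=UTC): 8.504,
    datetime(2019, 8, 18, 0, 24, tzinfo=UTC): 8.130,
}
rows = fill_gaps(results,
                 start=datetime(2019, 8, 18, 0, 0, tzinfo=UTC),
                 stop=datetime(2019, 8, 18, 0, 36, tzinfo=UTC),
                 interval=timedelta(minutes=12),
                 fill_value=9.01)
# The empty 00:12 bucket is reported with 9.01 instead of being omitted.
```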
+The timestamps in the results indicate the start of each 12-minute time interval; +the first point in the results covers the time interval between `2019-08-17T23:48:00Z` and just before `2019-08-18T00:00:00Z` and the last point in the results covers the time interval between `2019-08-18T00:24:00Z` and just before `2019-08-18T00:36:00Z`. + +{{% /expand %}} + +{{< /expand-wrapper >}} + +## MIN() + +Returns the lowest [field value](/influxdb/v2.6/reference/glossary/#field-value). + +### Syntax + +```sql +SELECT MIN(<field_key>)[,<tag_key(s)>|<field_key(s)>] FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause] +``` + +`MIN(field_key)` +Returns the lowest field value associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key). + +`MIN(/regular_expression/)` +Returns the lowest field value associated with each field key that matches the [regular expression](/influxdb/v2.6/query-data/influxql/explore-data/regular-expressions/). + +`MIN(*)` +Returns the lowest field value associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement). + +`MIN(field_key),tag_key(s),field_key(s)` +Returns the lowest field value associated with the field key in the parentheses and the relevant [tag](/influxdb/v2.6/reference/glossary/#tag) and/or [field](/influxdb/v2.6/reference/glossary/#field). + +`MIN()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types). + +#### Examples + +{{< expand-wrapper >}} + +{{% expand "Select the minimum field value associated with a field key" %}} + +Return the lowest field value in the `water_level` field key and in the `h2o_feet` measurement.
+ +```sql +SELECT MIN("water_level") FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | min | +| :------------------- | -----: | +| 2019-08-28T14:30:00Z | -0.610 | + +{{% /expand %}} + +{{% expand "Select the minimum field value associated with each field key in a measurement" %}} + +Return the lowest field value for each field key that stores numeric values in the `h2o_feet` measurement. +The `h2o_feet` measurement has one numeric field: `water_level`. + +```sql +SELECT MIN(*) FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | min_water_level | +| :------------------- | --------------: | +| 2019-08-28T14:30:00Z | -0.610 | + +{{% /expand %}} + +{{% expand "Select the minimum field value associated with each field key that matches a regular expression" %}} + +Return the lowest field value for each numeric field with `level` in the field +key in the `h2o_feet` measurement. + +```sql +SELECT MIN(/level/) FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | min_water_level | +| :------------------- | --------------: | +| 2019-08-28T14:30:00Z | -0.610 | + +{{% /expand %}} + +{{% expand "Select the minimum field value associated with a field key and the relevant tags and fields" %}} + +Return the lowest field value in the `water_level` field key and the relevant +values of the `location` tag key and the `level description` field key.
+ +```sql +SELECT MIN("water_level"),"location","level description" FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | min | location | level description | +| :------------------- | -----: | :----------- | :---------------- | +| 2019-08-28T14:30:00Z | -0.610 | coyote_creek | below 3 feet | + +{{% /expand %}} + +{{% expand "Select the minimum field value associated with a field key and include several clauses" %}} + +Return the lowest field value in the `water_level` field key in the +[time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax) +between `2019-08-17T23:48:00Z` and `2019-08-18T00:54:00Z`, and +[group](/influxdb/v2.6/query-data/influxql/explore-data/group-by/) results into +12-minute time intervals and per tag. +Then [fill](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals-and-fill) +empty time intervals with `9.01`, and [limit](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/) +the number of points and series returned to four and one, respectively. + +```sql +SELECT MIN("water_level") FROM "h2o_feet" WHERE time >= '2019-08-17T23:48:00Z' AND time <= '2019-08-18T00:54:00Z' GROUP BY time(12m),* fill(9.01) LIMIT 4 SLIMIT 1 +``` + +{{% influxql/table-meta %}} +name: h2o_feet +tags: location=coyote_creek +{{% /influxql/table-meta %}} + +| time | min | +| :------------------- | ----: | +| 2019-08-17T23:48:00Z | 8.570 | +| 2019-08-18T00:00:00Z | 8.419 | +| 2019-08-18T00:12:00Z | 8.225 | +| 2019-08-18T00:24:00Z | 8.012 | + +Notice that the [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) overrides the points’ original timestamps.
+The timestamps in the results indicate the start of each 12-minute time interval; +the first point in the results covers the time interval between `2019-08-17T23:48:00Z` and just before `2019-08-18T00:00:00Z` and the last point in the results covers the time interval between `2019-08-18T00:24:00Z` and just before `2019-08-18T00:36:00Z`. + +{{% /expand %}} + +{{< /expand-wrapper >}} + +## PERCENTILE() + +Returns the `N`th percentile [field value](/influxdb/v2.6/reference/glossary/#field-value). + +### Syntax + +```sql +SELECT PERCENTILE(<field_key>, <N>)[,<tag_key(s)>|<field_key(s)>] FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause] +``` + +`PERCENTILE(field_key,N)` +Returns the Nth percentile field value associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key). + +`PERCENTILE(/regular_expression/,N)` +Returns the Nth percentile field value associated with each field key that matches the [regular expression](/influxdb/v2.6/query-data/influxql/explore-data/regular-expressions/). + +`PERCENTILE(*,N)` +Returns the Nth percentile field value associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement). + +`PERCENTILE(field_key,N),tag_key(s),field_key(s)` +Returns the Nth percentile field value associated with the field key in the parentheses and the relevant [tag](/influxdb/v2.6/reference/glossary/#tag) and/or [field](/influxdb/v2.6/reference/glossary/#field). + +`N` must be an integer or floating point number between `0` and `100`, inclusive. +`PERCENTILE()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types). + +#### Examples + +{{< expand-wrapper >}} + +{{% expand "Select the fifth percentile field value associated with a field key" %}} + +Return the field value that is larger than five percent of the field values in +the `water_level` field key and in the `h2o_feet` measurement.
+ +```sql +SELECT PERCENTILE("water_level",5) FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | percentile | +| :------------------- | ---------: | +| 2019-09-01T17:54:00Z | 1.122 | + +{{% /expand %}} + +{{% expand "Select the fifth percentile field value associated with each field key in a measurement" %}} + +Return the field value that is larger than five percent of the field values in +each field key that stores numeric values in the `h2o_feet` measurement. +The `h2o_feet` measurement has one numeric field: `water_level`. + +```sql +SELECT PERCENTILE(*,5) FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | percentile_water_level | +| :------------------- | ---------------------: | +| 2019-09-01T17:54:00Z | 1.122 | + +{{% /expand %}} + +{{% expand "Select fifth percentile field value associated with each field key that matches a regular expression" %}} + +Return the field value that is larger than five percent of the field values in +each numeric field with `level` in the field key. + +```sql +SELECT PERCENTILE(/level/,5) FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | percentile_water_level | +| :------------------- | ---------------------: | +| 2019-09-01T17:54:00Z | 1.122 | + +{{% /expand %}} + +{{% expand "Select the fifth percentile field values associated with a field key and the relevant tags and fields" %}} + +Return the field value that is larger than five percent of the field values in +the `water_level` field key and the relevant values of the `location` tag key +and the `level description` field key.
+ +```sql +SELECT PERCENTILE("water_level",5),"location","level description" FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | percentile | location | level description | +| :------------------- | ---------: | :----------- | :---------------- | +| 2019-08-24T10:18:00Z | 1.122 | coyote_creek | below 3 feet | + +{{% /expand %}} + +{{% expand "Select the twentieth percentile field value associated with a field key and include several clauses" %}} + +Return the field value that is larger than 20 percent of the values in the +`water_level` field in the [time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax) +between `2019-08-17T23:48:00Z` and `2019-08-18T00:54:00Z` and [group](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) results into 24-minute intervals. +Then [fill](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals-and-fill) +empty time intervals with `15` and [limit](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/) +the number of points returned to two. + +```sql +SELECT PERCENTILE("water_level",20) FROM "h2o_feet" WHERE time >= '2019-08-17T23:48:00Z' AND time <= '2019-08-18T00:54:00Z' GROUP BY time(24m) fill(15) LIMIT 2 +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | percentile | +| :------------------- | ---------: | +| 2019-08-17T23:36:00Z | 2.398 | +| 2019-08-18T00:00:00Z | 2.343 | + +Notice that the [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) overrides the points’ original timestamps. 
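Because `PERCENTILE()` is a selector, it returns an actual sample point at the requested rank rather than interpolating between values, which is why the common issues below note that `PERCENTILE(<field_key>, 50)` can differ from `MEDIAN()` for an even number of values. A rough nearest-rank sketch in Python (an approximation for intuition, not InfluxDB's exact algorithm):

```python
import math

def percentile(values, n):
    """Nearest-rank percentile: always returns an actual member of `values`."""
    ordered = sorted(values)
    rank = max(1, math.ceil(n / 100 * len(ordered)))  # 1-based rank
    return ordered[rank - 1]

data = [1.0, 2.0, 3.0, 4.0]
p50 = percentile(data, 50)    # 2.0, an actual sample point
p100 = percentile(data, 100)  # 4.0, the same point MAX() would select
# statistics.median(data) would instead give 2.5, averaging 2.0 and 3.0.
```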
+The timestamps in the results indicate the start of each 24-minute time interval; the first point in the results covers the time interval between `2019-08-17T23:36:00Z` and just before `2019-08-18T00:00:00Z` and the last point in the results covers the time interval between `2019-08-18T00:00:00Z` and just before `2019-08-18T00:24:00Z`. + +{{% /expand %}} + +{{< /expand-wrapper >}} + +### Common issues with PERCENTILE() + +#### PERCENTILE() compared to other InfluxQL functions + +- `PERCENTILE(<field_key>,100)` is equivalent to [`MAX()`](#max). +- `PERCENTILE(<field_key>, 50)` is nearly equivalent to [`MEDIAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#median), except the `MEDIAN()` function returns the average of the two middle values if the field key contains an even number of field values. +- `PERCENTILE(<field_key>,0)` is not equivalent to [`MIN()`](#min). This is a known [issue](https://github.com/influxdata/influxdb/issues/4418). + +## SAMPLE() + +Returns a random sample of `N` [field values](/influxdb/v2.6/reference/glossary/#field-value). +`SAMPLE()` uses [reservoir sampling](https://en.wikipedia.org/wiki/Reservoir_sampling) to generate the random points. + +### Syntax + +```sql +SELECT SAMPLE(<field_key>, <N>)[,<tag_key(s)>|<field_key(s)>] FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause] +``` + +`SAMPLE(field_key,N)` +Returns N randomly selected field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key). + +`SAMPLE(/regular_expression/,N)` +Returns N randomly selected field values associated with each field key that matches the [regular expression](/influxdb/v2.6/query-data/influxql/explore-data/regular-expressions/). + +`SAMPLE(*,N)` +Returns N randomly selected field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement).
+ +`SAMPLE(field_key,N),tag_key(s),field_key(s)` +Returns N randomly selected field values associated with the field key in the parentheses and the relevant [tag](/influxdb/v2.6/reference/glossary/#tag) and/or [field](/influxdb/v2.6/reference/glossary/#field). + +`N` must be an integer. +`SAMPLE()` supports all field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types). + +#### Examples + +{{< expand-wrapper >}} + +{{% expand "Select a sample of the field values associated with a field key" %}} + +Return two randomly selected points from the `water_level` field key and in the `h2o_feet` measurement. + +```sql +SELECT SAMPLE("water_level",2) FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | sample | +| :------------------- | -----: | +| 2019-08-22T03:42:00Z | 7.218 | +| 2019-08-28T20:18:00Z | 2.848 | + +{{% /expand %}} + +{{% expand "Select a sample of the field values associated with each field key in a measurement" %}} + +Return two randomly selected points for each field key in the `h2o_feet` measurement. +The `h2o_feet` measurement has two field keys: `level description` and `water_level`. + +```sql +SELECT SAMPLE(*,2) FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | sample_level description | sample_water_level | +| :------------------- | :----------------------- | -----------------: | +| 2019-08-23T17:30:00Z | below 3 feet | | +| 2019-09-08T19:18:00Z | | 8.379 | +| 2019-09-09T03:54:00Z | between 6 and 9 feet | | +| 2019-09-16T04:48:00Z | | 1.437 | + +{{% /expand %}} + +{{% expand "Select a sample of the field values associated with each field key that matches a regular expression" %}} + +Return two randomly selected points for each field key that includes the word +`level` in the `h2o_feet` measurement. 
+ +```sql +SELECT SAMPLE(/level/,2) FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | sample_level description | sample_water_level | +| :------------------- | :----------------------- | -----------------: | +| 2019-08-19T20:24:00Z | | 4.951 | +| 2019-08-26T06:30:00Z | below 3 feet | | +| 2019-09-10T09:06:00Z | | 1.312 | +| 2019-09-16T21:00:00Z | between 3 and 6 feet | | + +{{% /expand %}} + +{{% expand "Select a sample of the field values associated with a field key and the relevant tags and fields" %}} + +Return two randomly selected points from the `water_level` field key and the +relevant values of the `location` tag and the `level description` field. + +```sql +SELECT SAMPLE("water_level",2),"location","level description" FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | sample | location | level description | +| :------------------- | -----: | :----------- | :------------------- | +| 2019-08-31T04:30:00Z | 4.954 | santa_monica | between 3 and 6 feet | +| 2019-09-13T01:24:00Z | 3.389 | santa_monica | between 3 and 6 feet | + +{{% /expand %}} + +{{% expand "Select a sample of the field values associated with a field key and include several clauses" %}} + +Return one randomly selected point from the `water_level` field key in the +[time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax) +between `2019-08-18T00:00:00Z` and `2019-08-18T00:30:00Z` and +[group](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) +results into 18-minute intervals. 
+ +```sql +SELECT SAMPLE("water_level",1) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' GROUP BY time(18m) +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | sample | +| :------------------- | -----: | +| 2019-08-18T00:12:00Z | 2.343 | +| 2019-08-18T00:24:00Z | 2.264 | + +Notice that the [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) does not override the points' original timestamps. +See [Issue 1](#sample-with-a-group-by-time-clause) in the section below for a +more detailed explanation of that behavior. + +{{% /expand %}} + +{{< /expand-wrapper >}} + +### Common issues with SAMPLE() + +#### SAMPLE() with a GROUP BY time() clause + +Queries with `SAMPLE()` and a `GROUP BY time()` clause return the specified +number of points (`N`) per `GROUP BY time()` interval. +For [most `GROUP BY time()` queries](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals), +the returned timestamps mark the start of the `GROUP BY time()` interval. +`GROUP BY time()` queries with the `SAMPLE()` function behave differently; +they maintain the timestamp of the original data point. + +##### Example + +The query below returns two randomly selected points per 18-minute +`GROUP BY time()` interval. +Notice that the returned timestamps are the points' original timestamps; they +are not forced to match the start of the `GROUP BY time()` intervals. 
+ +```sql +SELECT SAMPLE("water_level",2) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' GROUP BY time(18m) +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | sample | +| :------------------- | -----: | +| 2019-08-18T00:06:00Z | 2.116 | +| 2019-08-18T00:12:00Z | 2.028 | +| 2019-08-18T00:18:00Z | 2.126 | +| 2019-08-18T00:30:00Z | 2.051 | + +Notice that the first two rows are randomly-selected points from the first time +interval and the last two rows are randomly-selected points from the second time interval. + +## TOP() + +Returns the greatest `N` [field values](/influxdb/v2.6/reference/glossary/#field-value). + +### Syntax + +```sql +SELECT TOP(<field_key>[,<tag_key(s)>],<N>)[,<tag_key(s)>|<field_key(s)>] FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause] +``` + +`TOP(field_key,N)` +Returns the greatest N field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key). + +`TOP(field_key,tag_key(s),N)` +Returns the greatest field value for N tag values of the [tag key](/influxdb/v2.6/reference/glossary/#tag-key). + +`TOP(field_key,N),tag_key(s),field_key(s)` +Returns the greatest N field values associated with the field key in the parentheses and the relevant [tag](/influxdb/v2.6/reference/glossary/#tag) and/or [field](/influxdb/v2.6/reference/glossary/#field). + +`TOP()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types). + +{{% note %}} +**Note:** `TOP()` returns the field value with the earliest timestamp if there's a tie between two or more values for the greatest value.
+{{% /note %}} + +#### Examples + +{{< expand-wrapper >}} + +{{% expand "Select the top three field values associated with a field key" %}} + +Return the greatest three field values in the `water_level` field key and in the +`h2o_feet` [measurement](/influxdb/v2.6/reference/glossary/#measurement). + +```sql +SELECT TOP("water_level",3) FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | top | +| :------------------- | ----: | +| 2019-08-28T07:18:00Z | 9.957 | +| 2019-08-28T07:24:00Z | 9.964 | +| 2019-08-28T07:30:00Z | 9.954 | + +{{% /expand %}} + +{{% expand "Select the top field value associated with a field key for two tags" %}} + +Return the greatest field values in the `water_level` field key for two tag +values associated with the `location` tag key. + +```sql +SELECT TOP("water_level","location",2) FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | top | location | +| :------------------- | ----: | :----------- | +| 2019-08-28T03:54:00Z | 7.205 | santa_monica | +| 2019-08-28T07:24:00Z | 9.964 | coyote_creek | + +{{% /expand %}} + +{{% expand "Select the top four field values associated with a field key and the relevant tags and fields" %}} + +Return the greatest four field values in the `water_level` field key and the +relevant values of the `location` tag key and the `level description` field key. 
+ +```sql +SELECT TOP("water_level",4),"location","level description" FROM "h2o_feet" +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | top | location | level description | +| :------------------- | ----: | :----------- | :------------------------ | +| 2019-08-28T07:18:00Z | 9.957 | coyote_creek | at or greater than 9 feet | +| 2019-08-28T07:24:00Z | 9.964 | coyote_creek | at or greater than 9 feet | +| 2019-08-28T07:30:00Z | 9.954 | coyote_creek | at or greater than 9 feet | +| 2019-08-28T07:36:00Z | 9.941 | coyote_creek | at or greater than 9 feet | + +{{% /expand %}} + +{{% expand "Select the top three field values associated with a field key and include several clauses" %}} + +Return the greatest three values in the `water_level` field key for each 24-minute +[interval](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#basic-group-by-time-syntax) +between `2019-08-18T00:00:00Z` and `2019-08-18T00:54:00Z` with results in +[descending timestamp](/influxdb/v2.6/query-data/influxql/explore-data/order-by/) order. 
+ +```sql +SELECT TOP("water_level",3),"location" FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:54:00Z' GROUP BY time(24m) ORDER BY time DESC +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | top | location | +| :------------------- | ----: | :----------- | +| 2019-08-18T00:54:00Z | 6.982 | coyote_creek | +| 2019-08-18T00:54:00Z | 2.054 | santa_monica | +| 2019-08-18T00:48:00Z | 7.110 | coyote_creek | +| 2019-08-18T00:36:00Z | 7.372 | coyote_creek | +| 2019-08-18T00:30:00Z | 7.500 | coyote_creek | +| 2019-08-18T00:24:00Z | 7.635 | coyote_creek | +| 2019-08-18T00:12:00Z | 7.887 | coyote_creek | +| 2019-08-18T00:06:00Z | 8.005 | coyote_creek | +| 2019-08-18T00:00:00Z | 8.120 | coyote_creek | + +Notice that the [GROUP BY time() clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) does not override the points’ original timestamps. +See [Issue 1](#top-with-a-group-by-time-clause) in the section below for a more detailed explanation of that behavior. + +{{% /expand %}} + +{{< /expand-wrapper >}} + +### Common issues with `TOP()` + +#### `TOP()` with a `GROUP BY time()` clause + +Queries with `TOP()` and a `GROUP BY time()` clause return the specified +number of points per `GROUP BY time()` interval. +For [most `GROUP BY time()` queries](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals), +the returned timestamps mark the start of the `GROUP BY time()` interval. +`GROUP BY time()` queries with the `TOP()` function behave differently; +they maintain the timestamp of the original data point. + +##### Example + +The query below returns two points per 18-minute +`GROUP BY time()` interval. +Notice that the returned timestamps are the points' original timestamps; they +are not forced to match the start of the `GROUP BY time()` intervals. 
+
+```sql
+SELECT TOP("water_level",2) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' GROUP BY time(18m)
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | top |
+| :------------------- | ----: |
+| 2019-08-18T00:00:00Z | 2.064 |
+| 2019-08-18T00:06:00Z | 2.116 |
+| 2019-08-18T00:18:00Z | 2.126 |
+| 2019-08-18T00:30:00Z | 2.051 |
+
+Notice that the first two rows are the greatest points for the first time interval
+and the last two rows are the greatest points for the second time interval.
+
+#### `TOP()` and a tag key with fewer than N tag values
+
+Queries with the syntax `SELECT TOP(<field_key>,<tag_key>,<N>)` can return fewer points than expected.
+If the tag key has `X` tag values, the query specifies `N` values, and `X` is smaller than `N`, then the query returns `X` points.
+
+##### Example
+
+The query below asks for the greatest field values of `water_level` for three tag values of the `location` tag key.
+Because the `location` tag key has two tag values (`santa_monica` and `coyote_creek`), the query returns two points instead of three.
+
+```sql
+SELECT TOP("water_level","location",3) FROM "h2o_feet"
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | top | location |
+| :------------------- | ----: | :----------- |
+| 2019-08-28T03:54:00Z | 7.205 | santa_monica |
+| 2019-08-28T07:24:00Z | 9.964 | coyote_creek |
+
+
diff --git a/content/influxdb/v2.6/query-data/influxql/functions/technical-analysis.md b/content/influxdb/v2.6/query-data/influxql/functions/technical-analysis.md
new file mode 100644
index 000000000..cad5e72e3
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/influxql/functions/technical-analysis.md
@@ -0,0 +1,659 @@
+---
+title: InfluxQL analysis functions
+list_title: Technical analysis functions
+description: >
+  Use technical analysis functions to apply algorithms to your data.
+menu:
+  influxdb_2_6:
+    name: Technical analysis
+    parent: InfluxQL functions
+weight: 205
+---
+
+Use technical analysis functions to apply algorithms to your data--often used to analyze financial and investment data.
+
+Each analysis function below covers **syntax**, including parameters to pass to the function, and **examples** of how to use the function. Examples use [NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data).
+
+- [Predictive analysis](#predictive-analysis):
+  - [HOLT_WINTERS()](#holt_winters)
+- [Technical analysis](#technical-analysis-functions):
+  - [CHANDE_MOMENTUM_OSCILLATOR()](#chande_momentum_oscillator)
+  - [EXPONENTIAL_MOVING_AVERAGE()](#exponential_moving_average)
+  - [DOUBLE_EXPONENTIAL_MOVING_AVERAGE()](#double_exponential_moving_average)
+  - [KAUFMANS_EFFICIENCY_RATIO()](#kaufmans_efficiency_ratio)
+  - [KAUFMANS_ADAPTIVE_MOVING_AVERAGE()](#kaufmans_adaptive_moving_average)
+  - [TRIPLE_EXPONENTIAL_MOVING_AVERAGE()](#triple_exponential_moving_average)
+  - [TRIPLE_EXPONENTIAL_DERIVATIVE()](#triple_exponential_derivative)
+  - [RELATIVE_STRENGTH_INDEX()](#relative_strength_index)
+
+## Predictive analysis
+
+Predictive analysis functions are technical analysis algorithms that
+predict and forecast future values.
+
+### HOLT_WINTERS()
+
+Returns N number of predicted [field values](/influxdb/v2.6/reference/glossary/#field-value)
+using the [Holt-Winters](https://www.otexts.org/fpp/7/5) seasonal method.
+Supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types).
+Works with data that occurs at consistent time intervals.
+Requires an InfluxQL function and the `GROUP BY time()` clause to ensure that
+the Holt-Winters function operates on regular data.
+
+Use `HOLT_WINTERS()` to:
+
+- Predict when data values will cross a given threshold
+- Compare predicted values with actual values to detect anomalies in your data
+
+#### Syntax
+
+```
+SELECT HOLT_WINTERS[_WITH_FIT](<function>(<field_key>),<N>,<S>) FROM_clause [WHERE_clause] GROUP_BY_clause [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+`HOLT_WINTERS(function(field_key),N,S)` returns `N` seasonally adjusted
+predicted field values for the specified [field key](/influxdb/v2.6/reference/glossary/#field-key).
+
+The `N` predicted values occur at the same interval as the [`GROUP BY time()` interval](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals).
+If your `GROUP BY time()` interval is `6m` and `N` is `3`, you'll
+receive three predicted values that are each six minutes apart.
+
+`S` is the seasonal pattern parameter and delimits the length of a seasonal
+pattern according to the `GROUP BY time()` interval.
+If your `GROUP BY time()` interval is `2m` and `S` is `3`, then the
+seasonal pattern occurs every six minutes, that is, every three data points.
+If you do not want to seasonally adjust your predicted values, set `S` to `0`
+or `1`.
+
+`HOLT_WINTERS_WITH_FIT(function(field_key),N,S)` returns the fitted values in
+addition to `N` seasonally adjusted predicted field values for the specified field key.
+
+#### Examples
+
+{{< expand-wrapper >}}
+{{% expand "Predict field values associated with a field key" %}}
+
+##### Sample data
+
+The examples use the following subset of the [NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data):
+
+```sql
+SELECT "water_level" FROM "noaa"."autogen"."h2o_feet" WHERE "location"='santa_monica' AND time >= '2019-08-17T00:00:00Z' AND time <= '2019-08-22T00:00:00Z'
+```
+
+##### Step 1: Match the trends of the raw data
+
+Write a `GROUP BY time()` query that matches the general trends of the raw `water_level` data.
+Here, we use the [`FIRST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#first) function:
+
+```sql
+SELECT FIRST("water_level") FROM "noaa"."autogen"."h2o_feet" WHERE "location"='santa_monica' and time >= '2019-08-17T00:00:00Z' AND time <= '2019-08-22T00:00:00Z' GROUP BY time(6h,6h)
+```
+
+In the `GROUP BY time()` clause, the first argument (`6h`) matches
+the length of time that occurs between each peak and trough in the `water_level` data.
+The second argument (`6h`) is the
+[offset interval](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#advanced-group-by-time-syntax).
+The offset interval alters the default `GROUP BY time()` boundaries to
+match the time range of the raw data.
+
+{{< img-hd src="/img/influxdb/2-4-influxql-holtwinters-1.png" alt="Holt Winters base data" />}}
+
+##### Step 2: Determine the seasonal pattern
+
+Identify the seasonal pattern in the data using the information from the
+query in step 1.
+
+The pattern in the `water_level` data repeats about every 12 hours.
+There are two data points per season, so `2` is the seasonal pattern argument.
+
+{{< img-hd src="/img/influxdb/2-4-influxql-holtwinters-2.png" alt="Holt Winters seasonal data" />}}
+
+##### Step 3: Apply the HOLT_WINTERS() function
+
+Add the Holt-Winters function to the query.
+Here, we use `HOLT_WINTERS_WITH_FIT()` to view both the fitted values and the predicted values:
+
+```sql
+SELECT HOLT_WINTERS_WITH_FIT(FIRST("water_level"),10,2) FROM "noaa"."autogen"."h2o_feet" WHERE "location"='santa_monica' AND time >= '2019-08-17 00:00:00' AND time <= '2019-08-22 00:00:00' GROUP BY time(6h,6h)
+```
+
+In the `HOLT_WINTERS_WITH_FIT()` function, the first argument (`10`) requests 10 predicted field values.
+Each predicted point is `6h` apart, the same interval as the first argument in the `GROUP BY time()` clause.
+The second argument in the `HOLT_WINTERS_WITH_FIT()` function (`2`) is the seasonal pattern that we determined in the previous step.
+
+{{< img-hd src="/img/influxdb/2-4-influxql-holtwinters-3.png" alt="Holt Winters predicted data" />}}
+
+{{% /expand %}}
+{{< /expand-wrapper >}}
+
+#### Common issues with `HOLT_WINTERS()`
+
+##### Receiving fewer than `N` points
+
+In some cases, you may receive fewer predicted points than requested by the `N` parameter.
+That behavior typically occurs when the math becomes unstable and cannot forecast more
+points. In this case, `HOLT_WINTERS()` may not be suited to the dataset, or the seasonal adjustment parameter may be invalid.
+
+## Technical analysis functions
+
+Technical analysis functions apply widely used algorithms to your data.
+While they are primarily used in finance and investing, they have
+applications in other industries.
+
+For technical analysis functions, consider whether to include the `PERIOD`, `HOLD_PERIOD`, and `WARMUP_TYPE` arguments:
+
+#### `PERIOD`
+
+**Required, integer, min=1**
+
+The sample size for the algorithm, which is the number of historical samples with significant
+effect on the output of the algorithm.
+For example, `2` means the current point and the point before it.
+The algorithm uses an exponential decay rate to determine the weight of a historical point,
+generally known as the alpha (α). The `PERIOD` controls the decay rate.
+
+{{% note %}}
+**Note:** Older points can still have an impact.
+{{% /note %}}
+
+#### `HOLD_PERIOD`
+
+**integer, min=-1**
+
+How many samples the algorithm needs before emitting results.
+The default of `-1` means the value is based on the algorithm, the `PERIOD`,
+and the `WARMUP_TYPE`. Verify this value is enough for the algorithm to emit meaningful results.
+ +_**Default hold periods:**_ + +For most technical analysis functions, the default `HOLD_PERIOD` is +determined by the function and the [`WARMUP_TYPE`](#warmup_type) shown in the following table: + +| Algorithm \ Warmup Type | simple | exponential | none | +| --------------------------------- | ---------------------- | ----------- |:----------: | +| [EXPONENTIAL_MOVING_AVERAGE](#exponential_moving_average) | PERIOD - 1 | PERIOD - 1 | n/a | +| [DOUBLE_EXPONENTIAL_MOVING_AVERAGE](#double_exponential_moving_average) | ( PERIOD - 1 ) * 2 | PERIOD - 1 | n/a | +| [TRIPLE_EXPONENTIAL_MOVING_AVERAGE](#triple_exponential_moving_average) | ( PERIOD - 1 ) * 3 | PERIOD - 1 | n/a | +| [TRIPLE_EXPONENTIAL_DERIVATIVE](#triple_exponential_derivative) | ( PERIOD - 1 ) * 3 + 1 | PERIOD | n/a | +| [RELATIVE_STRENGTH_INDEX](#relative_strength_index) | PERIOD | PERIOD | n/a | +| [CHANDE_MOMENTUM_OSCILLATOR](#chande_momentum_oscillator) | PERIOD | PERIOD | PERIOD - 1 | + +_**Kaufman algorithm default hold periods:**_ + +| Algorithm | Default Hold Period | +| --------- | ------------------- | +| [KAUFMANS_EFFICIENCY_RATIO()](#kaufmans_efficiency_ratio) | PERIOD | +| [KAUFMANS_ADAPTIVE_MOVING_AVERAGE()](#kaufmans_adaptive_moving_average) | PERIOD | + +#### `WARMUP_TYPE` + +**default='exponential'** + +Controls how the algorithm initializes for the first `PERIOD` samples. +It is essentially the duration for which it has an incomplete sample set. + +##### simple + +Simple moving average (SMA) of the first `PERIOD` samples. +This is the method used by [ta-lib](https://www.ta-lib.org/). + +##### exponential + +Exponential moving average (EMA) with scaling alpha (α). +Uses an EMA with `PERIOD=1` for the first point, `PERIOD=2` +for the second point, and so on, until the algorithm has consumed `PERIOD` number of points. 
+As the algorithm immediately starts using an EMA, when this method is used and
+`HOLD_PERIOD` is unspecified or `-1`, the algorithm may start emitting points
+after a much smaller sample size than with `simple`.
+
+##### none
+
+The algorithm does not perform any smoothing at all.
+This is the method used by [ta-lib](https://www.ta-lib.org/).
+When this method is used and `HOLD_PERIOD` is unspecified, `HOLD_PERIOD`
+defaults to `PERIOD - 1`.
+
+{{% note %}}
+**Note:** The `none` warmup type is only available with the [`CHANDE_MOMENTUM_OSCILLATOR()`](#chande_momentum_oscillator) function.
+{{% /note %}}
+
+## CHANDE_MOMENTUM_OSCILLATOR()
+
+The Chande Momentum Oscillator (CMO) is a technical momentum indicator developed by Tushar Chande.
+The CMO indicator is created by calculating the difference between the sum of all
+recent higher data points and the sum of all recent lower data points,
+then dividing the result by the sum of all data movement over a given time period.
+The result is multiplied by 100 to give the -100 to +100 range.
+Source
+
+Supports `GROUP BY` clauses that [group by tags](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-tags) but not `GROUP BY` clauses that [group by time](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals). To use `CHANDE_MOMENTUM_OSCILLATOR()` with a `GROUP BY time()` clause, see [Advanced syntax](/influxdb/v2.6/query-data/influxql/functions/transformations/#advanced-syntax).
+
+### Basic syntax
+
+```
+CHANDE_MOMENTUM_OSCILLATOR([ * | <field_key> | /regular_expression/ ], <period>[, <hold_period>, [warmup_type]])
+```
+
+### Arguments
+
+- [period](#period)
+- (Optional) [hold_period](#hold_period)
+- (Optional) [warmup_type](#warmup_type)
+
+`CHANDE_MOMENTUM_OSCILLATOR(field_key, 2)`
+Returns the field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key)
+processed using the Chande Momentum Oscillator algorithm with a 2-value period
+and the default hold period and warmup type.
+
+`CHANDE_MOMENTUM_OSCILLATOR(field_key, 10, 9, 'none')`
+Returns the field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key)
+processed using the Chande Momentum Oscillator algorithm with a 10-value period,
+a 9-value hold period, and the `none` warmup type.
+
+`CHANDE_MOMENTUM_OSCILLATOR(MEAN(<field_key>), 2) ... GROUP BY time(1d)`
+Returns the mean of field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key)
+processed using the Chande Momentum Oscillator algorithm with a 2-value period
+and the default hold period and warmup type.
+
+{{% note %}}
+**Note:** When aggregating data with a `GROUP BY` clause, you must include an [aggregate function](/influxdb/v2.6/query-data/influxql/functions/aggregates/) in your call to the `CHANDE_MOMENTUM_OSCILLATOR()` function.
+{{% /note %}}
+
+`CHANDE_MOMENTUM_OSCILLATOR(/regular_expression/, 2)`
+Returns the field values associated with each field key that matches the [regular expression](/influxdb/v2.6/query-data/influxql/explore-data/regular-expressions/)
+processed using the Chande Momentum Oscillator algorithm with a 2-value period
+and the default hold period and warmup type.
+
+`CHANDE_MOMENTUM_OSCILLATOR(*, 2)`
+Returns the field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement)
+processed using the Chande Momentum Oscillator algorithm with a 2-value period
+and the default hold period and warmup type.
+
+`CHANDE_MOMENTUM_OSCILLATOR()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types).
+
+## EXPONENTIAL_MOVING_AVERAGE()
+
+An exponential moving average (EMA) (or exponentially weighted moving average) is a type of moving average similar to a [simple moving average](/influxdb/v2.6/query-data/influxql/functions/transformations/#moving_average),
+except more weight is given to the latest data.
+
+This type of moving average reacts faster to recent data changes than a simple moving average.
+Source
+
+Supports `GROUP BY` clauses that [group by tags](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-tags) but not `GROUP BY` clauses that [group by time](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals).
+To use `EXPONENTIAL_MOVING_AVERAGE()` with a `GROUP BY time()` clause, see [Advanced syntax](/influxdb/v2.6/query-data/influxql/functions/transformations/#advanced-syntax).
+
+### Basic syntax
+
+```sql
+EXPONENTIAL_MOVING_AVERAGE([ * | <field_key> | /regular_expression/ ], <period>[, <hold_period>, [warmup_type]])
+```
+
+`EXPONENTIAL_MOVING_AVERAGE(field_key, 2)`
+Returns the field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key)
+processed using the Exponential Moving Average algorithm with a 2-value period
+and the default hold period and warmup type.
+
+`EXPONENTIAL_MOVING_AVERAGE(MEAN(<field_key>), 2) ... GROUP BY time(1d)`
+Returns the mean of field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key)
+processed using the Exponential Moving Average algorithm with a 2-value period
+and the default hold period and warmup type.
+
+{{% note %}}
+**Note:** When aggregating data with a `GROUP BY` clause, you must include an [aggregate function](/influxdb/v2.6/query-data/influxql/functions/aggregates/) in your call to the `EXPONENTIAL_MOVING_AVERAGE()` function.
+{{% /note %}}
+
+`EXPONENTIAL_MOVING_AVERAGE(/regular_expression/, 2)`
+Returns the field values associated with each field key that matches the [regular expression](/influxdb/v2.6/query-data/influxql/explore-data/regular-expressions/)
+processed using the Exponential Moving Average algorithm with a 2-value period
+and the default hold period and warmup type.
+
+`EXPONENTIAL_MOVING_AVERAGE(*, 2)`
+Returns the field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement)
+processed using the Exponential Moving Average algorithm with a 2-value period
+and the default hold period and warmup type.
+
+`EXPONENTIAL_MOVING_AVERAGE()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types).
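To make the weighting concrete, here is a minimal standalone Python sketch of a recursive EMA (the function name is ours, not an InfluxQL API). It uses the conventional decay `alpha = 2 / (period + 1)` and seeds the average with the first value, which approximates, but does not exactly reproduce, InfluxDB's `simple` and `exponential` warmup handling:

```python
def exponential_moving_average(values, period):
    """Recursive EMA with decay alpha = 2 / (period + 1).

    Illustrative sketch only: seeded with the first value, so the
    first few outputs differ from InfluxDB's warmup-type handling.
    """
    alpha = 2 / (period + 1)
    ema = values[0]
    out = [ema]
    for v in values[1:]:
        # alpha weights the latest point; the rest carries the history
        ema = alpha * v + (1 - alpha) * ema
        out.append(ema)
    return out

print(exponential_moving_average([1.0, 2.0, 3.0], 2))
```

Because each output folds the previous average into the next, the result tracks rises and falls sooner than a simple moving average of the same period.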
+
+### Arguments
+
+- [period](#period)
+- (Optional) [hold_period](#hold_period)
+- (Optional) [warmup_type](#warmup_type)
+
+## DOUBLE_EXPONENTIAL_MOVING_AVERAGE()
+
+The Double Exponential Moving Average (DEMA) attempts to remove the inherent lag
+associated with moving averages by placing more weight on recent values.
+The name suggests this is achieved by applying a double exponential smoothing, which is not the case.
+The value of an [EMA](#exponential_moving_average) is doubled.
+To keep the value in line with the actual data and to remove the lag, the value "EMA of EMA"
+is subtracted from the previously doubled EMA.
+Source
+
+Supports `GROUP BY` clauses that [group by tags](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-tags) but not `GROUP BY` clauses that [group by time](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals).
+To use `DOUBLE_EXPONENTIAL_MOVING_AVERAGE()` with a `GROUP BY time()` clause, see [Advanced syntax](/influxdb/v2.6/query-data/influxql/functions/transformations/#advanced-syntax).
+
+### Basic syntax
+
+```
+DOUBLE_EXPONENTIAL_MOVING_AVERAGE([ * | <field_key> | /regular_expression/ ], <period>[, <hold_period>, [warmup_type]])
+```
+
+`DOUBLE_EXPONENTIAL_MOVING_AVERAGE(field_key, 2)`
+Returns the field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key)
+processed using the Double Exponential Moving Average algorithm with a 2-value period
+and the default hold period and warmup type.
+
+`DOUBLE_EXPONENTIAL_MOVING_AVERAGE(MEAN(<field_key>), 2) ... GROUP BY time(1d)`
+Returns the mean of field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key)
+processed using the Double Exponential Moving Average algorithm with a 2-value period
+and the default hold period and warmup type.
+
+{{% note %}}
+**Note:** When aggregating data with a `GROUP BY` clause, you must include an [aggregate function](/influxdb/v2.6/query-data/influxql/functions/aggregates/) in your call to the `DOUBLE_EXPONENTIAL_MOVING_AVERAGE()` function.
+{{% /note %}}
+
+`DOUBLE_EXPONENTIAL_MOVING_AVERAGE(/regular_expression/, 2)`
+Returns the field values associated with each field key that matches the [regular expression](/influxdb/v2.6/query-data/influxql/explore-data/regular-expressions/)
+processed using the Double Exponential Moving Average algorithm with a 2-value period
+and the default hold period and warmup type.
+
+`DOUBLE_EXPONENTIAL_MOVING_AVERAGE(*, 2)`
+Returns the field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement)
+processed using the Double Exponential Moving Average algorithm with a 2-value period
+and the default hold period and warmup type.
+
+`DOUBLE_EXPONENTIAL_MOVING_AVERAGE()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types).
+
+### Arguments
+
+- [period](#period)
+- (Optional) [hold_period](#hold_period)
+- (Optional) [warmup_type](#warmup_type)
+
+## KAUFMANS_EFFICIENCY_RATIO()
+
+Kaufman's Efficiency Ratio, or simply "Efficiency Ratio" (ER), is calculated by
+dividing the data change over a period by the absolute sum of the data movements
+that occurred to achieve that change.
+The resulting ratio ranges between 0 and 1 with higher values representing a
+more efficient or trending market.
+
+The ER is very similar to the [Chande Momentum Oscillator](#chande_momentum_oscillator) (CMO).
+The difference is that the CMO takes market direction into account, but if you take the absolute CMO and divide by 100, you get the Efficiency Ratio.
+Source
+
+Supports `GROUP BY` clauses that [group by tags](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-tags) but not `GROUP BY` clauses that [group by time](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals).
+To use `KAUFMANS_EFFICIENCY_RATIO()` with a `GROUP BY time()` clause, see [Advanced syntax](/influxdb/v2.6/query-data/influxql/functions/transformations/#advanced-syntax).
+
+### Basic syntax
+
+```
+KAUFMANS_EFFICIENCY_RATIO([ * | <field_key> | /regular_expression/ ], <period>[, <hold_period>])
+```
+
+`KAUFMANS_EFFICIENCY_RATIO(field_key, 2)`
+Returns the field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key)
+processed using the Efficiency Index algorithm with a 2-value period
+and the default hold period.
+
+`KAUFMANS_EFFICIENCY_RATIO(field_key, 10, 10)`
+Returns the field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key)
+processed using the Efficiency Index algorithm with a 10-value period and
+a 10-value hold period.
+
+`KAUFMANS_EFFICIENCY_RATIO(MEAN(<field_key>), 2) ... GROUP BY time(1d)`
+Returns the mean of field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key)
+processed using the Efficiency Index algorithm with a 2-value period
+and the default hold period.
+
+{{% note %}}
+**Note:** When aggregating data with a `GROUP BY` clause, you must include an [aggregate function](/influxdb/v2.6/query-data/influxql/functions/aggregates/) in your call to the `KAUFMANS_EFFICIENCY_RATIO()` function.
+{{% /note %}}
+
+`KAUFMANS_EFFICIENCY_RATIO(/regular_expression/, 2)`
+Returns the field values associated with each field key that matches the [regular expression](/influxdb/v2.6/query-data/influxql/explore-data/regular-expressions/)
+processed using the Efficiency Index algorithm with a 2-value period
+and the default hold period.
+
+`KAUFMANS_EFFICIENCY_RATIO(*, 2)`
+Returns the field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement)
+processed using the Efficiency Index algorithm with a 2-value period
+and the default hold period.
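Outside InfluxDB, the ratio described above can be sketched in a few lines of Python (the function name is ours, not an InfluxQL API): the net change over the period divided by the absolute sum of the point-to-point movements in that period.

```python
def kaufmans_efficiency_ratio(values, period):
    # ER = |net change over `period`| / sum of |point-to-point moves|.
    # Sketch of the formula described above; InfluxDB's implementation
    # also applies the hold-period handling documented earlier.
    out = []
    for i in range(period, len(values)):
        change = abs(values[i] - values[i - period])
        movement = sum(abs(values[j] - values[j - 1])
                       for j in range(i - period + 1, i + 1))
        out.append(change / movement if movement else 0.0)
    return out

# A perfectly trending span is fully efficient (1.0);
# a span that retraces all of its movement scores 0.0.
print(kaufmans_efficiency_ratio([1.0, 2.0, 3.0, 2.0], 2))  # → [1.0, 0.0]
```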
+
+`KAUFMANS_EFFICIENCY_RATIO()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types).
+
+**Arguments:**
+
+- [period](#period)
+- (Optional) [hold_period](#hold_period)
+
+## KAUFMANS_ADAPTIVE_MOVING_AVERAGE()
+
+Kaufman's Adaptive Moving Average (KAMA) is a moving average designed to
+account for sample noise or volatility.
+KAMA will closely follow data points when the data swings are relatively small and noise is low.
+KAMA will adjust when the data swings widen and follow data from a greater distance.
+This trend-following indicator can be used to identify the overall trend,
+time turning points, and filter data movements.
+Source
+
+Supports `GROUP BY` clauses that [group by tags](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-tags) but not `GROUP BY` clauses that [group by time](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals).
+To use `KAUFMANS_ADAPTIVE_MOVING_AVERAGE()` with a `GROUP BY time()` clause, see [Advanced syntax](/influxdb/v2.6/query-data/influxql/functions/transformations/#advanced-syntax).
+
+### Basic syntax
+
+```
+KAUFMANS_ADAPTIVE_MOVING_AVERAGE([ * | <field_key> | /regular_expression/ ], <period>[, <hold_period>])
+```
+
+`KAUFMANS_ADAPTIVE_MOVING_AVERAGE(field_key, 2)`
+Returns the field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key)
+processed using the Kaufman Adaptive Moving Average algorithm with a 2-value period
+and the default hold period.
+
+`KAUFMANS_ADAPTIVE_MOVING_AVERAGE(field_key, 10, 10)`
+Returns the field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key)
+processed using the Kaufman Adaptive Moving Average algorithm with a 10-value period
+and a 10-value hold period.
+
+`KAUFMANS_ADAPTIVE_MOVING_AVERAGE(MEAN(<field_key>), 2) ... 
GROUP BY time(1d)`
+Returns the mean of field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key)
+processed using the Kaufman Adaptive Moving Average algorithm with a 2-value period
+and the default hold period.
+
+{{% note %}}
+**Note:** When aggregating data with a `GROUP BY` clause, you must include an [aggregate function](/influxdb/v2.6/query-data/influxql/functions/aggregates/) in your call to the `KAUFMANS_ADAPTIVE_MOVING_AVERAGE()` function.
+{{% /note %}}
+
+`KAUFMANS_ADAPTIVE_MOVING_AVERAGE(/regular_expression/, 2)`
+Returns the field values associated with each field key that matches the [regular expression](/influxdb/v2.6/query-data/influxql/explore-data/regular-expressions/)
+processed using the Kaufman Adaptive Moving Average algorithm with a 2-value period
+and the default hold period.
+
+`KAUFMANS_ADAPTIVE_MOVING_AVERAGE(*, 2)`
+Returns the field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement)
+processed using the Kaufman Adaptive Moving Average algorithm with a 2-value period
+and the default hold period.
+
+`KAUFMANS_ADAPTIVE_MOVING_AVERAGE()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types).
+
+**Arguments:**
+- [period](#period)
+- (Optional) [hold_period](#hold_period)
+
+## TRIPLE_EXPONENTIAL_MOVING_AVERAGE()
+
+The triple exponential moving average (TEMA) filters out
+volatility from conventional moving averages.
+While the name implies that it's a triple exponential smoothing, it's actually a
+composite of a [single exponential moving average](#exponential_moving_average),
+a [double exponential moving average](#double_exponential_moving_average),
+and a triple exponential moving average.
+Source
+
+Supports `GROUP BY` clauses that [group by tags](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-tags) but not `GROUP BY` clauses that [group by time](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals).
+To use `TRIPLE_EXPONENTIAL_MOVING_AVERAGE()` with a `GROUP BY time()` clause, see [Advanced syntax](/influxdb/v2.6/query-data/influxql/functions/transformations/#advanced-syntax).
+
+### Basic syntax
+
+```
+TRIPLE_EXPONENTIAL_MOVING_AVERAGE([ * | <field_key> | /regular_expression/ ], <period>[, <hold_period>, [warmup_type]])
+```
+
+`TRIPLE_EXPONENTIAL_MOVING_AVERAGE(field_key, 2)`
+Returns the field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key)
+processed using the Triple Exponential Moving Average algorithm with a 2-value period
+and the default hold period and warmup type.
+
+`TRIPLE_EXPONENTIAL_MOVING_AVERAGE(MEAN(<field_key>), 2) ... GROUP BY time(1d)`
+Returns the mean of field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key)
+processed using the Triple Exponential Moving Average algorithm with a 2-value period
+and the default hold period and warmup type.
+
+{{% note %}}
+**Note:** When aggregating data with a `GROUP BY` clause, you must include an [aggregate function](/influxdb/v2.6/query-data/influxql/functions/aggregates/) in your call to the `TRIPLE_EXPONENTIAL_MOVING_AVERAGE()` function.
+{{% /note %}}
+
+`TRIPLE_EXPONENTIAL_MOVING_AVERAGE(/regular_expression/, 2)`
+Returns the field values associated with each field key that matches the [regular expression](/influxdb/v2.6/query-data/influxql/explore-data/regular-expressions/)
+processed using the Triple Exponential Moving Average algorithm with a 2-value period
+and the default hold period and warmup type.
+
+`TRIPLE_EXPONENTIAL_MOVING_AVERAGE(*, 2)`
+Returns the field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement)
+processed using the Triple Exponential Moving Average algorithm with a 2-value period
+and the default hold period and warmup type.
+
+`TRIPLE_EXPONENTIAL_MOVING_AVERAGE()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types).
+
+**Arguments:**
+- [period](#period)
+- (Optional) [hold_period](#hold_period)
+- (Optional) [warmup_type](#warmup_type)
+
+## TRIPLE_EXPONENTIAL_DERIVATIVE()
+
+The triple exponential derivative indicator, commonly referred to as "TRIX," is
+an oscillator used to identify oversold and overbought markets, and can also be
+used as a momentum indicator.
+TRIX calculates a [triple exponential moving average](#triple_exponential_moving_average)
+of the [log](/influxdb/v2.6/query-data/influxql/functions/transformations/#log)
+of the data input over the period of time.
+The previous value is subtracted from the current value.
+This prevents cycles that are shorter than the defined period from being considered by the indicator.
+
+Like many oscillators, TRIX oscillates around a zero line. When used as an oscillator,
+a positive value indicates an overbought market while a negative value indicates an oversold market.
+When used as a momentum indicator, a positive value suggests momentum is increasing
+while a negative value suggests momentum is decreasing.
+Many analysts believe that when the TRIX crosses above the zero line it gives a
+buy signal, and when it closes below the zero line, it gives a sell signal.
+Source
+
+Supports `GROUP BY` clauses that [group by tags](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-tags) but not `GROUP BY` clauses that [group by time](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals).
+To use `TRIPLE_EXPONENTIAL_DERIVATIVE()` with a `GROUP BY time()` clause, see [Advanced syntax](/influxdb/v2.6/query-data/influxql/functions/transformations/#advanced-syntax).
+
+### Basic syntax
+
+```
+TRIPLE_EXPONENTIAL_DERIVATIVE([ * | <field_key> | /regular_expression/ ], <period>[, <hold_period>, [warmup_type]])
+```
+
+`TRIPLE_EXPONENTIAL_DERIVATIVE(field_key, 2)`
+Returns the field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key)
+processed using the Triple Exponential Derivative algorithm with a 2-value period
+and the default hold period and warmup type.
+
+`TRIPLE_EXPONENTIAL_DERIVATIVE(MEAN(<field_key>), 2) ... 
GROUP BY time(1d)` +Returns the mean of field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key) +processed using the Triple Exponential Derivative algorithm with a 2-value period +and the default hold period and warmup type. + +{{% note %}} +**Note:** When aggregating data with a `GROUP BY` clause, you must include an [aggregate function](/influxdb/v2.6/query-data/influxql/functions/aggregates/) in your call to the `TRIPLE_EXPONENTIAL_DERIVATIVE()` function. +{{% /note %}} + +`TRIPLE_EXPONENTIAL_DERIVATIVE(/regular_expression/, 2)` +Returns the field values associated with each field key that matches the [regular expression](/influxdb/v2.6/query-data/influxql/explore-data/regular-expressions/) +processed using the Triple Exponential Derivative algorithm with a 2-value period +and the default hold period and warmup type. + +`TRIPLE_EXPONENTIAL_DERIVATIVE(*, 2)` +Returns the field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement) +processed using the Triple Exponential Derivative algorithm with a 2-value period +and the default hold period and warmup type. + +`TRIPLE_EXPONENTIAL_DERIVATIVE()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types). + +## RELATIVE_STRENGTH_INDEX() + +The relative strength index (RSI) is a momentum indicator that compares the magnitude of recent increases and decreases over a specified time period to measure speed and change of data movements. +Source + +Supports `GROUP BY` clauses that [group by tags](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-tags) but not `GROUP BY` clauses that [group by time](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals). + +To use `RELATIVE_STRENGTH_INDEX()` with a `GROUP BY time()` clause, see [Advanced syntax](/influxdb/v2.6/query-data/influxql/functions/transformations/#advanced-syntax). 
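As a standalone sketch of the comparison described above, the simple-average form of the calculation looks like this in Python. This is a hedged illustration only: RSI implementations, including InfluxDB's and ta-lib's, differ in how the gain and loss averages are smoothed, so exact values vary between them.

```python
def relative_strength_index(values, period):
    # Simple-average RSI over the trailing `period` changes (0-100).
    # Wilder-style smoothed variants weight history differently.
    deltas = [b - a for a, b in zip(values, values[1:])]
    out = []
    for i in range(period - 1, len(deltas)):
        window = deltas[i - period + 1 : i + 1]
        avg_gain = sum(d for d in window if d > 0) / period
        avg_loss = -sum(d for d in window if d < 0) / period
        if avg_loss == 0:
            out.append(100.0)  # all movement in the window was upward
        else:
            out.append(100.0 - 100.0 / (1.0 + avg_gain / avg_loss))
    return out

# Steady increases pin the index at 100; steady decreases pin it at 0.
print(relative_strength_index([1.0, 2.0, 3.0], 2))  # → [100.0]
print(relative_strength_index([3.0, 2.0, 1.0], 2))  # → [0.0]
```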

### Basic syntax

```
RELATIVE_STRENGTH_INDEX([ * | <field_key> | /regular_expression/ ], <period>[, <hold_period>[, <warmup_type>]]) [INTO_clause] FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
```

`RELATIVE_STRENGTH_INDEX(field_key, 2)`
Returns the field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key)
processed using the Relative Strength Index algorithm with a 2-value period
and the default hold period and warmup type.

`RELATIVE_STRENGTH_INDEX(MEAN(<field_key>), 2) ... GROUP BY time(1d)`
Returns the mean of field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key)
processed using the Relative Strength Index algorithm with a 2-value period
and the default hold period and warmup type.

{{% note %}}
**Note:** When aggregating data with a `GROUP BY` clause, you must include an [aggregate function](/influxdb/v2.6/query-data/influxql/functions/aggregates/) in your call to the `RELATIVE_STRENGTH_INDEX()` function.
{{% /note %}}

`RELATIVE_STRENGTH_INDEX(/regular_expression/, 2)`
Returns the field values associated with each field key that matches the [regular expression](/influxdb/v2.6/query-data/influxql/explore-data/regular-expressions/)
processed using the Relative Strength Index algorithm with a 2-value period
and the default hold period and warmup type.

`RELATIVE_STRENGTH_INDEX(*, 2)`
Returns the field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement)
processed using the Relative Strength Index algorithm with a 2-value period
and the default hold period and warmup type.

`RELATIVE_STRENGTH_INDEX()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types).

**Arguments:**
- [period](#period)
- (Optional) [hold_period](#hold_period)
- (Optional) [warmup_type](#warmup_type)
diff --git a/content/influxdb/v2.6/query-data/influxql/functions/transformations.md b/content/influxdb/v2.6/query-data/influxql/functions/transformations.md
new file mode 100644
index 000000000..0fcbb53cd
--- /dev/null
+++ b/content/influxdb/v2.6/query-data/influxql/functions/transformations.md
@@ -0,0 +1,4265 @@
---
title: InfluxQL transformation functions
list_title: Transformation functions
description: >
  Use transformation functions to modify and return values in each row of queried data.
menu:
  influxdb_2_6:
    name: Transformations
    parent: InfluxQL functions
weight: 205
---

InfluxQL transformation functions modify and return values in each row of queried data.

Each transformation function below covers **syntax**, including parameters to pass to the function, and **examples** of how to use the function. Examples use [NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data) and data from [sample_test.txt](https://gist.github.com/sanderson/244e3dc2d778d5c37783483c6c2b548a).

- [ABS()](#abs)
- [ACOS()](#acos)
- [ASIN()](#asin)
- [ATAN()](#atan)
- [ATAN2()](#atan2)
- [CEIL()](#ceil)
- [COS()](#cos)
- [CUMULATIVE_SUM()](#cumulative_sum)
- [DERIVATIVE()](#derivative)
- [DIFFERENCE()](#difference)
- [ELAPSED()](#elapsed)
- [EXP()](#exp)
- [FLOOR()](#floor)
- [HISTOGRAM()](#histogram)
- [LN()](#ln)
- [LOG()](#log)
- [LOG2()](#log2)
- [LOG10()](#log10)
- [MOVING_AVERAGE()](#moving_average)
- [NON_NEGATIVE_DERIVATIVE()](#non_negative_derivative)
- [NON_NEGATIVE_DIFFERENCE()](#non_negative_difference)
- [POW()](#pow)
- [ROUND()](#round)
- [SIN()](#sin)
- [SQRT()](#sqrt)
- [TAN()](#tan)

## ABS()

Returns the absolute value of the field value. Supports `GROUP BY` clauses that [group by tags](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-tags) but not [group by time](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals).

### Basic syntax

```sql
SELECT ABS( [ * | <field_key> ] ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
```

`ABS(field_key)`
Returns the absolute values of field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key).

`ABS(*)`
Returns the absolute values of field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement).

`ABS()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types).

#### Examples

{{< expand-wrapper >}}

{{% expand "Calculate the absolute values of field values associated with a field key" %}}

Return the absolute values of field values in the `water_level` field key in the `h2o_feet` measurement.

```sql
SELECT ABS("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:15:00Z'
```

{{% influxql/table-meta %}}
name: h2o_feet
{{% /influxql/table-meta %}}

| time | abs |
| :------------------- | -----------: |
| 2019-08-18T00:00:00Z | 8.5040000000 |
| 2019-08-18T00:00:00Z | 2.3520000000 |
| 2019-08-18T00:06:00Z | 8.4190000000 |
| 2019-08-18T00:06:00Z | 2.3790000000 |
| 2019-08-18T00:12:00Z | 8.3200000000 |
| 2019-08-18T00:12:00Z | 2.3430000000 |

{{% /expand %}}

{{% expand "Calculate the absolute values of field values associated with each field key in a measurement" %}}

Return the absolute values of field values for each field key that stores numeric values in the `h2o_feet` measurement.
The `h2o_feet` measurement has one numeric field: `water_level`.

```sql
SELECT ABS(*) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:15:00Z'
```

{{% influxql/table-meta %}}
name: h2o_feet
{{% /influxql/table-meta %}}

| time | abs_water_level |
| :------------------- | --------------: |
| 2019-08-18T00:00:00Z | 8.5040000000 |
| 2019-08-18T00:00:00Z | 2.3520000000 |
| 2019-08-18T00:06:00Z | 8.4190000000 |
| 2019-08-18T00:06:00Z | 2.3790000000 |
| 2019-08-18T00:12:00Z | 8.3200000000 |
| 2019-08-18T00:12:00Z | 2.3430000000 |

{{% /expand %}}

{{% expand "Calculate the absolute values of field values associated with a field key and include several clauses" %}}

Return the absolute values of field values associated with the `water_level`
field key in the [time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax)
between `2019-08-18T00:00:00Z` and `2019-08-18T00:30:00Z` with results in
[descending timestamp order](/influxdb/v2.6/query-data/influxql/explore-data/order-by/).
The query also [limits](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/)
the number of points returned to four and [offsets](/influxdb/v2.6/query-data/influxql/explore-data/offset-and-soffset/)
results by two points.

```sql
SELECT ABS("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' ORDER BY time DESC LIMIT 4 OFFSET 2
```

{{% influxql/table-meta %}}
name: h2o_feet
{{% /influxql/table-meta %}}

| time | abs |
| :------------------- | -----------: |
| 2019-08-18T00:24:00Z | 2.2640000000 |
| 2019-08-18T00:24:00Z | 8.1300000000 |
| 2019-08-18T00:18:00Z | 2.3290000000 |
| 2019-08-18T00:18:00Z | 8.2250000000 |

{{% /expand %}}

{{< /expand-wrapper >}}

### Advanced syntax

```sql
SELECT ABS(<function>( [ * | <field_key> ] )) FROM_clause [WHERE_clause] GROUP_BY_clause [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
```

The advanced syntax requires a [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) and a nested InfluxQL function.
The query first calculates the results for the nested function at the specified `GROUP BY time()` interval and then applies the `ABS()` function to those results.

`ABS()` supports the following nested functions:

[`COUNT()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#count),
[`MEAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean),
[`MEDIAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#median),
[`MODE()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mode),
[`SUM()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#sum),
[`FIRST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#first),
[`LAST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#last),
[`MIN()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#min),
[`MAX()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#max), and
[`PERCENTILE()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#percentile).
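The two-step evaluation used by the advanced syntax (aggregate per `GROUP BY time()` interval first, then apply the outer function to each interval's result) can be mimicked outside InfluxQL. A minimal Python sketch, using hypothetical bucketed points rather than real query results:

```python
# Sketch of advanced-syntax evaluation: bucket points into GROUP BY time()
# intervals, aggregate each bucket, then apply the outer function (here ABS)
# to each interval's aggregate. Data values below are hypothetical.
def group_by_time_then_apply(points, interval, agg, outer):
    buckets = {}
    for t, value in points:  # t = seconds since an arbitrary epoch
        start = t // interval * interval
        buckets.setdefault(start, []).append(value)
    return {start: outer(agg(vals)) for start, vals in sorted(buckets.items())}

points = [(0, -2.0), (300, -4.0), (720, 3.0), (1000, 5.0)]
# 12-minute (720 s) intervals: MEAN first, then ABS of each interval's mean.
means = group_by_time_then_apply(points, 720,
                                 lambda v: sum(v) / len(v), abs)
```

The first bucket's mean is -3.0, which `ABS()` turns into 3.0; applying `ABS()` before the aggregation would have produced a different answer, which is why the evaluation order matters.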

#### Examples

{{< expand-wrapper >}}

{{% expand "Calculate the absolute values of mean values" %}}

Return the absolute values of [mean](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean) `water_level`s that are calculated at 12-minute intervals.

```sql
SELECT ABS(MEAN("water_level")) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' GROUP BY time(12m)
```

{{% influxql/table-meta %}}
name: h2o_feet
{{% /influxql/table-meta %}}

| time | abs |
| :------------------- | -----------: |
| 2019-08-18T00:00:00Z | 5.4135000000 |
| 2019-08-18T00:12:00Z | 5.3042500000 |
| 2019-08-18T00:24:00Z | 5.1682500000 |

{{% /expand %}}

{{< /expand-wrapper >}}

## ACOS()

Returns the arccosine (in radians) of the field value. Field values must be between -1 and 1. Supports `GROUP BY` clauses that [group by tags](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-tags) but does not support [group by time](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals).

### Basic syntax

```sql
SELECT ACOS( [ * | <field_key> ] ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
```

`ACOS(field_key)`
Returns the arccosine of field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key).

`ACOS(*)`
Returns the arccosine of field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement).

`ACOS()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types) with values between -1 and 1.

#### Examples

The examples below use a subset of data from [sample_test.txt](https://gist.github.com/sanderson/244e3dc2d778d5c37783483c6c2b548a), which only includes field values within the calculable range (-1 to 1).
This value range is required for the `ACOS()` function:

| time | a |
| :------------------- | -----------------: |
| 2018-06-24T12:01:00Z | -0.774984088561186 |
| 2018-06-24T12:02:00Z | -0.921037167720451 |
| 2018-06-24T12:04:00Z | -0.905980032168252 |
| 2018-06-24T12:05:00Z | -0.891164752631417 |
| 2018-06-24T12:09:00Z | 0.416579917279588 |
| 2018-06-24T12:10:00Z | 0.328968116955350 |
| 2018-06-24T12:11:00Z | 0.263585064411983 |

{{< expand-wrapper >}}

{{% expand "Calculate the arccosine of field values associated with a field key" %}}

Return the arccosine of field values in the `a` field key in the `data` measurement.

```sql
SELECT ACOS("a") FROM "data" WHERE time >= '2018-06-24T00:00:00Z' AND time <= '2018-06-25T00:00:00Z'
```

{{% influxql/table-meta %}}
name: data
{{% /influxql/table-meta %}}

| time | acos |
| :------------------- | -----------: |
| 2018-06-24T12:00:00Z | |
| 2018-06-24T12:01:00Z | 2.4574862443 |
| 2018-06-24T12:02:00Z | 2.7415314737 |
| 2018-06-24T12:03:00Z | |
| 2018-06-24T12:04:00Z | 2.7044854503 |
| 2018-06-24T12:05:00Z | 2.6707024029 |
| 2018-06-24T12:06:00Z | |
| 2018-06-24T12:07:00Z | |
| 2018-06-24T12:08:00Z | |
| 2018-06-24T12:09:00Z | 1.1411163210 |
| 2018-06-24T12:10:00Z | 1.2355856616 |
| 2018-06-24T12:11:00Z | 1.3040595066 |

{{% /expand %}}

{{% expand "Calculate the arccosine of field values associated with each field key in a measurement" %}}

Return the arccosine of field values for each field key that stores numeric values in the `data` measurement--fields `a` and `b`.
+ +```sql +SELECT ACOS(*) FROM "data" WHERE time >= '2018-06-24T00:00:00Z' AND time <= '2018-06-25T00:00:00Z' +``` + +{{% influxql/table-meta %}} +name: data +{{% /influxql/table-meta %}} + +| time | acos_a | acos_b | +| :------------------- | -----------: | -----------: | +| 2018-06-24T12:00:00Z | | 1.7351786976 | +| 2018-06-24T12:01:00Z | 2.4574862443 | 1.4333294161 | +| 2018-06-24T12:02:00Z | 2.7415314737 | 2.0748091141 | +| 2018-06-24T12:03:00Z | | 1.6438345404 | +| 2018-06-24T12:04:00Z | 2.7044854503 | | +| 2018-06-24T12:05:00Z | 2.6707024029 | 0.7360183965 | +| 2018-06-24T12:06:00Z | | 1.2789990384 | +| 2018-06-24T12:07:00Z | | 2.1522589654 | +| 2018-06-24T12:08:00Z | | 0.6128438977 | +| 2018-06-24T12:09:00Z | 1.1411163210 | | +| 2018-06-24T12:10:00Z | 1.2355856616 | | +| 2018-06-24T12:11:00Z | 1.3040595066 | 1.7595349692 | +| 2018-06-24T12:12:00Z | 1.8681669412 | 2.5213034266 | + +{{% /expand %}} + +{{% expand "Calculate the arccosine of field values associated with a field key and include several clauses" %}} + +Return the arccosine of field values associated with the `a` field key in the [time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax) between `2018-06-24T00:00:00Z` and `2018-06-25T00:00:00Z` with results in [descending timestamp order](/influxdb/v2.6/query-data/influxql/explore-data/order-by/). +The query also [limits](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/) the number of points returned to four and [offsets](/influxdb/v2.6/query-data/influxql/explore-data/offset-and-soffset/) results by two points. 

```sql
SELECT ACOS("a") FROM "data" WHERE time >= '2018-06-24T00:00:00Z' AND time <= '2018-06-25T00:00:00Z' ORDER BY time DESC LIMIT 4 OFFSET 2
```

{{% influxql/table-meta %}}
name: data
{{% /influxql/table-meta %}}

| time | acos |
| :------------------- | -----------: |
| 2018-06-24T23:58:00Z | 1.5361053361 |
| 2018-06-24T23:57:00Z | |
| 2018-06-24T23:56:00Z | 0.5211076815 |
| 2018-06-24T23:55:00Z | 1.6476950850 |

{{% /expand %}}

{{< /expand-wrapper >}}

### Advanced syntax

```sql
SELECT ACOS(<function>( [ * | <field_key> ] )) FROM_clause [WHERE_clause] GROUP_BY_clause [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
```

The advanced syntax requires a [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) and a nested InfluxQL function.
The query first calculates the results for the nested function at the specified `GROUP BY time()` interval and then applies the `ACOS()` function to those results.

`ACOS()` supports the following nested functions:
[`COUNT()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#count),
[`MEAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean),
[`MEDIAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#median),
[`MODE()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mode),
[`SUM()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#sum),
[`FIRST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#first),
[`LAST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#last),
[`MIN()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#min),
[`MAX()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#max), and
[`PERCENTILE()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#percentile).

#### Examples

{{< expand-wrapper >}}

{{% expand "Calculate the arccosine of mean values" %}}

Return the arccosine of [mean](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean) `a`s that are calculated at 3-hour intervals.

```sql
SELECT ACOS(MEAN("a")) FROM "data" WHERE time >= '2018-06-24T00:00:00Z' AND time <= '2018-06-25T00:00:00Z' GROUP BY time(3h)
```

{{% influxql/table-meta %}}
name: data
{{% /influxql/table-meta %}}

| time | acos |
| :------------------- | -----------: |
| 2018-06-24T00:00:00Z | |
| 2018-06-24T03:00:00Z | |
| 2018-06-24T06:00:00Z | |
| 2018-06-24T09:00:00Z | |
| 2018-06-24T12:00:00Z | 1.5651603194 |
| 2018-06-24T15:00:00Z | 1.6489104619 |
| 2018-06-24T18:00:00Z | 1.4851295699 |
| 2018-06-24T21:00:00Z | 1.6209901549 |
| 2018-06-25T00:00:00Z | 1.7149309371 |

{{% /expand %}}

{{< /expand-wrapper >}}

## ASIN()

Returns the arcsine (in radians) of the field value. Field values must be between -1 and 1.

### Basic syntax

```sql
SELECT ASIN( [ * | <field_key> ] ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
```

`ASIN(field_key)`
Returns the arcsine of field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key).

`ASIN(*)`
Returns the arcsine of field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement).

`ASIN()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types) with values between -1 and 1.

Supports `GROUP BY` clauses that [group by tags](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-tags) but not `GROUP BY` clauses that [group by time](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals).
To use `ASIN()` with a `GROUP BY time()` clause, see [Advanced syntax](#advanced-syntax).
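When a field value falls outside the arcsine domain of -1 to 1, the row produces no result, which is why some timestamps in the example output tables appear with an empty cell. A small Python sketch of that behavior (the `None` return is an assumption standing in for the empty cell in query output):

```python
import math

# Values outside [-1, 1] have no arcsine; return None to stand in for
# the blank cell InfluxDB shows for such rows (an illustrative assumption).
def safe_asin(value):
    if -1.0 <= value <= 1.0:
        return math.asin(value)
    return None
```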

#### Examples

The examples below use the following data from [sample_test.txt](https://gist.github.com/sanderson/244e3dc2d778d5c37783483c6c2b548a).

This subset of the data set only includes field values within the calculable range (-1 to 1) required for the `ASIN()` function:

| time | a |
| :------------------- | -----------------: |
| 2018-06-24T12:01:00Z | -0.774984088561186 |
| 2018-06-24T12:02:00Z | -0.921037167720451 |
| 2018-06-24T12:04:00Z | -0.905980032168252 |
| 2018-06-24T12:05:00Z | -0.891164752631417 |
| 2018-06-24T12:09:00Z | 0.416579917279588 |
| 2018-06-24T12:10:00Z | 0.328968116955350 |
| 2018-06-24T12:11:00Z | 0.263585064411983 |

{{< expand-wrapper >}}

{{% expand "Calculate the arcsine of field values associated with a field key" %}}

Return the arcsine of field values in the `a` field key in the `data` measurement.

```sql
SELECT ASIN("a") FROM "data" WHERE time >= '2018-06-24T00:00:00Z' AND time <= '2018-06-25T00:00:00Z'
```

{{% influxql/table-meta %}}
name: data
{{% /influxql/table-meta %}}

| time | asin |
| :------------------- | ------------: |
| 2018-06-24T12:00:00Z | |
| 2018-06-24T12:01:00Z | -0.8866899175 |
| 2018-06-24T12:02:00Z | -1.1707351469 |
| 2018-06-24T12:03:00Z | |
| 2018-06-24T12:04:00Z | -1.1336891235 |
| 2018-06-24T12:05:00Z | -1.0999060761 |
| 2018-06-24T12:06:00Z | |
| 2018-06-24T12:07:00Z | |
| 2018-06-24T12:08:00Z | |
| 2018-06-24T12:09:00Z | 0.4296800058 |
| 2018-06-24T12:10:00Z | 0.3352106652 |
| 2018-06-24T12:11:00Z | 0.2667368202 |
| 2018-06-24T12:12:00Z | -0.2973706144 |

{{% /expand %}}

{{% expand "Calculate the arcsine of field values associated with each field key in a measurement" %}}

Return the arcsine of field values for each field key that stores numeric values in the `data` measurement.
The `data` measurement has two numeric fields: `a` and `b`.
+ +```sql +SELECT ASIN(*) FROM "data" WHERE time >= '2018-06-24T00:00:00Z' AND time <= '2018-06-25T00:00:00Z' +``` + +{{% influxql/table-meta %}} +name: data +{{% /influxql/table-meta %}} + +| time | asin_a | asin_b | +| :------------------- | ------------: | ------------: | +| 2018-06-24T12:00:00Z | | -0.1643823708 | +| 2018-06-24T12:01:00Z | -0.8866899175 | 0.1374669107 | +| 2018-06-24T12:02:00Z | -1.1707351469 | -0.5040127873 | +| 2018-06-24T12:03:00Z | | -0.0730382136 | +| 2018-06-24T12:04:00Z | -1.1336891235 | | +| 2018-06-24T12:05:00Z | -1.0999060761 | 0.8347779303 | +| 2018-06-24T12:06:00Z | | 0.2917972884 | +| 2018-06-24T12:07:00Z | | -0.5814626386 | +| 2018-06-24T12:08:00Z | | 0.9579524291 | +| 2018-06-24T12:09:00Z | 0.4296800058 | | +| 2018-06-24T12:10:00Z | 0.3352106652 | | +| 2018-06-24T12:11:00Z | 0.2667368202 | -0.1887386424 | +| 2018-06-24T12:12:00Z | -0.2973706144 | -0.9505070998 | + +{{% /expand %}} + +{{% expand "Calculate the arcsine of field values associated with a field key and include several clauses" %}} + +Return the arcsine of field values associated with the `a` field key in the +[time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax) +between `2018-06-24T00:00:00Z` and `2018-06-25T00:00:00Z` with results in +[descending timestamp order](/influxdb/v2.6/query-data/influxql/explore-data/order-by/). +The query also [limits](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/) +the number of points returned to four and [offsets](/influxdb/v2.6/query-data/influxql/explore-data/offset-and-soffset/) +results by two points. 

```sql
SELECT ASIN("a") FROM "data" WHERE time >= '2018-06-24T00:00:00Z' AND time <= '2018-06-25T00:00:00Z' ORDER BY time DESC LIMIT 4 OFFSET 2
```

{{% influxql/table-meta %}}
name: data
{{% /influxql/table-meta %}}

| time | asin |
| :------------------- | -----------: |
| 2018-06-24T23:58:00Z | 0.0346909907 |
| 2018-06-24T23:57:00Z | |
| 2018-06-24T23:56:00Z | 1.0496886453 |
| 2018-06-24T23:55:00Z | 0.0768987583 |

{{% /expand %}}

{{< /expand-wrapper >}}

### Advanced syntax

```sql
SELECT ASIN(<function>( [ * | <field_key> ] )) FROM_clause [WHERE_clause] GROUP_BY_clause [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
```

The advanced syntax requires a [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) and a nested InfluxQL function.
The query first calculates the results for the nested function at the specified `GROUP BY time()` interval and then applies the `ASIN()` function to those results.

`ASIN()` supports the following nested functions:
[`COUNT()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#count),
[`MEAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean),
[`MEDIAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#median),
[`MODE()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mode),
[`SUM()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#sum),
[`FIRST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#first),
[`LAST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#last),
[`MIN()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#min),
[`MAX()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#max), and
[`PERCENTILE()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#percentile).

#### Examples

{{< expand-wrapper >}}

{{% expand "Calculate the arcsine of mean values" %}}

Return the arcsine of [mean](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean) `a`s that are calculated at 3-hour intervals.

```sql
SELECT ASIN(MEAN("a")) FROM "data" WHERE time >= '2018-06-24T00:00:00Z' AND time <= '2018-06-25T00:00:00Z' GROUP BY time(3h)
```

{{% influxql/table-meta %}}
name: data
{{% /influxql/table-meta %}}

| time | asin |
| :------------------- | ------------: |
| 2018-06-24T00:00:00Z | |
| 2018-06-24T03:00:00Z | |
| 2018-06-24T06:00:00Z | |
| 2018-06-24T09:00:00Z | |
| 2018-06-24T12:00:00Z | 0.0056360073 |
| 2018-06-24T15:00:00Z | -0.0781141351 |
| 2018-06-24T18:00:00Z | 0.0856667569 |
| 2018-06-24T21:00:00Z | -0.0501938281 |
| 2018-06-25T00:00:00Z | -0.1441346103 |

{{% /expand %}}

{{< /expand-wrapper >}}

## ATAN()

Returns the arctangent (in radians) of the field value. Field values must be between -1 and 1.

Supports `GROUP BY` clauses that [group by tags](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-tags) but not `GROUP BY` clauses that [group by time](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals).
To use `ATAN()` with a `GROUP BY time()` clause, see [Advanced syntax](#advanced-syntax).

### Basic syntax

```sql
SELECT ATAN( [ * | <field_key> ] ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
```

`ATAN(field_key)`
Returns the arctangent of field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key).

`ATAN(*)`
Returns the arctangent of field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement).

`ATAN()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types) with values between -1 and 1.

#### Examples

The examples below use a subset of data from [sample_test.txt](https://gist.github.com/sanderson/244e3dc2d778d5c37783483c6c2b548a) that only includes field values within the calculable range (-1 to 1) required for the `ATAN()` function.

{{< expand-wrapper >}}

{{% expand "Calculate the arctangent of field values associated with a field key" %}}

Return the arctangent of field values in the `a` field key in the `data` measurement.

```sql
SELECT ATAN("a") FROM "data" WHERE time >= '2018-06-24T00:00:00Z' AND time <= '2018-06-25T00:00:00Z'
```

{{% influxql/table-meta %}}
name: data
{{% /influxql/table-meta %}}

| time | atan |
| :------------------- | ------------: |
| 2018-06-24T12:00:00Z | 0.9293622934 |
| 2018-06-24T12:01:00Z | -0.6593001275 |
| 2018-06-24T12:02:00Z | -0.7443170184 |
| 2018-06-24T12:03:00Z | -1.0488818071 |
| 2018-06-24T12:04:00Z | -0.7361091801 |
| 2018-06-24T12:05:00Z | -0.7279122495 |
| 2018-06-24T12:06:00Z | 0.8379907133 |
| 2018-06-24T12:07:00Z | -0.9117032768 |
| 2018-06-24T12:08:00Z | -1.0364006848 |
| 2018-06-24T12:09:00Z | 0.3947172008 |
| 2018-06-24T12:10:00Z | 0.3178167283 |
| 2018-06-24T12:11:00Z | 0.2577231762 |
| 2018-06-24T12:12:00Z | -0.2850291359 |

{{% /expand %}}

{{% expand "Calculate the arctangent of field values associated with each field key in a measurement" %}}

Return the arctangent of field values for each field key that stores numeric values in the `data` measurement--fields `a` and `b`.

```sql
SELECT ATAN(*) FROM "data" WHERE time >= '2018-06-24T00:00:00Z' AND time <= '2018-06-25T00:00:00Z'
```

{{% influxql/table-meta %}}
name: data
{{% /influxql/table-meta %}}

| time | atan_a | atan_b |
| :------------------- | ------------: | ------------: |
| 2018-06-24T12:00:00Z | 0.9293622934 | -0.1622053541 |
| 2018-06-24T12:01:00Z | -0.6593001275 | 0.1361861379 |
| 2018-06-24T12:02:00Z | -0.7443170184 | -0.4499093122 |
| 2018-06-24T12:03:00Z | -1.0488818071 | -0.0728441751 |
| 2018-06-24T12:04:00Z | -0.7361091801 | 1.0585985451 |
| 2018-06-24T12:05:00Z | -0.7279122495 | 0.6378113578 |
| 2018-06-24T12:06:00Z | 0.8379907133 | 0.2801105336 |
| 2018-06-24T12:07:00Z | -0.9117032768 | -0.5022647489 |
| 2018-06-24T12:08:00Z | -1.0364006848 | 0.6856298940 |
| 2018-06-24T12:09:00Z | 0.3947172008 | -0.8711781065 |
| 2018-06-24T12:10:00Z | 0.3178167283 | -0.8273348593 |
| 2018-06-24T12:11:00Z | 0.2577231762 | -0.1854639556 |
| 2018-06-24T12:12:00Z | -0.2850291359 | -0.6830451940 |

{{% /expand %}}

{{% expand "Calculate the arctangent of field values associated with a field key and include several clauses" %}}

Return the arctangent of field values associated with the `a` field key in the
[time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax)
between `2018-06-24T00:00:00Z` and `2018-06-25T00:00:00Z` with results in
[descending timestamp order](/influxdb/v2.6/query-data/influxql/explore-data/order-by/).
The query also [limits](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/)
the number of points returned to four and [offsets](/influxdb/v2.6/query-data/influxql/explore-data/offset-and-soffset/)
results by two points.

```sql
SELECT ATAN("a") FROM "data" WHERE time >= '2018-06-24T00:00:00Z' AND time <= '2018-06-25T00:00:00Z' ORDER BY time DESC LIMIT 4 OFFSET 2
```

{{% influxql/table-meta %}}
name: data
{{% /influxql/table-meta %}}

| time | atan |
| :------------------- | ------------: |
| 2018-06-24T23:58:00Z | 0.0346701348 |
| 2018-06-24T23:57:00Z | -0.8582372146 |
| 2018-06-24T23:56:00Z | 0.7144341473 |
| 2018-06-24T23:55:00Z | -0.0766723939 |

{{% /expand %}}

{{< /expand-wrapper >}}

### Advanced syntax

```sql
SELECT ATAN(<function>( [ * | <field_key> ] )) FROM_clause [WHERE_clause] GROUP_BY_clause [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
```

The advanced syntax requires a [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) and a nested InfluxQL function.
The query first calculates the results for the nested function at the specified `GROUP BY time()` interval and then applies the `ATAN()` function to those results.

`ATAN()` supports the following nested functions:
[`COUNT()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#count),
[`MEAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean),
[`MEDIAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#median),
[`MODE()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mode),
[`SUM()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#sum),
[`FIRST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#first),
[`LAST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#last),
[`MIN()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#min),
[`MAX()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#max), and
[`PERCENTILE()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#percentile).

#### Examples

{{< expand-wrapper >}}

{{% expand "Calculate the arctangent of mean values" %}}

Return the arctangent of [mean](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean) `a`s that are calculated at 3-hour intervals.

```sql
SELECT ATAN(MEAN("a")) FROM "data" WHERE time >= '2018-06-24T00:00:00Z' AND time <= '2018-06-25T00:00:00Z' GROUP BY time(3h)
```

{{% influxql/table-meta %}}
name: data
{{% /influxql/table-meta %}}

| time | atan |
| :------------------- | ------------: |
| 2018-06-24T00:00:00Z | |
| 2018-06-24T03:00:00Z | |
| 2018-06-24T06:00:00Z | |
| 2018-06-24T09:00:00Z | |
| 2018-06-24T12:00:00Z | 0.0056359178 |
| 2018-06-24T15:00:00Z | -0.0778769005 |
| 2018-06-24T18:00:00Z | 0.0853541301 |
| 2018-06-24T21:00:00Z | -0.0501307176 |
| 2018-06-25T00:00:00Z | -0.1426603174 |

{{% /expand %}}

{{< /expand-wrapper >}}

## ATAN2()

Returns the arctangent of `y/x` in radians.

### Basic syntax

```sql
SELECT ATAN2( [ * | <field_key_y> | num ], [ <field_key_x> | num ] ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
```

`ATAN2(field_key_y, field_key_x)`
Returns the arctangent of field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key), `field_key_y`, divided by field values associated with `field_key_x`.

`ATAN2(*, field_key_x)`
Returns the arctangent of field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement)
divided by field values associated with `field_key_x`.

`ATAN2()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types).
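Unlike dividing first and taking a plain arctangent, a two-argument arctangent uses the signs of both inputs to place the angle in the correct quadrant, so results cover the full range rather than just (-π/2, π/2). A quick Python illustration of the general math (not an InfluxDB-specific behavior):

```python
import math

y, x = 1.0, -1.0
# Dividing first loses the quadrant: 1.0 / -1.0 and -1.0 / 1.0 are both -1.0,
# so atan(y / x) gives -pi/4 either way.
print(math.atan(y / x))        # same as math.atan(-1.0)
# atan2 keeps the signs separate and lands in the second quadrant: 3*pi/4.
print(math.atan2(y, x))
```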

Supports `GROUP BY` clauses that [group by tags](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-tags) but not `GROUP BY` clauses that [group by time](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals).
To use `ATAN2()` with a `GROUP BY time()` clause, see [Advanced syntax](#advanced-syntax).

#### Examples

The examples below use [sample_test.txt](https://gist.github.com/sanderson/244e3dc2d778d5c37783483c6c2b548a).

{{< expand-wrapper >}}

{{% expand "Calculate the arctangent of field_key_a over field_key_b" %}}

Return the arctangents of field values in the `a` field key divided by values in the `b` field key. Both are part of the `data` measurement.

```sql
SELECT ATAN2("a", "b") FROM "data" WHERE time >= '2018-06-24T00:00:00Z' AND time <= '2018-06-25T00:00:00Z'
```

{{% influxql/table-meta %}}
name: data
{{% /influxql/table-meta %}}

| time | atan2 |
| :------------------- | ------------: |
| 2018-06-24T12:00:00Z | 1.6923979639 |
| 2018-06-24T12:01:00Z | -1.3957831900 |
| 2018-06-24T12:02:00Z | -2.0537314089 |
| 2018-06-24T12:03:00Z | -1.6127391493 |
| 2018-06-24T12:04:00Z | -0.4711275404 |
| 2018-06-24T12:05:00Z | -0.8770454978 |
| 2018-06-24T12:06:00Z | 1.3174573347 |
| 2018-06-24T12:07:00Z | -1.9730696643 |
| 2018-06-24T12:08:00Z | -1.1199236554 |
| 2018-06-24T12:09:00Z | 2.8043757212 |
| 2018-06-24T12:10:00Z | 2.8478694533 |
| 2018-06-24T12:11:00Z | 2.1893985296 |
| 2018-06-24T12:12:00Z | -2.7959592806 |

{{% /expand %}}

{{% expand "Calculate the arctangent of values associated with each field key in a measurement divided by field_key_a" %}}

Return the arctangents of all numeric field values in the `data` measurement divided by values in the `a` field key.
The `data` measurement has two numeric fields: `a` and `b`.
+ +```sql +SELECT ATAN2(*, "a") FROM "data" WHERE time >= '2018-06-24T00:00:00Z' AND time <= '2018-06-25T00:00:00Z' +``` + +{{% influxql/table-meta %}} +name: data +{{% /influxql/table-meta %}} + +| time | atan2_a | atan2_b | +| :------------------- | ------------: | ------------: | +| 2018-06-24T12:00:00Z | 0.7853981634 | -0.1216016371 | +| 2018-06-24T12:01:00Z | -2.3561944902 | 2.9665795168 | +| 2018-06-24T12:02:00Z | -2.3561944902 | -2.6586575715 | +| 2018-06-24T12:03:00Z | -2.3561944902 | -3.0996498311 | +| 2018-06-24T12:04:00Z | -2.3561944902 | 2.0419238672 | +| 2018-06-24T12:05:00Z | -2.3561944902 | 2.4478418246 | +| 2018-06-24T12:06:00Z | 0.7853981634 | 0.2533389921 | +| 2018-06-24T12:07:00Z | -2.3561944902 | -2.7393193161 | +| 2018-06-24T12:08:00Z | -2.3561944902 | 2.6907199822 | +| 2018-06-24T12:09:00Z | 0.7853981634 | -1.2335793944 | +| 2018-06-24T12:10:00Z | 0.7853981634 | -1.2770731265 | +| 2018-06-24T12:11:00Z | 0.7853981634 | -0.6186022028 | +| 2018-06-24T12:12:00Z | -2.3561944902 | -1.9164296997 | + +{{% /expand %}} + +{{% expand "Calculate the arctangents of field values and include several clauses" %}} + +Return the arctangent of field values associated with the `a` field key divided +by the `b` field key in the [time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax) +between `2018-06-24T00:00:00Z` and `2018-06-25T00:00:00Z` with results in +[descending timestamp order](/influxdb/v2.6/query-data/influxql/explore-data/order-by/). +The query also [limits](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/) +the number of points returned to four and [offsets](/influxdb/v2.6/query-data/influxql/explore-data/offset-and-soffset/) +results by two points.
+ +```sql +SELECT ATAN2("a", "b") FROM "data" WHERE time >= '2018-06-24T00:00:00Z' AND time <= '2018-06-25T00:00:00Z' ORDER BY time DESC LIMIT 4 OFFSET 2 +``` + +{{% influxql/table-meta %}} +name: data +{{% /influxql/table-meta %}} + +| time | atan2 | +| :------------------- | ------------: | +| 2018-06-24T23:58:00Z | 0.0166179004 | +| 2018-06-24T23:57:00Z | -2.3211306482 | +| 2018-06-24T23:56:00Z | 1.8506549463 | +| 2018-06-24T23:55:00Z | -0.0768444917 | + +{{% /expand %}} + +{{< /expand-wrapper >}} + +### Advanced syntax + +```sql +SELECT ATAN2( <function()>, <function()> ) FROM_clause [WHERE_clause] GROUP_BY_clause [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause] +``` + +The advanced syntax requires a [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) and a nested InfluxQL function. +The query first calculates the results for the nested function at the specified `GROUP BY time()` interval and then applies the `ATAN2()` function to those results. + +`ATAN2()` supports the following nested functions: +[`COUNT()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#count), +[`MEAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean), +[`MEDIAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#median), +[`MODE()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mode), +[`SUM()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#sum), +[`FIRST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#first), +[`LAST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#last), +[`MIN()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#min), +[`MAX()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#max), and +[`PERCENTILE()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#percentile).
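The two-step evaluation (aggregate per `GROUP BY time()` interval first, then apply `ATAN2()` to the aggregated pairs) can be sketched in Python. The interval samples below are illustrative values, not data from sample_test.txt:

```python
import math
from statistics import mean

# Hypothetical per-interval samples of fields "b" and "a"
# (illustrative values, not the sample_test.txt data).
intervals = {
    "2018-06-24T12:00:00Z": {"b": [2.0, 2.0], "a": [1.0, 3.0]},
    "2018-06-24T14:00:00Z": {"b": [4.0, 0.0], "a": [-1.0, -3.0]},
}

# Step 1: evaluate the nested function (here MEAN) per time interval.
# Step 2: apply atan2(y, x) to each pair of aggregated values.
results = {
    t: math.atan2(mean(f["b"]), mean(f["a"]))
    for t, f in intervals.items()
}
```

This mirrors the semantics of `ATAN2(MEAN("b"), MEAN("a")) … GROUP BY time(2h)`: the arctangent is taken of the interval means, not of the raw points.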
+ +#### Examples + +{{< expand-wrapper >}} + +{{% expand "Calculate arctangents of mean values" %}} + +Return the arctangents of [mean](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean) `b`s divided by mean `a`s. Averages are calculated at 2-hour intervals. + +```sql +SELECT ATAN2(MEAN("b"), MEAN("a")) FROM "data" WHERE time >= '2018-06-24T12:00:00Z' AND time <= '2018-06-25T00:00:00Z' GROUP BY time(2h) +``` + +{{% influxql/table-meta %}} +name: data +{{% /influxql/table-meta %}} + +| time | atan2 | +| :------------------- | ------------: | +| 2018-06-24T12:00:00Z | -0.8233039154 | +| 2018-06-24T14:00:00Z | 1.6676707651 | +| 2018-06-24T16:00:00Z | 2.3853882606 | +| 2018-06-24T18:00:00Z | -1.0180694195 | +| 2018-06-24T20:00:00Z | -0.2601965301 | +| 2018-06-24T22:00:00Z | 2.1893237434 | +| 2018-06-25T00:00:00Z | -2.5572285037 | + +{{% /expand %}} + +{{< /expand-wrapper >}} + +## CEIL() + +Returns the subsequent value rounded up to the nearest integer. + +### Basic syntax + +```sql +SELECT CEIL( [ * | <field_key> ] ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause] +``` + +`CEIL(field_key)` +Returns the field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key) rounded up to the nearest integer. + +`CEIL(*)` +Returns the field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement) rounded up to the nearest integer. + +`CEIL()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types). + +Supports `GROUP BY` clauses that [group by tags](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-tags) but not `GROUP BY` clauses that [group by time](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals). +To use `CEIL()` with a `GROUP BY time()` clause, see [Advanced syntax](#advanced-syntax).
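For intuition, the rounding matches Python's `math.ceil` (a minimal sketch; the example tables in this section format the result as a float, so the sketch does too):

```python
import math

# water_level values from the sample subsample used in the examples below.
water_levels = [2.352, 2.379, 2.343, 2.329, 2.264, 2.267]

# CEIL() rounds each value up to the nearest integer.
ceilings = [float(math.ceil(v)) for v in water_levels]
```

Every value in the subsample rounds up to `3.0`, which is why the example result tables are constant.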
+ +#### Examples + +The examples below use the following subsample of the [NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data): + +```sql +SELECT "water_level" FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | water_level | +| :------------------- | -----------: | +| 2019-08-18T00:00:00Z | 2.3520000000 | +| 2019-08-18T00:06:00Z | 2.3790000000 | +| 2019-08-18T00:12:00Z | 2.3430000000 | +| 2019-08-18T00:18:00Z | 2.3290000000 | +| 2019-08-18T00:24:00Z | 2.2640000000 | +| 2019-08-18T00:30:00Z | 2.2670000000 | + +{{< expand-wrapper >}} + +{{% expand "Calculate the ceiling of field values associated with a field key" %}} + +Return field values in the `water_level` field key in the `h2o_feet` measurement rounded up to the nearest integer. + +```sql +SELECT CEIL("water_level") FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | ceil | +| :------------------- | -----------: | +| 2019-08-18T00:00:00Z | 3.0000000000 | +| 2019-08-18T00:06:00Z | 3.0000000000 | +| 2019-08-18T00:12:00Z | 3.0000000000 | +| 2019-08-18T00:18:00Z | 3.0000000000 | +| 2019-08-18T00:24:00Z | 3.0000000000 | +| 2019-08-18T00:30:00Z | 3.0000000000 | + +{{% /expand %}} + +{{% expand "Calculate the ceiling of field values associated with each field key in a measurement" %}} + +Return field values for each field key that stores numeric values in the `h2o_feet` +measurement rounded up to the nearest integer. +The `h2o_feet` measurement has one numeric field: `water_level`.
+ +```sql +SELECT CEIL(*) FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | ceil_water_level | +| :------------------- | ---------------: | +| 2019-08-18T00:00:00Z | 3.0000000000 | +| 2019-08-18T00:06:00Z | 3.0000000000 | +| 2019-08-18T00:12:00Z | 3.0000000000 | +| 2019-08-18T00:18:00Z | 3.0000000000 | +| 2019-08-18T00:24:00Z | 3.0000000000 | +| 2019-08-18T00:30:00Z | 3.0000000000 | + +{{% /expand %}} + +{{% expand "Calculate the ceiling of field values associated with a field key and include several clauses" %}} + +Return field values associated with the `water_level` field key rounded up to the +nearest integer in the [time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax) +between `2019-08-18T00:00:00Z` and `2019-08-18T00:30:00Z` with results in +[descending timestamp order](/influxdb/v2.6/query-data/influxql/explore-data/order-by/). +The query also [limits](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/) +the number of points returned to four and [offsets](/influxdb/v2.6/query-data/influxql/explore-data/offset-and-soffset/) +results by two points.
+ +```sql +SELECT CEIL("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' ORDER BY time DESC LIMIT 4 OFFSET 2 +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | ceil | +| :------------------- | -----------: | +| 2019-08-18T00:18:00Z | 3.0000000000 | +| 2019-08-18T00:12:00Z | 3.0000000000 | +| 2019-08-18T00:06:00Z | 3.0000000000 | +| 2019-08-18T00:00:00Z | 3.0000000000 | + +{{% /expand %}} + +{{< /expand-wrapper >}} + +### Advanced syntax + +```sql +SELECT CEIL(<function>( [ * | <field_key> | /<regular_expression>/ ] )) FROM_clause [WHERE_clause] GROUP_BY_clause [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause] +``` + +The advanced syntax requires a [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) and a nested InfluxQL function. +The query first calculates the results for the nested function at the specified `GROUP BY time()` interval and then applies the `CEIL()` function to those results. + +`CEIL()` supports the following nested functions: +[`COUNT()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#count), +[`MEAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean), +[`MEDIAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#median), +[`MODE()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mode), +[`SUM()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#sum), +[`FIRST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#first), +[`LAST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#last), +[`MIN()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#min), +[`MAX()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#max), and +[`PERCENTILE()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#percentile).
+ +#### Examples + +{{< expand-wrapper >}} + +{{% expand "Calculate mean values rounded up to the nearest integer" %}} + +Return the [mean](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean) `water_level`s that are calculated at 12-minute intervals, rounded up to the nearest integer. + +```sql +SELECT CEIL(MEAN("water_level")) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' GROUP BY time(12m) +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | ceil | +| :------------------- | -----------: | +| 2019-08-18T00:00:00Z | 3.0000000000 | +| 2019-08-18T00:12:00Z | 3.0000000000 | +| 2019-08-18T00:24:00Z | 3.0000000000 | + +{{% /expand %}} + +{{< /expand-wrapper >}} + +## COS() + +Returns the cosine of the field value. + +### Basic syntax + +```sql +SELECT COS( [ * | <field_key> ] ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause] +``` + +`COS(field_key)` +Returns the cosine of field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key). + +`COS(*)` +Returns the cosine of field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement). + +`COS()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types). + +Supports `GROUP BY` clauses that [group by tags](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-tags) but not `GROUP BY` clauses that [group by time](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals). +To use `COS()` with a `GROUP BY time()` clause, see [Advanced syntax](#advanced-syntax).
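`COS()` interprets the field value as an angle in radians. As a sanity check, the documented result for the first sample point can be reproduced with Python's `math.cos` (a sketch, assuming equivalent floating-point behavior):

```python
import math

# COS() treats the field value as radians; water_level at
# 2019-08-18T00:00:00Z is 2.352 in the subsample below.
cosine = math.cos(2.352)

# Matches the documented result -0.7041346171 (rounded to 10 places).
assert abs(cosine - (-0.7041346171)) < 1e-9
```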
+ +#### Examples + +The examples below use the following subsample of the [NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data): + +```sql +SELECT "water_level" FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | water_level | +| :------------------- | -----------: | +| 2019-08-18T00:00:00Z | 2.3520000000 | +| 2019-08-18T00:06:00Z | 2.3790000000 | +| 2019-08-18T00:12:00Z | 2.3430000000 | +| 2019-08-18T00:18:00Z | 2.3290000000 | +| 2019-08-18T00:24:00Z | 2.2640000000 | +| 2019-08-18T00:30:00Z | 2.2670000000 | + +{{< expand-wrapper >}} + +{{% expand "Calculate the cosine of field values associated with a field key" %}} + +Return the cosine of field values in the `water_level` field key in the `h2o_feet` measurement. + +```sql +SELECT COS("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | cos | +| :------------------- | ------------: | +| 2019-08-18T00:00:00Z | -0.7041346171 | +| 2019-08-18T00:06:00Z | -0.7230474420 | +| 2019-08-18T00:12:00Z | -0.6977155876 | +| 2019-08-18T00:18:00Z | -0.6876182920 | +| 2019-08-18T00:24:00Z | -0.6390047316 | +| 2019-08-18T00:30:00Z | -0.6413094611 | + +{{% /expand %}} + +{{% expand "Calculate the cosine of field values associated with each field key in a measurement" %}} + +Return the cosine of field values for each numeric field in the `h2o_feet` measurement. +The `h2o_feet` measurement has one numeric field: `water_level`. 
+ +```sql +SELECT COS(*) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | cos_water_level | +| :------------------- | --------------: | +| 2019-08-18T00:00:00Z | -0.7041346171 | +| 2019-08-18T00:06:00Z | -0.7230474420 | +| 2019-08-18T00:12:00Z | -0.6977155876 | +| 2019-08-18T00:18:00Z | -0.6876182920 | +| 2019-08-18T00:24:00Z | -0.6390047316 | +| 2019-08-18T00:30:00Z | -0.6413094611 | + +{{% /expand %}} + +{{% expand "Calculate the cosine of field values associated with a field key and include several clauses" %}} + +Return the cosine of field values associated with the `water_level` field key +in the [time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax) +between `2019-08-18T00:00:00Z` and `2019-08-18T00:30:00Z` with results in +[descending timestamp order](/influxdb/v2.6/query-data/influxql/explore-data/order-by/). +The query also [limits](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/) +the number of points returned to four and [offsets](/influxdb/v2.6/query-data/influxql/explore-data/offset-and-soffset/) +results by two points. 
+ +```sql +SELECT COS("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' ORDER BY time DESC LIMIT 4 OFFSET 2 +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | cos | +| :------------------- | ------------: | +| 2019-08-18T00:18:00Z | -0.6876182920 | +| 2019-08-18T00:12:00Z | -0.6977155876 | +| 2019-08-18T00:06:00Z | -0.7230474420 | +| 2019-08-18T00:00:00Z | -0.7041346171 | + +{{% /expand %}} + +{{< /expand-wrapper >}} + +### Advanced syntax + +```sql +SELECT COS(<function>( [ * | <field_key> ] )) FROM_clause [WHERE_clause] GROUP_BY_clause [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause] +``` + +The advanced syntax requires a [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) and a nested InfluxQL function. +The query first calculates the results for the nested function at the specified `GROUP BY time()` interval and then applies the `COS()` function to those results. + +`COS()` supports the following nested functions: +[`COUNT()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#count), +[`MEAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean), +[`MEDIAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#median), +[`MODE()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mode), +[`SUM()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#sum), +[`FIRST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#first), +[`LAST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#last), +[`MIN()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#min), +[`MAX()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#max), and +[`PERCENTILE()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#percentile).
+ +#### Examples + +{{< expand-wrapper >}} + +{{% expand "Calculate the cosine of mean values" %}} + +Return the cosine of [mean](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean) `water_level`s that are calculated at 12-minute intervals. + +```sql +SELECT COS(MEAN("water_level")) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' GROUP BY time(12m) +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | cos | +| :------------------- | ------------: | +| 2019-08-18T00:00:00Z | -0.7136560605 | +| 2019-08-18T00:12:00Z | -0.6926839105 | +| 2019-08-18T00:24:00Z | -0.6401578165 | + +{{% /expand %}} + +{{< /expand-wrapper >}} + +## CUMULATIVE_SUM() + +Returns the running total of subsequent [field values](/influxdb/v2.6/reference/glossary/#field-value). + +### Basic syntax + +```sql +SELECT CUMULATIVE_SUM( [ * | <field_key> | /<regular_expression>/ ] ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause] +``` + +`CUMULATIVE_SUM(field_key)` +Returns the running total of subsequent field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key). + +`CUMULATIVE_SUM(/regular_expression/)` +Returns the running total of subsequent field values associated with each field key that matches the [regular expression](/influxdb/v2.6/query-data/influxql/explore-data/regular-expressions/). + +`CUMULATIVE_SUM(*)` +Returns the running total of subsequent field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement). + +`CUMULATIVE_SUM()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types).
+ +Supports `GROUP BY` clauses that [group by tags](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-tags) but not `GROUP BY` clauses that [group by time](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals). +To use `CUMULATIVE_SUM()` with a `GROUP BY time()` clause, see [Advanced syntax](#advanced-syntax). + +#### Examples + +The examples below use the following subsample of the [NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data): + +```sql +SELECT "water_level" FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | water_level | +| :------------------- | -----------: | +| 2019-08-18T00:00:00Z | 2.3520000000 | +| 2019-08-18T00:06:00Z | 2.3790000000 | +| 2019-08-18T00:12:00Z | 2.3430000000 | +| 2019-08-18T00:18:00Z | 2.3290000000 | +| 2019-08-18T00:24:00Z | 2.2640000000 | +| 2019-08-18T00:30:00Z | 2.2670000000 | + +{{< expand-wrapper >}} + +{{% expand "Calculate the cumulative sum of the field values associated with a field key" %}} + +Return the running total of the field values in the `water_level` field key in the `h2o_feet` measurement.
+ +```sql +SELECT CUMULATIVE_SUM("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | cumulative_sum | +| :------------------- | -------------: | +| 2019-08-18T00:00:00Z | 2.3520000000 | +| 2019-08-18T00:06:00Z | 4.7310000000 | +| 2019-08-18T00:12:00Z | 7.0740000000 | +| 2019-08-18T00:18:00Z | 9.4030000000 | +| 2019-08-18T00:24:00Z | 11.6670000000 | +| 2019-08-18T00:30:00Z | 13.9340000000 | + +{{% /expand %}} + +{{% expand "Calculate the cumulative sum of the field values associated with each field key in a measurement" %}} + +Return the running total of the field values for each numeric field in the `h2o_feet` measurement. +The `h2o_feet` measurement has one numeric field: `water_level`. + +```sql +SELECT CUMULATIVE_SUM(*) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | cumulative_sum_water_level | +| :------------------- | -------------------------: | +| 2019-08-18T00:00:00Z | 2.3520000000 | +| 2019-08-18T00:06:00Z | 4.7310000000 | +| 2019-08-18T00:12:00Z | 7.0740000000 | +| 2019-08-18T00:18:00Z | 9.4030000000 | +| 2019-08-18T00:24:00Z | 11.6670000000 | +| 2019-08-18T00:30:00Z | 13.9340000000 | + +{{% /expand %}} + +{{% expand "Calculate the cumulative sum of the field values associated with each field key that matches a regular expression" %}} + +Return the running total of the field values for each field key that stores +numeric values and includes the word `water` in the `h2o_feet` measurement. 
+ +```sql +SELECT CUMULATIVE_SUM(/water/) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | cumulative_sum_water_level | +| :------------------- | -------------------------: | +| 2019-08-18T00:00:00Z | 2.3520000000 | +| 2019-08-18T00:06:00Z | 4.7310000000 | +| 2019-08-18T00:12:00Z | 7.0740000000 | +| 2019-08-18T00:18:00Z | 9.4030000000 | +| 2019-08-18T00:24:00Z | 11.6670000000 | +| 2019-08-18T00:30:00Z | 13.9340000000 | + +{{% /expand %}} + +{{% expand "Calculate the cumulative sum of the field values associated with a field key and include several clauses" %}} + +Return the running total of the field values associated with the `water_level` +field key in the [time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax) +between `2019-08-18T00:00:00Z` and `2019-08-18T00:30:00Z` with results in +[descending timestamp order](/influxdb/v2.6/query-data/influxql/explore-data/order-by/). +The query also [limits](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/) +the number of points returned to four and [offsets](/influxdb/v2.6/query-data/influxql/explore-data/offset-and-soffset/) +results by two points. 
+ +```sql +SELECT CUMULATIVE_SUM("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' ORDER BY time DESC LIMIT 4 OFFSET 2 +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | cumulative_sum | +| :------------------- | -------------: | +| 2019-08-18T00:18:00Z | 6.8600000000 | +| 2019-08-18T00:12:00Z | 9.2030000000 | +| 2019-08-18T00:06:00Z | 11.5820000000 | +| 2019-08-18T00:00:00Z | 13.9340000000 | + +{{% /expand %}} + +{{< /expand-wrapper >}} + +### Advanced syntax + +```sql +SELECT CUMULATIVE_SUM(<function>( [ * | <field_key> | /<regular_expression>/ ] )) FROM_clause [WHERE_clause] GROUP_BY_clause [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause] +``` + +The advanced syntax requires a [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) and a nested InfluxQL function. +The query first calculates the results for the nested function at the specified `GROUP BY time()` interval and then applies the `CUMULATIVE_SUM()` function to those results. + +`CUMULATIVE_SUM()` supports the following nested functions: +[`COUNT()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#count), +[`MEAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean), +[`MEDIAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#median), +[`MODE()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mode), +[`SUM()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#sum), +[`FIRST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#first), +[`LAST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#last), +[`MIN()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#min), +[`MAX()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#max), and +[`PERCENTILE()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#percentile).
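The aggregate-then-accumulate order can be sketched with Python's `statistics.mean` and `itertools.accumulate`, using the 12-minute buckets of the `water_level` subsample above:

```python
from itertools import accumulate
from statistics import mean

# 12-minute buckets of the water_level subsample used in this section.
buckets = [
    [2.352, 2.379],  # 00:00 and 00:06
    [2.343, 2.329],  # 00:12 and 00:18
    [2.264, 2.267],  # 00:24 and 00:30
]

# Step 1: MEAN() per GROUP BY time(12m) interval.
# Step 2: CUMULATIVE_SUM() keeps a running total over those means.
running_total = list(accumulate(mean(b) for b in buckets))
```

Rounded to four places, the running total is `2.3655, 4.7015, 6.9670`, matching the documented output of `CUMULATIVE_SUM(MEAN("water_level")) … GROUP BY time(12m)`.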
+ +#### Examples + +{{< expand-wrapper >}} + +{{% expand "Calculate the cumulative sum of mean values" %}} + +Return the running total of [mean](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean) `water_level`s that are calculated at 12-minute intervals. + +```sql +SELECT CUMULATIVE_SUM(MEAN("water_level")) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' GROUP BY time(12m) +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | cumulative_sum | +| :------------------- | -------------: | +| 2019-08-18T00:00:00Z | 2.3655000000 | +| 2019-08-18T00:12:00Z | 4.7015000000 | +| 2019-08-18T00:24:00Z | 6.9670000000 | + +{{% /expand %}} + +{{< /expand-wrapper >}} + +## DERIVATIVE() + +Returns the rate of change between subsequent [field values](/influxdb/v2.6/reference/glossary/#field-value). + +### Basic syntax + +```sql +SELECT DERIVATIVE( [ * | <field_key> | /<regular_expression>/ ] [ , <unit> ] ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause] +``` + +InfluxDB calculates the difference between subsequent field values and converts those results into the rate of change per `unit`. +The `unit` argument is an integer followed by a [duration](/influxdb/v2.6/reference/glossary/#duration) and is optional. +If the query does not specify the `unit`, it defaults to one second (`1s`). + +`DERIVATIVE(field_key)` +Returns the rate of change between subsequent field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key). + +`DERIVATIVE(/regular_expression/)` +Returns the rate of change between subsequent field values associated with each field key that matches the [regular expression](/influxdb/v2.6/query-data/influxql/explore-data/regular-expressions/).
+ +`DERIVATIVE(*)` +Returns the rate of change between subsequent field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement). + +`DERIVATIVE()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types). + +Supports `GROUP BY` clauses that [group by tags](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-tags) but not `GROUP BY` clauses that [group by time](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals). +To use `DERIVATIVE()` with a `GROUP BY time()` clause, see [Advanced syntax](#advanced-syntax). + +#### Examples + +The examples in this section use the following subsample of the [NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data): + +```sql +SELECT "water_level" FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | water_level | +| :------------------- | -----------: | +| 2019-08-18T00:00:00Z | 2.3520000000 | +| 2019-08-18T00:06:00Z | 2.3790000000 | +| 2019-08-18T00:12:00Z | 2.3430000000 | +| 2019-08-18T00:18:00Z | 2.3290000000 | +| 2019-08-18T00:24:00Z | 2.2640000000 | +| 2019-08-18T00:30:00Z | 2.2670000000 | + +{{< expand-wrapper >}} + +{{% expand "Calculate the derivative between the field values associated with a field key" %}} + +Return the one-second rate of change between the `water_level` field values in the `h2o_feet` measurement. 
+ +```sql +SELECT DERIVATIVE("water_level") FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | derivative | +| :------------------- | ------------: | +| 2019-08-18T00:06:00Z | 0.0000750000 | +| 2019-08-18T00:12:00Z | -0.0001000000 | +| 2019-08-18T00:18:00Z | -0.0000388889 | +| 2019-08-18T00:24:00Z | -0.0001805556 | +| 2019-08-18T00:30:00Z | 0.0000083333 | + +The first result (`0.0000750000`) is the one-second rate of change between the first two subsequent field values in the raw data. InfluxDB calculates the difference between the field values (subtracts the first field value from the second field value) and then normalizes that value to the one-second rate of change (dividing the difference between the field values' timestamps in seconds (`360s`) by the default unit (`1s`)): + +``` +(2.379 - 2.352) / (360s / 1s) +``` + +{{% /expand %}} + +{{% expand "Calculate the derivative between the field values associated with a field key and specify the unit option" %}} + +Return the six-minute rate of change between the field values in the `water_level` field in the `h2o_feet` measurement. + +```sql +SELECT DERIVATIVE("water_level",6m) FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | derivative | +| :------------------- | ------------: | +| 2019-08-18T00:06:00Z | 0.0270000000 | +| 2019-08-18T00:12:00Z | -0.0360000000 | +| 2019-08-18T00:18:00Z | -0.0140000000 | +| 2019-08-18T00:24:00Z | -0.0650000000 | +| 2019-08-18T00:30:00Z | 0.0030000000 | + +The first result (`0.0270000000`) is the six-minute rate of change between the first two subsequent field values in the raw data. 
InfluxDB calculates the difference between the field values (subtracts the first field value from the second field value) and then normalizes that value to the six-minute rate of change (dividing the difference between the field values' timestamps in minutes (`6m`) by the specified interval (`6m`)): + +``` +(2.379 - 2.352) / (6m / 6m) +``` + +{{% /expand %}} + +{{% expand "Calculate the derivative between the field values associated with each field key in a measurement and specify the unit option" %}} + +Return the three-minute rate of change between the field values associated with each field key that stores numeric values in the `h2o_feet` measurement. +The `h2o_feet` measurement has one numeric field: `water_level`. + +```sql +SELECT DERIVATIVE(*,3m) FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | derivative_water_level | +| :------------------- | ---------------------: | +| 2019-08-18T00:06:00Z | 0.0135000000 | +| 2019-08-18T00:12:00Z | -0.0180000000 | +| 2019-08-18T00:18:00Z | -0.0070000000 | +| 2019-08-18T00:24:00Z | -0.0325000000 | +| 2019-08-18T00:30:00Z | 0.0015000000 | + +The first result (`0.0135000000`) is the three-minute rate of change between the first two subsequent field values in the raw data.
+
+InfluxDB calculates the difference between the field values (subtracts the first field value from the second field value) and then normalizes that value to the three-minute rate of change (dividing the difference between the field values' timestamps in minutes (`6m`) by the specified `unit` (`3m`)):
+
+```
+(2.379 - 2.352) / (6m / 3m)
+```
+
+{{% /expand %}}
+
+{{% expand "Calculate the derivative between the field values associated with each field key that matches a regular expression and specify the unit option" %}}
+
+Return the two-minute rate of change between the field values associated with
+each field key that stores numeric values and includes the word `water` in the
+`h2o_feet` measurement.
+
+```sql
+SELECT DERIVATIVE(/water/,2m) FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z'
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time                 | derivative_water_level |
+| :------------------- | ---------------------: |
+| 2019-08-18T00:06:00Z | 0.0090000000           |
+| 2019-08-18T00:12:00Z | -0.0120000000           |
+| 2019-08-18T00:18:00Z | -0.0046666667           |
+| 2019-08-18T00:24:00Z | -0.0216666667           |
+| 2019-08-18T00:30:00Z | 0.0010000000           |
+
+The first result (`0.0090000000`) is the two-minute rate of change between the first two subsequent field values in the raw data.
+
+InfluxDB calculates the difference between the field values (subtracts the first field value from the second field value) and then normalizes that value to the two-minute rate of change (dividing the difference between the field values' timestamps in minutes (`6m`) by the specified `unit` (`2m`)):
+
+```
+(2.379 - 2.352) / (6m / 2m)
+```
+
+{{% /expand %}}
+
+{{% expand "Calculate the derivative between the field values associated with a field key and include several clauses" %}}
+
+Return the one-second rate of change between `water_level` field values in the
+`h2o_feet` measurement in the [time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax)
+between `2019-08-18T00:00:00Z` and `2019-08-18T00:30:00Z` with results in
+[descending timestamp order](/influxdb/v2.6/query-data/influxql/explore-data/order-by/).
+The query also [limits](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/) the number of points returned to one and [offsets](/influxdb/v2.6/query-data/influxql/explore-data/offset-and-soffset/) results by two points.
+
+```sql
+SELECT DERIVATIVE("water_level") FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' ORDER BY time DESC LIMIT 1 OFFSET 2
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time                 | derivative   |
+| :------------------- | -----------: |
+| 2019-08-18T00:12:00Z | 0.0000388889 |
+
+The only result (`0.0000388889`) is the one-second rate of change between the relevant subsequent field values in the raw data.
InfluxDB calculates the difference between the field values (subtracts the first field value from the second field value) and then normalizes that value to the one-second rate of change (dividing the difference between the field values' timestamps in seconds (`360s`) by the default unit (`1s`)):
+
+```
+(2.379 - 2.352) / (360s / 1s)
+```
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+### Advanced syntax
+
+```sql
+SELECT DERIVATIVE(<function> ([ * | <field_key> | /<regular_expression>/ ]) [ , <unit> ] ) FROM_clause [WHERE_clause] GROUP_BY_clause [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+The advanced syntax requires a [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) and a nested InfluxQL function.
+The query first calculates the results for the nested function at the specified `GROUP BY time()` interval and then applies the `DERIVATIVE()` function to those results.
+
+The `unit` argument is an integer followed by a [duration](/influxdb/v2.6/reference/glossary/#duration) and it is optional.
+If the query does not specify the `unit`, the `unit` defaults to the `GROUP BY time()` interval.
+Note that this behavior is different from the [basic syntax's](#basic-syntax-1) default behavior.
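The normalization that `DERIVATIVE()` applies can be sketched in Python. This is an illustrative sketch of the arithmetic only, not InfluxDB internals; `derivative` and `unit_seconds` are hypothetical names:

```python
# Sketch of DERIVATIVE()'s normalization: for each pair of subsequent
# points, divide the change in value by the elapsed time expressed as a
# multiple of the unit. Timestamps and the unit are both in seconds here.
def derivative(points, unit_seconds=1):
    """points: list of (timestamp_in_seconds, field_value) tuples."""
    return [
        (t2, (v2 - v1) / ((t2 - t1) / unit_seconds))
        for (t1, v1), (t2, v2) in zip(points, points[1:])
    ]

# The first two raw points: 2.352 at 00:00:00 and 2.379 at 00:06:00 (360 s apart).
points = [(0, 2.352), (360, 2.379)]
one_second_rate = derivative(points, unit_seconds=1)[0][1]    # (2.379 - 2.352) / (360 / 1)
six_minute_rate = derivative(points, unit_seconds=360)[0][1]  # (2.379 - 2.352) / (360 / 360)
print(one_second_rate, six_minute_rate)
```

With `unit_seconds=1` the sketch reproduces the basic syntax's one-second default (approximately `0.000075`); with `unit_seconds=360` it reproduces the six-minute rate (approximately `0.027`) shown in the `unit`-option example.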
+ +`DERIVATIVE()` supports the following nested functions: +[`COUNT()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#count), +[`MEAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean), +[`MEDIAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#median), +[`MODE()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mode), +[`SUM()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#sum), +[`FIRST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#first), +[`LAST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#last), +[`MIN()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#min), +[`MAX()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#max), and +[`PERCENTILE()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#percentile). + +#### Examples +{{< expand-wrapper >}} + +{{% expand "Calculate the derivative of mean values" %}} + +Return the 12-minute rate of change between [mean](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean) `water_level`s that are calculated at 12-minute intervals. + +```sql +SELECT DERIVATIVE(MEAN("water_level")) FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' GROUP BY time(12m) +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | derivative | +| :------------------- | ------------: | +| 2019-08-18T00:00:00Z | -0.1375000000 | +| 2019-08-18T00:12:00Z | -0.0295000000 | +| 2019-08-18T00:24:00Z | -0.0705000000 | + +{{% /expand %}} + +{{% expand "Calculate the derivative of mean values and specify the unit option" %}} + +Return the six-minute rate of change between average `water_level`s that are calculated at 12-minute intervals. 
+
+```sql
+SELECT DERIVATIVE(MEAN("water_level"),6m) FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' GROUP BY time(12m)
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time                 | derivative    |
+| :------------------- | ------------: |
+| 2019-08-18T00:00:00Z | -0.0687500000 |
+| 2019-08-18T00:12:00Z | -0.0147500000 |
+| 2019-08-18T00:24:00Z | -0.0352500000 |
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+## DIFFERENCE()
+
+Returns the result of subtraction between subsequent [field values](/influxdb/v2.6/reference/glossary/#field-value).
+
+### Syntax
+
+```sql
+SELECT DIFFERENCE( [ * | <field_key> | /<regular_expression>/ ] ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+`DIFFERENCE(field_key)`
+Returns the difference between subsequent field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key).
+
+`DIFFERENCE(/regular_expression/)`
+Returns the difference between subsequent field values associated with each field key that matches the [regular expression](/influxdb/v2.6/query-data/influxql/explore-data/regular-expressions/).
+
+`DIFFERENCE(*)`
+Returns the difference between subsequent field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement).
+
+`DIFFERENCE()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types).
+
+Supports `GROUP BY` clauses that [group by tags](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-tags) but not `GROUP BY` clauses that [group by time](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals).
+To use `DIFFERENCE()` with a `GROUP BY time()` clause, see [Advanced syntax](#advanced-syntax).
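The pairwise subtraction that `DIFFERENCE()` performs can be sketched in Python. This is illustrative only, not InfluxDB code; `difference` is a hypothetical helper:

```python
# Sketch of DIFFERENCE(): subtract each field value from the value that
# follows it, so N points yield N - 1 results.
def difference(values):
    return [later - earlier for earlier, later in zip(values, values[1:])]

# The water_level subsample used throughout these examples.
water_level = [2.352, 2.379, 2.343, 2.329, 2.264, 2.267]
print([round(d, 10) for d in difference(water_level)])
# → [0.027, -0.036, -0.014, -0.065, 0.003]
```

The first value, `2.379 - 2.352 = 0.027`, matches the first row of the example results that follow.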
+ +#### Examples + +The examples below use the following subsample of the [NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data): + +```sql +SELECT "water_level" FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | water_level | +| :------------------- | -----------: | +| 2019-08-18T00:00:00Z | 2.3520000000 | +| 2019-08-18T00:06:00Z | 2.3790000000 | +| 2019-08-18T00:12:00Z | 2.3430000000 | +| 2019-08-18T00:18:00Z | 2.3290000000 | +| 2019-08-18T00:24:00Z | 2.2640000000 | +| 2019-08-18T00:30:00Z | 2.2670000000 | + +{{< expand-wrapper >}} + +{{% expand "Calculate the difference between the field values associated with a field key" %}} + +Return the difference between the subsequent field values in the `water_level` field key and in the `h2o_feet` measurement. + +```sql +SELECT DIFFERENCE("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | difference | +| :------------------- | ------------: | +| 2019-08-18T00:06:00Z | 0.0270000000 | +| 2019-08-18T00:12:00Z | -0.0360000000 | +| 2019-08-18T00:18:00Z | -0.0140000000 | +| 2019-08-18T00:24:00Z | -0.0650000000 | +| 2019-08-18T00:30:00Z | 0.0030000000 | + +{{% /expand %}} + +{{% expand "Calculate the difference between the field values associated with each field key in a measurement" %}} + +Return the difference between the subsequent field values for each field key +that stores numeric values in the `h2o_feet` measurement. +The `h2o_feet` measurement has one numeric field: `water_level`. 
+ +```sql +SELECT DIFFERENCE(*) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | difference_water_level | +| :------------------- | ---------------------: | +| 2019-08-18T00:06:00Z | 0.0270000000 | +| 2019-08-18T00:12:00Z | -0.0360000000 | +| 2019-08-18T00:18:00Z | -0.0140000000 | +| 2019-08-18T00:24:00Z | -0.0650000000 | +| 2019-08-18T00:30:00Z | 0.0030000000 | + +{{% /expand %}} + +{{% expand "Calculate the difference between the field values associated with each field key that matches a regular expression" %}} + +Return the difference between the subsequent field values for each field key +that stores numeric values and includes the word `water` in the `h2o_feet` measurement. + +```sql +SELECT DIFFERENCE(/water/) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | difference_water_level | +| :------------------- | ---------------------: | +| 2019-08-18T00:06:00Z | 0.0270000000 | +| 2019-08-18T00:12:00Z | -0.0360000000 | +| 2019-08-18T00:18:00Z | -0.0140000000 | +| 2019-08-18T00:24:00Z | -0.0650000000 | +| 2019-08-18T00:30:00Z | 0.0030000000 | + +{{% /expand %}} + +{{% expand "Calculate the difference between the field values associated with a field key and include several clauses" %}} + +```sql +SELECT DIFFERENCE("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' ORDER BY time DESC LIMIT 2 OFFSET 2 +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | difference | +| :------------------- | -----------: | +| 2019-08-18T00:12:00Z | 0.0140000000 | +| 2019-08-18T00:06:00Z | 0.0360000000 | + +Return the difference between 
the subsequent field values in the `water_level`
+field key in the [time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax)
+between `2019-08-18T00:00:00Z` and `2019-08-18T00:30:00Z` with results in
+[descending timestamp order](/influxdb/v2.6/query-data/influxql/explore-data/order-by/).
+The query also [limits](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/)
+the number of points returned to two and [offsets](/influxdb/v2.6/query-data/influxql/explore-data/offset-and-soffset/)
+results by two points.
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+### Advanced syntax
+
+```sql
+SELECT DIFFERENCE(<function>( [ * | <field_key> | /<regular_expression>/ ] )) FROM_clause [WHERE_clause] GROUP_BY_clause [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+The advanced syntax requires a [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) and a nested InfluxQL function.
+The query first calculates the results for the nested function at the specified `GROUP BY time()` interval and then applies the `DIFFERENCE()` function to those results.
+
+`DIFFERENCE()` supports the following nested functions:
+[`COUNT()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#count),
+[`MEAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean),
+[`MEDIAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#median),
+[`MODE()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mode),
+[`SUM()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#sum),
+[`FIRST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#first),
+[`LAST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#last),
+[`MIN()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#min),
+[`MAX()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#max), and
+[`PERCENTILE()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#percentile).
+
+#### Examples
+
+{{< expand-wrapper >}}
+
+{{% expand "Calculate the difference between maximum values" %}}
+
+Return the difference between [maximum](/influxdb/v2.6/query-data/influxql/functions/selectors/#max) `water_level`s that are calculated at 12-minute intervals.
+
+```sql
+SELECT DIFFERENCE(MAX("water_level")) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' GROUP BY time(12m)
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time                 | difference    |
+| :------------------- | ------------: |
+| 2019-08-18T00:00:00Z | -0.2290000000 |
+| 2019-08-18T00:12:00Z | -0.0360000000 |
+| 2019-08-18T00:24:00Z | -0.0760000000 |
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+## ELAPSED()
+
+Returns the difference between subsequent [field values'](/influxdb/v2.6/reference/glossary/#field-value) timestamps.
+
+### Syntax
+
+```sql
+SELECT ELAPSED( [ * | <field_key> | /<regular_expression>/ ] [ , <unit> ] ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+InfluxDB calculates the difference between subsequent timestamps.
+The `unit` option is an integer followed by a [duration](/influxdb/v2.6/reference/glossary/#duration) and it determines the unit of the returned difference.
+If the query does not specify the `unit` option, the query returns the difference between timestamps in nanoseconds.
+
+`ELAPSED(field_key)`
+Returns the difference between subsequent timestamps associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key).
+
+`ELAPSED(/regular_expression/)`
+Returns the difference between subsequent timestamps associated with each field key that matches the [regular expression](/influxdb/v2.6/query-data/influxql/explore-data/regular-expressions/).
+ +`ELAPSED(*)` +Returns the difference between subsequent timestamps associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement). + +`ELAPSED()` supports all field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types). + +#### Examples + +The examples use the following subsample of the [NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data): + +```sql +SELECT "water_level" FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:12:00Z' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | water_level | +| :------------------- | -----------: | +| 2019-08-18T00:00:00Z | 2.3520000000 | +| 2019-08-18T00:06:00Z | 2.3790000000 | +| 2019-08-18T00:12:00Z | 2.3430000000 | + +{{< expand-wrapper >}} + +{{% expand "Calculate the elapsed time between field values associated with a field key" %}} + +Return the elapsed time (in nanoseconds) between subsequent timestamps in the `water_level` field key and in the `h2o_feet` measurement. + +```sql +SELECT ELAPSED("water_level") FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:12:00Z' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | elapsed | +| :------------------- | ----------------------: | +| 2019-08-18T00:06:00Z | 360000000000.0000000000 | +| 2019-08-18T00:12:00Z | 360000000000.0000000000 | + +{{% /expand %}} + +{{% expand "Calculate the elapsed time between field values associated with a field key and specify the unit option" %}} + +Return the elapsed time (in minutes) between subsequent timestamps in the `water_level` field key and in the `h2o_feet` measurement. 
+
+```sql
+SELECT ELAPSED("water_level",1m) FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:12:00Z'
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time                 | elapsed      |
+| :------------------- | -----------: |
+| 2019-08-18T00:06:00Z | 6.0000000000 |
+| 2019-08-18T00:12:00Z | 6.0000000000 |
+
+{{% /expand %}}
+
+{{% expand "Calculate the elapsed time between field values associated with each field key in a measurement and specify the unit option" %}}
+
+Return the difference (in minutes) between subsequent timestamps associated with
+each field key in the `h2o_feet` measurement.
+The `h2o_feet` measurement has two field keys: `level description` and `water_level`.
+
+```sql
+SELECT ELAPSED(*,1m) FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:12:00Z'
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time                 | elapsed_level description | elapsed_water_level |
+| :------------------- | ------------------------: | ------------------: |
+| 2019-08-18T00:06:00Z | 6.0000000000              | 6.0000000000        |
+| 2019-08-18T00:12:00Z | 6.0000000000              | 6.0000000000        |
+
+{{% /expand %}}
+
+{{% expand "Calculate the elapsed time between field values associated with each field key that matches a regular expression and specify the unit option" %}}
+
+Return the difference (in seconds) between subsequent timestamps associated with
+each field key that includes the word `level` in the `h2o_feet` measurement.
+
+```sql
+SELECT ELAPSED(/level/,1s) FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:12:00Z'
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time                 | elapsed_level description | elapsed_water_level |
+| :------------------- | ------------------------: | ------------------: |
+| 2019-08-18T00:06:00Z | 360.0000000000            | 360.0000000000      |
+| 2019-08-18T00:12:00Z | 360.0000000000            | 360.0000000000      |
+
+{{% /expand %}}
+
+{{% expand "Calculate the elapsed time between field values associated with a field key and include several clauses" %}}
+
+Return the difference (in milliseconds) between subsequent timestamps in the
+`water_level` field key and in the `h2o_feet` measurement in the
+[time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax)
+between `2019-08-18T00:00:00Z` and `2019-08-18T00:12:00Z` with timestamps in
+[descending order](/influxdb/v2.6/query-data/influxql/explore-data/order-by/).
+The query also [limits](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/)
+the number of points returned to one and [offsets](/influxdb/v2.6/query-data/influxql/explore-data/offset-and-soffset/)
+results by one point.
+
+```sql
+SELECT ELAPSED("water_level",1ms) FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:12:00Z' ORDER BY time DESC LIMIT 1 OFFSET 1
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time                 | elapsed            |
+| :------------------- | -----------------: |
+| 2019-08-18T00:00:00Z | -360000.0000000000 |
+
+Notice that the result is negative; the [`ORDER BY time DESC` clause](/influxdb/v2.6/query-data/influxql/explore-data/order-by/) sorts timestamps in descending order so `ELAPSED()` calculates the difference between timestamps in reverse order.
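The behavior above, including the sign flip under `ORDER BY time DESC`, can be sketched in Python. This is illustrative only, not InfluxDB code; `elapsed` and `unit_ns` are hypothetical names:

```python
# Sketch of ELAPSED(): subtract each timestamp from the one that follows
# it, then divide by the unit. Feeding points in descending time order
# makes every difference negative, as in the ORDER BY time DESC example.
def elapsed(timestamps_ns, unit_ns=1):
    return [(b - a) // unit_ns for a, b in zip(timestamps_ns, timestamps_ns[1:])]

SIX_MINUTES_NS = 360_000_000_000
ascending = [0, SIX_MINUTES_NS, 2 * SIX_MINUTES_NS]   # six-minute spacing in ns

print(elapsed(ascending))                                   # default unit: nanoseconds
print(elapsed(ascending, unit_ns=60_000_000_000))           # minutes → [6, 6]
print(elapsed(list(reversed(ascending)), unit_ns=10**6))    # milliseconds, descending → [-360000, -360000]
print(elapsed(ascending, unit_ns=3_600_000_000_000))        # an hour-long unit truncates to [0, 0]
```

The last line mirrors the documented common issue: a `unit` larger than the gap between timestamps yields `0`.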
+ +{{% /expand %}} + +{{< /expand-wrapper >}} + +### Common issues with ELAPSED() + +#### ELAPSED() and units greater than the elapsed time + +InfluxDB returns `0` if the `unit` option is greater than the difference between the timestamps. + +##### Example + +The timestamps in the `h2o_feet` measurement occur at six-minute intervals. +If the query sets the `unit` option to one hour, InfluxDB returns `0`: + +```sql +SELECT ELAPSED("water_level",1h) FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:12:00Z' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | elapsed | +| :------------------- | -----------: | +| 2019-08-18T00:06:00Z | 0.0000000000 | +| 2019-08-18T00:12:00Z | 0.0000000000 | + +#### ELAPSED() with GROUP BY time() clauses + +The `ELAPSED()` function supports the [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) but the query results aren't particularly useful. +Currently, an `ELAPSED()` query with a nested function and a `GROUP BY time()` clause simply returns the interval specified in the `GROUP BY time()` clause. + +The `GROUP BY time()` clause determines the timestamps in the results; each timestamp marks the start of a time interval. +That behavior also applies to nested selector functions (like [`FIRST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#first) or [`MAX()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#max)) which would, in all other cases, return a specific timestamp from the raw data. +Because the `GROUP BY time()` clause overrides the original timestamps, the `ELAPSED()` calculation always returns the same value as the `GROUP BY time()` interval. 
+
+##### Example
+
+In the code block below, the first query attempts to use the `ELAPSED()` function with a `GROUP BY time()` clause to find the time elapsed (in minutes) between [minimum](/influxdb/v2.6/query-data/influxql/functions/selectors/#min) `water_level`s.
+The query returns 12 minutes for both time intervals.
+
+To get those results, InfluxDB first calculates the minimum `water_level`s at 12-minute intervals.
+The second query in the code block shows the results of that step.
+The step is the same as using the `MIN()` function with the `GROUP BY time()` clause and without the `ELAPSED()` function.
+Notice that the timestamps returned by the second query are 12 minutes apart.
+In the raw data, the first result (`2.0930000000`) occurs at `2019-08-18T00:42:00Z` but the `GROUP BY time()` clause overrides that original timestamp.
+Because the timestamps are determined by the `GROUP BY time()` interval and not by the original data, the `ELAPSED()` calculation always returns the same value as the `GROUP BY time()` interval.
+
+```sql
+SELECT ELAPSED(MIN("water_level"),1m) FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:36:00Z' AND time <= '2019-08-18T00:54:00Z' GROUP BY time(12m)
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time                 | elapsed       |
+| :------------------- | ------------: |
+| 2019-08-18T00:36:00Z | 12.0000000000 |
+| 2019-08-18T00:48:00Z | 12.0000000000 |
+
+```sql
+SELECT MIN("water_level") FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:36:00Z' AND time <= '2019-08-18T00:54:00Z' GROUP BY time(12m)
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time                 | min          |
+| -------------------- | ------------ |
+| 2019-08-18T00:36:00Z | 2.0930000000 |
+| 2019-08-18T00:48:00Z | 2.0870000000 |
+
+{{% note %}}
+The first point actually occurs at `2019-08-18T00:42:00Z`, not `2019-08-18T00:36:00Z`.
+{{% /note %}}
+
+## EXP()
+
+Returns the exponential of the field value.
+
+### Syntax
+
+```sql
+SELECT EXP( [ * | <field_key> ] ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+`EXP(field_key)`
+Returns the exponential of field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key).
+
+`EXP(*)`
+Returns the exponential of field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement).
+
+`EXP()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types).
+
+Supports `GROUP BY` clauses that [group by tags](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-tags) but not `GROUP BY` clauses that [group by time](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals).
+To use `EXP()` with a `GROUP BY time()` clause, see [Advanced syntax](#advanced-syntax).
+
+#### Examples
+
+The examples below use the following subsample of the [NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data):
+
+```sql
+SELECT "water_level" FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica'
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time                 | water_level  |
+| :------------------- | -----------: |
+| 2019-08-18T00:00:00Z | 2.3520000000 |
+| 2019-08-18T00:06:00Z | 2.3790000000 |
+| 2019-08-18T00:12:00Z | 2.3430000000 |
+| 2019-08-18T00:18:00Z | 2.3290000000 |
+| 2019-08-18T00:24:00Z | 2.2640000000 |
+| 2019-08-18T00:30:00Z | 2.2670000000 |
+
+{{< expand-wrapper >}}
+
+{{% expand "Calculate the exponential of field values associated with a field key" %}}
+
+Return the exponential of field values in the `water_level` field key in the `h2o_feet` measurement.
+ +```sql +SELECT EXP("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | exp | +| :------------------- | ------------: | +| 2019-08-18T00:00:00Z | 10.5065618493 | +| 2019-08-18T00:06:00Z | 10.7941033617 | +| 2019-08-18T00:12:00Z | 10.4124270347 | +| 2019-08-18T00:18:00Z | 10.2676687288 | +| 2019-08-18T00:24:00Z | 9.6214982905 | +| 2019-08-18T00:30:00Z | 9.6504061254 | + +{{% /expand %}} + +{{% expand "Calculate the exponential of field values associated with each field key in a measurement" %}} + +Return the exponential of field values for each field key that stores numeric +values in the `h2o_feet` measurement. +The `h2o_feet` measurement has one numeric field: `water_level`. + +```sql +SELECT EXP(*) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | exp_water_level | +| :------------------- | --------------: | +| 2019-08-18T00:00:00Z | 10.5065618493 | +| 2019-08-18T00:06:00Z | 10.7941033617 | +| 2019-08-18T00:12:00Z | 10.4124270347 | +| 2019-08-18T00:18:00Z | 10.2676687288 | +| 2019-08-18T00:24:00Z | 9.6214982905 | +| 2019-08-18T00:30:00Z | 9.6504061254 | + +{{% /expand %}} + +{{% expand "Calculate the exponential of field values associated with a field key and include several clauses" %}} + +```sql +SELECT EXP("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' ORDER BY time DESC LIMIT 4 OFFSET 2 +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | exp | +| :------------------- | ------------: | +| 2019-08-18T00:18:00Z | 10.2676687288 | +| 2019-08-18T00:12:00Z | 10.4124270347 | +| 2019-08-18T00:06:00Z | 
10.7941033617 |
+| 2019-08-18T00:00:00Z | 10.5065618493 |
+
+Return the exponentials of field values associated with the `water_level` field key in
+the [time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax)
+between `2019-08-18T00:00:00Z` and `2019-08-18T00:30:00Z` with results in
+[descending timestamp order](/influxdb/v2.6/query-data/influxql/explore-data/order-by/).
+The query also [limits](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/)
+the number of points returned to four and [offsets](/influxdb/v2.6/query-data/influxql/explore-data/offset-and-soffset/)
+results by two points.
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+### Advanced syntax
+
+```sql
+SELECT EXP(<function>( [ * | <field_key> ] )) FROM_clause [WHERE_clause] GROUP_BY_clause [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+The advanced syntax requires a [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) and a nested InfluxQL function.
+The query first calculates the results for the nested function at the specified `GROUP BY time()` interval and then applies the `EXP()` function to those results.
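The two-step evaluation can be sketched in Python. This is an illustrative sketch, not InfluxDB code, under the assumption that each window holds the raw values of one `GROUP BY time()` interval; `exp_of_mean` is a hypothetical name:

```python
import math

# Sketch of EXP(MEAN(...)) ... GROUP BY time(12m): first average the raw
# values inside each 12-minute window, then exponentiate each per-window mean.
def exp_of_mean(windows):
    """windows: one list of raw field values per GROUP BY time() interval."""
    return [math.exp(sum(window) / len(window)) for window in windows]

# The six raw points grouped into three 12-minute windows.
windows = [[2.352, 2.379], [2.343, 2.329], [2.264, 2.267]]
print([round(v, 10) for v in exp_of_mean(windows)])
# → values matching the EXP(MEAN(...)) example table (≈ 10.6494, 10.3398, 9.6359)
```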
+
+`EXP()` supports the following nested functions:
+[`COUNT()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#count),
+[`MEAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean),
+[`MEDIAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#median),
+[`MODE()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mode),
+[`SUM()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#sum),
+[`FIRST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#first),
+[`LAST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#last),
+[`MIN()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#min),
+[`MAX()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#max), and
+[`PERCENTILE()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#percentile).
+
+#### Examples
+
+{{< expand-wrapper >}}
+
+{{% expand "Calculate the exponential of mean values" %}}
+
+Return the exponential of [mean](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean) `water_level`s that are calculated at 12-minute intervals.
+
+```sql
+SELECT EXP(MEAN("water_level")) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' GROUP BY time(12m)
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time                 | exp           |
+| :------------------- | ------------: |
+| 2019-08-18T00:00:00Z | 10.6493621676 |
+| 2019-08-18T00:12:00Z | 10.3397945558 |
+| 2019-08-18T00:24:00Z | 9.6359413675  |
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+## FLOOR()
+
+Returns the field value rounded down to the nearest integer.
+
+### Syntax
+
+```sql
+SELECT FLOOR( [ * | <field_key> ] ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+`FLOOR(field_key)`
+Returns the field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key) rounded down to the nearest integer.
+
+`FLOOR(*)`
+Returns the field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement) rounded down to the nearest integer.
+
+`FLOOR()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types).
+
+Supports `GROUP BY` clauses that [group by tags](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-tags) but not `GROUP BY` clauses that [group by time](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals).
+To use `FLOOR()` with a `GROUP BY time()` clause, see [Advanced syntax](#advanced-syntax).
+
+#### Examples
+
+The examples below use the following subsample of the [NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data):
+
+```sql
+SELECT "water_level" FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica'
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time                 | water_level  |
+| :------------------- | -----------: |
+| 2019-08-18T00:00:00Z | 2.3520000000 |
+| 2019-08-18T00:06:00Z | 2.3790000000 |
+| 2019-08-18T00:12:00Z | 2.3430000000 |
+| 2019-08-18T00:18:00Z | 2.3290000000 |
+| 2019-08-18T00:24:00Z | 2.2640000000 |
+| 2019-08-18T00:30:00Z | 2.2670000000 |
+
+{{< expand-wrapper >}}
+
+{{% expand "Calculate the floor of field values associated with a field key" %}}
+
+Return field values in the `water_level` field key in the `h2o_feet` measurement rounded down to the nearest integer.
+ +```sql +SELECT FLOOR("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | floor | +| :------------------- | -----------: | +| 2019-08-18T00:00:00Z | 2.0000000000 | +| 2019-08-18T00:06:00Z | 2.0000000000 | +| 2019-08-18T00:12:00Z | 2.0000000000 | +| 2019-08-18T00:18:00Z | 2.0000000000 | +| 2019-08-18T00:24:00Z | 2.0000000000 | +| 2019-08-18T00:30:00Z | 2.0000000000 | + +{{% /expand %}} + +{{% expand "Calculate the floor of field values associated with each field key in a measurement" %}} + +Return field values for each field key that stores numeric values in the +`h2o_feet` measurement rounded down to the nearest integer. +The `h2o_feet` measurement has one numeric field: `water_level`. + +```sql +SELECT FLOOR(*) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | floor_water_level | +| :------------------- | ----------------: | +| 2019-08-18T00:00:00Z | 2.0000000000 | +| 2019-08-18T00:06:00Z | 2.0000000000 | +| 2019-08-18T00:12:00Z | 2.0000000000 | +| 2019-08-18T00:18:00Z | 2.0000000000 | +| 2019-08-18T00:24:00Z | 2.0000000000 | +| 2019-08-18T00:30:00Z | 2.0000000000 | + +{{% /expand %}} + +{{% expand "Calculate the floor of field values associated with a field key and include several clauses" %}} + +Return field values associated with the `water_level` field key rounded down to +the nearest integer in the [time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax) +between `2019-08-18T00:00:00Z` and `2019-08-18T00:30:00Z` with results in +[descending timestamp order](/influxdb/v2.6/query-data/influxql/explore-data/order-by/). 
+The query also [limits](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/)
+the number of points returned to four and [offsets](/influxdb/v2.6/query-data/influxql/explore-data/offset-and-soffset/)
+results by two points.
+
+```sql
+SELECT FLOOR("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' ORDER BY time DESC LIMIT 4 OFFSET 2
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | floor |
+| :------------------- | -----------: |
+| 2019-08-18T00:18:00Z | 2.0000000000 |
+| 2019-08-18T00:12:00Z | 2.0000000000 |
+| 2019-08-18T00:06:00Z | 2.0000000000 |
+| 2019-08-18T00:00:00Z | 2.0000000000 |
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+### Advanced syntax
+
+```sql
+SELECT FLOOR(<function>( [ * | <field_key> ] )) FROM_clause [WHERE_clause] GROUP_BY_clause [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+The advanced syntax requires a [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) and a nested InfluxQL function.
+The query first calculates the results for the nested function at the specified `GROUP BY time()` interval and then applies the `FLOOR()` function to those results.
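The two-step order of operations (aggregate each `GROUP BY time()` window first, then apply `FLOOR()` to each aggregate) can be sanity-checked outside InfluxDB. The following Python sketch is illustrative only; `floor_of_means` is a hypothetical helper, not an InfluxDB API, applied to the six sample `water_level` values:

```python
import math

# water_level points at 6-minute spacing (NOAA subsample)
points = [2.352, 2.379, 2.343, 2.329, 2.264, 2.267]

def floor_of_means(values, window):
    """Mirror FLOOR(MEAN(...)) with GROUP BY time(): average each
    non-overlapping window first, then round each mean down."""
    means = [sum(values[i:i + window]) / window
             for i in range(0, len(values), window)]
    return [math.floor(m) for m in means]

print(floor_of_means(points, 2))  # -> [2, 2, 2]
```

The window means are 2.3655, 2.336, and 2.2655, so flooring each one yields 2, matching the `FLOOR(MEAN(...))` results.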
+
+`FLOOR()` supports the following nested functions:
+[`COUNT()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#count),
+[`MEAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean),
+[`MEDIAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#median),
+[`MODE()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mode),
+[`SUM()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#sum),
+[`FIRST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#first),
+[`LAST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#last),
+[`MIN()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#min),
+[`MAX()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#max), and
+[`PERCENTILE()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#percentile).
+
+#### Examples
+
+{{< expand-wrapper >}}
+
+{{% expand "Calculate mean values rounded down to the nearest integer" %}}
+
+Return the [mean](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean) `water_level`s that are calculated at 12-minute intervals, rounded down to the nearest integer.
+
+```sql
+SELECT FLOOR(MEAN("water_level")) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' GROUP BY time(12m)
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | floor |
+| :------------------- | -----------: |
+| 2019-08-18T00:00:00Z | 2.0000000000 |
+| 2019-08-18T00:12:00Z | 2.0000000000 |
+| 2019-08-18T00:24:00Z | 2.0000000000 |
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+## HISTOGRAM()
+
+_InfluxQL does not currently support histogram generation.
+For information about creating histograms with data stored in InfluxDB, see
+[Flux's `histogram()` function](/{{< latest "flux" >}}/stdlib/universe/histogram)._
+
+## LN()
+
+Returns the natural logarithm of the field value.
+
+### Basic syntax
+
+```sql
+SELECT LN( [ * | <field_key> ] ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+`LN(field_key)`
+Returns the natural logarithm of field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key).
+
+`LN(*)`
+Returns the natural logarithm of field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement).
+
+`LN()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types).
+
+Supports `GROUP BY` clauses that [group by tags](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-tags) but not `GROUP BY` clauses that [group by time](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals).
+To use `LN()` with a `GROUP BY time()` clause, see [Advanced syntax](#advanced-syntax).
+
+#### Examples
+
+The examples below use the following subsample of the [NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data):
+
+```sql
+SELECT "water_level" FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica'
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | water_level |
+| :------------------- | -----------: |
+| 2019-08-18T00:00:00Z | 2.3520000000 |
+| 2019-08-18T00:06:00Z | 2.3790000000 |
+| 2019-08-18T00:12:00Z | 2.3430000000 |
+| 2019-08-18T00:18:00Z | 2.3290000000 |
+| 2019-08-18T00:24:00Z | 2.2640000000 |
+| 2019-08-18T00:30:00Z | 2.2670000000 |
+
+{{< expand-wrapper >}}
+
+{{% expand "Calculate the natural logarithm of field values associated with a field key" %}}
+
+Return the natural logarithm of field values in the `water_level` field key in the `h2o_feet` measurement.
+
+```sql
+SELECT LN("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica'
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | ln |
+| :------------------- | -----------: |
+| 2019-08-18T00:00:00Z | 0.8552660300 |
+| 2019-08-18T00:06:00Z | 0.8666802313 |
+| 2019-08-18T00:12:00Z | 0.8514321595 |
+| 2019-08-18T00:18:00Z | 0.8454389909 |
+| 2019-08-18T00:24:00Z | 0.8171331603 |
+| 2019-08-18T00:30:00Z | 0.8184573715 |
+
+{{% /expand %}}
+
+{{% expand "Calculate the natural logarithm of field values associated with each field key in a measurement" %}}
+
+Return the natural logarithm of field values for each field key that stores
+numeric values in the `h2o_feet` measurement.
+The `h2o_feet` measurement has one numeric field: `water_level`.
+
+```sql
+SELECT LN(*) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica'
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | ln_water_level |
+| :------------------- | -------------: |
+| 2019-08-18T00:00:00Z | 0.8552660300 |
+| 2019-08-18T00:06:00Z | 0.8666802313 |
+| 2019-08-18T00:12:00Z | 0.8514321595 |
+| 2019-08-18T00:18:00Z | 0.8454389909 |
+| 2019-08-18T00:24:00Z | 0.8171331603 |
+| 2019-08-18T00:30:00Z | 0.8184573715 |
+
+{{% /expand %}}
+
+{{% expand "Calculate the natural logarithm of field values associated with a field key and include several clauses" %}}
+
+Return the natural logarithms of field values associated with the `water_level`
+field key in the [time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax)
+between `2019-08-18T00:00:00Z` and `2019-08-18T00:30:00Z` with results in
+[descending timestamp order](/influxdb/v2.6/query-data/influxql/explore-data/order-by/).
+The query also [limits](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/)
+the number of points returned to four and [offsets](/influxdb/v2.6/query-data/influxql/explore-data/offset-and-soffset/)
+results by two points.
+
+```sql
+SELECT LN("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' ORDER BY time DESC LIMIT 4 OFFSET 2
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | ln |
+| :------------------- | -----------: |
+| 2019-08-18T00:18:00Z | 0.8454389909 |
+| 2019-08-18T00:12:00Z | 0.8514321595 |
+| 2019-08-18T00:06:00Z | 0.8666802313 |
+| 2019-08-18T00:00:00Z | 0.8552660300 |
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+### Advanced syntax
+
+```sql
+SELECT LN(<function>( [ * | <field_key> ] )) FROM_clause [WHERE_clause] GROUP_BY_clause [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+The advanced syntax requires a [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) and a nested InfluxQL function.
+The query first calculates the results for the nested function at the specified `GROUP BY time()` interval and then applies the `LN()` function to those results.
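Because `LN()` is applied after the aggregation, `LN(MEAN(...))` is simply the natural logarithm of each interval's mean. As a quick check outside InfluxDB (illustrative Python, not part of any InfluxDB client), the first 12-minute mean of the sample data reproduces the first `ln` value shown in the example that follows:

```python
import math

# First 12-minute window of the sample data: 2.352 and 2.379
mean = (2.352 + 2.379) / 2      # 2.3655, the value MEAN() reports

# LN(MEAN(...)) then takes the natural logarithm of that aggregate
ln_of_mean = math.log(mean)

# Agrees with the first row of the LN(MEAN(...)) result, 0.8609894161
assert abs(ln_of_mean - 0.8609894161) < 1e-9
```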
+
+`LN()` supports the following nested functions:
+[`COUNT()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#count),
+[`MEAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean),
+[`MEDIAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#median),
+[`MODE()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mode),
+[`SUM()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#sum),
+[`FIRST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#first),
+[`LAST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#last),
+[`MIN()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#min),
+[`MAX()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#max), and
+[`PERCENTILE()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#percentile).
+
+#### Examples
+
+{{< expand-wrapper >}}
+
+{{% expand "Calculate the natural logarithm of mean values" %}}
+
+Return the natural logarithm of [mean](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean) `water_level`s that are calculated at 12-minute intervals.
+
+```sql
+SELECT LN(MEAN("water_level")) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' GROUP BY time(12m)
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | ln |
+| :------------------- | -----------: |
+| 2019-08-18T00:00:00Z | 0.8609894161 |
+| 2019-08-18T00:12:00Z | 0.8484400650 |
+| 2019-08-18T00:24:00Z | 0.8177954851 |
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+## LOG()
+
+Returns the logarithm of the field value with base `b`.
+
+### Basic syntax
+
+```sql
+SELECT LOG( [ * | <field_key> ], <b> ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+`LOG(field_key, b)`
+Returns the logarithm of field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key) with base `b`.
+ +`LOG(*, b)` +Returns the logarithm of field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement) with base `b`. + +`LOG()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types). + +Supports `GROUP BY` clauses that [group by tags](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-tags) but not `GROUP BY` clauses that [group by time](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals). +To use `LOG()` with a `GROUP BY time()` clause, see [Advanced syntax](#advanced-syntax). + +#### Examples + +The examples below use the following subsample of the [NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data): + +```sql +SELECT "water_level" FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | water_level | +| :------------------- | -----------: | +| 2019-08-18T00:00:00Z | 2.3520000000 | +| 2019-08-18T00:06:00Z | 2.3790000000 | +| 2019-08-18T00:12:00Z | 2.3430000000 | +| 2019-08-18T00:18:00Z | 2.3290000000 | +| 2019-08-18T00:24:00Z | 2.2640000000 | +| 2019-08-18T00:30:00Z | 2.2670000000 | + +{{< expand-wrapper >}} + +{{% expand "Calculate the logarithm base 4 of field values associated with a field key" %}} + +Return the logarithm base 4 of field values in the `water_level` field key in the `h2o_feet` measurement. 
+ +```sql +SELECT LOG("water_level", 4) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | log | +| :------------------- | -----------: | +| 2019-08-18T00:00:00Z | 0.6169440301 | +| 2019-08-18T00:06:00Z | 0.6251776359 | +| 2019-08-18T00:12:00Z | 0.6141784771 | +| 2019-08-18T00:18:00Z | 0.6098553198 | +| 2019-08-18T00:24:00Z | 0.5894369791 | +| 2019-08-18T00:30:00Z | 0.5903921955 | + +{{% /expand %}} + +{{% expand "Calculate the logarithm base 4 of field values associated with each field key in a measurement" %}} + +Return the logarithm base 4 of field values for each numeric field in the `h2o_feet` measurement. +The `h2o_feet` measurement has one numeric field: `water_level`. + +```sql +SELECT LOG(*, 4) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | log_water_level | +| :------------------- | --------------: | +| 2019-08-18T00:00:00Z | 0.6169440301 | +| 2019-08-18T00:06:00Z | 0.6251776359 | +| 2019-08-18T00:12:00Z | 0.6141784771 | +| 2019-08-18T00:18:00Z | 0.6098553198 | +| 2019-08-18T00:24:00Z | 0.5894369791 | +| 2019-08-18T00:30:00Z | 0.5903921955 | + +{{% /expand %}} + +{{% expand "Calculate the logarithm base 4 of field values associated with a field key and include several clauses" %}} + +Return the logarithm base 4 of field values associated with the `water_level` +field key in the [time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax) +between `2019-08-18T00:00:00Z` and `2019-08-18T00:30:00Z` with results in +[descending timestamp order](/influxdb/v2.6/query-data/influxql/explore-data/order-by/). 
+The query also [limits](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/)
+the number of points returned to four and [offsets](/influxdb/v2.6/query-data/influxql/explore-data/offset-and-soffset/)
+results by two points.
+
+```sql
+SELECT LOG("water_level", 4) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' ORDER BY time DESC LIMIT 4 OFFSET 2
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | log |
+| :------------------- | -----------: |
+| 2019-08-18T00:18:00Z | 0.6098553198 |
+| 2019-08-18T00:12:00Z | 0.6141784771 |
+| 2019-08-18T00:06:00Z | 0.6251776359 |
+| 2019-08-18T00:00:00Z | 0.6169440301 |
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+### Advanced syntax
+
+```sql
+SELECT LOG(<function>( [ * | <field_key> ] ), <b>) FROM_clause [WHERE_clause] GROUP_BY_clause [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+The advanced syntax requires a [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) and a nested InfluxQL function.
+The query first calculates the results for the nested function at the specified `GROUP BY time()` interval and then applies the `LOG()` function to those results.
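`LOG(x, b)` follows the usual change-of-base identity, `LN(x) / LN(b)`, which is a convenient way to sanity-check results by hand. The following Python check is illustrative only (not InfluxDB code) and uses the first 12-minute mean from the sample data:

```python
import math

x, b = 2.3655, 4   # first 12-minute mean of water_level, and the example base

# Change of base: LOG(x, b) is ln(x) / ln(b)
assert math.isclose(math.log(x, b), math.log(x) / math.log(b))

# Agrees with the first LOG(MEAN(...), 4) row, 0.6210725804
assert abs(math.log(x, b) - 0.6210725804) < 1e-8
```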
+ +`LOG()` supports the following nested functions: +[`COUNT()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#count), +[`MEAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean), +[`MEDIAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#median), +[`MODE()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mode), +[`SUM()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#sum), +[`FIRST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#first), +[`LAST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#last), +[`MIN()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#min), +[`MAX()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#max), and +[`PERCENTILE()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#percentile). + +#### Examples + +{{< expand-wrapper >}} + +{{% expand "Calculate the logarithm base 4 of mean values" %}} + +Return the logarithm base 4 of [mean](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean) `water_level`s that are calculated at 12-minute intervals. + +```sql +SELECT LOG(MEAN("water_level"), 4) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' GROUP BY time(12m) +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | log | +| :------------------- | -----------: | +| 2019-08-18T00:00:00Z | 0.6210725804 | +| 2019-08-18T00:12:00Z | 0.6120201371 | +| 2019-08-18T00:24:00Z | 0.5899147454 | + +{{% /expand %}} + +{{< /expand-wrapper >}} + +## LOG2() + +Returns the logarithm of the field value to the base 2. 
+
+### Basic syntax
+
+```sql
+SELECT LOG2( [ * | <field_key> ] ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+`LOG2(field_key)`
+Returns the logarithm of field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key) to the base 2.
+
+`LOG2(*)`
+Returns the logarithm of field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement) to the base 2.
+
+`LOG2()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types).
+
+Supports `GROUP BY` clauses that [group by tags](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-tags) but not `GROUP BY` clauses that [group by time](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals).
+To use `LOG2()` with a `GROUP BY time()` clause, see [Advanced syntax](#advanced-syntax).
+
+#### Examples
+
+The examples below use the following subsample of the [NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data):
+
+```sql
+SELECT "water_level" FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica'
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | water_level |
+| :------------------- | -----------: |
+| 2019-08-18T00:00:00Z | 2.3520000000 |
+| 2019-08-18T00:06:00Z | 2.3790000000 |
+| 2019-08-18T00:12:00Z | 2.3430000000 |
+| 2019-08-18T00:18:00Z | 2.3290000000 |
+| 2019-08-18T00:24:00Z | 2.2640000000 |
+| 2019-08-18T00:30:00Z | 2.2670000000 |
+
+{{< expand-wrapper >}}
+
+{{% expand "Calculate the logarithm base 2 of field values associated with a field key" %}}
+
+Return the logarithm base 2 of field values in the `water_level` field key in the `h2o_feet` measurement.
+
+```sql
+SELECT LOG2("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica'
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | log2 |
+| :------------------- | -----------: |
+| 2019-08-18T00:00:00Z | 1.2338880602 |
+| 2019-08-18T00:06:00Z | 1.2503552718 |
+| 2019-08-18T00:12:00Z | 1.2283569542 |
+| 2019-08-18T00:18:00Z | 1.2197106395 |
+| 2019-08-18T00:24:00Z | 1.1788739582 |
+| 2019-08-18T00:30:00Z | 1.1807843911 |
+
+{{% /expand %}}
+
+{{% expand "Calculate the logarithm base 2 of field values associated with each field key in a measurement" %}}
+
+Return the logarithm base 2 of field values for each numeric field in the `h2o_feet` measurement.
+The `h2o_feet` measurement has one numeric field: `water_level`.
+
+```sql
+SELECT LOG2(*) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica'
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | log2_water_level |
+| :------------------- | ---------------: |
+| 2019-08-18T00:00:00Z | 1.2338880602 |
+| 2019-08-18T00:06:00Z | 1.2503552718 |
+| 2019-08-18T00:12:00Z | 1.2283569542 |
+| 2019-08-18T00:18:00Z | 1.2197106395 |
+| 2019-08-18T00:24:00Z | 1.1788739582 |
+| 2019-08-18T00:30:00Z | 1.1807843911 |
+
+{{% /expand %}}
+
+{{% expand "Calculate the logarithm base 2 of field values associated with a field key and include several clauses" %}}
+
+Return the logarithm base 2 of field values associated with the `water_level`
+field key in the [time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax)
+between `2019-08-18T00:00:00Z` and `2019-08-18T00:30:00Z` with results in
+[descending timestamp order](/influxdb/v2.6/query-data/influxql/explore-data/order-by/).
+The query also [limits](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/)
+the number of points returned to four and [offsets](/influxdb/v2.6/query-data/influxql/explore-data/offset-and-soffset/)
+results by two points.
+
+```sql
+SELECT LOG2("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' ORDER BY time DESC LIMIT 4 OFFSET 2
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | log2 |
+| :------------------- | -----------: |
+| 2019-08-18T00:18:00Z | 1.2197106395 |
+| 2019-08-18T00:12:00Z | 1.2283569542 |
+| 2019-08-18T00:06:00Z | 1.2503552718 |
+| 2019-08-18T00:00:00Z | 1.2338880602 |
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+### Advanced syntax
+
+```sql
+SELECT LOG2(<function>( [ * | <field_key> ] )) FROM_clause [WHERE_clause] GROUP_BY_clause [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+The advanced syntax requires a [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) and a nested InfluxQL function.
+The query first calculates the results for the nested function at the specified `GROUP BY time()` interval and then applies the `LOG2()` function to those results.
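As with the other advanced-syntax functions, the base-2 logarithm is applied to each interval's aggregate. The Python sketch below (illustrative only, not InfluxDB code) recomputes all three `LOG2(MEAN(...))` values from the sample points:

```python
import math

# 12-minute means of the sample water_level values
means = [(2.352 + 2.379) / 2, (2.343 + 2.329) / 2, (2.264 + 2.267) / 2]

# LOG2(MEAN("water_level")): base-2 logarithm of each interval mean
log2_means = [math.log2(m) for m in means]

expected = [1.2421451608, 1.2240402742, 1.1798294909]  # documented results
assert all(abs(a - e) < 1e-9 for a, e in zip(log2_means, expected))
```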
+ +`LOG2()` supports the following nested functions: +[`COUNT()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#count), +[`MEAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean), +[`MEDIAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#median), +[`MODE()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mode), +[`SUM()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#sum), +[`FIRST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#first), +[`LAST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#last), +[`MIN()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#min), +[`MAX()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#max), and +[`PERCENTILE()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#percentile). + +#### Examples + +{{< expand-wrapper >}} + +{{% expand "Calculate the logarithm base 2 of mean values" %}} + +Return the logarithm base 2 of [mean](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean) `water_level`s that are calculated at 12-minute intervals. + +```sql +SELECT LOG2(MEAN("water_level")) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' GROUP BY time(12m) +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | log2 | +| :------------------- | -----------: | +| 2019-08-18T00:00:00Z | 1.2421451608 | +| 2019-08-18T00:12:00Z | 1.2240402742 | +| 2019-08-18T00:24:00Z | 1.1798294909 | + +{{% /expand %}} + +{{< /expand-wrapper >}} + +## LOG10() + +Returns the logarithm of the field value to the base 10. 
+
+### Basic syntax
+
+```sql
+SELECT LOG10( [ * | <field_key> ] ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+`LOG10(field_key)`
+Returns the logarithm of field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key) to the base 10.
+
+`LOG10(*)`
+Returns the logarithm of field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement) to the base 10.
+
+`LOG10()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types).
+
+Supports `GROUP BY` clauses that [group by tags](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-tags) but not `GROUP BY` clauses that [group by time](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals).
+To use `LOG10()` with a `GROUP BY time()` clause, see [Advanced syntax](#advanced-syntax).
+
+#### Examples
+
+The examples below use the following subsample of the [NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data):
+
+```sql
+SELECT "water_level" FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica'
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | water_level |
+| :------------------- | -----------: |
+| 2019-08-18T00:00:00Z | 2.3520000000 |
+| 2019-08-18T00:06:00Z | 2.3790000000 |
+| 2019-08-18T00:12:00Z | 2.3430000000 |
+| 2019-08-18T00:18:00Z | 2.3290000000 |
+| 2019-08-18T00:24:00Z | 2.2640000000 |
+| 2019-08-18T00:30:00Z | 2.2670000000 |
+
+{{< expand-wrapper >}}
+
+{{% expand "Calculate the logarithm base 10 of field values associated with a field key" %}}
+
+Return the logarithm base 10 of field values in the `water_level` field key in the `h2o_feet` measurement.
+ +```sql +SELECT LOG10("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | log10 | +| :------------------- | -----------: | +| 2019-08-18T00:00:00Z | 0.3714373174 | +| 2019-08-18T00:06:00Z | 0.3763944420 | +| 2019-08-18T00:12:00Z | 0.3697722886 | +| 2019-08-18T00:18:00Z | 0.3671694885 | +| 2019-08-18T00:24:00Z | 0.3548764225 | +| 2019-08-18T00:30:00Z | 0.3554515201 | + +{{% /expand %}} + +{{% expand "Calculate the logarithm base 10 of field values associated with each field key in a measurement" %}} + +Return the logarithm base 10 of field values for each numeric field in the `h2o_feet` measurement. +The `h2o_feet` measurement has one numeric field: `water_level`. + +```sql +SELECT LOG10(*) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | log10_water_level | +| :------------------- | ----------------: | +| 2019-08-18T00:00:00Z | 0.3714373174 | +| 2019-08-18T00:06:00Z | 0.3763944420 | +| 2019-08-18T00:12:00Z | 0.3697722886 | +| 2019-08-18T00:18:00Z | 0.3671694885 | +| 2019-08-18T00:24:00Z | 0.3548764225 | +| 2019-08-18T00:30:00Z | 0.3554515201 | + +{{% /expand %}} + +{{% expand "Calculate the logarithm base 10 of field values associated with a field key and include several clauses" %}} + +Return the logarithm base 10 of field values associated with the `water_level` +field key in the [time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax) +between `2019-08-18T00:00:00Z` and `2019-08-18T00:30:00Z` with results in +[descending timestamp order](/influxdb/v2.6/query-data/influxql/explore-data/order-by/). 
+The query also [limits](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/)
+the number of points returned to four and [offsets](/influxdb/v2.6/query-data/influxql/explore-data/offset-and-soffset/)
+results by two points.
+
+```sql
+SELECT LOG10("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' ORDER BY time DESC LIMIT 4 OFFSET 2
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | log10 |
+| :------------------- | -----------: |
+| 2019-08-18T00:18:00Z | 0.3671694885 |
+| 2019-08-18T00:12:00Z | 0.3697722886 |
+| 2019-08-18T00:06:00Z | 0.3763944420 |
+| 2019-08-18T00:00:00Z | 0.3714373174 |
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+### Advanced syntax
+
+```sql
+SELECT LOG10(<function>( [ * | <field_key> ] )) FROM_clause [WHERE_clause] GROUP_BY_clause [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+The advanced syntax requires a [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) and a nested InfluxQL function.
+The query first calculates the results for the nested function at the specified `GROUP BY time()` interval and then applies the `LOG10()` function to those results.
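`LOG10(x)` is equivalent to `LOG(x, 10)`; both apply a base-10 logarithm after the nested aggregation. A quick illustrative check in Python (not InfluxDB code), using the first 12-minute mean of the sample data:

```python
import math

mean = (2.352 + 2.379) / 2   # first 12-minute mean, 2.3655

# LOG10(MEAN(...)): base-10 logarithm of the aggregated value
log10_of_mean = math.log10(mean)

# Agrees with the first LOG10(MEAN(...)) row, 0.3739229524
assert abs(log10_of_mean - 0.3739229524) < 1e-9

# Same result as the two-argument form, LOG(x, 10)
assert math.isclose(log10_of_mean, math.log(mean, 10))
```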
+ +`LOG10()` supports the following nested functions: +[`COUNT()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#count), +[`MEAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean), +[`MEDIAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#median), +[`MODE()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mode), +[`SUM()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#sum), +[`FIRST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#first), +[`LAST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#last), +[`MIN()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#min), +[`MAX()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#max), and +[`PERCENTILE()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#percentile). + +#### Examples + +{{< expand-wrapper >}} + +{{% expand "Calculate the logarithm base 10 of mean values" %}} + +Return the logarithm base 10 of [mean](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean) `water_level`s that are calculated at 12-minute intervals. + +```sql +SELECT LOG10(MEAN("water_level")) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' GROUP BY time(12m) +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | log10 | +| :------------------- | -----------: | +| 2019-08-18T00:00:00Z | 0.3739229524 | +| 2019-08-18T00:12:00Z | 0.3684728384 | +| 2019-08-18T00:24:00Z | 0.3551640665 | + +{{% /expand %}} + +{{< /expand-wrapper >}} + +## MOVING_AVERAGE() + +Returns the rolling average across a window of subsequent [field values](/influxdb/v2.6/reference/glossary/#field-value). 
+
+### Basic syntax
+
+```sql
+SELECT MOVING_AVERAGE( [ * | <field_key> | /<regular_expression>/ ] , <N> ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+`MOVING_AVERAGE()` calculates the rolling average across a window of `N` subsequent field values.
+The `N` argument is an integer and it is required.
+
+`MOVING_AVERAGE(field_key,N)`
+Returns the rolling average across `N` field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key).
+
+`MOVING_AVERAGE(/regular_expression/,N)`
+Returns the rolling average across `N` field values associated with each field key that matches the [regular expression](/influxdb/v2.6/query-data/influxql/explore-data/regular-expressions/).
+
+`MOVING_AVERAGE(*,N)`
+Returns the rolling average across `N` field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement).
+
+`MOVING_AVERAGE()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types).
+
+Supports `GROUP BY` clauses that [group by tags](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-tags) but not `GROUP BY` clauses that [group by time](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals).
+To use `MOVING_AVERAGE()` with a `GROUP BY time()` clause, see [Advanced syntax](#advanced-syntax).
+
+#### Examples
+
+The examples below use the following subsample of the [NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data):
+
+```sql
+SELECT "water_level" FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z'
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | water_level |
+| :------------------- | -----------: |
+| 2019-08-18T00:00:00Z | 2.3520000000 |
+| 2019-08-18T00:06:00Z | 2.3790000000 |
+| 2019-08-18T00:12:00Z | 2.3430000000 |
+| 2019-08-18T00:18:00Z | 2.3290000000 |
+| 2019-08-18T00:24:00Z | 2.2640000000 |
+| 2019-08-18T00:30:00Z | 2.2670000000 |
+
+{{< expand-wrapper >}}
+
+{{% expand "Calculate the moving average of the field values associated with a field key" %}}
+
+Return the rolling average across a two-field-value window for the `water_level` field key and the `h2o_feet` measurement.
+
+```sql
+SELECT MOVING_AVERAGE("water_level",2) FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z'
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | moving_average |
+| :------------------- | -------------: |
+| 2019-08-18T00:06:00Z | 2.3655000000 |
+| 2019-08-18T00:12:00Z | 2.3610000000 |
+| 2019-08-18T00:18:00Z | 2.3360000000 |
+| 2019-08-18T00:24:00Z | 2.2965000000 |
+| 2019-08-18T00:30:00Z | 2.2655000000 |
+
+The first result (`2.3655000000`) is the average of the first two points in the raw data: (`(2.3520000000 + 2.3790000000) / 2`).
+The second result (`2.3610000000`) is the average of the second two points in the raw data: (`(2.3790000000 + 2.3430000000) / 2`).
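The same windowed arithmetic can be reproduced outside InfluxDB. A minimal Python sketch over the raw values above (an illustration, not the engine implementation):

```python
def moving_average(values, n):
    """Rolling average across each window of n subsequent values."""
    return [sum(values[i - n + 1 : i + 1]) / n
            for i in range(n - 1, len(values))]

# water_level values from the subsample, in ascending time order.
water_level = [2.352, 2.379, 2.343, 2.329, 2.264, 2.267]
print(moving_average(water_level, 2))
# ≈ [2.3655, 2.361, 2.336, 2.2965, 2.2655] -- the five rows returned above
```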
+ +{{% /expand %}} + +{{% expand "Calculate the moving average of the field values associated with each field key in a measurement" %}} + +Return the rolling average across a three-field-value window for each field key +that stores numeric values in the `h2o_feet` measurement. +The `h2o_feet` measurement has one numeric field: `water_level`. + +```sql +SELECT MOVING_AVERAGE(*,3) FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | moving_average_water_level | +| :------------------- | -------------------------: | +| 2019-08-18T00:12:00Z | 2.3580000000 | +| 2019-08-18T00:18:00Z | 2.3503333333 | +| 2019-08-18T00:24:00Z | 2.3120000000 | +| 2019-08-18T00:30:00Z | 2.2866666667 | + +{{% /expand %}} + +{{% expand "Calculate the moving average of the field values associated with each field key that matches a regular expression" %}} + +Return the rolling average across a four-field-value window for each numeric +field with a field key that includes the word `level` in the `h2o_feet` measurement. 
+ +```sql +SELECT MOVING_AVERAGE(/level/,4) FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | moving_average_water_level | +| :------------------- | -------------------------: | +| 2019-08-18T00:18:00Z | 2.3507500000 | +| 2019-08-18T00:24:00Z | 2.3287500000 | +| 2019-08-18T00:30:00Z | 2.3007500000 | + +{{% /expand %}} + +{{% expand "Calculate the moving average of the field values associated with a field key and include several clauses" %}} + +Return the rolling average across a two-field-value window for the `water_level` +field key in the `h2o_feet` measurement in the [time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax) +between `2019-08-18T00:00:00Z` and `2019-08-18T00:30:00Z` with results in +[descending timestamp order](/influxdb/v2.6/query-data/influxql/explore-data/order-by/). +The query also [limits](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/) +the number of points returned to two and [offsets](/influxdb/v2.6/query-data/influxql/explore-data/offset-and-soffset/) +results by three points. 
+
+```sql
+SELECT MOVING_AVERAGE("water_level",2) FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' ORDER BY time DESC LIMIT 2 OFFSET 3
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | moving_average |
+| :------------------- | -------------: |
+| 2019-08-18T00:06:00Z | 2.3610000000 |
+| 2019-08-18T00:00:00Z | 2.3655000000 |
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+### Advanced syntax
+
+```sql
+SELECT MOVING_AVERAGE(<function>( [ * | <field_key> | /<regular_expression>/ ] ), N ) FROM_clause [WHERE_clause] GROUP_BY_clause [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+The advanced syntax requires a [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) and a nested InfluxQL function.
+The query first calculates the results for the nested function at the specified `GROUP BY time()` interval and then applies the `MOVING_AVERAGE()` function to those results.
+
+`MOVING_AVERAGE()` supports the following nested functions:
+[`COUNT()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#count),
+[`MEAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean),
+[`MEDIAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#median),
+[`MODE()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mode),
+[`SUM()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#sum),
+[`FIRST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#first),
+[`LAST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#last),
+[`MIN()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#min),
+[`MAX()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#max), and
+[`PERCENTILE()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#percentile).
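The two-step evaluation order can be sketched outside InfluxDB. In this illustration (not the engine implementation), 12-minute bucketing stands in for `GROUP BY time(12m)` and `max()` for a nested `MAX()`:

```python
# (minutes offset from 2019-08-18T00:00:00Z, water_level) from the subsample.
points = [(0, 2.352), (6, 2.379), (12, 2.343), (18, 2.329), (24, 2.264), (30, 2.267)]

# Step 1: group points into 12-minute intervals and apply the nested MAX().
buckets = {}
for minutes, value in points:
    buckets.setdefault(minutes // 12 * 12, []).append(value)
interval_maxes = [max(values) for _, values in sorted(buckets.items())]

# Step 2: apply MOVING_AVERAGE(..., 2) to the per-interval results.
rolling = [(a + b) / 2 for a, b in zip(interval_maxes, interval_maxes[1:])]
print(rolling)  # ≈ [2.361, 2.305]
```

These two values line up with the `2019-08-18T00:12:00Z` and `2019-08-18T00:24:00Z` rows of the advanced example below.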
+
+#### Examples
+
+{{< expand-wrapper >}}
+
+{{% expand "Calculate the moving average of maximum values" %}}
+
+Return the rolling average across a two-value window of [maximum](/influxdb/v2.6/query-data/influxql/functions/selectors/#max) `water_level`s that are calculated at 12-minute intervals.
+
+```sql
+SELECT MOVING_AVERAGE(MAX("water_level"),2) FROM "h2o_feet" WHERE "location" = 'santa_monica' AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' GROUP BY time(12m)
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | moving_average |
+| :------------------- | -------------: |
+| 2019-08-18T00:00:00Z | 2.4935000000 |
+| 2019-08-18T00:12:00Z | 2.3610000000 |
+| 2019-08-18T00:24:00Z | 2.3050000000 |
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+## NON_NEGATIVE_DERIVATIVE()
+
+Returns the non-negative rate of change between subsequent [field values](/influxdb/v2.6/reference/glossary/#field-value).
+Non-negative rates of change include positive rates of change and rates of change that equal zero.
+
+### Basic syntax
+
+```sql
+SELECT NON_NEGATIVE_DERIVATIVE( [ * | <field_key> | /<regular_expression>/ ] [ , <unit> ] ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+InfluxDB calculates the difference between subsequent field values and converts those results into the rate of change per `unit`.
+The `unit` argument is an integer followed by a [duration](/influxdb/v2.6/reference/glossary/#duration) and it is optional.
+If the query does not specify the `unit`, the unit defaults to one second (`1s`).
+`NON_NEGATIVE_DERIVATIVE()` returns only positive rates of change or rates of change that equal zero.
+
+`NON_NEGATIVE_DERIVATIVE(field_key)`
+Returns the non-negative rate of change between subsequent field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key).
+
+`NON_NEGATIVE_DERIVATIVE(/regular_expression/)`
+Returns the non-negative rate of change between subsequent field values associated with each field key that matches the [regular expression](/influxdb/v2.6/query-data/influxql/explore-data/regular-expressions/).
+
+`NON_NEGATIVE_DERIVATIVE(*)`
+Returns the non-negative rate of change between subsequent field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement).
+
+`NON_NEGATIVE_DERIVATIVE()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types).
+
+Supports `GROUP BY` clauses that [group by tags](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-tags) but not `GROUP BY` clauses that [group by time](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals).
+To use `NON_NEGATIVE_DERIVATIVE()` with a `GROUP BY time()` clause, see [Advanced syntax](#advanced-syntax).
+
+#### Examples
+
+See the examples in the [`DERIVATIVE()` documentation](#basic-syntax-8).
+`NON_NEGATIVE_DERIVATIVE()` behaves the same as the `DERIVATIVE()` function but `NON_NEGATIVE_DERIVATIVE()` returns only positive rates of change or rates of change that equal zero.
+
+### Advanced syntax
+
+```sql
+SELECT NON_NEGATIVE_DERIVATIVE(<function>( [ * | <field_key> | /<regular_expression>/ ] ) [ , <unit> ] ) FROM_clause [WHERE_clause] GROUP_BY_clause [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+The advanced syntax requires a [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) and a nested InfluxQL function.
+The query first calculates the results for the nested function at the specified `GROUP BY time()` interval and then applies the `NON_NEGATIVE_DERIVATIVE()` function to those results.
+
+The `unit` argument is an integer followed by a [duration](/influxdb/v2.6/reference/glossary/#duration) and it is optional.
+If the query does not specify the `unit`, the `unit` defaults to the `GROUP BY time()` interval. +Note that this behavior is different from the [basic syntax's](#basic-syntax-4) default behavior. +`NON_NEGATIVE_DERIVATIVE()` returns only positive rates of change or rates of change that equal zero. + +`NON_NEGATIVE_DERIVATIVE()` supports the following nested functions: +[`COUNT()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#count), +[`MEAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean), +[`MEDIAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#median), +[`MODE()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mode), +[`SUM()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#sum), +[`FIRST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#first), +[`LAST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#last), +[`MIN()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#min), +[`MAX()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#max), and +[`PERCENTILE()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#percentile). + +#### Examples + +See the examples in the [`DERIVATIVE()` documentation](#advanced-syntax-8). +`NON_NEGATIVE_DERIVATIVE()` behaves the same as the `DERIVATIVE()` function but `NON_NEGATIVE_DERIVATIVE()` returns only positive rates of change or rates of change that equal zero. + +## NON_NEGATIVE_DIFFERENCE() + +Returns the non-negative result of subtraction between subsequent [field values](/influxdb/v2.6/reference/glossary/#field-value). +Non-negative results of subtraction include positive differences and differences that equal zero. 
+
+### Basic syntax
+
+```sql
+SELECT NON_NEGATIVE_DIFFERENCE( [ * | <field_key> | /<regular_expression>/ ] ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+`NON_NEGATIVE_DIFFERENCE(field_key)`
+Returns the non-negative difference between subsequent field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key).
+
+`NON_NEGATIVE_DIFFERENCE(/regular_expression/)`
+Returns the non-negative difference between subsequent field values associated with each field key that matches the [regular expression](/influxdb/v2.6/query-data/influxql/explore-data/regular-expressions/).
+
+`NON_NEGATIVE_DIFFERENCE(*)`
+Returns the non-negative difference between subsequent field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement).
+
+`NON_NEGATIVE_DIFFERENCE()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types).
+
+Supports `GROUP BY` clauses that [group by tags](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-tags) but not `GROUP BY` clauses that [group by time](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals).
+To use `NON_NEGATIVE_DIFFERENCE()` with a `GROUP BY time()` clause, see [Advanced syntax](#advanced-syntax).
+
+#### Examples
+
+See the examples in the [`DIFFERENCE()` documentation](#basic-syntax-9).
+`NON_NEGATIVE_DIFFERENCE()` behaves the same as the `DIFFERENCE()` function but `NON_NEGATIVE_DIFFERENCE()` returns only positive differences or differences that equal zero.
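Dropping the negative results is the only behavioral difference from `DIFFERENCE()`. A minimal Python sketch of that filtering over the NOAA subsample values (an illustration, not the engine implementation):

```python
def non_negative_difference(values):
    """Differences between subsequent values, keeping only results >= 0."""
    diffs = [b - a for a, b in zip(values, values[1:])]
    return [d for d in diffs if d >= 0]

# water_level values from the NOAA subsample, in ascending time order.
water_level = [2.352, 2.379, 2.343, 2.329, 2.264, 2.267]
print(non_negative_difference(water_level))  # ≈ [0.027, 0.003]
```

Of the five pairwise differences, three are negative and are filtered out of the result.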
+
+### Advanced syntax
+
+```sql
+SELECT NON_NEGATIVE_DIFFERENCE(<function>( [ * | <field_key> | /<regular_expression>/ ] )) FROM_clause [WHERE_clause] GROUP_BY_clause [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+The advanced syntax requires a [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) and a nested InfluxQL function.
+The query first calculates the results for the nested function at the specified `GROUP BY time()` interval and then applies the `NON_NEGATIVE_DIFFERENCE()` function to those results.
+
+`NON_NEGATIVE_DIFFERENCE()` supports the following nested functions:
+[`COUNT()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#count),
+[`MEAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean),
+[`MEDIAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#median),
+[`MODE()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mode),
+[`SUM()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#sum),
+[`FIRST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#first),
+[`LAST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#last),
+[`MIN()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#min),
+[`MAX()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#max), and
+[`PERCENTILE()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#percentile).
+
+#### Examples
+
+See the examples in the [`DIFFERENCE()` documentation](#advanced-syntax-9).
+`NON_NEGATIVE_DIFFERENCE()` behaves the same as the `DIFFERENCE()` function but `NON_NEGATIVE_DIFFERENCE()` returns only positive differences or differences that equal zero.
+
+## POW()
+
+Returns the field value to the power of `x`.
+
+### Basic syntax
+
+```sql
+SELECT POW( [ * | <field_key> ], <x> ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+`POW(field_key, x)`
+Returns the field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key) to the power of `x`.
+
+`POW(*, x)`
+Returns the field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement) to the power of `x`.
+
+`POW()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types).
+
+Supports `GROUP BY` clauses that [group by tags](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-tags) but not `GROUP BY` clauses that [group by time](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals).
+To use `POW()` with a `GROUP BY time()` clause, see [Advanced syntax](#advanced-syntax).
+
+#### Examples
+
+The examples below use the following subsample of the [NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data):
+
+```sql
+SELECT "water_level" FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica'
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | water_level |
+| :------------------- | -----------: |
+| 2019-08-18T00:00:00Z | 2.3520000000 |
+| 2019-08-18T00:06:00Z | 2.3790000000 |
+| 2019-08-18T00:12:00Z | 2.3430000000 |
+| 2019-08-18T00:18:00Z | 2.3290000000 |
+| 2019-08-18T00:24:00Z | 2.2640000000 |
+| 2019-08-18T00:30:00Z | 2.2670000000 |
+
+{{< expand-wrapper >}}
+
+{{% expand "Calculate field values associated with a field key to the power of 4" %}}
+
+Return field values in the `water_level` field key in the `h2o_feet` measurement
+multiplied to the power of 4.
+ +```sql +SELECT POW("water_level", 4) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | pow | +| :------------------- | ------------: | +| 2019-08-18T00:00:00Z | 30.6019618652 | +| 2019-08-18T00:06:00Z | 32.0315362489 | +| 2019-08-18T00:12:00Z | 30.1362461432 | +| 2019-08-18T00:18:00Z | 29.4223904261 | +| 2019-08-18T00:24:00Z | 26.2727594844 | +| 2019-08-18T00:30:00Z | 26.4122914255 | + +{{% /expand %}} + +{{% expand "Calculate field values associated with each field key in a measurement to the power of 4" %}} + +Return field values for each field key that stores numeric values in the `h2o_feet` measurement multiplied to the power of 4. +The `h2o_feet` measurement has one numeric field: `water_level`. + +```sql +SELECT POW(*, 4) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | pow_water_level | +| :------------------- | --------------: | +| 2019-08-18T00:00:00Z | 30.6019618652 | +| 2019-08-18T00:06:00Z | 32.0315362489 | +| 2019-08-18T00:12:00Z | 30.1362461432 | +| 2019-08-18T00:18:00Z | 29.4223904261 | +| 2019-08-18T00:24:00Z | 26.2727594844 | +| 2019-08-18T00:30:00Z | 26.4122914255 | + +{{% /expand %}} + +{{% expand "Calculate field values associated with a field key to the power of 4 and include several clauses" %}} + +Return field values associated with the `water_level` field key multiplied to +the power of 4 in the [time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax) +between `2019-08-18T00:00:00Z` and `2019-08-18T00:30:00Z` with results in +[descending timestamp order](/influxdb/v2.6/query-data/influxql/explore-data/order-by/). 
+The query also [limits](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/)
+the number of points returned to four and [offsets](/influxdb/v2.6/query-data/influxql/explore-data/offset-and-soffset/)
+results by two points.
+
+```sql
+SELECT POW("water_level", 4) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' ORDER BY time DESC LIMIT 4 OFFSET 2
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | pow |
+| :------------------- | ------------: |
+| 2019-08-18T00:18:00Z | 29.4223904261 |
+| 2019-08-18T00:12:00Z | 30.1362461432 |
+| 2019-08-18T00:06:00Z | 32.0315362489 |
+| 2019-08-18T00:00:00Z | 30.6019618652 |
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+### Advanced syntax
+
+```sql
+SELECT POW(<function>( [ * | <field_key> ] ), <x> ) FROM_clause [WHERE_clause] GROUP_BY_clause [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+The advanced syntax requires a [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) and a nested InfluxQL function.
+The query first calculates the results for the nested function at the specified `GROUP BY time()` interval and then applies the `POW()` function to those results.
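`POW()` applies ordinary per-point exponentiation, so the values in the basic-syntax examples can be checked with a short Python sketch (an illustration, not the InfluxDB engine):

```python
# water_level values from the subsample, in ascending time order.
water_level = [2.352, 2.379, 2.343, 2.329, 2.264, 2.267]

# POW(field_key, 4) raises each field value to the power of 4.
pow4 = [v ** 4 for v in water_level]
print(pow4[0])  # ≈ 30.6019618652, matching the 2019-08-18T00:00:00Z row
```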
+ +`POW()` supports the following nested functions: +[`COUNT()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#count), +[`MEAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean), +[`MEDIAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#median), +[`MODE()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mode), +[`SUM()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#sum), +[`FIRST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#first), +[`LAST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#last), +[`MIN()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#min), +[`MAX()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#max), and +[`PERCENTILE()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#percentile). + +#### Examples + +{{< expand-wrapper >}} + +{{% expand "Calculate mean values to the power of 4" %}} + +Return [mean](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean) `water_level`s that are calculated at 12-minute intervals multiplied to the power of 4. + +```sql +SELECT POW(MEAN("water_level"), 4) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' GROUP BY time(12m) +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | pow | +| :------------------- | ------------: | +| 2019-08-18T00:00:00Z | 31.3106302459 | +| 2019-08-18T00:12:00Z | 29.7777139548 | +| 2019-08-18T00:24:00Z | 26.3424561663 | + +{{% /expand %}} + +{{< /expand-wrapper >}} + +## ROUND() + +Returns the subsequent value rounded to the nearest integer. 
+
+### Basic syntax
+
+```sql
+SELECT ROUND( [ * | <field_key> ] ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+`ROUND(field_key)`
+Returns the field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key) rounded to the nearest integer.
+
+`ROUND(*)`
+Returns the field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement) rounded to the nearest integer.
+
+`ROUND()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types).
+
+Supports `GROUP BY` clauses that [group by tags](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-tags) but not `GROUP BY` clauses that [group by time](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals).
+To use `ROUND()` with a `GROUP BY time()` clause, see [Advanced syntax](#advanced-syntax).
+
+#### Examples
+
+The examples below use the following subsample of the [NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data):
+
+```sql
+SELECT "water_level" FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica'
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | water_level |
+| :------------------- | -----------: |
+| 2019-08-18T00:00:00Z | 2.3520000000 |
+| 2019-08-18T00:06:00Z | 2.3790000000 |
+| 2019-08-18T00:12:00Z | 2.3430000000 |
+| 2019-08-18T00:18:00Z | 2.3290000000 |
+| 2019-08-18T00:24:00Z | 2.2640000000 |
+| 2019-08-18T00:30:00Z | 2.2670000000 |
+
+{{< expand-wrapper >}}
+
+{{% expand "Round field values associated with a field key" %}}
+
+Return field values in the `water_level` field key in the `h2o_feet` measurement
+rounded to the nearest integer.
+ +```sql +SELECT ROUND("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | round | +| :------------------- | -----------: | +| 2019-08-18T00:00:00Z | 2.0000000000 | +| 2019-08-18T00:06:00Z | 2.0000000000 | +| 2019-08-18T00:12:00Z | 2.0000000000 | +| 2019-08-18T00:18:00Z | 2.0000000000 | +| 2019-08-18T00:24:00Z | 2.0000000000 | +| 2019-08-18T00:30:00Z | 2.0000000000 | + +{{% /expand %}} + +{{% expand "Round field values associated with each field key in a measurement" %}} + +Return field values for each numeric field in the `h2o_feet` measurement rounded to the nearest integer. +The `h2o_feet` measurement has one numeric field: `water_level`. + +```sql +SELECT ROUND(*) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | round_water_level | +| :------------------- | ----------------: | +| 2019-08-18T00:00:00Z | 2.0000000000 | +| 2019-08-18T00:06:00Z | 2.0000000000 | +| 2019-08-18T00:12:00Z | 2.0000000000 | +| 2019-08-18T00:18:00Z | 2.0000000000 | +| 2019-08-18T00:24:00Z | 2.0000000000 | +| 2019-08-18T00:30:00Z | 2.0000000000 | + +{{% /expand %}} + +{{% expand "Round field values associated with a field key and include several clauses" %}} + +Return field values associated with the `water_level` field key rounded to the +nearest integer in the [time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax) +between `2019-08-18T00:00:00Z` and `2019-08-18T00:30:00Z` with results in +[descending timestamp order](/influxdb/v2.6/query-data/influxql/explore-data/order-by/). 
+The query also [limits](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/)
+the number of points returned to four and [offsets](/influxdb/v2.6/query-data/influxql/explore-data/offset-and-soffset/)
+results by two points.
+
+```sql
+SELECT ROUND("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' ORDER BY time DESC LIMIT 4 OFFSET 2
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | round |
+| :------------------- | -----------: |
+| 2019-08-18T00:18:00Z | 2.0000000000 |
+| 2019-08-18T00:12:00Z | 2.0000000000 |
+| 2019-08-18T00:06:00Z | 2.0000000000 |
+| 2019-08-18T00:00:00Z | 2.0000000000 |
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+### Advanced syntax
+
+```sql
+SELECT ROUND(<function>( [ * | <field_key> ] )) FROM_clause [WHERE_clause] GROUP_BY_clause [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+The advanced syntax requires a [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) and a nested InfluxQL function.
+The query first calculates the results for the nested function at the specified `GROUP BY time()` interval and then applies the `ROUND()` function to those results.
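`ROUND()` is plain per-point rounding to the nearest integer, as a Python sketch shows (an illustration, not the engine; note that Python's built-in `round()` uses banker's rounding for exact `.5` ties, which don't arise for these values):

```python
# water_level values from the subsample, in ascending time order.
water_level = [2.352, 2.379, 2.343, 2.329, 2.264, 2.267]

# ROUND() rounds each field value to the nearest integer.
rounded = [float(round(v)) for v in water_level]
print(rounded)  # [2.0, 2.0, 2.0, 2.0, 2.0, 2.0]
```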
+
+`ROUND()` supports the following nested functions:
+[`COUNT()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#count),
+[`MEAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean),
+[`MEDIAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#median),
+[`MODE()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mode),
+[`SUM()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#sum),
+[`FIRST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#first),
+[`LAST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#last),
+[`MIN()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#min),
+[`MAX()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#max), and
+[`PERCENTILE()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#percentile).
+
+#### Examples
+
+{{< expand-wrapper >}}
+
+{{% expand "Calculate mean values rounded to the nearest integer" %}}
+
+Return the [mean](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean) `water_level`s that are calculated at 12-minute intervals, rounded to the nearest integer.
+
+```sql
+SELECT ROUND(MEAN("water_level")) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' GROUP BY time(12m)
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | round |
+| :------------------- | -----------: |
+| 2019-08-18T00:00:00Z | 2.0000000000 |
+| 2019-08-18T00:12:00Z | 2.0000000000 |
+| 2019-08-18T00:24:00Z | 2.0000000000 |
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+## SIN()
+
+Returns the sine of the field value.
+
+### Basic syntax
+
+```sql
+SELECT SIN( [ * | <field_key> ] ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+`SIN(field_key)`
+Returns the sine of field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key).
+ +`SIN(*)` +Returns the sine of field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement). + +`SIN()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types). + +Supports `GROUP BY` clauses that [group by tags](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-tags) but not `GROUP BY` clauses that [group by time](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals). +To use `SIN()` with a `GROUP BY time()` clause, see [Advanced syntax](#advanced-syntax). + +#### Examples + +The examples below use the following subsample of the [NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data): + +```sql +SELECT "water_level" FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | water_level | +| :------------------- | -----------: | +| 2019-08-18T00:00:00Z | 2.3520000000 | +| 2019-08-18T00:06:00Z | 2.3790000000 | +| 2019-08-18T00:12:00Z | 2.3430000000 | +| 2019-08-18T00:18:00Z | 2.3290000000 | +| 2019-08-18T00:24:00Z | 2.2640000000 | +| 2019-08-18T00:30:00Z | 2.2670000000 | + +{{< expand-wrapper >}} + +{{% expand "Calculate the sine of field values associated with a field key" %}} + +Return the sine of field values in the `water_level` field key in the `h2o_feet` measurement. 
+ +```sql +SELECT SIN("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | sin | +| :------------------- | -----------: | +| 2019-08-18T00:00:00Z | 0.7100665046 | +| 2019-08-18T00:06:00Z | 0.6907983763 | +| 2019-08-18T00:12:00Z | 0.7163748731 | +| 2019-08-18T00:18:00Z | 0.7260723687 | +| 2019-08-18T00:24:00Z | 0.7692028035 | +| 2019-08-18T00:30:00Z | 0.7672823308 | + +{{% /expand %}} + +{{% expand "Calculate the sine of field values associated with each field key in a measurement" %}} + +Return the sine of field values for each numeric field in the `h2o_feet` measurement. +The `h2o_feet` measurement has one numeric field: `water_level`. + +```sql +SELECT SIN(*) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | sin_water_level | +| :------------------- | --------------: | +| 2019-08-18T00:00:00Z | 0.7100665046 | +| 2019-08-18T00:06:00Z | 0.6907983763 | +| 2019-08-18T00:12:00Z | 0.7163748731 | +| 2019-08-18T00:18:00Z | 0.7260723687 | +| 2019-08-18T00:24:00Z | 0.7692028035 | +| 2019-08-18T00:30:00Z | 0.7672823308 | + +{{% /expand %}} + +{{% expand "Calculate the sine of field values associated with a field key and include several clauses" %}} + +Return the sine of field values associated with the `water_level` +field key in the [time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax) +between `2019-08-18T00:00:00Z` and `2019-08-18T00:30:00Z` with results in +[descending timestamp order](/influxdb/v2.6/query-data/influxql/explore-data/order-by/). 
+The query also [limits](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/)
+the number of points returned to four and [offsets](/influxdb/v2.6/query-data/influxql/explore-data/offset-and-soffset/)
+results by two points.
+
+```sql
+SELECT SIN("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' ORDER BY time DESC LIMIT 4 OFFSET 2
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | sin |
+| :------------------- | -----------: |
+| 2019-08-18T00:18:00Z | 0.7260723687 |
+| 2019-08-18T00:12:00Z | 0.7163748731 |
+| 2019-08-18T00:06:00Z | 0.6907983763 |
+| 2019-08-18T00:00:00Z | 0.7100665046 |
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+### Advanced syntax
+
+```sql
+SELECT SIN(<function>( [ * | <field_key> ] )) FROM_clause [WHERE_clause] GROUP_BY_clause [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+The advanced syntax requires a [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) and a nested InfluxQL function.
+The query first calculates the results for the nested function at the specified `GROUP BY time()` interval and then applies the `SIN()` function to those results.
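The two-step evaluation can be sketched outside InfluxDB. The following Python snippet is illustrative only (Python and its `math` module are not part of InfluxQL); it takes the `water_level` values from the sample data above, computes the nested `MEAN()` for each 12-minute interval, and then applies the sine to each mean:

```python
import math
from statistics import mean

# water_level values from the sample data, grouped into the
# 12-minute intervals produced by GROUP BY time(12m)
buckets = [
    [2.352, 2.379],  # 00:00 and 00:06
    [2.343, 2.329],  # 00:12 and 00:18
    [2.264, 2.267],  # 00:24 and 00:30
]

# Step 1: evaluate the nested function (MEAN) for each interval
means = [mean(b) for b in buckets]

# Step 2: apply SIN() to each interval mean
sines = [math.sin(m) for m in means]
print([round(s, 10) for s in sines])
```

The printed values line up with the `SIN(MEAN("water_level"))` query results shown in the example that follows.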
+
+`SIN()` supports the following nested functions:
+[`COUNT()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#count),
+[`MEAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean),
+[`MEDIAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#median),
+[`MODE()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mode),
+[`SUM()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#sum),
+[`FIRST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#first),
+[`LAST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#last),
+[`MIN()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#min),
+[`MAX()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#max), and
+[`PERCENTILE()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#percentile).
+
+#### Examples
+
+{{< expand-wrapper >}}
+
+{{% expand "Calculate the sine of mean values" %}}
+
+Return the sine of [mean](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean) `water_level`s that are calculated at 12-minute intervals.
+
+```sql
+SELECT SIN(MEAN("water_level")) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' GROUP BY time(12m)
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | sin |
+| :------------------- | -----------: |
+| 2019-08-18T00:00:00Z | 0.7004962722 |
+| 2019-08-18T00:12:00Z | 0.7212412912 |
+| 2019-08-18T00:24:00Z | 0.7682434314 |
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+## SQRT()
+
+Returns the square root of the field value.
+
+### Basic syntax
+
+```sql
+SELECT SQRT( [ * | <field_key> ] ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+`SQRT(field_key)`
+Returns the square root of field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key).
+
+`SQRT(*)`
+Returns the square root of field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement).
+
+`SQRT()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types).
+
+Supports `GROUP BY` clauses that [group by tags](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-tags) but not `GROUP BY` clauses that [group by time](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals).
+To use `SQRT()` with a `GROUP BY time()` clause, see [Advanced syntax](#advanced-syntax).
+
+#### Examples
+
+The examples below use the following subsample of the [NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data):
+
+```sql
+SELECT "water_level" FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica'
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | water_level |
+| :------------------- | -----------: |
+| 2019-08-18T00:00:00Z | 2.3520000000 |
+| 2019-08-18T00:06:00Z | 2.3790000000 |
+| 2019-08-18T00:12:00Z | 2.3430000000 |
+| 2019-08-18T00:18:00Z | 2.3290000000 |
+| 2019-08-18T00:24:00Z | 2.2640000000 |
+| 2019-08-18T00:30:00Z | 2.2670000000 |
+
+{{< expand-wrapper >}}
+
+{{% expand "Calculate the square root of field values associated with a field key" %}}
+
+Return the square roots of field values in the `water_level` field key in the `h2o_feet` measurement.
+ +```sql +SELECT SQRT("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | sqrt | +| :------------------- | -----------: | +| 2019-08-18T00:00:00Z | 1.5336231610 | +| 2019-08-18T00:06:00Z | 1.5424007261 | +| 2019-08-18T00:12:00Z | 1.5306861207 | +| 2019-08-18T00:18:00Z | 1.5261061562 | +| 2019-08-18T00:24:00Z | 1.5046594299 | +| 2019-08-18T00:30:00Z | 1.5056560032 | + +{{% /expand %}} + +{{% expand "Calculate the square root of field values associated with each field key in a measurement" %}} + +Return the square roots of field values for each numeric field in the `h2o_feet` measurement. +The `h2o_feet` measurement has one numeric field: `water_level`. + +```sql +SELECT SQRT(*) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | sqrt_water_level | +| :------------------- | ---------------: | +| 2019-08-18T00:00:00Z | 1.5336231610 | +| 2019-08-18T00:06:00Z | 1.5424007261 | +| 2019-08-18T00:12:00Z | 1.5306861207 | +| 2019-08-18T00:18:00Z | 1.5261061562 | +| 2019-08-18T00:24:00Z | 1.5046594299 | +| 2019-08-18T00:30:00Z | 1.5056560032 | + +{{% /expand %}} + +{{% expand "Calculate the square root of field values associated with a field key and include several clauses" %}} + +Return the square roots of field values associated with the `water_level` +field key in the [time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax) +between `2019-08-18T00:00:00Z` and `2019-08-18T00:30:00Z` with results in +[descending timestamp order](/influxdb/v2.6/query-data/influxql/explore-data/order-by/). 
+The query also [limits](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/)
+the number of points returned to four and [offsets](/influxdb/v2.6/query-data/influxql/explore-data/offset-and-soffset/)
+results by two points.
+
+```sql
+SELECT SQRT("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' ORDER BY time DESC LIMIT 4 OFFSET 2
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | sqrt |
+| :------------------- | -----------: |
+| 2019-08-18T00:18:00Z | 1.5261061562 |
+| 2019-08-18T00:12:00Z | 1.5306861207 |
+| 2019-08-18T00:06:00Z | 1.5424007261 |
+| 2019-08-18T00:00:00Z | 1.5336231610 |
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+### Advanced syntax
+
+```sql
+SELECT SQRT(<function>( [ * | <field_key> ] )) FROM_clause [WHERE_clause] GROUP_BY_clause [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+The advanced syntax requires a [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) and a nested InfluxQL function.
+The query first calculates the results for the nested function at the specified `GROUP BY time()` interval and then applies the `SQRT()` function to those results.
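As with the other transformations, the two steps can be reproduced outside InfluxDB. This Python sketch (illustrative only, not part of InfluxQL) averages the sample `water_level` values per 12-minute interval, then takes the square root of each mean:

```python
import math
from statistics import mean

# Sample water_level values in 12-minute buckets (00:00/00:06, 00:12/00:18, 00:24/00:30)
buckets = [[2.352, 2.379], [2.343, 2.329], [2.264, 2.267]]

# MEAN() per interval, then SQRT() on each result
roots = [math.sqrt(mean(b)) for b in buckets]
print([round(r, 10) for r in roots])
```

The output corresponds to the `SQRT(MEAN("water_level"))` query results in the example that follows.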
+
+`SQRT()` supports the following nested functions:
+[`COUNT()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#count),
+[`MEAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean),
+[`MEDIAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#median),
+[`MODE()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mode),
+[`SUM()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#sum),
+[`FIRST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#first),
+[`LAST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#last),
+[`MIN()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#min),
+[`MAX()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#max), and
+[`PERCENTILE()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#percentile).
+
+#### Examples
+
+{{< expand-wrapper >}}
+
+{{% expand "Calculate the square root of mean values" %}}
+
+Return the square roots of [mean](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean) `water_level`s that are calculated at 12-minute intervals.
+
+```sql
+SELECT SQRT(MEAN("water_level")) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' GROUP BY time(12m)
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | sqrt |
+| :------------------- | -----------: |
+| 2019-08-18T00:00:00Z | 1.5380182054 |
+| 2019-08-18T00:12:00Z | 1.5283978540 |
+| 2019-08-18T00:24:00Z | 1.5051577990 |
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+## TAN()
+
+Returns the tangent of the field value.
+
+### Basic syntax
+
+```sql
+SELECT TAN( [ * | <field_key> ] ) FROM_clause [WHERE_clause] [GROUP_BY_clause] [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+`TAN(field_key)`
+Returns the tangent of field values associated with the [field key](/influxdb/v2.6/reference/glossary/#field-key).
+ +`TAN(*)` +Returns the tangent of field values associated with each field key in the [measurement](/influxdb/v2.6/reference/glossary/#measurement). + +`TAN()` supports int64 and float64 field value [data types](/influxdb/v2.6/query-data/influxql/explore-data/select/#data-types). + +Supports `GROUP BY` clauses that [group by tags](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-tags) but not `GROUP BY` clauses that [group by time](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals). +To use `TAN()` with a `GROUP BY time()` clause, see [Advanced syntax](#advanced-syntax). + +#### Examples + +The examples below use the following subsample of the [NOAA water sample data](/influxdb/v2.6/reference/sample-data/#noaa-water-sample-data): + +```sql +SELECT "water_level" FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | water_level | +| :------------------- | -----------: | +| 2019-08-18T00:00:00Z | 2.3520000000 | +| 2019-08-18T00:06:00Z | 2.3790000000 | +| 2019-08-18T00:12:00Z | 2.3430000000 | +| 2019-08-18T00:18:00Z | 2.3290000000 | +| 2019-08-18T00:24:00Z | 2.2640000000 | +| 2019-08-18T00:30:00Z | 2.2670000000 | + +{{< expand-wrapper >}} + +{{% expand "Calculate the tangent of field values associated with a field key" %}} + +Return the tangent of field values in the `water_level` field key in the `h2o_feet` measurement. 
+ +```sql +SELECT TAN("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | tan | +| :------------------- | ------------: | +| 2019-08-18T00:00:00Z | -1.0084243657 | +| 2019-08-18T00:06:00Z | -0.9553984098 | +| 2019-08-18T00:12:00Z | -1.0267433979 | +| 2019-08-18T00:18:00Z | -1.0559235802 | +| 2019-08-18T00:24:00Z | -1.2037513424 | +| 2019-08-18T00:30:00Z | -1.1964307053 | + +{{% /expand %}} + +{{% expand "Calculate the tangent of field values associated with each field key in a measurement" %}} + +Return the tangent of field values for each numeric field in the `h2o_feet` measurement. +The `h2o_feet` measurement has one numeric field: `water_level`. + +```sql +SELECT TAN(*) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | tan_water_level | +| :------------------- | --------------: | +| 2019-08-18T00:00:00Z | -1.0084243657 | +| 2019-08-18T00:06:00Z | -0.9553984098 | +| 2019-08-18T00:12:00Z | -1.0267433979 | +| 2019-08-18T00:18:00Z | -1.0559235802 | +| 2019-08-18T00:24:00Z | -1.2037513424 | +| 2019-08-18T00:30:00Z | -1.1964307053 | + +{{% /expand %}} + +{{% expand "Calculate the tangent of field values associated with a field key and include several clauses" %}} + +Return the tangent of field values associated with the `water_level` +field key in the [time range](/influxdb/v2.6/query-data/influxql/explore-data/time-and-timezone/#time-syntax) +between `2019-08-18T00:00:00Z` and `2019-08-18T00:30:00Z` with results in +[descending timestamp order](/influxdb/v2.6/query-data/influxql/explore-data/order-by/). 
+The query also [limits](/influxdb/v2.6/query-data/influxql/explore-data/limit-and-slimit/)
+the number of points returned to four and [offsets](/influxdb/v2.6/query-data/influxql/explore-data/offset-and-soffset/)
+results by two points.
+
+```sql
+SELECT TAN("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' ORDER BY time DESC LIMIT 4 OFFSET 2
+```
+
+{{% influxql/table-meta %}}
+name: h2o_feet
+{{% /influxql/table-meta %}}
+
+| time | tan |
+| :------------------- | ------------: |
+| 2019-08-18T00:18:00Z | -1.0559235802 |
+| 2019-08-18T00:12:00Z | -1.0267433979 |
+| 2019-08-18T00:06:00Z | -0.9553984098 |
+| 2019-08-18T00:00:00Z | -1.0084243657 |
+
+{{% /expand %}}
+
+{{< /expand-wrapper >}}
+
+### Advanced syntax
+
+```sql
+SELECT TAN(<function>( [ * | <field_key> ] )) FROM_clause [WHERE_clause] GROUP_BY_clause [ORDER_BY_clause] [LIMIT_clause] [OFFSET_clause] [SLIMIT_clause] [SOFFSET_clause]
+```
+
+The advanced syntax requires a [`GROUP BY time()` clause](/influxdb/v2.6/query-data/influxql/explore-data/group-by/#group-by-time-intervals) and a nested InfluxQL function.
+The query first calculates the results for the nested function at the specified `GROUP BY time()` interval and then applies the `TAN()` function to those results.
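The same two-step evaluation applies here. This illustrative Python sketch (not part of InfluxQL) computes the per-interval means of the sample `water_level` values, then applies the tangent to each mean:

```python
import math
from statistics import mean

# Sample water_level values in 12-minute buckets (00:00/00:06, 00:12/00:18, 00:24/00:30)
buckets = [[2.352, 2.379], [2.343, 2.329], [2.264, 2.267]]

# MEAN() per interval, then TAN() on each result
tangents = [math.tan(mean(b)) for b in buckets]
print([round(t, 10) for t in tangents])
```

The output corresponds to the `TAN(MEAN("water_level"))` query results in the example that follows.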
+ +`TAN()` supports the following nested functions: +[`COUNT()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#count), +[`MEAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean), +[`MEDIAN()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#median), +[`MODE()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mode), +[`SUM()`](/influxdb/v2.6/query-data/influxql/functions/aggregates/#sum), +[`FIRST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#first), +[`LAST()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#last), +[`MIN()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#min), +[`MAX()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#max), and +[`PERCENTILE()`](/influxdb/v2.6/query-data/influxql/functions/selectors/#percentile). + +#### Examples + +{{< expand-wrapper >}} + +{{% expand "Calculate the tangent of mean values" %}} + +Return the tangent of [mean](/influxdb/v2.6/query-data/influxql/functions/aggregates/#mean) `water_level`s that are calculated at 12-minute intervals. + +```sql +SELECT TAN(MEAN("water_level")) FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' AND "location" = 'santa_monica' GROUP BY time(12m) +``` + +{{% influxql/table-meta %}} +name: h2o_feet +{{% /influxql/table-meta %}} + +| time | tan | +| :------------------- | ------------: | +| 2019-08-18T00:00:00Z | -0.9815600413 | +| 2019-08-18T00:12:00Z | -1.0412271461 | +| 2019-08-18T00:24:00Z | -1.2000844348 | + +{{% /expand %}} + +{{< /expand-wrapper >}} diff --git a/content/influxdb/v2.6/query-data/influxql/manage-data.md b/content/influxdb/v2.6/query-data/influxql/manage-data.md new file mode 100644 index 000000000..6940845ad --- /dev/null +++ b/content/influxdb/v2.6/query-data/influxql/manage-data.md @@ -0,0 +1,149 @@ +--- +title: Manage your data using InfluxQL +description: > + Use InfluxQL data management commands to write and delete data. 
+menu:
+  influxdb_2_6:
+    name: Manage your data
+    parent: Query with InfluxQL
+    identifier: manage-database
+weight: 204
+---
+
+Use the following data management commands to write and delete data with InfluxQL:
+
+- [Write data with INSERT](#write-data-with-insert)
+- [Delete series with DELETE](#delete-series-with-delete)
+- [Delete measurements with DROP MEASUREMENT](#delete-measurements-with-drop-measurement)
+
+## Write data with INSERT
+
+The `INSERT` statement writes [line protocol](/influxdb/v2.6/reference/syntax/line-protocol/)
+to a database and retention policy.
+
+### Syntax
+```sql
+INSERT [INTO <database>[.<retention_policy>]] <line_protocol>
+```
+
+- The `INTO` clause is optional.
+  If the command does not include `INTO`, you must specify the
+  database with `USE <database_name>` when using the [InfluxQL shell](/influxdb/v2.6/tools/influxql-shell/)
+  or with the `db` query string parameter in the
+  [InfluxDB 1.x compatibility API](/influxdb/v2.6/reference/api/influxdb-1x/) request.
+
+### Examples
+
+- [Insert data into a specific database and retention policy](#insert-data-into-a-specific-database-and-retention-policy)
+- [Insert data into the default retention policy of a database](#insert-data-into-the-default-retention-policy-of-a-database)
+- [Insert data into the currently used database](#insert-data-into-the-currently-used-database)
+
+#### Insert data into a specific database and retention policy
+
+```sql
+INSERT INTO mydb.myrp example-m,tag1=value1 field1=1i 1640995200000000000
+```
+
+#### Insert data into the default retention policy of a database
+
+```sql
+INSERT INTO mydb example-m,tag1=value1 field1=1i 1640995200000000000
+```
+
+#### Insert data into the currently used database
+
+The following example uses the [InfluxQL shell](/influxdb/v2.6/tools/influxql-shell/).
+
+```sql
+> USE mydb
+> INSERT example-m,tag1=value1 field1=1i 1640995200000000000
+```
+
+## Delete series with DELETE
+
+The `DELETE` statement deletes all points from a [series](/influxdb/v2.6/reference/glossary/#series) in a database.
+
+### Syntax
+
+```sql
+DELETE FROM <measurement_name> WHERE [<tag_key>='<tag_value>'] | [